| Column | Type | Min length | Max length |
|---|---|---|---|
| id | string | 10 | 10 |
| title | string | 7 | 231 |
| abstract | string | 3 | 2.43k |
| authors | string | 5 | 21.5k |
| published_date | string | 20 | 20 |
| link | string | 33 | 34 |
| markdown | string | 133 | 1.92M |
2306.14975
The Underlying Scaling Laws and Universal Statistical Structure of Complex Datasets
We study universal traits which emerge both in real-world complex datasets, as well as in artificially generated ones. Our approach is to analogize data to a physical system and employ tools from statistical physics and Random Matrix Theory (RMT) to reveal their underlying structure. We focus on the feature-feature covariance matrix, analyzing both its local and global eigenvalue statistics. Our main observations are: (i) The power-law scalings that the bulk of its eigenvalues exhibit are vastly different for uncorrelated normally distributed data compared to real-world data, (ii) this scaling behavior can be completely modeled by generating Gaussian data with long range correlations, (iii) both generated and real-world datasets lie in the same universality class from the RMT perspective, as chaotic rather than integrable systems, (iv) the expected RMT statistical behavior already manifests for empirical covariance matrices at dataset sizes significantly smaller than those conventionally used for real-world training, and can be related to the number of samples required to approximate the population power-law scaling behavior, (v) the Shannon entropy is correlated with local RMT structure and eigenvalues scaling, is substantially smaller in strongly correlated datasets compared to uncorrelated ones, and requires fewer samples to reach the distribution entropy. These findings show that with sufficient sample size, the Gram matrix of natural image datasets can be well approximated by a Wishart random matrix with a simple covariance structure, opening the door to rigorous studies of neural network dynamics and generalization which rely on the data Gram matrix.
Noam Levi, Yaron Oz
2023-06-26T18:01:47Z
http://arxiv.org/abs/2306.14975v3
# The Underlying Scaling Laws and Universal Statistical Structure of Complex Datasets

###### Abstract

We study universal traits which emerge both in real-world complex datasets, as well as in artificially generated ones. Our approach is to analogize data to a physical system and employ tools from statistical physics and Random Matrix Theory (RMT) to reveal their underlying structure. We focus on the feature-feature covariance matrix, analyzing both its local and global eigenvalue statistics. Our main observations are: _(i)_ the power-law scalings that the bulk of its eigenvalues exhibit are vastly different for uncorrelated random data compared to real-world data, _(ii)_ this scaling behavior can be completely recovered by introducing long range correlations in a simple way to the synthetic data, _(iii)_ both generated and real-world datasets lie in the same universality class from the RMT perspective, as chaotic rather than integrable systems, _(iv)_ the expected RMT statistical behavior already manifests for empirical covariance matrices at dataset sizes significantly smaller than those conventionally used for real-world training, and can be related to the number of samples required to approximate the population power-law scaling behavior, _(v)_ the Shannon entropy is correlated with local RMT structure and eigenvalue scaling, is substantially smaller in strongly correlated datasets compared to uncorrelated synthetic data, and requires fewer samples to reach the distribution entropy. These findings have numerous implications for the characterization of the complexity of datasets, including differentiating synthetically generated from natural data, quantifying noise, developing better data pruning methods, and classifying effective learning models utilizing these scaling laws.

## 1 Introduction

In recent years, very large models have achieved state-of-the-art performance on both supervised and unsupervised learning tasks. These models often lie outside the regimes of classical machine learning (under-parameterized) and deep learning (over-parameterized), but rather in the limit where both the number of parameters and the number of samples are extremely large. While the behavior of these complex models can be challenging to predict, it has been observed that successively scaling up the number of parameters, dataset size, and compute power continuously improves performance according to a power-law scaling, as long as the model is not bottlenecked by either of the other two. Several studies have been conducted to understand the origin of this power-law scaling behavior [1; 2], deducing that it may lie in the way that neural networks transform features in the data, which in turn extends the power-law behavior from the data to the loss function. In this work, we continue the investigation of the relationship between neural networks and scaling laws. We focus on the source of the scaling laws that manifest in the data itself, rather than on how neural networks extend them. Concretely, we explore the underlying properties that seem to be universal across many datasets, regardless of origin and complexity. To achieve this goal, we analyze the empirical feature-feature covariance matrix spectrum and attempt to answer the following questions:

* Is power-law scaling a universal property across real-world datasets? What determines the scaling exponent, and what properties should a synthetic dataset have in order to follow the same scaling?
* What are the universal properties of datasets that can be gleaned from the empirical covariance matrix, and how are they related to local and global statistical properties of RMT?
* How can we quantify the extent to which an empirical covariance matrix characterizes complex data well?
* What, if any, are the relations between dataset scaling, entropy, and statistical chaos diagnostics?

We follow two complementary approaches: first, we explore the scaling behavior of the bulk of non-vanishing eigenvalues, and second, we study the distributional properties of the spectrum using tools from statistical physics and Random Matrix Theory (RMT). RMT is a powerful tool for describing the spectral statistics of certain complex systems. It is particularly useful for systems that are chaotic but also have certain coherent properties. The theory predicts universal statistical structures, provided that the underlying matrix ensemble is large enough to sufficiently fill the space of all matrices with a given symmetry, a property known as ergodicity [3]. Ergodicity has been observed in a variety of systems, including chaotic quantum systems [4; 5; 6], financial markets, nuclear physics, and many others [7; 8; 9]. To demonstrate that a similar universal structure is also observed for correlation matrices resulting from datasets, we will employ several diagnostic tools widely used in the field of quantum chaos. We will analyze the global and local statistics of empirical covariance matrices generated from two classes of datasets: (i) synthetic, normally distributed samples, with either uncorrelated or correlated features; (ii) real-world datasets composed of images, at varying levels of complexity and resolution. Our primary contributions are:

1. We find that power-law scaling appears across various datasets. It is governed by a single scaling exponent \(\alpha\), and its origin is the strength of correlations in the underlying population matrix. We accurately recover the behavior of the eigenvalue bulk of real-world datasets using Wishart matrices with a Toeplitz covariance matrix [10]. We dub these _Synthetic Analogues_.
2. We show that, generically, the statistical properties of the bulk of eigenvalues are well described by RMT predictions, verified by diagnostic tools typically used for quantum chaotic systems.
3. We find that the effective convergence of the empirical covariance matrix as a function of the number of samples correlates with the corresponding RMT description becoming a good description of the statistics and the eigenvalue scaling.
4. The Shannon entropy is correlated with the local RMT structure and the eigenvalue scaling, and is substantially smaller in strongly correlated datasets compared to uncorrelated synthetic data. Additionally, it requires fewer samples to reach the distribution entropy.

Our findings offer a new perspective on the properties of both real-world and synthetic datasets and provide a foundation for future work exploring the fundamental properties of these systems. By understanding the underlying properties of the data, we can gain new insights into the behavior of large neural networks, ultimately leading to more efficient and effective models.

## 2 Background and Related Work

**Neural Scaling Laws.** Neural scaling laws are a set of empirical observations that describe the relationship between the size of a neural network, dataset, compute power, and its performance. These laws were first proposed by Kaplan et al.
[1] and have since been confirmed by a number of other studies [2; 11] and studied further in [12; 13; 14; 15; 16; 17]. The main finding of neural scaling laws is that the test loss of a neural network scales as a power-law with the number of parameters in the network. This means that doubling the number of parameters roughly reduces the test loss by a factor of \(2^{\alpha}\). However, this relationship does not persist indefinitely, and there is a point of diminishing returns beyond which increasing the number of parameters does not lead to significant improvements in performance. One of the key challenges in understanding neural scaling laws is the complex nature of the networks themselves. The behavior of a neural network is governed by a large number of interacting parameters, making it difficult to identify the underlying mechanisms that give rise to the observed scaling behavior, and many advances have been made by appealing to the RMT framework.

**Random Matrix Theory.** RMT is a branch of mathematics that was originally developed to study the properties of large matrices with random entries. It is particularly suited to studying numerous realizations of the same system, where the number of realizations \(M\to\infty\), the dimension of the system \(d\to\infty\), and the ratio between the two tends to a constant \(d/M\to\mathrm{const.}\leq 1\). Results from RMT calculations have been applied to a wide range of problems in Machine Learning (ML), beyond the scope of neural scaling laws, including the study of nonlinear ridge regression [18], random Fourier feature regression [19], the Hessian spectrum [20], and weight statistics [21]. For a review of some of the recent developments, we refer the reader to Couillet and Liao [22] and references therein.

**Internal Statistics of Datasets.** Understanding the statistical properties of synthetic and real-world datasets has become a subject of growing interest in recent years. As images generated from state-of-the-art models [23; 24; 25] become increasingly more accurate and detailed, the ability to make statistical statements regarding the fidelity of data is ever more important [26; 27; 28]. Methods commonly employed to distinguish the internal statistics of images include Principal Component Analysis (PCA) [29; 30], nonlinear dimensionality reduction (NLDR) [31; 32; 33], and statistical hypothesis testing [34; 35]; these have led to insights on the role of edges and textures [36], color [37], noise [38], lighting [39], and gradients [40] in image datasets.

## 3 Correlations and Power-Law Scaling

In this section, we analyze the feature-feature covariance matrix for datasets of varying size, complexity, and origin. We consider real-world as well as synthetic datasets, establish a power-law scaling of their eigenvalues, and relate it to a correlation length.

### 3.1 Feature-Feature Empirical Covariance Matrix

Let \(\{x_{a}\}_{a=1}^{M}\subseteq\mathbb{R}^{d}\) be a set of \(M\) data samples composed of \(d\) features. We define the data matrix \(X\in\mathbb{R}^{M\times d}\), constructed of \(M\) rows, each corresponding to a single sample. In this work, we focus on the empirical feature-feature covariance matrix, defined as

\[\Sigma_{ij,M}=\frac{1}{M}\sum_{a=1}^{M}X_{ai}X_{aj}\in\mathbb{R}^{d\times d}. \tag{1}\]

Intuitively, the correlations between the different input features, \(X_{ai}\), should be the leading order characteristic of the dataset.
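As a concrete reference, the following is a minimal NumPy sketch (not the authors' code) of how Eq. (1) and its spectrum can be computed; the sizes \(M\), \(d\) and the Gaussian stand-in data are illustrative assumptions, and a real (centered) dataset matrix would take the place of `X`.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d = 10_000, 784               # illustrative sizes, e.g. 28x28 images
X = rng.standard_normal((M, d))  # stand-in for a data matrix (M samples, d features)

X -= X.mean(axis=0)              # center each feature, as in the pre-processing stage
Sigma_M = (X.T @ X) / M          # Eq. (1): d x d empirical feature-feature covariance
eigvals = np.sort(np.linalg.eigvalsh(Sigma_M))[::-1]  # spectrum, largest first
print(eigvals[:5])               # for uncorrelated noise, the bulk is ~Marchenko-Pastur
```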
For instance, if the \(X_{ai}\) are pixels of an image, we may expect that different pixels will vary similarly across similar images. Conversely, the mean value of an input feature is uninformative, and so we will assume that our data is centered in a pre-processing stage. A random matrix ensemble is a probability distribution on the set of \(d\times d\) matrices that satisfy specific symmetry properties, such as invariance under rotations or unitary transformations. In order to study Eq. (1) using the RMT approach, we define \(\Sigma_{ij,a}\) as a single sample realization of the population random matrix ensemble \(\Sigma_{ij}\); thus \(\Sigma_{ij,M}\) is the empirical ensemble average, i.e. \(\Sigma_{M}=\langle\Sigma_{a}\rangle_{a\in M}=\frac{1}{M}\sum_{a=1}^{M}\Sigma_{a}\), which approximates the ensemble average in the limits \(M\to\infty,d\to\infty\). If \(M\) and \(d\) are sufficiently large, the statistical properties of \(\Sigma_{M}\) will be determined entirely by the underlying symmetry of the ensemble.

### 3.2 Data Exploration

We study the following real-world datasets: MNIST [41], FMNIST [42], CIFAR10 [43], Tiny-IMAGENET [44], and CelebA [45] (downsampled to \(109\times 89\) in grayscale). We center and normalize all the datasets in the pre-processing stage, to remove the uninformative mean contribution. To generate uncorrelated synthetic data, we draw \(M\) samples \(\{\vec{X}_{a}\in\mathbb{R}^{d},a=1\dots M\}\) from a jointly normal distribution \(\vec{X}_{a}\sim\mathcal{N}(0,I_{d\times d})\), and construct the empirical covariance matrix \(\Sigma_{M}=\frac{1}{M}\sum_{a=1}^{M}\vec{X}_{a}\otimes\vec{X}_{a}\in\mathbb{R}^{d\times d}\). We refer to these datasets as _Synthetic Noise_. To generate correlated synthetic data, we repeat the same process, changing the distribution such that \(\vec{X}_{a}\sim\mathcal{N}(0,\Sigma_{d\times d})\), where we choose a specific form for \(\Sigma\) which produces feature-feature correlations, namely

\[\Sigma_{ij}^{\rm{Toe}}=\mathbb{1}_{ij}+c|i-j|^{\alpha},\qquad\alpha,c\in\mathbb{R}. \tag{2}\]

The matrix \(\Sigma_{ij}^{\rm{Toe}}\) is a full-band Toeplitz matrix. The sign of \(\alpha\) dictates whether correlations decay (negative) or intensify (positive) with distance along a one-dimensional feature space1. We refer to these datasets as _Synthetic Analogues_.

Footnote 1: Correlation strength which grows with distance is commonly found in some one-dimensional physical systems, such as the Coulomb and Riesz gases [46; 47], which display an inverse power-law repulsion, while decaying correlations are common in the 1-d Ising model [48].

### 3.3 Correlations Determine the Noise to Data Transition

We begin by reproducing and extending some of the results from Maloney et al. [2]. In Fig. 1, we show the \(\Sigma_{ij,M}\) eigenvalue scaling for the different classes of data (_i.e._ real-world, synthetic noise, and synthetic analogues). We find that for all datasets, the bulk of the eigenvalues scales as a power-law

\[\lambda_{i}\propto i^{-1-\alpha},\quad\alpha\in\mathbb{R},\qquad i=10,\ldots,d_{\rm bulk}\,, \tag{3}\]

where \(i=10\) is approximately where the scaling law behavior begins and \(d_{\rm bulk}\) is the effective bulk size, where the power-law abruptly terminates. We stress that this behavior repeats across all datasets, regardless of origin and complexity. The value of \(\alpha\) can be readily explained in terms of correlations within our synthetic analogue construction. Taking the Fourier transform of the second term in Eq.
(2), the bulk spectrum is derived in Appendix A as

\[\lambda_{i}^{\rm bulk}=\frac{c}{2}\sqrt{\frac{2}{\pi}}\sin\Big{(}\frac{\pi\alpha}{2}\Big{)}\Gamma(\alpha+1)\left|i\right|^{-1-\alpha}\,, \tag{4}\]

where \(\Gamma(x)\) is the Gamma function. This implies that the value of \(\alpha\) determines the strength of correlations in the original data covariance matrix. For real-world data, we consistently find that \(\alpha>0\), which corresponds to increasing correlations between different features. In contrast, for synthetic noise, the value of \(\alpha\sim-1\), and the power-law behavior vanishes. Interpolating between synthetic noise and real-world data, the synthetic analogue produces a power-law scaling that can be tuned from \(-1<\alpha\leq 0\), in the case of decaying long range correlations, or \(0\leq\alpha<\infty\) for increasing correlations, to match any real-world dataset we examined. Lastly, we can extend this statement further and verify the transition from correlated to uncorrelated features by corrupting a real-world dataset (FMNIST) and observing the continuous deterioration of the power-law from \(\alpha\sim 1/4\) to \(\alpha=-1\), implying that the synthetic analogue can mimic the bulk behavior of both clean and corrupted data.

Figure 1: **Left:** Scree plot of \(\Sigma_{ij,M}\) for several different vision datasets, as well as for synthetic noise and a synthetic analogue with fixed \(\alpha\). Here, the number of samples is taken to be the entire dataset for each real-world dataset, and \(M=50000\) for the synthetic data, where we set \(c=1\). Different colors indicate different \(\alpha\) values, given in the legend. We see a clear scaling law for the eigenvalue bulk as \(\lambda_{i}\propto i^{-1-\alpha}\), where all real-world datasets display \(\alpha\leq 1/2\). **Right:** Interpolation of the power-law scaling parameter \(\alpha\) from \(\alpha=1/4\) (**purple**) to \(\alpha=-1\) (**green**) by corrupting the FMNIST dataset with a varying amount of normally distributed noise.

## 4 Global and Local Statistical Structure

### 4.1 Random Matrix Theory

In this section, we begin by describing the RMT diagnostic tools, often used to characterize chaotic quantum systems, with which we obtain our main results. We define the matrix ensemble under investigation, then provide an overview of each diagnostic, concluding with a summary of results for the specific matrix ensemble relevant to both real-world and synthetic datasets. While we provide an overview of each diagnostic, we refer the reader to Tao [49], Kim et al. [50] for a more complete review. We then apply these tools to gain insights into the statistical structure of the datasets.

#### 4.1.1 Spectral Density

The empirical spectral density of a matrix \(\Sigma\) is defined as

\[\rho_{\Sigma}(\lambda)=\frac{1}{n}\sum_{i=1}^{n}\delta(\lambda-\lambda_{i}(\Sigma)), \tag{5}\]

where \(\delta\) is the Dirac delta function, and the \(\lambda_{i}(\Sigma),i=1,...,n,\) denote the \(n\) eigenvalues of \(\Sigma\), including multiplicity. The limiting spectral density is defined as the limit of Eq. (5) as \(n\rightarrow\infty\).

#### 4.1.2 Level Spacing Distribution and \(r\)-statistics

The level spacing distribution measures the probability density for two adjacent eigenvalues to be at spectral distance \(s\), in units of the mean level spacing \(\Delta\). The procedure for normalizing all distances in terms of the local mean level spacing is often referred to as unfolding.
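For concreteness, the following is a minimal sketch of one standard unfolding recipe (a polynomial fit to the cumulative eigenvalue count); the exact procedure used in this work is the one reviewed in Appendix A, and the GOE test matrix and fit degree here are illustrative assumptions.

```python
import numpy as np

def unfolded_spacings(eigvals, deg=6):
    """Map eigenvalues to unfolded spacings with unit mean level spacing."""
    lam = np.sort(eigvals)
    staircase = np.arange(1, len(lam) + 1)     # N(lambda): # of eigenvalues <= lambda
    smooth = np.polyval(np.polyfit(lam, staircase, deg), lam)  # smooth part of N
    s = np.diff(smooth)                        # spacings s_i = e_{i+1} - e_i
    return s / s.mean()                        # normalize to unit mean spacing

# Sanity check on a GOE matrix: the spacing histogram should follow Eq. (6).
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 1000))
spacings = unfolded_spacings(np.linalg.eigvalsh((A + A.T) / 2))
print(spacings.mean())  # ~1 by construction
```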
We unfold the spectrum of the empirical covariance matrix \(\Sigma(\rho)\) by standard methods [50], reviewed in Appendix A. Ultimately, the transformation \(\lambda_{i}\to e_{i}=\tilde{\rho}(\lambda_{i})\) is performed such that \(e_{i}\) shows an approximately uniform distribution with unit mean level spacing. Once unfolded, the level spacing is given by \(s_{i}=e_{i+1}-e_{i}\), and its probability density function \(p(s)\) is measured. The level spacing distribution captures information about the short-range spectral correlations, demonstrating the presence of level repulsion, _i.e._, whether \(p(s)\to 0\) as \(s\to 0\), which is a common trait of RMT ensembles, as the probability of two eigenvalues being exactly degenerate is zero. The level spacing distribution \(p(s)\) for certain systems is known. For integrable systems, it follows the Poisson distribution \(p(s)=e^{-s}\), while for chaotic systems, it is given by the Wigner surmise

\[p_{\beta}(s)=Z_{\beta}s^{\beta}e^{-b_{\beta}s^{2}}, \tag{6}\]

where \(\beta\), \(Z_{\beta}\), and \(b_{\beta}\) depend on which universality class of random matrices the covariance matrix belongs to [51]. In this work, we consider matrices that fall under the universality class of the Gaussian Orthogonal Ensemble (GOE), for which \(\beta=1\). While the level spacing distribution depends on unfolding the eigenspectrum, which is only heuristically defined and has some arbitrariness, it is useful to have additional diagnostics of chaotic behavior that bypass the unfolding procedure. The \(r\)-statistics, first introduced in [52], is such a diagnostic tool for short-range correlations, defined without the necessity to unfold the spectrum. Given the level spacings \(s_{i}\), defined as the differences between adjacent eigenvalues \(\cdots<\lambda_{i}<\lambda_{i+1}<\cdots\) without unfolding, one defines the ratios

\[r_{i}=\frac{\min(s_{i},\,s_{i+1})}{\max(s_{i},\,s_{i+1})},\qquad 0\leq r_{i}\leq 1. \tag{7}\]

The expectation value of the ratios \(r_{i}\) takes very specific values if the energy levels are the eigenvalues of random matrices: for matrices in the GOE the ratio is \(\langle r\rangle\approx 0.53590\). The value becomes typically smaller for integrable systems, approaching \(\langle r\rangle\approx 0.38629\) for a pure Poisson process [53].

#### 4.1.3 Spectral Form Factor

The spectral form factor (SFF) is a long-range observable that probes the agreement of a given unfolded spectrum with RMT at energy scales much larger than the mean level spacing. It can be used to detect spectral rigidity, which is a signature of quantum chaos. The SFF is defined as the Fourier transform of the spectral two-point correlation function [54; 55]

\[K(\tau)=|Z(\tau)|^{2}/Z(0)^{2}\simeq\frac{1}{Z}\mathbb{E}\left\langle\Big{|}\sum_{i}\rho(e_{i})e^{-i2\pi e_{i}\tau}\Big{|}^{2}\right\rangle\,, \tag{8}\]

where \(Z(\tau)=\text{Tr}\,e^{-i\tau\Sigma(\rho_{\varepsilon})}\). The second equality is the numerically evaluated SFF [56], where \(e_{i}\) is the unfolded spectrum, and \(Z=\sum_{i}|\rho(e_{i})|^{2}\) is chosen to ensure that \(\lim_{\tau\to\infty}K(\tau)\approx 1\). The SFF has been computed analytically for the GOE, and it reads

\[K_{\rm GOE}(\tau)=2\tau-\tau\ln(1+2\tau)\ \text{ for }\ 0<\tau<1,\qquad K_{\rm GOE}(\tau)=1\ \text{ for }\ \tau\geq 1\;. \tag{9}\]

Several universal features occur in chaotic RMT ensembles, manifesting in Eq. (9) and discussed in detail in Liu [55], Kim et al. [50]; a numerical sketch of Eq. (8) is given below.
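The following minimal sketch (not the authors' code) evaluates the second expression in Eq. (8), under the simplifying assumption of a unit spectral filter \(\rho(e_{i})=1\); the matrix size, the number of realizations, and the polynomial unfolding are illustrative choices.

```python
import numpy as np

def unfold(eigvals, deg=6):
    lam = np.sort(eigvals)
    return np.polyval(np.polyfit(lam, np.arange(1, len(lam) + 1), deg), lam)

def sff(e, taus):
    # |sum_i exp(-i 2 pi e_i tau)|^2 / Z with rho(e_i) = 1, so Z = len(e)
    z = np.exp(-2j * np.pi * np.outer(taus, e)).sum(axis=1)
    return np.abs(z) ** 2 / len(e)   # normalized so K(tau) -> 1 at large tau

rng = np.random.default_rng(0)
taus = np.linspace(0.05, 3.0, 60)
K = np.zeros_like(taus)
n_realizations = 100
for _ in range(n_realizations):      # the ensemble average in Eq. (8)
    A = rng.standard_normal((400, 400))
    K += sff(unfold(np.linalg.eigvalsh((A + A.T) / 2)), taus)
K /= n_realizations
# For the GOE, K(tau) should approach Eq. (9): 2*tau - tau*ln(1 + 2*tau) for tau < 1.
```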
We mention here only two: (i) the constancy of \(K(\tau)\) for \(\tau\geq 1\) is simply a consequence of the discreteness of the spectrum; (ii) the existence of a timescale that characterizes the ergodicity of a dynamical system, defined as the time when the SFF of the dynamical system converges to the universal RMT computation. More concretely, it is indicated by the onset of the universal linear ramp as in Eq. (9), which is absent in non-ergodic systems.

### 4.2 Insights from the Global and Local Statistical Structure

#### 4.2.1 Eigenvalue Distributions

While the scaling behavior of the bulk of eigenvalues is certainly meaningful, it is not the only piece of information that can be extracted from the empirical covariance matrix. In particular, it is natural to inquire whether the origin of the power-law scaling also determines the degeneracy of each eigenvalue. We can test this hypothesis by comparing the global and local statistics of the bulk between real-world data and their synthetic analogue counterparts. For the synthetic datasets we generate, there are known predictions for the spectral density, level spacing distribution, \(r\)-statistics, and spectral form factor. In these special cases, the empirical covariance matrix in Eq. (1) is known as a Wishart matrix [57]: \(\Sigma_{ij,M}\sim\mathcal{W}_{d}(\Sigma,M)\). For a Wishart matrix, the spectral density \(\rho(\lambda)\) is given by the generalized Marchenko-Pastur (MP) law [22], which depends on the details of \(\Sigma\) and is given in Appendix B. For \(\Sigma=\sigma^{2}I_{d}\), the spectral density is given explicitly by the MP distribution as

\[\rho(\lambda)=\frac{1}{2\pi\sigma^{2}}\frac{\sqrt{(\lambda_{\text{max}}-\lambda)(\lambda-\lambda_{\text{min}})}}{\gamma\lambda}\quad\text{for }\lambda\in[\lambda_{\text{min}},\lambda_{\text{max}}]\text{ and 0 otherwise}\;, \tag{10}\]

where \(\lambda_{\text{max}/\text{min}}=\sigma^{2}(1\pm\sqrt{\gamma})^{2}\), \(\gamma\equiv d/M\), and \(d,M\to\infty\). In Fig. 2, we show that the synthetic analogues capture not only the scaling behavior of the eigenvalue bulk, but also the spectral density and the distribution of eigenvalues, for ImageNet, CIFAR10, and FMNIST, measured by the Kullback-Leibler (KL) divergence [58]. We further emphasize this point by contrasting the distributions with the MP distribution, which accurately captures the spectral density of the synthetic noise datasets. This measurement alone is not sufficient to determine that the system is well approximated by RMT, and we must study several other statistical diagnostics.

#### 4.2.2 Level Spacing Diagnostics

RMT predicts that certain local and global statistical properties are determined uniquely by symmetry. Therefore, the empirical covariance matrix must lie either in the GOE ensemble, if it is akin to a quantum chaotic system2, or in the Poisson ensemble, if it corresponds to an integrable system.

Footnote 2: Large random real symmetric matrices belong in the orthogonally invariant class.

Both the level spacing and \(r\)-statistics (the ratio of adjacent level spacings) probability distribution functions for a Wishart matrix, in the limit of \(d,M\to\infty\) and \(d/M=\gamma\), are given by the GOE universality class:

\[p_{\rm GOE}(s)=\frac{\pi}{2}se^{-\frac{\pi}{2}s^{2}},\quad p_{\rm GOE}(r)=\frac{16}{27}\frac{(r+r^{2})}{(1+r+r^{2})^{5/2}}\Theta(1-r),\quad\langle r\rangle_{\rm GOE}=4-2\sqrt{3}, \tag{11}\]

while the SFF is given by Eq. (9). In Fig.
3, we demonstrate that the bulk of eigenvalues for various real-world datasets behaves as the energy eigenvalues of a quantum chaotic system described by the GOE universality class. This result is matched by both the synthetic noise and the synthetic analogues, as is expected of a Wishart matrix. Here, the dataset size is taken to be \(M=50000\) samples, and the results show that this sample size is sufficient to provide a proper sampling of the underlying ensemble.

#### 4.2.3 Effective Convergence

Having confirmed that synthetic analogues provide a good proxy for the bulk structure at a large fixed dataset size, we may now ask how the statistical results depend on the number of samples. As discussed in Section 3.1, \(\Sigma_{M}\) can be interpreted as an ensemble average over single realizations of the true population covariance matrix \(\Sigma\). As the number of realizations \(M\) increases, a threshold value \(M_{\text{crit}}\) is expected to appear when the space of matrices that matches the effective dimension of the true population matrix is fully explored. The specific value of \(M_{\text{crit}}\) can be approximated without knowing the true effective dimension by considering two different evaluation metrics. Firstly, convergence of the local statistics of \(\Sigma_{M}\), given by the point at which its level spacing distribution and \(r\) value approximately match their respective RMT ensemble expectations. Secondly, convergence of the global spectral statistics, both of \(\Sigma_{M}\) to that of \(\Sigma\) and of the empirical parameter \(\alpha_{M}\) to its population expectation \(\alpha\).

Figure 3: The \(r\) probability density (**left**), the unfolded level spacing distribution (**center**) and the spectral form factor (**right**) of \(\Sigma_{M}\) for FMNIST, CIFAR10, their synthetic analogues, and synthetic noise, obtained with \(M=50000\). Black curves indicate the RMT predictions for the GOE distribution from Eq. (11). These results indicate that the bulk of real-world data eigenvalues belongs to the GOE universality class, and that the system has enough statistics to converge to the RMT predictions.

Figure 2: **Top row:** Scree plot of \(\Sigma_{ij,M}\) for several different configurations and datasets. We show the eigenvalues of the population covariance matrix \(\Sigma^{\text{Toe}}\), the eigenvalues of the empirical covariance using the same \(\Sigma^{\text{Toe}}\), with \(M=d\), and finally the eigenvalues for the empirical covariance of the full real-world dataset with \(M=50000\). The datasets used here are (left to right): FMNIST, CIFAR10, ImageNet. **Bottom row:** Spectral density for the bulk of eigenvalues for the same datasets, as well as a comparison against synthetic noise of the same dimensions. The \(\bar{\lambda}\) indicates normalization over the maximal eigenvalue among the bulk. We also provide the KL divergence between the synthetic analogues and the real-world data distributions.

Here, we define these metrics and measure them for different datasets, obtaining analytical expectations for the synthetic analogues, which accurately mimic their real-world counterparts. We can deduce \(M_{\rm crit}\) from the local statistics by measuring the difference between the empirical average \(r\) value and the theoretical one, given by

\[|r_{M}-r_{\rm RMT}|=\delta(M)r_{\rm RMT}, \tag{12}\]

where \(r_{\rm RMT}\) is known for the GOE to be \(r_{\rm GOE}=4-2\sqrt{3}\simeq 0.5359\), and \(r_{\rm PE}\simeq 0.3863\) for the Poisson ensemble.
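For illustration, here is a minimal sketch of the \(\langle r\rangle\) measurement entering Eq. (12), applying Eq. (7) directly to the sorted eigenvalues (no unfolding required); the GOE and Poisson test spectra are illustrative stand-ins for the dataset covariance spectra.

```python
import numpy as np

def mean_r(eigvals):
    s = np.diff(np.sort(eigvals))                              # spacings s_i
    r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])  # Eq. (7)
    return r.mean()

rng = np.random.default_rng(0)
A = rng.standard_normal((2000, 2000))
print(mean_r(np.linalg.eigvalsh((A + A.T) / 2)))  # ~0.5359 for the GOE
print(mean_r(rng.uniform(size=2000)))             # ~0.3863 for Poisson levels
```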
Next, we compare the results obtained for \(M_{\rm crit}\) from \(\delta(M)\) to the one obtained from the global statistics by using a spectral distance measure for the eigenvalue bulk, given by

\[|\alpha_{M}-\alpha|=\Delta(M), \tag{13}\]

where \(\alpha_{M}\) is the measured value obtained by fitting a power-law to the bulk of eigenvalues for a fixed dataset size \(M\), while \(\alpha\) represents the convergent value including all samples from a dataset. Lastly, we compare the full empirical covariance matrix with the convergent result obtained using the full dataset by taking

\[|\Sigma_{M}-\Sigma|=\epsilon(M)|\Sigma|, \tag{14}\]

where \(|A|\) is the spectral norm of \(A\), and \(\epsilon(M)\) will be our measure of the distance between the two covariance matrices. In Fig. 4, we show the results for each of these metrics separately as a function of the number of samples \(M\). We find that the \(\delta(M)\) parameter, which is a measure of local statistics, converges to the expected GOE value at roughly the same \(M_{\rm crit}\) as the entirely independent \(\Delta(M)\) parameter, which measures the scaling exponent \(\alpha\). The combination of these two metrics confirms empirically that the system has become ergodic at sample sizes roughly \(M_{\rm crit}\sim d\), which is much smaller than the typical size of the datasets.

#### 4.2.4 Dataset Entropy

The Shannon entropy [59] of a random variable is a measure of the information inherent in the variable's possible outcomes [60], given by \(H=-\sum_{i=1}^{n}p_{i}\log(p_{i})\), where \(p_{i}\) is the probability of a given outcome and \(n\) is the number of possible states. For covariance matrices, we define \(p_{i}\) given the spectrum as \(p_{i}=\lambda_{i}/\sum_{j=1}^{n_{\rm bulk}}\lambda_{j}\), where \(n_{\rm bulk}\) is the number of bulk eigenvalues. In Fig. 5 (left) we plot the Shannon entropies of real and synthetic datasets as a function of the number of samples. The entropies grow linearly and reach a plateau whose value is related to the correlation strength, with strong correlation corresponding to low entropy. We see the same entropy for both the synthetic and real datasets that have the same scaling exponent, implying that they also share the same eigenvalue degeneracy.

Figure 4: **Left:** The \(r\) distance metric \(\delta(M)\) for the bulk of eigenvalues. **Center:** The \(\alpha\) distance metric \(\Delta(M)\) for the bulk of eigenvalues. **Right:** The full matrix comparison metric \(\epsilon(M)\). We show the results for CIFAR10, FMNIST, synthetic noise, and the FMNIST synthetic analogue as a function of the number of samples. The results show that the bulk distances decrease as \(1/M\), where \(M\) is the number of samples, asymptoting to a constant value at similar values of \(M_{\rm crit}\sim d\) (**black dashed**), where \(d\) is the number of features.

#### 4.2.5 Entropy, Scaling and \(r\)-Statistics

In Fig. 5 (left), we see that the entropy saturation is correlated with the effective convergence in Fig. 4 as a function of the number of samples, while the middle and right plots show the correlation between the convergence of the entropy, the scaling exponent, and the \(r\)-statistics, respectively. We see that real data and synthetic data with the same scaling exponent exhibit similar convergence behaviour.

## 5 Conclusions

In this paper, we have presented fundamental results regarding the universal structure of real-world and synthetic datasets.
Our findings indicate that these datasets exhibit scaling laws, and that the distribution of eigenvalues predominantly aligns with the GOE universality class of RMT. While our empirical results indicate that real-world data displays chaotic properties, the exact source is not evident. We believe that further work is necessary to determine whether it is due to the underlying strongly correlated structure that is manifest in real-world data, or if it stems from the chaotic sampling process that generates noise, which is captured in the finer details encoded in the eigenvalue bulk. Our approach, based on synthetic analogues to real-world data, lays the groundwork for considering neural scaling laws at the distribution level, capturing higher moments and eigenvalue distributions beyond mean scaling factors. This comprehensive perspective provides deeper insights into the underlying dynamics. With these observations in hand, we can further enhance our understanding of these scaling laws and their connections to fundamental principles. Though our study primarily focused on image datasets, extending the analysis to language datasets would offer valuable insights into the universality of scaling laws across modalities. Additionally, the interplay between eigenvectors and eigenvalues in neural networks merits further exploration, as both components likely play crucial roles in the way neural networks process information. Studying the generation of synthetic eigenvectors and eigenvalues from RMT ensembles provides avenues to understand network behavior and dynamics. Practically, our study reveals the range of real scalings observed when examining different \(\alpha\) values for corrupted and uncorrupted data, enhancing our knowledge of model vulnerabilities and robustness against adversarial perturbations. Furthermore, our work introduces a novel approach to assess the fidelity and perceptual quality of artificially-generated images through scaling analysis, contributing to the evaluation of generative model qualities. In summary, we have presented original results on the underlying scaling laws appearing in datasets and their implications, extending beyond traditional scaling factors to encompass more general statistical properties and the distribution of eigenvalues. The exploration of language data and the extension of neural scaling laws, the understanding of the chaotic nature of the sampling processes, and the relationship between eigenvectors and eigenvalues via RMT represent exciting avenues for future research. Additionally, our findings have practical applications in the context of adversarial attacks and the evaluation of artificially-generated images. By advancing our understanding in these areas, we can enhance the robustness, interpretability, and performance of neural networks across domains.

Figure 5: Convergence of the various metrics in Eqs. (12) to (14) in relation to entropy for the bulk of eigenvalues. **Left:** The Shannon entropy \(H_{M}\) as a function of the dataset size \(M\). **Center:** Convergence of the normalized \(\alpha\) metric \(1-\Delta_{M}/\Delta\) to its asymptotic value as a function of the normalized entropy \(H_{M}/H\). **Right:** Convergence of the normalized \(r\) statistics metric \(1-\delta_{M}/\delta\) to its asymptotic value as a function of the normalized entropy \(H_{M}/H\). We show the results for CIFAR10, FMNIST, MNIST, synthetic noise, and the FMNIST synthetic analogue.
## Acknowledgements

We would like to thank Yohai Bar-Sinai, Marat Freytsis, Alex Maloney, Dan Roberts, and Jamie Sully for useful discussions and comments. N.L. would like to thank the Milner Foundation for the award of a Milner Fellowship. The work of Y.O. is supported in part by the Israel Science Foundation Center of Excellence. This work was performed in part at the Aspen Center for Physics, which is supported by the U.S. National Science Foundation grant PHY-2210452.
2307.05589
Betti numbers of the tangent cones of monomial space curves
Let $H = \langle n_1, n_2, n_3\rangle$ be a numerical semigroup. Let $\tilde H$ be the interval completion of $H$, namely the semigroup generated by the interval $\langle n_1, n_1+1, \ldots, n_3\rangle$. Let $K$ be a field and $K[H]$ the semigroup ring generated by $H$. Let $I_H^*$ be the defining ideal of the tangent cone of $K[H]$. In this paper, we describe the defining equations of $I_H^*$. From that, we establish the Herzog-Stamate conjecture for monomial space curves stating that $\beta_i(I_H^*) \le \beta_i(I_{\tilde H}^*)$ for all $i$, where $\beta_i(I_H^*)$ and $\beta_i(I_{\tilde H}^*)$ are the $i$th Betti numbers of $I_H^*$ and $I_{\tilde H}^*$ respectively.
Nguyen P. H. Lan, Nguyen Chanh Tu, Thanh Vu
2023-07-10T16:48:54Z
http://arxiv.org/abs/2307.05589v1
# Betti numbers of the tangent cones of monomial space curves

###### Abstract.

Let \(H=\langle n_{1},n_{2},n_{3}\rangle\) be a numerical semigroup. Let \(\tilde{H}\) be the interval completion of \(H\), namely the semigroup generated by the interval \(\langle n_{1},n_{1}+1,\ldots,n_{3}\rangle\). Let \(K\) be a field and \(K[H]\) the semigroup ring generated by \(H\). Let \(I_{H}^{*}\) be the defining ideal of the tangent cone of \(K[H]\). In this paper, we describe the defining equations of \(I_{H}^{*}\). From that, we establish the Herzog-Stamate conjecture for monomial space curves stating that \(\beta_{i}(I_{H}^{*})\leq\beta_{i}(I_{\tilde{H}}^{*})\) for all \(i\), where \(\beta_{i}(I_{H}^{*})\) and \(\beta_{i}(I_{\tilde{H}}^{*})\) are the \(i\)th Betti numbers of \(I_{H}^{*}\) and \(I_{\tilde{H}}^{*}\) respectively.

Key words and phrases: Betti numbers; tangent cone; monomial space curves

2010 Mathematics Subject Classification: 13D02, 13D05, 13H99

## 1. Introduction

Let \(K\) be an arbitrary field. For each sequence \(\mathbf{a}=(a_{1}<\cdots<a_{n})\) of positive integers, denote by \(C(\mathbf{a})\) the affine monomial curve in \(\mathbb{A}^{n}\) parametrized by \((t^{a_{1}},\ldots,t^{a_{n}})\). In [6], Herzog proved that the number of defining equations of affine monomial space curves is at most three. The situation changes dramatically when we consider monomial curves in higher dimensions. Bresinsky [1] gave an example of a family of monomial curves in \(\mathbb{A}^{4}\) whose numbers of defining equations are unbounded. Despite this, Herzog and Srinivasan conjectured, and the third author [19] proved, that all the Betti numbers of \(C(\mathbf{a})\) are bounded by a constant depending only on the width, \(\operatorname{wd}(\mathbf{a})=a_{n}-a_{1}\), of the sequence. Unfortunately, the proof of [19] does not reveal an explicit bound on the Betti numbers. In subsequent work, Herzog and Stamate [9] proved that when \(a_{1}\) is large enough, the Betti numbers of the tangent cone of \(C(\mathbf{a})\) are equal to those of the curve. Denote by \(H=\langle\mathbf{a}\rangle\) the numerical semigroup generated by \(\mathbf{a}\) and by \(\tilde{H}\) the numerical semigroup generated by the interval completion of \(\mathbf{a}\), namely, \(\tilde{H}=\langle a_{1},a_{1}+1,\ldots,a_{n}\rangle\). Denote by \(I_{H}\) the defining ideal of the semigroup ring \(K[H]=K[t^{a_{1}},\ldots,t^{a_{n}}]\) and by \(I_{H}^{*}\) the defining ideal of the tangent cone of \(K[H]\), which is the associated graded ring \(\operatorname{gr}_{\mathfrak{m}}K[H]\). Herzog and Stamate conjectured:

**Conjecture A** (Herzog-Stamate). Let \(H=\langle a_{1},a_{2},\ldots,a_{n}\rangle\) be a numerical semigroup. Denote \(\tilde{H}=\langle a_{1},a_{1}+1,\ldots,a_{n}\rangle\). Then

\[\beta_{i}(I_{H}^{*})\leq\beta_{i}(I_{\tilde{H}}^{*}),\]

for all \(i\), where \(\beta_{i}(I)\) denotes the \(i\)th Betti number of the ideal \(I\).

When \(\mathbf{a}\) is a generalized arithmetic sequence, namely, \(a_{i}=ua_{1}+(i-1)h\) for \(i\geq 2\) and some positive integers \(u\) and \(h\), Sharifan and Zaare-Nahandi [13] described a minimal free resolution of the tangent cone of \(C(\mathbf{a})\). The conjecture is also known for some other special numerical semigroups [2, 14]. As mentioned earlier, the number of defining equations of affine monomial space curves is at most three.
Nevertheless, Shibuta [5, Example 5.5] gave a family of monomial space curves for which the numbers of defining equations of the tangent cones go to infinity (see [9] for a treatment of the tangent cones of variations of Shibuta semigroups). In this paper, we prove the Herzog-Stamate conjecture for monomial space curves.

**Theorem 1.1**. _Let \(H=\langle n_{1},n_{2},n_{3}\rangle\) be a numerical semigroup generated by three positive integers \(n_{1}<n_{2}<n_{3}\). The defining ideal of the tangent cone of \(K[H]\) is denoted by \(I_{H}^{*}\). Let \(\operatorname{wd}(\tilde{H})=\min(n_{1}-1,n_{3}-n_{1})\) be the width of \(\tilde{H}\). Then_

1. \(\beta_{0}(I_{H}^{*})=\mu(I_{H}^{*})\leq\operatorname{wd}(\tilde{H})+1\),
2. \(\beta_{1}(I_{H}^{*})\leq 3\operatorname{wd}(\tilde{H})-3\),
3. \(\beta_{2}(I_{H}^{*})\leq 2\operatorname{wd}(\tilde{H})-3\).

_In particular, \(\beta_{i}(I_{H}^{*})\leq\beta_{i}(I_{\tilde{H}}^{*})\) for all \(i\)._

Note that our bound on the Betti numbers of \(I_{H}^{*}\) is stronger than the conjectural bound. We now outline the idea of the proof of the main theorem.

1. By the result of Herzog [6], the lattice \(\Lambda=\{\mathbf{v}\in\mathbb{Z}^{3}\mid\mathbf{v}\cdot\mathbf{a}=0\}\), where \(\mathbf{a}=(n_{1},n_{2},n_{3})\), is generated by two vectors. We first note that \(I_{H}^{*}=(f_{\mathbf{v}}^{*}\mid\mathbf{v}\in\Lambda)\). From that, we describe the minimal generators of \(I_{H}^{*}\) in Section 3 and Section 4, depending on whether \(I_{H}\) is a complete intersection or not.
2. We show that the minimal generators of \(I_{H}^{*}\) form a Gröbner basis for \(I_{H}^{*}\) with respect to the reverse lexicographic order. By [12, Theorem 6.29], it then suffices to prove the bound for \(\mu(I_{H}^{*})\).
3. From our description of \(I_{H}^{*}\), we show that \(\mu(I_{H}^{*})\leq\operatorname{wd}(\tilde{H})+1\).

In [7, 18], the authors proved that the Cohen-Macaulay property of the tangent cone in embedding dimension three can be determined from its defining equations. Shen [15] further showed that this can be determined from the equations of \(I_{H}\). We now describe the organization of the paper. In Section 2, we set up the notation, recall the result of Herzog [6], and prove that \(I_{H}^{*}\) is generated by the initial forms of binomials in \(I_{H}\). In Section 3, we describe \(I_{H}^{*}\) when \(I_{H}\) is a complete intersection. In Section 4, we describe \(I_{H}^{*}\) when \(I_{H}\) is not a complete intersection. In Section 5, we prove our main theorem.

## 2. Preliminaries

Let \(H=\langle a_{1},\dots,a_{n}\rangle\) and \(S=K[x_{1},\dots,x_{n}]\) the polynomial ring in \(n\) variables over a field \(K\). Let \(\Lambda\subset\mathbb{Z}^{n}\) be defined by \(\Lambda=\{\mathbf{v}\in\mathbb{Z}^{n}\mid\mathbf{v}\cdot\mathbf{a}=0\}\), where \(\mathbf{a}=(a_{1},\dots,a_{n})\). For each \(\mathbf{b}=(b_{1},\ldots,b_{n})\in\mathbb{N}^{n}\), we denote by \(x^{\mathbf{b}}\) the monomial \(x_{1}^{b_{1}}\cdots x_{n}^{b_{n}}\). For each vector \(\mathbf{v}\in\mathbb{Z}^{n}\), denote by \(f_{\mathbf{v}}\) the binomial \(x^{\mathbf{v}_{+}}-x^{\mathbf{v}_{-}}\), where \(\mathbf{v}_{+}\) and \(\mathbf{v}_{-}\) are defined by

\[(\mathbf{v}_{+})_{i}=\max(\mathbf{v}_{i},0),\qquad(\mathbf{v}_{-})_{i}=\max(-\mathbf{v}_{i},0).\]

For a polynomial \(f\) in \(S\), \(f^{*}\) denotes the initial form of \(f\), namely the homogeneous component of the smallest degree of \(f\).
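To illustrate this notation, here is a small sketch (not from the paper) that builds the binomial \(f_{\mathbf{v}}=x^{\mathbf{v}_{+}}-x^{\mathbf{v}_{-}}\) from a lattice vector and extracts its initial form \(f_{\mathbf{v}}^{*}\); the helper names are hypothetical, and the test vector is taken from Example 3.2 below.

```python
import numpy as np

def binomial_parts(v, names=("x", "y", "z")):
    """Return the monomials x^{v_+} and x^{v_-} of f_v (as strings) with their degrees."""
    vp, vm = np.maximum(v, 0), np.maximum(-v, 0)
    mono = lambda e: "*".join(f"{n}^{k}" for n, k in zip(names, e) if k) or "1"
    return mono(vp), vp.sum(), mono(vm), vm.sum()

def initial_form(v):
    """f_v^*: the homogeneous component of f_v = x^{v_+} - x^{v_-} of smallest degree."""
    tp, dp, tm, dm = binomial_parts(np.asarray(v))
    if dp < dm:
        return tp
    if dm < dp:
        return "-" + tm
    return f"{tp} - {tm}"        # f_v is homogeneous, so f_v^* = f_v

# v = (3, -2, 0) satisfies v . (20, 30, 37) = 0, giving f_v = x^3 - y^2;
# the degree-2 component survives, so f_v^* = -y^2 (hence y^2 lies in I_H^*).
print(initial_form([3, -2, 0]))
```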
First, we have

**Lemma 2.1**. _We have_

\[I_{H}^{*}=(f_{\mathbf{v}}^{*}\mid\mathbf{v}\in\Lambda).\]

Proof. By definition, \(I_{H}^{*}=(f^{*}\mid f\in I_{H})\). By [8, Theorem 3.2], \(I_{H}=(f_{\mathbf{v}}\mid\mathbf{v}\in\Lambda)\). By [3, Theorem 15.28], \(I_{H}^{*}=(f^{*}\mid f\) in a Gröbner basis of \(I_{H})\). By [8, Theorem 3.6], \(I_{H}\) has a Gröbner basis consisting of binomials of the form \(f_{\mathbf{v}}\) with \(\mathbf{v}\in\Lambda\). The conclusion follows.

Now, we come back to our situation of monomial space curves. We may assume that \(n_{1},n_{2},n_{3}\) minimally generate the numerical semigroup \(H=\langle n_{1},n_{2},n_{3}\rangle\). In particular, \(\gcd(n_{1},n_{2},n_{3})=1\). Denote \(a=n_{2}-n_{1}\), \(b=n_{3}-n_{1}\), and \(d=\gcd(a,b)\). Since we are working with monomial space curves, to avoid indexing, we use the variables \(x,y,z\) instead of \(x_{1},x_{2},x_{3}\). Throughout the paper, we also use the following terminology from Herzog [6]. Let \(\mathbf{v}\in\mathbb{Z}^{3}\) be a vector. We say that

1. \(\mathbf{v}\) and \(f_{\mathbf{v}}\) are of type 1 if \(f_{\mathbf{v}}\) is of the form \(\pm(x^{a_{1}}-y^{a_{2}}z^{a_{3}})\). In this case, \(f_{\mathbf{v}}^{*}=y^{a_{2}}z^{a_{3}}\).
2. \(\mathbf{v}\) and \(f_{\mathbf{v}}\) are of type 2 if \(f_{\mathbf{v}}\) is of the form \(\pm(y^{a_{2}}-x^{a_{1}}z^{a_{3}})\).
3. \(\mathbf{v}\) and \(f_{\mathbf{v}}\) are of type 3 if \(f_{\mathbf{v}}\) is of the form \(\pm(z^{a_{3}}-x^{a_{1}}y^{a_{2}})\). In this case, \(f_{\mathbf{v}}^{*}=z^{a_{3}}\).

When it is clear from the context, we denote by \(I\) the ideal \(I_{H}\). Let

\[J=(f\mid f\text{ is homogeneous in }I)\]

and call it the homogeneous part of \(I\). The ideal \(J\) plays an important role in studying the Betti numbers of the shifted family \(I(\mathbf{a}+k)\) [19]. By [17], it also has a nice interpretation for the complete intersection property of the ideals in the shifted family \(I(\mathbf{a}+k)\).

**Lemma 2.2**. _Let \(H=\langle n_{1},n_{2},n_{3}\rangle\). Let \(J\) be the homogeneous part of \(I_{H}\). Then \(J=(y^{b/d}-x^{(b-a)/d}z^{a/d})\), where \(a=n_{2}-n_{1}\), \(b=n_{3}-n_{1}\) and \(d=\gcd(a,b)\)._

Proof. Assume that \(y^{u+v}-x^{u}z^{v}\) is a homogeneous binomial in \(I_{H}\). Then we have

\[(u+v)(n_{1}+a)=un_{1}+v(n_{1}+b).\]

Hence, \(ua=v(b-a)\). The conclusion follows.

We now recall the result of Herzog [6] describing the generators of the lattice \(\Lambda\) associated with \(H\). Let \(c_{1},c_{2},c_{3}\) be the smallest positive integers such that \(c_{1}n_{1}\in\langle n_{2},n_{3}\rangle\), \(c_{2}n_{2}\in\langle n_{1},n_{3}\rangle\) and \(c_{3}n_{3}\in\langle n_{1},n_{2}\rangle\). Hence, there exist non-negative integers \(r_{ij}\) satisfying the equations

\[c_{1}n_{1}=r_{12}n_{2}+r_{13}n_{3} \tag{2.1}\]
\[c_{2}n_{2}=r_{21}n_{1}+r_{23}n_{3} \tag{2.2}\]
\[c_{3}n_{3}=r_{31}n_{1}+r_{32}n_{2}. \tag{2.3}\]

Denote \(\mathbf{v}_{1}=(c_{1},-r_{12},-r_{13})\), \(\mathbf{v}_{2}=(-r_{21},c_{2},-r_{23})\) and \(\mathbf{v}_{3}=(-r_{31},-r_{32},c_{3})\). Herzog proved that if \(r_{ij}=0\) for some \(i,j\), then two of the vectors are negatives of each other, and \(I_{H}\) is a complete intersection. If \(r_{ij}\neq 0\) for all \(i,j\), then \(\mathbf{v}_{1}+\mathbf{v}_{2}+\mathbf{v}_{3}=0\). In either case, \(\Lambda\) is generated by two of the three vectors, called \(\mathbf{w}_{1}\) and \(\mathbf{w}_{2}\).
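As a computational aside (a brute-force sketch under our reading of Eqs. (2.1)-(2.3), not code from the paper), the integers \(c_{i}\) together with one choice of witnesses \(r_{ij}\) can be found by direct search; note that the \(r_{ij}\) need not be unique, so the pairs returned are just one valid choice.

```python
def herzog_data(n1, n2, n3):
    """For each i, find the least c_i with c_i * n_i in the semigroup
    generated by the other two generators, plus one witness (r_ij, r_ik)."""
    gens = (n1, n2, n3)
    out = []
    for i in range(3):
        nj, nk = [gens[t] for t in range(3) if t != i]
        c = 1
        while True:
            target = c * gens[i]
            witness = next(((a, b)
                            for a in range(target // nj + 1)
                            for b in range(target // nk + 1)
                            if a * nj + b * nk == target), None)
            if witness is not None:
                out.append((c, witness))
                break
            c += 1
    return out

# H = <20, 30, 37> (Example 3.2): c1 = 3 with 3*20 = 2*30, c2 = 2 with
# 2*30 = 3*20, and c3 = 10 with 10*37 = 2*20 + 11*30.
print(herzog_data(20, 30, 37))
```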
Hence, we obtain

**Lemma 2.3**. _Let \(H=\langle n_{1},n_{2},n_{3}\rangle\) and \(\Lambda\) be the associated lattice, which is generated by \(\mathbf{w}_{1}\) and \(\mathbf{w}_{2}\). Then_

\[I_{H}^{*}=(f_{\mathbf{w}_{1}}^{*},f_{\mathbf{w}_{2}}^{*},f_{a_{1}\mathbf{w}_{1}+a_{2}\mathbf{w}_{2}}^{*},f_{b_{1}\mathbf{w}_{1}-b_{2}\mathbf{w}_{2}}^{*}\mid a_{1},a_{2},b_{1},b_{2}>0,(a_{1},a_{2})=1\text{ and }(b_{1},b_{2})=1).\]

Proof. Follows from Lemma 2.1 and the fact that \(f_{\mathbf{v}}=-f_{-\mathbf{v}}\) and \(f_{k\mathbf{v}}^{*}\in(f_{\mathbf{v}}^{*})\).

Finally, we introduce the following notation, which arises frequently in the sequel. For a real number \(r\), denote by \(\lceil r\rceil\) the least integer at least \(r\) and by \(\lfloor r\rfloor\) the largest integer at most \(r\). We also define a function \(\varphi_{r}:\mathbb{N}\to\mathbb{N}\) as follows:

\[\varphi_{r}(t)=\begin{cases}rt+1&\text{ if }rt\in\mathbb{N},\\ \lceil rt\rceil&\text{ otherwise}.\end{cases}\]

Let \(\frac{a}{b}<\frac{c}{d}\) be two positive rational numbers. Let \(f\) be the smallest positive integer such that there exists an \(h\) satisfying \(\frac{a}{b}<\frac{h}{f}\leq\frac{c}{d}\). We then let \(e\) be the smallest such \(h\). We denote \(\frac{e}{f}\) by \(M(\frac{a}{b},\frac{c}{d})\). Note that once \(f\) is determined, \(e=\varphi_{r}(f)\), where \(r=a/b\). The fraction \(\frac{e}{f}\) can be found from the continued fractions of \(\frac{a}{b}\) and \(\frac{c}{d}\) and is tightly related to the Farey sequence.

## 3. Complete intersection monomial space curves

We keep the notation as in Section 2. In this section, we describe \(I_{H}^{*}\) when \(I_{H}\) is a complete intersection. There are three possibilities: either \(r_{12}=r_{32}=0\), \(r_{13}=r_{23}=0\), or \(r_{21}=r_{31}=0\).

**Lemma 3.1**. _Assume that \(r_{13}=0\). Then \(I^{*}=(y^{c_{2}},z^{c_{3}})\)._

Proof. Since \(r_{13}=0\), we know that the lattice \(\Lambda\) is generated by \(\mathbf{w}_{1}=(c_{1},-c_{2},0)\) and \(\mathbf{w}_{2}=(r_{31},r_{32},-c_{3})\). In particular, \(y^{c_{2}},z^{c_{3}}\in I^{*}\). Let \(L=(y^{c_{2}},z^{c_{3}})\). By Lemma 2.3, it suffices to prove that for any \(a_{1},a_{2}>0\) and all \(b_{1},b_{2}>0\), \(f_{\mathbf{w}}^{*}\in L\), where \(\mathbf{w}=a_{1}\mathbf{w}_{1}+a_{2}\mathbf{w}_{2}\) or \(\mathbf{w}=b_{1}\mathbf{w}_{1}-b_{2}\mathbf{w}_{2}\). Since \(a_{2},b_{2}>0\), \(w_{3}\) is a multiple of \(c_{3}\). Hence, if \(\mathbf{w}\) is of type 1 or type 3 then \(f_{\mathbf{w}}^{*}\in(z^{c_{3}})\subset L\). Now assume that \(\mathbf{w}\) is of type 2. There are three cases.

**Case 1.** \(f_{\mathbf{w}}^{*}=x^{w_{1}}z^{w_{3}}\); in this case, \(f_{\mathbf{w}}^{*}\in(z^{c_{3}})\).

**Case 2.** \(f_{\mathbf{w}}^{*}=y^{w_{2}}\). Since \(c_{2}\) is minimal such that \(c_{2}n_{2}\in\langle n_{1},n_{3}\rangle\), we have \(w_{2}\geq c_{2}\). In particular, \(f_{\mathbf{w}}^{*}\in(y^{c_{2}})\).

**Case 3.** \(f_{\mathbf{w}}\) is homogeneous. With the same reasoning as in Case 2, we have \(w_{2}\geq c_{2}\). Furthermore, \(w_{3}\) is a multiple of \(c_{3}\), so both monomials of \(f_{\mathbf{w}}\) belong to \(L\); hence \(f_{\mathbf{w}}^{*}=f_{\mathbf{w}}\in L\).

**Example 3.2**. Let \(H=\langle 20,30,37\rangle\). Then \(I_{H}=(x^{3}-y^{2},x^{2}y^{11}-z^{10})\) and \(I_{H}^{*}=(y^{2},z^{10})\).

**Remark 3.3**. The defining ideal of the tangent cone is the stabilized partial elimination ideal of the projectivization of \(I_{H}\). That is how we compute it in Macaulay2 [11].
See [10] for more information about the partial elimination ideals and their depth and regularity in comparison to the depth and regularity of the projectivization of \(I_{H}\).

Now, consider the case \(r_{21}=r_{31}=0\). By Lemma 3.1, we may assume that \(r_{13}>0\). Thus, the lattice \(\Lambda\) is generated by \(\mathbf{w}_{1}=(c_{1},-r_{12},-r_{13})\) and \(\mathbf{w}_{2}=(0,c_{2},-c_{3})\). If \(r_{13}\geq c_{3}\), then we can replace \(r_{12}\) by \(r_{12}+c_{2}\) and \(r_{13}\) by \(r_{13}-c_{3}\). Thus, we may assume that \(0<r_{13}<c_{3}\). We fix the following notation. Let \(\delta_{1}=c_{1}-(r_{12}+r_{13})\) and \(\delta_{2}=c_{2}-c_{3}\). Since \(\mathbf{w}=(-(b-a)/d,b/d,-a/d)\in\Lambda\), there exist \(\alpha\) and \(\beta\) such that \(\mathbf{w}=\alpha\mathbf{w}_{1}+\beta\mathbf{w}_{2}\). In particular, \(\alpha c_{1}=-(b-a)/d\) and \(\alpha\delta_{1}+\beta\delta_{2}=0\). Denote \(\alpha_{h}=-\alpha\) and \(\beta_{h}=\beta\). Then \(\alpha_{h},\beta_{h}>0\) and

\[\begin{cases}\alpha_{h}c_{1}&=(b-a)/d\\ \alpha_{h}r_{12}+\beta_{h}c_{2}&=b/d\\ -\alpha_{h}r_{13}+\beta_{h}c_{3}&=a/d.\end{cases} \tag{3.1}\]

First, we claim

**Lemma 3.4**. _Assume that \(r_{21}=0\). Then \(\frac{\delta_{2}}{\delta_{1}}<\frac{c_{3}}{r_{13}}\)._

Proof. By (3.1), \(-\alpha_{h}r_{13}+\beta_{h}c_{3}=a/d>0\). Hence,

\[\frac{\delta_{2}}{\delta_{1}}=\frac{\alpha_{h}}{\beta_{h}}<\frac{c_{3}}{r_{13}}.\]

The conclusion follows.

Let \(\frac{\eta}{\epsilon}=M(\frac{\delta_{2}}{\delta_{1}},\frac{c_{3}}{r_{13}})\), i.e., \(\eta\) and \(\epsilon\) are the smallest positive integers such that \(\frac{\delta_{2}}{\delta_{1}}<\frac{\eta}{\epsilon}\leq\frac{c_{3}}{r_{13}}\). We now define the following two sets. Let \(\alpha_{0}=1,\beta_{0}=0\). For each \(i\geq 0\), consider the system of inequalities

\[\begin{cases}\alpha r_{13}-\beta c_{3}&>0\\ \alpha r_{13}-\beta c_{3}&<\alpha_{i}r_{13}-\beta_{i}c_{3}\\ \beta&>\beta_{i}.\end{cases} \tag{3.2}\]

When the system (3.2) has solutions, let \(\beta_{i+1}\) be the smallest solution among \(\beta\). Then \(\alpha_{i+1}=\varphi_{r}(\beta_{i+1})\), where \(r=c_{3}/r_{13}\). Note that there can be only finitely many such \(\beta_{i}\), as the sequence \(\{\alpha_{i}r_{13}-\beta_{i}c_{3}\}\) is a strictly decreasing sequence of positive integers. Let \(t\) be the largest index of \(i\) such that \(\beta_{t}<\epsilon\), and \(A=\{(\alpha_{i},\beta_{i})\ |\ i=0,\ldots,t\}\). Since \(\alpha_{0}r_{13}-\beta_{0}c_{3}=r_{13}\), \(|A|\leq r_{13}\). For each \(\alpha,\beta\in\mathbb{N}\) such that \(\alpha r_{13}-\beta c_{3}\geq 0\), let \(p_{\alpha,\beta}=y^{\alpha r_{12}+\beta c_{2}}z^{\alpha r_{13}-\beta c_{3}}\). We have

**Lemma 3.5**. _Assume that \(\alpha^{\prime}>0\) and \(\beta^{\prime}\geq 0\) are such that \(\alpha^{\prime}r_{13}-\beta^{\prime}c_{3}\geq 0\). Let \(g=y^{\eta r_{12}+\epsilon c_{2}}\). Then \(p_{\alpha^{\prime},\beta^{\prime}}\in L=(g,p_{\alpha,\beta}\mid(\alpha,\beta)\in A)\)._

Proof. If \(\beta^{\prime}=0\) then \(p_{\alpha^{\prime},\beta^{\prime}}\) is divisible by \(p_{1,0}\). Assume that \(\beta^{\prime}\geq\epsilon\). Since \(\frac{\alpha^{\prime}}{\beta^{\prime}}\geq\frac{c_{3}}{r_{13}}\geq\frac{\eta}{\epsilon}\), we deduce that \(\alpha^{\prime}\geq\eta\). Thus \(p_{\alpha^{\prime},\beta^{\prime}}\) is divisible by \(g\). Now, assume that \(0<\beta^{\prime}<\epsilon\). Let \(i\) be the largest index such that \(\beta_{i}<\beta^{\prime}\). Let \(r=\frac{c_{3}}{r_{13}}\).
Since \(\frac{\alpha^{\prime}}{\beta^{\prime}}\geq r\) and \(\alpha_{i}=\varphi_{r}(\beta_{i})\), we deduce that \(\alpha^{\prime}\geq\alpha_{i}\). If \(\alpha^{\prime}r_{13}-\beta^{\prime}c_{3}\geq\alpha_{i}r_{13}-\beta_{i}c_{3}\) then \(p_{\alpha^{\prime},\beta^{\prime}}\) is divisible by \(p_{\alpha_{i},\beta_{i}}\). If \(\alpha^{\prime}r_{13}-\beta^{\prime}c_{3}<\alpha_{i}r_{13}-\beta_{i}c_{3}\) then \((\alpha^{\prime},\beta^{\prime})\) is a solution to the system (3.2). By our assumption, we must have \(\beta^{\prime}=\beta_{i+1}\). Hence \(\alpha^{\prime}\geq\alpha_{i+1}\) and \(p_{\alpha^{\prime},\beta^{\prime}}\) is divisible by \(p_{\alpha_{i+1},\beta_{i+1}}\). The conclusion follows.

Let \(\gamma_{0}=0,\sigma_{0}=1\). For each \(i\geq 0\), consider the system of inequalities

\[\begin{cases}-\gamma\delta_{1}+\sigma\delta_{2}&>0\\ -\gamma r_{13}+\sigma c_{3}&<-\gamma_{i}r_{13}+\sigma_{i}c_{3}\\ \gamma&>\gamma_{i}.\end{cases} \tag{3.3}\]

When the system (3.3) has solutions, let \(\gamma_{i+1}\) be the smallest solution among \(\gamma\). Then \(\sigma_{i+1}=\varphi_{r}(\gamma_{i+1})\), where \(r=\delta_{1}/\delta_{2}\). There can be only finitely many such \(\gamma_{i}\), as the sequence \(\{-\gamma_{i}r_{13}+\sigma_{i}c_{3}\}\) is a strictly decreasing sequence of positive integers. Let \(u\) be the largest index of \(i\), and \(B=\{(\gamma_{i},\sigma_{i})\mid i=0,\ldots,u\}\). Since \(-\gamma_{0}r_{13}+\sigma_{0}c_{3}=c_{3}\), \(|B|\leq c_{3}\). For each \(\gamma,\sigma\in\mathbb{N}\) such that \(-\gamma\delta_{1}+\sigma\delta_{2}>0\), let \(q_{\gamma,\sigma}=x^{\gamma c_{1}}z^{-\gamma r_{13}+\sigma c_{3}}\). We have

**Lemma 3.6**. _Assume that \(\gamma^{\prime},\sigma^{\prime}\in\mathbb{N}\) are such that \(-\gamma^{\prime}\delta_{1}+\sigma^{\prime}\delta_{2}>0\). Then \(q_{\gamma^{\prime},\sigma^{\prime}}\in L=(q_{\gamma,\sigma}\mid(\gamma,\sigma)\in B)\)._

Proof. If \(\gamma^{\prime}=0\) then \(q_{\gamma^{\prime},\sigma^{\prime}}\) is divisible by \(q_{0,1}\). Thus, we may assume that \(\gamma^{\prime}>0\). Let \(i\) be the largest index such that \(\gamma_{i}<\gamma^{\prime}\). Since \(\sigma_{i}=\varphi_{r}(\gamma_{i})\) and \(\frac{\sigma^{\prime}}{\gamma^{\prime}}>r\), where \(r=\delta_{1}/\delta_{2}\), we deduce that \(\sigma^{\prime}\geq\sigma_{i}\). If \(-\gamma^{\prime}r_{13}+\sigma^{\prime}c_{3}\geq-\gamma_{i}r_{13}+\sigma_{i}c_{3}\) then \(q_{\gamma^{\prime},\sigma^{\prime}}\) is divisible by \(q_{\gamma_{i},\sigma_{i}}\). If \(-\gamma^{\prime}r_{13}+\sigma^{\prime}c_{3}<-\gamma_{i}r_{13}+\sigma_{i}c_{3}\) then \((\gamma^{\prime},\sigma^{\prime})\) is a solution to the system (3.3). By our assumption, we must have \(\gamma^{\prime}=\gamma_{i+1}\). Hence \(\sigma^{\prime}\geq\sigma_{i+1}\) and \(q_{\gamma^{\prime},\sigma^{\prime}}\) is divisible by \(q_{\gamma_{i+1},\sigma_{i+1}}\). The conclusion follows.

**Lemma 3.7**. _Assume that \(r_{21}=0\). Let \(r_{12},r_{13}\) be such that \(c_{1}n_{1}=r_{12}n_{2}+r_{13}n_{3}\) with \(0<r_{13}<c_{3}\). Let \(g=y^{\eta r_{12}+\epsilon c_{2}}\) and \(h=y^{b/d}-x^{(b-a)/d}z^{a/d}\). Then_

\[I^{*}=(g,h)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A)+(q_{\gamma,\sigma}\mid(\gamma,\sigma)\in B). \tag{3.4}\]

_Furthermore,_

1. _If_ \(\beta_{h}>\epsilon\) _then_ \(I^{*}=(g)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A)+(q_{\gamma,\sigma}\mid(\gamma,\sigma)\in B)\)_._
2. _If_ \(\beta_{h}\leq\epsilon\) _then_ \(I^{*}=(h)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A,\beta<\beta_{h})+(q_{\gamma,\sigma}\mid(\gamma,\sigma)\in B)\)_._
3.
_The minimal generators of_ \(I^{*}\) _form a Gröbner basis of_ \(I^{*}\) _with respect to the reverse lexicographic order._ 4. \(\mu(I^{*})\leq c_{3}+2\)_._

Proof.: Recall that \({\bf w}_{1}=(c_{1},-r_{12},-r_{13})\) and \({\bf w}_{2}=(0,c_{2},-c_{3})\). Let \[L=(g,h)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A)+(q_{\gamma,\sigma}\mid(\gamma, \sigma)\in B).\] For clarity, we divide the proof into several steps.

**Step 1.**\(L\subseteq I^{*}\). Indeed, by the choice of \(\eta\) and \(\epsilon\) we have \(-\eta r_{13}+\epsilon c_{3}\geq 0\) and \(\eta c_{1}+(-\eta r_{13}+\epsilon c_{3})>\eta r_{12}+\epsilon c_{2}\). Hence, \(f^{*}_{\eta{\bf w}_{1}-\epsilon{\bf w}_{2}}=y^{\eta r_{12}+\epsilon c_{2}}=g \in I^{*}\). From the definition of \(A\) and \(B\), we see the following. For \((\alpha,\beta)\in A\), \(f^{*}_{\alpha{\bf w}_{1}-\beta{\bf w}_{2}}=y^{\alpha r_{12}+\beta c_{2}}z^{ \alpha r_{13}-\beta c_{3}}=p_{\alpha,\beta}\). For \((\gamma,\sigma)\in B\), \(f^{*}_{\gamma{\bf w}_{1}-\sigma{\bf w}_{2}}=x^{\gamma c_{1}}z^{-\gamma r_{13} +\sigma c_{3}}=q_{\gamma,\sigma}\).

**Step 2.**\(I^{*}\subseteq L\). First, note that \((1,0)\in A\) and \((0,1)\in B\), so \(y^{r_{12}}z^{r_{13}},z^{c_{3}}\in I^{*}\). Now, we prove that for any \(\alpha,\beta>0\), \(f^{*}_{\alpha{\bf w}_{1}+\beta{\bf w}_{2}}\in(z^{c_{3}})\). Indeed, we have \(\alpha{\bf w}_{1}+\beta{\bf w}_{2}=(\alpha c_{1},-\alpha r_{12}+\beta c_{2},- \alpha r_{13}-\beta c_{3})\). Thus, it is either of type 1 or type 3, hence \(f^{*}_{\alpha{\bf w}_{1}+\beta{\bf w}_{2}}\) is divisible by \(z^{\alpha r_{13}+\beta c_{3}}\). By Lemma 2.3, it suffices to prove that for any \(\alpha,\beta>0\), \(f^{*}_{\bf w}\in L\) where \({\bf w}=\alpha{\bf w}_{1}-\beta{\bf w}_{2}=(\alpha c_{1},-\alpha r_{12}-\beta c _{2},-\alpha r_{13}+\beta c_{3})\). There are two cases. **Case 1.**\(-\alpha r_{13}+\beta c_{3}<0\). We have \(f^{*}_{\bf w}=p_{\alpha,\beta}\in L\) by Lemma 3.5. **Case 2.**\(-\alpha r_{13}+\beta c_{3}\geq 0\). There are three subcases. Case 2.a. \(\alpha\delta_{1}>\beta\delta_{2}\). In particular, \(\frac{\delta_{2}}{\delta_{1}}<\frac{\alpha}{\beta}\leq\frac{c_{3}}{r_{13}}\). By definition of \(\eta,\epsilon\) we have \(\beta\geq\epsilon\) and \(\alpha\geq\eta\). Hence, \(f^{*}_{\bf w}\in(g)\). Case 2.b. \(\alpha\delta_{1}<\beta\delta_{2}\). Then \(f^{*}_{\bf w}=x^{\alpha c_{1}}z^{-\alpha r_{13}+\beta c_{3}}=q_{\alpha,\beta} \in L\) by Lemma 3.6. Case 2.c. \(\alpha\delta_{1}=\beta\delta_{2}\). Then \(f^{*}_{\bf w}\in(h)\).

**Step 3.** Assume that \(\beta_{h}>\epsilon\). Then \(h\in L=(g)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A)+(q_{\gamma,\sigma}\mid( \gamma,\sigma)\in B)\). Since \(\beta_{h}>\epsilon\) and \(\eta=\varphi_{r}(\epsilon)\) where \(r=\delta_{2}/\delta_{1}=\alpha_{h}/\beta_{h}\), we deduce that \(\alpha_{h}\geq\eta\). Thus, it suffices to prove that \(q_{\alpha_{h},\beta_{h}}\in L\). Let \(\beta_{h}=k\epsilon+\beta\) with \(0\leq\beta<\epsilon\). Then we also have \(\alpha_{h}\geq k\eta\). If \(\beta=0\) then \(k>1\); let \(\gamma=\alpha_{h}-(k-1)\eta\) and \(\sigma=\beta_{h}-(k-1)\epsilon\). If \(\beta>0\), let \(\gamma=\alpha_{h}-k\eta\) and \(\sigma=\beta_{h}-k\epsilon\). Then \(\alpha_{h}/\beta_{h}>\gamma/\sigma\). Furthermore, \[(-\alpha_{h}r_{13}+\beta_{h}c_{3})-(-\gamma r_{13}+\sigma c_{3})=\ell(\epsilon c _{3}-\eta r_{13})\geq 0,\] where \(\ell=k-1\) if \(\beta=0\) and \(\ell=k\) if \(\beta>0\). Thus, \(q_{\alpha_{h},\beta_{h}}\) is divisible by \(q_{\gamma,\sigma}\), which belongs to \(L\) by Lemma 3.6.
**Step 4.** Assume that \(\beta_{h}\leq\epsilon\). Let \(L=(h)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A,\beta<\beta_{h})+(q_{\gamma, \sigma}\mid(\gamma,\sigma)\in B)\). Then \(g\) and the \(p_{\alpha,\beta}\) with \((\alpha,\beta)\in A\), \(\beta\geq\beta_{h}\) belong to \(L\).

For \(g\): Let \(\epsilon=k\beta_{h}+\beta\) where \(0\leq\beta<\beta_{h}\). First, assume that \(\beta=0\). Since \(\eta=\varphi_{r}(\epsilon)\) where \(r=\alpha_{h}/\beta_{h}\), we deduce that \(\eta=k\alpha_{h}+1\). Thus, \[\epsilon c_{3}=k\beta_{h}c_{3}\geq\eta r_{13}=(k\alpha_{h}+1)r_{13}.\] Hence, \[g-y^{(\eta-k\alpha_{h})r_{12}}f_{k(-\alpha_{h}{\bf w}_{1}+\beta_{h}{\bf w}_{2} )}=y^{(\eta-k\alpha_{h})r_{12}}x^{k\alpha_{h}c_{1}}z^{-k\alpha_{h}r_{13}+k\beta _{h}c_{3}}\in(y^{r_{12}}z^{r_{13}}).\] Now, assume that \(\beta>0\), and let \(\alpha=\eta-k\alpha_{h}\). Then \(\alpha/\beta>\alpha_{h}/\beta_{h}\). By the choice of \(\eta\) and \(\epsilon\) we must have \(\alpha/\beta>c_{3}/r_{13}\). Thus, \[g-y^{\alpha r_{12}+\beta c_{2}}f_{k(-\alpha_{h}{\bf w}_{1}+\beta_{h}{\bf w}_{2 })}=x^{k\alpha_{h}c_{1}}y^{\alpha r_{12}+\beta c_{2}}z^{-k\alpha_{h}r_{13}+k \beta_{h}c_{3}}.\] Furthermore, \[(-k\alpha_{h}r_{13}+k\beta_{h}c_{3})-(\alpha r_{13}-\beta c_{3})=\epsilon c_{3 }-\eta r_{13}\geq 0.\] Thus \(g-y^{\alpha r_{12}+\beta c_{2}}f_{k(-\alpha_{h}{\bf w}_{1}+\beta_{h}{\bf w}_{2 })}\) is divisible by \(p_{\alpha,\beta}\), which belongs to \(L\) by Lemma 3.5.

For \(p_{\alpha,\beta}\) with \((\alpha,\beta)\in A\) and \(\beta\geq\beta_{h}\), we proceed similarly to the previous case. Let \(\beta=k\beta_{h}+\beta^{\prime}\) with \(0\leq\beta^{\prime}<\beta_{h}\). Since \(\frac{\alpha}{\beta}>\frac{\alpha_{h}}{\beta_{h}}\), \(\alpha>k\alpha_{h}\). Let \(\alpha^{\prime}=\alpha-k\alpha_{h}\). There are two cases. **Case 1.**\(\beta^{\prime}=0\). We have \[p_{\alpha,\beta}-y^{\alpha^{\prime}r_{12}}z^{\alpha r_{13}-\beta c_{3}}f_{k(- \alpha_{h}{\bf w}_{1}+\beta_{h}{\bf w}_{2})}=x^{k\alpha_{h}c_{1}}y^{\alpha^{ \prime}r_{12}}z^{\alpha^{\prime}r_{13}}\in(y^{r_{12}}z^{r_{13}}).\] **Case 2.**\(\beta^{\prime}>0\). Then \(\alpha^{\prime}/\beta^{\prime}>\alpha_{h}/\beta_{h}\). By the definition of \(\epsilon\), we must have \(\alpha^{\prime}/\beta^{\prime}>c_{3}/r_{13}.\) Hence, \[p_{\alpha,\beta}-y^{\alpha^{\prime}r_{12}+\beta^{\prime}c_{2}}z^{\alpha r_{13 }-\beta c_{3}}f_{k(-\alpha_{h}{\bf w}_{1}+\beta_{h}{\bf w}_{2})}=x^{k\alpha_{h }c_{1}}y^{\alpha^{\prime}r_{12}+\beta^{\prime}c_{2}}z^{\alpha^{\prime}r_{13}- \beta^{\prime}c_{3}}\] is divisible by \(p_{\alpha^{\prime},\beta^{\prime}}\), which belongs to \(L\) by Lemma 3.5.

**Step 5.** Assume that \(\beta_{h}\leq\epsilon\). Then \(\{h\}\cup\{p_{\alpha,\beta}\mid(\alpha,\beta)\in A,\beta<\beta_{h}\}\cup\{q_{ \gamma,\sigma}\mid(\gamma,\sigma)\in B\}\) form a Gröbner basis of \(I^{*}\) with respect to the reverse lexicographic order. By Buchberger's criterion [3, Theorem 15.8], it suffices to prove that for all \((\alpha,\beta)\in A\) with \(\beta<\beta_{h}\), the \(S\)-pair of \(h\) and \(p_{\alpha,\beta}\) reduces to \(0\). First, we claim that \(\alpha\leq\alpha_{h}\). Assume that \(\alpha>\alpha_{h}\). Since \(\alpha=\varphi_{r}(\beta)\) where \(r=\frac{c_{3}}{r_{13}}\), we deduce that \[\frac{\alpha_{h}}{\beta_{h}}<\frac{\alpha_{h}}{\beta}\leq\frac{\alpha-1}{\beta }<\frac{c_{3}}{r_{13}}.\] Thus, \(\beta\geq\epsilon\). But that is a contradiction as \(\beta<\beta_{h}\leq\epsilon\). We have \[S(p_{\alpha,\beta},h)=x^{\alpha_{h}c_{1}}z^{-(\alpha_{h}-\alpha)r_{13}+(\beta _{h}-\beta)c_{3}}. \tag{3.5}\]
In particular, \(S(p_{\alpha,\beta},h)\) is divisible by \(q_{\alpha_{h}-\alpha,\beta_{h}-\beta}\). Furthermore, \((\alpha_{h}-\alpha)/(\beta_{h}-\beta)<\alpha_{h}/\beta_{h}\); thus, \(q_{\alpha_{h}-\alpha,\beta_{h}-\beta}\) is divisible by \(q_{\gamma_{i},\sigma_{i}}\) for some \((\gamma_{i},\sigma_{i})\in B\) by Lemma 3.6. The conclusion follows.

**Step 6.**\(\mu(I^{*})\leq c_{3}+2\). Let \(c_{3}=kr_{13}+s_{13}\) with \(0\leq s_{13}<r_{13}\). If \(s_{13}=0\), then \(\alpha r_{13}-\beta c_{3}\) is divisible by \(r_{13}\). Thus, the system (3.2) has no solution for \(i\geq 0\). In other words, \(|A|=1\). Furthermore, by Step 3 and Step 4, \(h\) and \(g\) cannot both be minimal. Hence, \(\mu(I^{*})\leq 1+1+c_{3}=c_{3}+2\). Now assume that \(s_{13}>0\). If \(\frac{\delta_{2}}{\delta_{1}}\leq 1\), then the system (3.3) has no solutions. In particular, \(|B|=1\). Thus \(\mu(I^{*})\leq 1+|A|+1\leq 1+r_{13}+1\leq c_{3}+1\). If \(\frac{\delta_{2}}{\delta_{1}}>1\), then \((1,1)\in B\) and \(q_{1,1}=x^{c_{1}}z^{c_{3}-r_{13}}\). Hence, \(|B|\leq c_{3}-r_{13}+1\). Furthermore, \((k+1,1)\in A\), and \(p_{k+1,1}=y^{(k+1)r_{12}+c_{2}}z^{r_{13}-s_{13}}\). Hence \(|A|\leq r_{13}-s_{13}+1\). Thus, \[\mu(I^{*})\leq 1+|A|+|B|\leq 1+(c_{3}-r_{13}+1)+(r_{13}-s_{13}+1)\leq c_{3}+2.\] That concludes the proof of the lemma.

**Example 3.8**.: Let \(H=\langle 332,345,450\rangle\). Then \(I_{H}=(x^{15}-y^{4}z^{8},y^{30}-z^{23})\). We have \(\delta_{1}=3\), \(\delta_{2}=7\), \(c_{3}=23\), \(r_{13}=8\). The smallest \(\eta,\epsilon\) such that \(\frac{\delta_{2}}{\delta_{1}}<\frac{\eta}{\epsilon}\leq\frac{c_{3}}{r_{13}}\) are \(5,2\). We have \(I_{H}^{*}=(z^{23},y^{4}z^{8},y^{42}z,y^{80},x^{15}z^{15},x^{30}z^{7})\).

**Lemma 3.9**.: _Assume that \(r_{12}=0\) and \(c_{2}<r_{21}+r_{23}\). Then \(I^{*}=(y^{c_{2}},z^{c_{3}})\)._

Proof.: Clearly, by assumption, \(y^{c_{2}},z^{c_{3}}\in I^{*}\). The lattice \(\Lambda\) is generated by \(\mathbf{w}_{1}=(c_{1},0,-c_{3})\) and \(\mathbf{w}_{2}=(-r_{21},c_{2},-r_{23})\). By Lemma 2.1, it suffices to prove that \(f_{\mathbf{w}}^{*}\in J=(y^{c_{2}},z^{c_{3}})\) for all \(\mathbf{w}\in\Lambda\). By Lemma 2.3, there are two cases. **Case 1.**\(\mathbf{w}=\alpha\mathbf{w}_{1}+\beta\mathbf{w}_{2}=(\alpha c_{1}-\beta r_{21},\beta c_{2},-\alpha c_{3}-\beta r_{23})\). Since \(y^{\beta c_{2}},z^{\alpha c_{3}+\beta r_{23}}\in J\), \(f_{\mathbf{w}}^{*}\in J\). **Case 2.**\(\mathbf{w}=\alpha\mathbf{w}_{1}-\beta\mathbf{w}_{2}=(\alpha c_{1}+\beta r_{21},-\beta c_{2},-\alpha c_{3}+\beta r_{23})\). If \(-\alpha c_{3}+\beta r_{23}<0\), then \(f_{\mathbf{w}}^{*}\) is divisible by \(y^{c_{2}}\). If \(-\alpha c_{3}+\beta r_{23}>0\) then \(\alpha c_{1}+\beta r_{21}-\alpha c_{3}+\beta r_{23}>\beta c_{2}\), hence \(f_{\mathbf{w}}^{*}=y^{\beta c_{2}}\in J\). The conclusion follows.

**Example 3.10**.: Let \(H=\langle 20,23,30\rangle\). Then \(I_{H}=(x^{3}-z^{2},y^{10}-x^{10}z)\). Hence, \(I_{H}^{*}=(y^{10},z^{2})\).
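For concreteness, the constants \(c_{i}\) and \(r_{ij}\) used throughout can be recovered by brute force. The following is a minimal Python sketch, ours rather than the paper's (the helper name `decompose` is hypothetical), which reproduces the data of Example 3.10:

```python
# A brute-force helper (ours, not from the paper): the smallest c >= 1 with
# c*n in <m1, m2>, together with the first representation c*n = r1*m1 + r2*m2.
def decompose(n, m1, m2):
    c = 1
    while True:
        for r1 in range(c * n // m1 + 1):
            rem = c * n - r1 * m1
            if rem % m2 == 0:
                return c, r1, rem // m2
        c += 1

n1, n2, n3 = 20, 23, 30                   # Example 3.10
c1, r12, r13 = decompose(n1, n2, n3)      # (3, 0, 2):  3*20 = 2*30
c3, r31, r32 = decompose(n3, n1, n2)      # (2, 3, 0):  2*30 = 3*20
c2, r21, r23 = decompose(n2, n1, n3)      # (10, 1, 7): 10*23 = 1*20 + 7*30
while r23 >= c3:                          # normalization used in the text,
    r21, r23 = r21 + c1, r23 - c3         # valid here since c1*n1 = c3*n3
assert c2 * n2 == r21 * n1 + r23 * n3     # 10*23 = 10*20 + 1*30
# r12 = 0 and c2 = 10 < r21 + r23 = 11, so Lemma 3.9 gives
# I* = (y^c2, z^c3) = (y^10, z^2), matching Example 3.10.
```

Note that a representation \(c_{2}n_{2}=r_{21}n_{1}+r_{23}n_{3}\) is not unique; the normalization loop brings it to the form with \(0\leq r_{23}<c_{3}\) assumed in the text.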
**Lemma 3.11**.: _Assume that \(r_{12}=0\) and \(c_{2}=r_{21}+r_{23}\). Then \(I^{*}=(y^{c_{2}}-x^{r_{21}}z^{r_{23}},z^{c_{3}})\)._

Proof.: The lattice \(\Lambda\) is generated by \(\mathbf{w}_{1}=(c_{1},0,-c_{3})\) and \(\mathbf{w}_{2}=(-r_{21},c_{2},-r_{23})\). First, we have \(f_{\mathbf{w}_{1}}^{*}\) and \(f_{\mathbf{w}_{2}}=f_{\mathbf{w}_{2}}^{*}\in I^{*}\). Let \(J=(y^{c_{2}}-x^{r_{21}}z^{r_{23}},z^{c_{3}})\). We will now prove that \(I^{*}\subseteq J\). By Lemma 2.3, there are two cases. **Case 1.**\(\mathbf{w}=\alpha\mathbf{w}_{1}+\beta\mathbf{w}_{2}=(\alpha c_{1}-\beta r_{21},\beta c_{2},-\alpha c_{3}-\beta r_{23})\). There are two subcases. Case 1.a. \(\alpha c_{1}-\beta r_{21}<0\). Then we have \(\beta r_{21}-\alpha c_{1}+\alpha c_{3}+\beta r_{23}<\beta c_{2}\). Hence, \(f_{\mathbf{w}}^{*}=x^{\beta r_{21}-\alpha c_{1}}z^{\alpha c_{3}+\beta r_{23} }\in(z^{c_{3}})\). Case 1.b. \(\alpha c_{1}-\beta r_{21}>0\). Then \(f_{\mathbf{w}}^{*}=z^{\alpha c_{3}+\beta r_{23}}\in(z^{c_{3}})\). **Case 2.**\(\mathbf{w}=\alpha\mathbf{w}_{1}-\beta\mathbf{w}_{2}=(\alpha c_{1}+\beta r_{21},-\beta c_{2},-\alpha c_{3}+\beta r_{23}).\) There are two subcases. Case 2.a. \(-\alpha c_{3}+\beta r_{23}\geq 0\). Since \(c_{2}=r_{21}+r_{23}\), we have \(f_{\mathbf{w}}^{*}=y^{\beta c_{2}}\). Let \(c_{3}=kr_{23}+s_{23}\) with \(0\leq s_{23}<r_{23}\). We have \[y^{\beta c_{2}}-f_{\beta\mathbf{w}_{2}}=x^{\beta r_{21}}z^{\beta r_{23}}.\] Since \(\beta r_{23}\geq\alpha c_{3}\geq c_{3}\), we deduce that \(f_{\mathbf{w}}^{*}\in J\). Case 2.b. \(-\alpha c_{3}+\beta r_{23}<0\). Then \(f_{\mathbf{w}}^{*}=y^{\beta c_{2}}z^{\alpha c_{3}-\beta r_{23}}\). We have \[f_{\mathbf{w}}^{*}-z^{\alpha c_{3}-\beta r_{23}}f_{\beta\mathbf{w}_{2}}=x^{ \beta r_{21}}z^{\alpha c_{3}}\in(z^{c_{3}}).\] The conclusion follows.

**Example 3.12**.: This is the typical case of a shifted semigroup that is a complete intersection with a large enough shift. For example, consider the example in the previous case shifted by \(140\), namely \(H=\langle 160,163,170\rangle\). Then \(I=(x^{17}-z^{16},y^{10}-x^{7}z^{3})\) and \(I^{*}=(z^{16},y^{10}-x^{7}z^{3})\).

Finally, consider the case \(r_{12}=r_{32}=0\) and \(c_{2}>r_{21}+r_{23}\). From the previous cases, we may assume that \(r_{21},r_{23}>0\). If \(r_{23}\geq c_{3}\), then we can replace \(r_{21}\) by \(r_{21}+c_{1}\) and \(r_{23}\) by \(r_{23}-c_{3}\). Thus, we may assume that \(0<r_{23}<c_{3}\). Then, the lattice \(\Lambda\) is generated by \(\mathbf{w}_{1}=(c_{1},0,-c_{3})\) and \(\mathbf{w}_{2}=(-r_{21},c_{2},-r_{23})\). We fix the following notation. Let \(\delta_{1}=c_{1}-c_{3}\) and \(\delta_{2}=c_{2}-r_{21}-r_{23}\). Since \(\mathbf{w}=(-(b-a)/d,b/d,-a/d)\in\Lambda\), there exist \(\alpha\) and \(\beta\) such that \(\mathbf{w}=\alpha\mathbf{w}_{1}+\beta\mathbf{w}_{2}\). In particular, \(\beta c_{2}=b/d\) and \(\alpha\delta_{1}+\beta\delta_{2}=0\). Denote \(\alpha_{h}=-\alpha\) and \(\beta_{h}=\beta\). Then we have \[\begin{cases}\alpha_{h}c_{1}+\beta_{h}r_{21}&=(b-a)/d\\ \beta_{h}c_{2}&=b/d\\ -\alpha_{h}c_{3}+\beta_{h}r_{23}&=a/d.\end{cases} \tag{3.6}\]

First, we claim

**Lemma 3.13**.: _Assume that \(r_{12}=0\) and \(c_{2}>r_{21}+r_{23}\). Then \(\frac{\delta_{2}}{\delta_{1}}<\frac{r_{23}}{c_{3}}\)._

Proof.: By (3.6), \(\beta_{h}r_{23}-\alpha_{h}c_{3}=a/d>0\). Hence, \[\frac{\delta_{2}}{\delta_{1}}=\frac{\alpha_{h}}{\beta_{h}}<\frac{r_{23}}{c_{3 }}.\] The conclusion follows.

Let \(\frac{\eta}{\epsilon}=M(\frac{\delta_{2}}{\delta_{1}},\frac{r_{23}}{c_{3}})\), i.e., \(\eta\) and \(\epsilon\) are the smallest positive integers such that \(\frac{\delta_{2}}{\delta_{1}}<\frac{\eta}{\epsilon}\leq\frac{r_{23}}{c_{3}}.\) We now define the following two sets. Let \(\alpha_{0}=1,\beta_{0}=0\). For each \(i\geq 0\), consider the system of inequalities \[\begin{cases}\alpha c_{3}-\beta r_{23}&>0\\ \alpha c_{3}-\beta r_{23}&<\alpha_{i}c_{3}-\beta_{i}r_{23}\\ \beta&>\beta_{i}.\end{cases} \tag{3.7}\] When the system (3.7) has solutions, let \(\beta_{i+1}\) be the smallest solution among \(\beta\).
Then \(\alpha_{i+1}=\varphi_{r}(\beta_{i+1})\) where \(r=r_{23}/c_{3}\). There can be only finitely many such \(\beta_{i}\) as the sequence \(\{\alpha_{i}c_{3}-\beta_{i}r_{23}\}\) is a strictly decreasing sequence of positive integers. Let \(t\) be the largest index \(i\) such that \(\beta_{t}<\epsilon\) and \(A=\{(\alpha_{i},\beta_{i})\mid i=0,\dots,t\}\). Since \(\alpha_{0}c_{3}-\beta_{0}r_{23}=c_{3}\), \(|A|\leq c_{3}\). For each \(\alpha,\beta\in\mathbb{N}\) such that \(\alpha c_{3}-\beta r_{23}\geq 0\), let \(p_{\alpha,\beta}=y^{\beta c_{2}}z^{\alpha c_{3}-\beta r_{23}}\). We have

**Lemma 3.14**.: _Let \(\alpha^{\prime}>0\) and \(\beta^{\prime}\geq 0\) be such that \(\alpha^{\prime}c_{3}-\beta^{\prime}r_{23}\geq 0\). Let \(g=y^{\epsilon c_{2}}\). Then \(p_{\alpha^{\prime},\beta^{\prime}}\in L=(g,p_{\alpha,\beta}\mid(\alpha,\beta) \in A)\)._

Proof.: The proof is similar to that of Lemma 3.5.

Let \(\gamma_{0}=0\), \(\sigma_{0}=1\). For each \(i\geq 0\), consider the system of inequalities \[\begin{cases}-\gamma\delta_{1}+\sigma\delta_{2}&>0\\ -\gamma c_{3}+\sigma r_{23}&<-\gamma_{i}c_{3}+\sigma_{i}r_{23}\\ \gamma&>\gamma_{i}.\end{cases} \tag{3.8}\] When the system (3.8) has solutions, let \(\gamma_{i+1}\) be the smallest solution among \(\gamma\). Then \(\sigma_{i+1}=\varphi_{r}(\gamma_{i+1})\) where \(r=\delta_{1}/\delta_{2}\). There can be only finitely many such \(\gamma_{i}\) as the sequence \(\{-\gamma_{i}c_{3}+\sigma_{i}r_{23}\}\) is a strictly decreasing sequence of positive integers. Let \(u\) be the largest index \(i\) and \(B=\{(\gamma_{i},\sigma_{i})\mid i=0,\ldots,u\}\). Since \(-\gamma_{0}c_{3}+\sigma_{0}r_{23}=r_{23}\), \(|B|\leq r_{23}\). For each \(\gamma,\sigma\geq 0\) such that \(-\gamma\delta_{1}+\sigma\delta_{2}>0\), let \(q_{\gamma,\sigma}=x^{\gamma c_{1}+\sigma r_{21}}z^{-\gamma c_{3}+\sigma r_{23}}\). We have

**Lemma 3.15**.: _Let \(\gamma^{\prime},\sigma^{\prime}\) be such that \(-\gamma^{\prime}\delta_{1}+\sigma^{\prime}\delta_{2}>0\). Then \(q_{\gamma^{\prime},\sigma^{\prime}}\in L=(q_{\gamma,\sigma}\mid(\gamma,\sigma )\in B)\)._

Proof.: The proof is similar to that of Lemma 3.6.

**Lemma 3.16**.: _Assume that \(r_{12}=0\). Let \(r_{21},r_{23}\) with \(0<r_{23}<c_{3}\) be such that \(c_{2}n_{2}=r_{21}n_{1}+r_{23}n_{3}\). Assume that \(c_{2}>r_{21}+r_{23}\). Let \(g=y^{\epsilon c_{2}}\) and \(h=y^{b/d}-x^{(b-a)/d}z^{a/d}\). Then_ \[I^{*}=(g,h)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A)+(q_{\gamma,\sigma}\mid( \gamma,\sigma)\in B). \tag{3.9}\] _Furthermore,_ 1. _If_ \(\beta_{h}>\epsilon\)_, then_ \(I^{*}=(g)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A)+(q_{\gamma,\sigma}\mid( \gamma,\sigma)\in B)\)_._ 2. _If_ \(\beta_{h}\leq\epsilon\) _then_ \(I^{*}=(h)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A,\beta<\beta_{h})+(q_{\gamma,\sigma}\mid(\gamma,\sigma)\in B)\)_._ 3. _The minimal generators of_ \(I^{*}\) _form a Gröbner basis of_ \(I^{*}\) _with respect to the reverse lexicographic order._ 4. \(\mu(I^{*})\leq c_{3}+2\)_._

Proof.: The lattice \(\Lambda\) is generated by \(\mathbf{w}_{1}=(c_{1},0,-c_{3})\) and \(\mathbf{w}_{2}=(-r_{21},c_{2},-r_{23})\). Let \[L=(g,h)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A)+(q_{\gamma,\sigma}\mid( \gamma,\sigma)\in B).\] For clarity, we divide the proof into several steps. The first two steps can be proved in similar manners as those of Lemma 3.7.

**Step 1.**\(L\subseteq I^{*}\).

**Step 2.**\(I^{*}\subseteq L\).

**Step 3.** Assume that \(\beta_{h}>\epsilon\).
Then \(h\in L=(g)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A)+(q_{\gamma,\sigma}\mid( \gamma,\sigma)\in B)\). Since \(\beta_{h}>\epsilon\) and \(g=y^{\epsilon c_{2}}\), it suffices to prove that \(q_{\alpha_{h},\beta_{h}}\in L\). Let \(\beta_{h}=k\epsilon+\beta\) with \(0\leq\beta<\epsilon\). If \(\beta=0\), then \(k\geq 2\); let \(\gamma=\alpha_{h}-(k-1)\eta\) and \(\sigma=\beta_{h}-(k-1)\epsilon\). If \(\beta>0\), let \(\gamma=\alpha_{h}-k\eta\) and \(\sigma=\beta_{h}-k\epsilon\). Then we have \(\frac{\alpha_{h}}{\beta_{h}}>\frac{\gamma}{\sigma}\). Furthermore, \[(-\alpha_{h}c_{3}+\beta_{h}r_{23})-(-\gamma c_{3}+\sigma r_{23})=k(-\eta c_{3} +\epsilon r_{23})\geq 0.\] Hence, \(q_{\alpha_{h},\beta_{h}}\) is divisible by \(q_{\gamma,\sigma}\), which belongs to \(L\) by Lemma 3.15.

**Step 4.** Assume that \(\beta_{h}\leq\epsilon\). Let \(L=(h)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A,\beta<\beta_{h})+(q_{\gamma, \sigma}\mid(\gamma,\sigma)\in B)\). Then \(g\) and the \(p_{\alpha,\beta}\) with \((\alpha,\beta)\in A\) and \(\beta\geq\beta_{h}\) belong to \(L\).

For \(g\): Let \(\epsilon=k\beta_{h}+\beta\) where \(0\leq\beta<\beta_{h}\). If \(\beta=0\), then we have \(\eta=k\alpha_{h}+1\). Furthermore, \(\frac{\eta}{\epsilon}\leq\frac{r_{23}}{c_{3}}\). Hence, \(k\beta_{h}r_{23}\geq(k\alpha_{h}+1)c_{3}\). Thus \[g-f_{-k\alpha_{h}\mathbf{w}_{1}+k\beta_{h}\mathbf{w}_{2}}=x^{k\alpha_{h}c_{1}+k \beta_{h}r_{21}}z^{-k\alpha_{h}c_{3}+k\beta_{h}r_{23}}\in(z^{c_{3}}).\] Now assume that \(\beta>0\). Then we also have \(\eta\geq k\alpha_{h}+1\). Let \(\alpha=\eta-k\alpha_{h}\) and \(\beta=\epsilon-k\beta_{h}\). Then, \(\alpha/\beta>\alpha_{h}/\beta_{h}\). By definition of \(\epsilon\) we must have \(\alpha/\beta>r_{23}/c_{3}\). Now, we have \[g-y^{\beta c_{2}}f_{-k\alpha_{h}\mathbf{w}_{1}+k\beta_{h}\mathbf{w}_{2}}=x^{k \alpha_{h}c_{1}+k\beta_{h}r_{21}}y^{\beta c_{2}}z^{-k\alpha_{h}c_{3}+k\beta_{h} r_{23}}.\] Since \((-k\alpha_{h}c_{3}+k\beta_{h}r_{23})-(\alpha c_{3}-\beta r_{23})=-\eta c_{3}+ \epsilon r_{23}\geq 0\), \(g-y^{\beta c_{2}}f_{-k\alpha_{h}\mathbf{w}_{1}+k\beta_{h}\mathbf{w}_{2}}\) is divisible by \(p_{\alpha,\beta}\), which belongs to \(L\) by Lemma 3.14.

For \(p_{\alpha,\beta}\) with \((\alpha,\beta)\in A\) and \(\beta\geq\beta_{h}\), we proceed as before and write \(\beta=k\beta_{h}+\beta^{\prime}\). Then we must have \(\alpha\geq k\alpha_{h}+1\). Let \(\alpha=k\alpha_{h}+\alpha^{\prime}\). If \(\beta^{\prime}=0\), we deduce that \(p_{\alpha,\beta}-f_{-k\alpha_{h}\mathbf{w}_{1}+k\beta_{h}\mathbf{w}_{2}}\in L\) as in the previous case. If \(\beta^{\prime}>0\), then we also have \(\frac{\alpha^{\prime}}{\beta^{\prime}}>\frac{r_{23}}{c_{3}}\). Hence, \(p_{\alpha,\beta}-f_{-k\alpha_{h}\mathbf{w}_{1}+k\beta_{h}\mathbf{w}_{2}}\in(p_ {\alpha^{\prime},\beta^{\prime}})\).

**Step 5.** Assume that \(\beta_{h}\leq\epsilon\). Then \(\{h\}\cup\{p_{\alpha,\beta}\mid(\alpha,\beta)\in A,\beta<\beta_{h}\}\cup\{q_{ \gamma,\sigma}\mid(\gamma,\sigma)\in B\}\) form a Gröbner basis of \(I^{*}\) with respect to the revlex order. Since \(\mathrm{in}(h)=y^{\beta_{h}c_{2}}\), it suffices to prove that for any \((\alpha,\beta)\in A\) with \(\beta<\beta_{h}\), the S-pair \(S(h,p_{\alpha,\beta})\) reduces to \(0\). We have \[S(h,p_{\alpha,\beta})=x^{\alpha_{h}c_{1}+\beta_{h}r_{21}}z^{(\alpha-\alpha_{h} )c_{3}+(\beta_{h}-\beta)r_{23}}.\] If \(\alpha\geq\alpha_{h}\) then the exponent of \(z\) is at least \(r_{23}\), hence \(S(h,p_{\alpha,\beta})\in(q_{0,1})\). Thus, we may assume that \(\alpha<\alpha_{h}\).
Since \(\frac{\delta_{2}}{\delta_{1}}<\frac{r_{23}}{c_{3}}\leq\frac{\alpha}{\beta}\), we deduce that \(\frac{\alpha_{h}-\alpha}{\beta_{h}-\beta}<\frac{\delta_{2}}{\delta_{1}}\). Hence \(S(h,p_{\alpha,\beta})\) is divisible by \(q_{\alpha_{h}-\alpha,\beta_{h}-\beta}\), which belongs to \(L=(q_{\gamma,\sigma}\mid(\gamma,\sigma)\in B)\). The conclusion follows.

**Step 6.**\(\mu(I^{*})\leq c_{3}+2\). Let \(c_{3}=kr_{23}+s\) with \(0\leq s<r_{23}\). If \(s=0\), then \(-\gamma c_{3}+\sigma r_{23}\) is divisible by \(r_{23}\), hence the system (3.8) has no solution for \(i\geq 0\) and \(|B|=1\). Thus, \(\mu(I^{*})\leq 1+1+|A|\leq c_{3}+2\). Now assume that \(s>0\). Then \((1,i)\in A\) for \(i=0,\ldots,k\) and \(p_{1,k}=y^{kc_{2}}z^{s}\). Thus \(|A|\leq k+s\). Furthermore, \(|B|\leq r_{23}\). Thus, \[\mu(I^{*})\leq 1+k+s+r_{23}\leq c_{3}+2.\] That concludes the proof of the lemma.

**Example 3.17**.: Let \(H=\langle 480,503,1950\rangle\). Then \(I=(y^{30}-x^{3}z^{7},x^{65}-z^{16})\) and \(I^{*}=(x^{3}z^{7},z^{16},y^{30}z^{9},y^{60}z^{2},x^{74}z^{5},x^{145}z^{3},y^{210})\).

**Example 3.18**.: Let \(H=\langle 160,169,460\rangle\). Then \(I=(y^{20}-xz^{7},x^{23}-z^{8})\) and \(I^{*}=(xz^{7},z^{8},y^{20}z,x^{25}z^{6},x^{49}z^{5},x^{73}z^{4},y^{100}-x^{97}z^ {3})\).

## 4. Non-complete intersection monomial space curves

We keep the notation as in Section 2. In this section, we consider the case where \(I_{H}\) is not a complete intersection. In particular, \(r_{ij}\neq 0\) for all \(i,j\). Then, the lattice \(\Lambda\) is generated by \(\mathbf{w}_{1}=(c_{1},-r_{12},-r_{13})\) and \(\mathbf{w}_{2}=(-r_{21},c_{2},-r_{23})\).

**Lemma 4.1**.: _Assume that \(r_{ij}\neq 0\) and \(c_{2}<r_{21}+r_{23}\). Then \(I^{*}=(y^{r_{12}}z^{r_{13}},y^{c_{2}},z^{c_{3}})\)._

Proof.: Let \(J=(y^{r_{12}}z^{r_{13}},y^{c_{2}},z^{c_{3}})\). Then \(J\subseteq I^{*}\). We need to show that for any \(\mathbf{w}\in\Lambda\), \(f_{\mathbf{w}}^{*}\in J\). By Lemma 2.3, there are two cases. **Case 1. \(\mathbf{w}=\alpha\mathbf{w}_{1}+\beta\mathbf{w}_{2}=(\alpha c_{1}-\beta r_{21},-\alpha r_{12}+\beta c_{2},-\alpha r_{13}-\beta r_{23})\)**. There are two subcases. Case 1.a. \(\alpha c_{1}-\beta r_{21}>0\). Then \(\mathbf{w}\) is of type 1 or type 3. In either case, we have \(f_{\mathbf{w}}^{*}\in(z^{\alpha r_{13}+\beta r_{23}})\subseteq J\), as \(c_{3}=r_{13}+r_{23}\). Case 1.b. \(\alpha c_{1}-\beta r_{21}<0\). Then \(\alpha<\beta\). Hence, \(-\alpha r_{12}+\beta c_{2}>c_{2}\). Therefore, \(y^{-\alpha r_{12}+\beta c_{2}}\in(y^{c_{2}})\) and \(x^{\beta r_{21}-\alpha c_{1}}z^{\alpha r_{13}+\beta r_{23}}\in J\). Thus \(f_{\mathbf{w}}^{*}\in J\). **Case 2. \(\mathbf{w}=\alpha\mathbf{w}_{1}-\beta\mathbf{w}_{2}=(\alpha c_{1}+\beta r_{21 },-\alpha r_{12}-\beta c_{2},-\alpha r_{13}+\beta r_{23})\)**. There are two subcases. Case 2.a. \(-\alpha r_{13}+\beta r_{23}<0\). Then \(f_{\mathbf{w}}^{*}=y^{\alpha r_{12}+\beta c_{2}}z^{\alpha r_{13}-\beta r_{23} }\in(y^{c_{2}})\). Case 2.b. \(-\alpha r_{13}+\beta r_{23}>0\). Then \(f\) is of type 2. Since \(c_{2}<r_{21}+r_{23}\), we have \[\alpha c_{1}+\beta r_{21}-\alpha r_{13}+\beta r_{23}>\alpha r_{12}+\beta c_{2}.\] Hence \(f_{\mathbf{w}}^{*}=y^{\alpha r_{12}+\beta c_{2}}\in J\). The conclusion follows.

**Example 4.2**.: Let \(H=\langle 13,20,31\rangle\). Then \(I=(x^{7}-y^{3}z,y^{7}-x^{6}z^{2},z^{3}-xy^{4})\) and \(I^{*}=(z^{3},y^{3}z,y^{7})\).
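The brute-force sketch from Section 3 confirms, under the same assumptions, that Example 4.2 falls under Lemma 4.1 (again ours, not the paper's; `decompose` is redefined for self-containment):

```python
# Reusing the decompose() sketch from Section 3 on Example 4.2, H = <13, 20, 31>.
def decompose(n, m1, m2):
    c = 1
    while True:
        for r1 in range(c * n // m1 + 1):
            rem = c * n - r1 * m1
            if rem % m2 == 0:
                return c, r1, rem // m2
        c += 1

n1, n2, n3 = 13, 20, 31
c1, r12, r13 = decompose(n1, n2, n3)   # (7, 3, 1): 7*13 = 3*20 + 1*31
c2, r21, r23 = decompose(n2, n1, n3)   # (7, 6, 2): 7*20 = 6*13 + 2*31
c3, r31, r32 = decompose(n3, n1, n2)   # (3, 1, 4): 3*31 = 1*13 + 4*20
# All r_ij > 0 and c2 = 7 < r21 + r23 = 8, so Lemma 4.1 applies:
# I* = (y^r12 z^r13, y^c2, z^c3) = (y^3 z, y^7, z^3), matching Example 4.2.
```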
**Lemma 4.3**.: _Assume that \(r_{ij}\neq 0\) and \(c_{2}\geq r_{21}+r_{23}\). Let \(\alpha,\beta>0\) and \(\mathbf{w}=\alpha\mathbf{w}_{1}+\beta\mathbf{w}_{2}\). Then \(f_{\mathbf{w}}^{*}\in(z^{c_{3}})\)._

Proof.: We have \(\mathbf{w}=\alpha\mathbf{w}_{1}+\beta\mathbf{w}_{2}=(\alpha c_{1}-\beta r_{21},-\alpha r_{12}+\beta c_{2},-\alpha r_{13}-\beta r_{23})\). There are two cases. **Case 1. \(\alpha c_{1}-\beta r_{21}\geq 0\)**. Then \(\mathbf{w}\) is of type 1 or type 3. In either case, we have \(f_{\mathbf{w}}^{*}\in(z^{\alpha r_{13}+\beta r_{23}})\subseteq(z^{c_{3}})\), as \(c_{3}=r_{13}+r_{23}\). **Case 2. \(\alpha c_{1}-\beta r_{21}<0\)**. We have \[\beta r_{21}-\alpha c_{1}+\alpha r_{13}+\beta r_{23}<-\alpha r_{12}+\beta c_{ 2}.\] Hence, \(f_{\mathbf{w}}^{*}=x^{\beta r_{21}-\alpha c_{1}}z^{\alpha r_{13}+\beta r_{23} }\in(z^{c_{3}})\). The conclusion follows.

**Lemma 4.4**.: _Assume that \(r_{ij}\neq 0\) and \(c_{2}=r_{21}+r_{23}\). Then \(I^{*}=(y^{r_{12}}z^{r_{13}},y^{c_{2}}-x^{r_{21}}z^{r_{23}},z^{c_{3}})\)._

Proof.: Let \(J=(y^{r_{12}}z^{r_{13}},y^{c_{2}}-x^{r_{21}}z^{r_{23}},z^{c_{3}})\). Then \(J\subseteq I^{*}\). We need to show that for any \(\mathbf{w}\in\Lambda\), \(f_{\mathbf{w}}^{*}\in J\). By Lemma 2.3 and Lemma 4.3, we may assume that \(\mathbf{w}=\alpha\mathbf{w}_{1}-\beta\mathbf{w}_{2}=(\alpha c_{1}+\beta r_{21 },-\alpha r_{12}-\beta c_{2},-\alpha r_{13}+\beta r_{23})\). There are two cases. **Case 1. \(-\alpha r_{13}+\beta r_{23}\leq 0\)**. Then \(f_{\mathbf{w}}^{*}=y^{\alpha r_{12}+\beta c_{2}}z^{\alpha r_{13}-\beta r_{23}}\). Now we have \[f_{\mathbf{w}}^{*}-y^{\alpha r_{12}}z^{\alpha r_{13}-\beta r_{23}}f_{\beta \mathbf{w}_{2}}=y^{\alpha r_{12}}x^{\beta r_{21}}z^{\alpha r_{13}}=(y^{r_{12}}z ^{r_{13}})^{\alpha}x^{\beta r_{21}}\in J.\] **Case 2. \(-\alpha r_{13}+\beta r_{23}>0\)**. Then \(f_{\mathbf{w}}\) is of type 2. But by assumption, we have \[\alpha c_{1}+\beta r_{21}-\alpha r_{13}+\beta r_{23}>\alpha r_{12}+\beta c_{2}.\] Hence \(f_{\mathbf{w}}^{*}=y^{\alpha r_{12}+\beta c_{2}}\). Now we have \[f_{\mathbf{w}}^{*}-y^{\alpha r_{12}}f_{\beta\mathbf{w}_{2}}=y^{\alpha r_{12}}x^ {\beta r_{21}}z^{\beta r_{23}}.\] Since \(-\alpha r_{13}+\beta r_{23}>0\), \(y^{\alpha r_{12}}z^{\beta r_{23}}\) is divisible by \((y^{r_{12}}z^{r_{13}})^{\alpha}\).

**Example 4.5**.: This is the typical case of a non-complete intersection monomial curve with a large shift. For example, consider the semigroup obtained by shifting the semigroup from the previous example, \(H=\langle 193,200,211\rangle\). Then we have \(I=(x^{16}-y^{7}z^{8},y^{18}-x^{11}z^{7},z^{15}-x^{5}y^{11})\) and \(I^{*}=(y^{7}z^{8},y^{18}-x^{11}z^{7},z^{15})\).

Now assume that \(c_{2}>r_{21}+r_{23}\). Let \(\delta_{1}=c_{1}-r_{12}-r_{13}\) and \(\delta_{2}=c_{2}-r_{21}-r_{23}\). Since \(\mathbf{w}=(-(b-a)/d,b/d,-a/d)\in\Lambda\), there exist \(\alpha\) and \(\beta\) such that \(\mathbf{w}=\alpha\mathbf{w}_{1}+\beta\mathbf{w}_{2}\). In particular, \(\alpha\delta_{1}+\beta\delta_{2}=0\), \(\alpha c_{1}-\beta r_{21}=-(b-a)/d\), and \(-\alpha r_{12}+\beta c_{2}=b/d\). Thus, we must have \(\alpha<0\) and \(\beta>0\). We fix \(\alpha_{h}=-\alpha\) and \(\beta_{h}=\beta\). Then we have \[\begin{cases}\alpha_{h}c_{1}+\beta_{h}r_{21}&=(b-a)/d\\ \alpha_{h}r_{12}+\beta_{h}c_{2}&=b/d\\ -\alpha_{h}r_{13}+\beta_{h}r_{23}&=a/d.\end{cases} \tag{4.1}\]

First, we claim

**Lemma 4.6**.: _Assume that \(r_{ij}\neq 0\) for all \(i,j\) and \(c_{2}>r_{21}+r_{23}\).
Then \(\frac{\delta_{2}}{\delta_{1}}<\frac{r_{23}}{r_{13}}\)._

Proof.: Since \(-\alpha_{h}r_{13}+\beta_{h}r_{23}=a/d>0\), we have \[\frac{\alpha_{h}}{\beta_{h}}=\frac{\delta_{2}}{\delta_{1}}<\frac{r_{23}}{r_{13 }}.\] The conclusion follows.

Let \(\frac{\eta}{\epsilon}=M(\frac{\delta_{2}}{\delta_{1}},\frac{r_{23}}{r_{13}})\), i.e., \(\eta\) and \(\epsilon\) are the smallest positive integers such that \(\frac{\delta_{2}}{\delta_{1}}<\frac{\eta}{\epsilon}\leq\frac{r_{23}}{r_{13}}\). We now define the following two sets. Let \(\alpha_{0}=1,\beta_{0}=0\). For each \(i\geq 0\), consider the system of inequalities \[\begin{cases}\alpha r_{13}-\beta r_{23}&>0\\ \alpha r_{13}-\beta r_{23}&<\alpha_{i}r_{13}-\beta_{i}r_{23}\\ \beta&>\beta_{i}.\end{cases} \tag{4.2}\] When the system (4.2) has solutions, let \(\beta_{i+1}\) be the smallest solution among \(\beta\). Then \(\alpha_{i+1}=\varphi_{r}(\beta_{i+1})\) where \(r=r_{23}/r_{13}\). There can be only finitely many such \(\beta_{i}\) as the sequence \(\{\alpha_{i}r_{13}-\beta_{i}r_{23}\}\) is a strictly decreasing sequence of positive integers. Let \(t\) be the largest index \(i\) such that \(\beta_{t}<\epsilon\) and \(A=\{(\alpha_{i},\beta_{i})\mid i=0,\dots,t\}\). Since \(\alpha_{0}r_{13}-\beta_{0}r_{23}=r_{13}\), \(|A|\leq r_{13}\). For each \(\alpha,\beta\in\mathbb{N}\) such that \(\alpha r_{13}-\beta r_{23}\geq 0\), let \(p_{\alpha,\beta}=y^{\alpha r_{12}+\beta c_{2}}z^{\alpha r_{13}-\beta r_{23}}\). We have

**Lemma 4.7**.: _Let \(\alpha^{\prime}>0\) and \(\beta^{\prime}\geq 0\) be such that \(\alpha^{\prime}r_{13}-\beta^{\prime}r_{23}\geq 0\). Let \(g=y^{\eta r_{12}+\epsilon c_{2}}\). Then \(p_{\alpha^{\prime},\beta^{\prime}}\in L=(g,p_{\alpha,\beta}\mid(\alpha,\beta) \in A)\)._

Proof.: If \(\beta^{\prime}=0\), then \(p_{\alpha^{\prime},\beta^{\prime}}\) is divisible by \(p_{1,0}\). Assume that \(\beta^{\prime}\geq\epsilon\). Since \(\frac{\alpha^{\prime}}{\beta^{\prime}}\geq\frac{r_{23}}{r_{13}}\geq\frac{ \eta}{\epsilon}\), we deduce that \(\alpha^{\prime}\geq\eta\). Thus, \(p_{\alpha^{\prime},\beta^{\prime}}\) is divisible by \(g\). Thus, we may assume that \(0<\beta^{\prime}<\epsilon\). Let \(i\) be the largest index such that \(\beta_{i}<\beta^{\prime}\). Since \(\frac{\alpha^{\prime}}{\beta^{\prime}}\geq r\) and \(\alpha_{i}=\varphi_{r}(\beta_{i})\) where \(r=r_{23}/r_{13}\), we deduce that \(\alpha^{\prime}\geq\alpha_{i}\). If \(\alpha^{\prime}r_{13}-\beta^{\prime}r_{23}\geq\alpha_{i}r_{13}-\beta_{i}r_{23}\) then \(p_{\alpha^{\prime},\beta^{\prime}}\) is divisible by \(p_{\alpha_{i},\beta_{i}}\). If \(\alpha^{\prime}r_{13}-\beta^{\prime}r_{23}<\alpha_{i}r_{13}-\beta_{i}r_{23}\), then \((\alpha^{\prime},\beta^{\prime})\) is a solution of the system (4.2), so we must have \(\beta^{\prime}=\beta_{i+1}\). Hence, \(\alpha^{\prime}\geq\alpha_{i+1}\). Thus, \(p_{\alpha^{\prime},\beta^{\prime}}\) is divisible by \(p_{\alpha_{i+1},\beta_{i+1}}\). The conclusion follows.

Let \(\gamma_{0}=0\), \(\sigma_{0}=1\). For each \(i\geq 0\), consider the system of inequalities \[\begin{cases}-\gamma\delta_{1}+\sigma\delta_{2}&>0\\ -\gamma r_{13}+\sigma r_{23}&<-\gamma_{i}r_{13}+\sigma_{i}r_{23}\\ \gamma&>\gamma_{i}.\end{cases} \tag{4.3}\] When the system (4.3) has solutions, let \(\gamma_{i+1}\) be the smallest solution among \(\gamma\). Then \(\sigma_{i+1}=\varphi_{r}(\gamma_{i+1})\) where \(r=\delta_{1}/\delta_{2}\).
There can be only finitely many such \(\gamma_{i}\) as the sequence \(\{-\gamma_{i}r_{13}+\sigma_{i}r_{23}\}\) is a strictly decreasing sequence of positive integers. Let \(u\) be the largest index \(i\) and \(B=\{(\gamma_{i},\sigma_{i})\mid i=0,\ldots,u\}\). Since \(-\gamma_{0}r_{13}+\sigma_{0}r_{23}=r_{23}\), \(|B|\leq r_{23}\). For each \(\gamma,\sigma\in\mathbb{N}\) such that \(-\gamma r_{13}+\sigma r_{23}>0\), let \(q_{\gamma,\sigma}=x^{\gamma c_{1}+\sigma r_{21}}z^{-\gamma r_{13}+\sigma r_{23}}\). We have

**Lemma 4.8**.: _Let \(\sigma^{\prime}>0\) and \(\gamma^{\prime}\geq 0\) be such that \(-\gamma^{\prime}\delta_{1}+\sigma^{\prime}\delta_{2}>0\). Then \(q_{\gamma^{\prime},\sigma^{\prime}}\in L=(q_{\gamma,\sigma}\mid(\gamma,\sigma )\in B).\)_

Proof.: If \(\gamma^{\prime}=0\) then \(q_{\gamma^{\prime},\sigma^{\prime}}\) is divisible by \(q_{0,1}\). Thus, we may assume that \(\gamma^{\prime}>0\). Let \(i\) be the largest index such that \(\gamma_{i}<\gamma^{\prime}\), i.e., \(\gamma_{i+1}\geq\gamma^{\prime}>\gamma_{i}\). Since \(\sigma_{i}=\varphi_{r}(\gamma_{i})\) and \(\frac{\sigma^{\prime}}{\gamma^{\prime}}>r\) where \(r=\delta_{1}/\delta_{2}\), we deduce that \(\sigma^{\prime}\geq\sigma_{i}\). If \(-\gamma^{\prime}r_{13}+\sigma^{\prime}r_{23}\geq-\gamma_{i}r_{13}+\sigma_{i}r_ {23}\) then \(q_{\gamma^{\prime},\sigma^{\prime}}\) is divisible by \(q_{\gamma_{i},\sigma_{i}}\). If \(-\gamma^{\prime}r_{13}+\sigma^{\prime}r_{23}<-\gamma_{i}r_{13}+\sigma_{i}r_ {23}\), then \((\gamma^{\prime},\sigma^{\prime})\) is a solution of (4.3). Thus, \(\gamma^{\prime}\geq\gamma_{i+1}\). By our definition, we must have \(\gamma^{\prime}=\gamma_{i+1}\) and \(\sigma^{\prime}\geq\sigma_{i+1}\). Hence, \(q_{\gamma^{\prime},\sigma^{\prime}}\) is divisible by \(q_{\gamma_{i+1},\sigma_{i+1}}\). The conclusion follows.

**Lemma 4.9**.: _Assume that \(r_{ij}\neq 0\) and \(c_{2}>r_{21}+r_{23}\). Let \(g=y^{\eta r_{12}+\epsilon c_{2}}\) and \(h=y^{b/d}-x^{(b-a)/d}z^{a/d}\). Then_ \[I^{*}=(z^{c_{3}},g,h)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A)+(q_{\gamma, \sigma}\mid(\gamma,\sigma)\in B).\] _Furthermore,_ 1. _If_ \(\beta_{h}\leq\epsilon\) _then_ \(I^{*}=(h)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A,\beta<\beta_{h})+(q_{\gamma, \sigma}\mid(\gamma,\sigma)\in B).\)_ 2. _If_ \(\beta_{h}>\epsilon\) _then_ \(I^{*}=(g)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A)+(q_{\gamma,\sigma}\mid( \gamma,\sigma)\in B).\)_ 3. _The minimal generators of_ \(I^{*}\) _form a Gröbner basis of_ \(I^{*}\) _with respect to the reverse lexicographic order._ 4. \(\mu(I^{*})\leq\max(r_{13},r_{23})+3\)_._

Proof.: Let \[L=(g,h)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A)+(q_{\gamma,\sigma}\mid( \gamma,\sigma)\in B).\] For clarity, we divide the proof into several steps. The first two steps can be proved in similar manners as those of Lemma 3.7. We recall some arguments for completeness.

**Step 1.**\(L\subseteq I^{*}\).

**Step 2.**\(I^{*}\subseteq L\). By Lemma 2.3 and Lemma 4.3, we may assume that \({\bf w}=\alpha{\bf w}_{1}-\beta{\bf w}_{2}=(\alpha c_{1}+\beta r_{21},-\alpha r _{12}-\beta c_{2},-\alpha r_{13}+\beta r_{23})\). There are two cases. **Case 1.**\(-\alpha r_{13}+\beta r_{23}\leq 0\). Then, \(f_{\bf w}^{*}=p_{\alpha,\beta}\in(g,p_{\alpha,\beta}\mid(\alpha,\beta)\in A)\) by Lemma 4.7. **Case 2.**\(-\alpha r_{13}+\beta r_{23}>0\). There are three subcases. Case 2.a. \(\alpha\delta_{1}<\beta\delta_{2}\). Then \(f_{\bf w}^{*}=q_{\alpha,\beta}\in(q_{\gamma,\sigma}\mid(\gamma,\sigma)\in B)\) by Lemma 4.8. Case 2.b.
\(\alpha\delta_{1}=\beta\delta_{2}\). Then there must exist \(k\) such that \(\alpha=k\alpha_{h}\) and \(\beta=k\beta_{h}\). Hence, \(f_{\bf w}^{*}=f_{\bf w}\in(h)\). Case 2.c. \(\alpha\delta_{1}>\beta\delta_{2}\). Then \(f_{\bf w}^{*}=y^{\alpha r_{12}+\beta c_{2}}\). Furthermore, we have \(\frac{\delta_{2}}{\delta_{1}}<\frac{\alpha}{\beta}<\frac{r_{23}}{r_{13}}\). Hence, by the definition of \(\eta\) and \(\epsilon\) we must have \(\beta\geq\epsilon\) and \(\alpha\geq\eta\). Hence, \(f_{\bf w}^{*}\in(g)\).

**Step 3.** Assume that \(\beta_{h}>\epsilon\). Then \(h\in M=(g)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A)+(q_{\gamma,\sigma}\mid( \gamma,\sigma)\in B)\). Since \(\eta=\varphi_{r}(\epsilon)\) where \(r=\delta_{2}/\delta_{1}=\alpha_{h}/\beta_{h}\), we deduce that \(\alpha_{h}\geq\eta\). Thus, \(y^{\alpha_{h}r_{12}+\beta_{h}c_{2}}\in(g)\). It suffices to prove that \(q_{\alpha_{h},\beta_{h}}\in M\). Let \(\beta_{h}=k\epsilon+\beta\) with \(0\leq\beta<\epsilon\). If \(\beta=0\), then \(k\geq 2\); let \(\gamma=\alpha_{h}-(k-1)\eta\) and \(\sigma=\beta_{h}-(k-1)\epsilon\). If \(\beta>0\), let \(\gamma=\alpha_{h}-k\eta\) and \(\sigma=\beta_{h}-k\epsilon\). Then we have \(\frac{\delta_{2}}{\delta_{1}}=\frac{\alpha_{h}}{\beta_{h}}>\frac{\gamma}{\sigma}\). Hence, \(q_{\alpha_{h},\beta_{h}}\) is divisible by \(q_{\gamma,\sigma}\). By Lemma 4.8, \(q_{\gamma,\sigma}\in M\). The conclusion follows.

**Step 4.** Assume that \(\beta_{h}\leq\epsilon\). Let \(M=(h)+(p_{\alpha,\beta}\mid(\alpha,\beta)\in A,\beta<\beta_{h})+(q_{\gamma, \sigma}\mid(\gamma,\sigma)\in B)\). Then \(g\) and the \(p_{\alpha,\beta}\) with \((\alpha,\beta)\in A\) and \(\beta\geq\beta_{h}\) belong to \(M\).

For \(g\): Let \(\epsilon=k\beta_{h}+\beta\) where \(0\leq\beta<\beta_{h}\). If \(\beta=0\), then we have \(\eta=k\alpha_{h}+1\). Furthermore, \(\frac{\eta}{\epsilon}\leq\frac{r_{23}}{r_{13}}\). Hence, \(k\beta_{h}r_{23}\geq(k\alpha_{h}+1)r_{13}\). Thus \[g-f_{-k\alpha_{h}{\bf w}_{1}+k\beta_{h}{\bf w}_{2}}=x^{k\alpha_{h}c_{1}+k\beta_{h}r_{21}}z^{-k\alpha_{h}r_{13}+k\beta_{h}r_{23}}\in(x^{r_{21}}z^{r_{23}}).\] Now assume that \(\beta>0\). Then we also have \(\eta\geq k\alpha_{h}+1\). Furthermore, \(\frac{\eta-k\alpha_{h}}{\epsilon-k\beta_{h}}>\frac{\alpha_{h}}{\beta_{h}}\). By definition of \(\epsilon\) we must have \(\frac{\eta-k\alpha_{h}}{\epsilon-k\beta_{h}}>\frac{r_{23}}{r_{13}}\). Let \(\alpha=\eta-k\alpha_{h}\). Then, we have \[g-y^{\beta c_{2}}f_{-k\alpha_{h}{\bf w}_{1}+k\beta_{h}{\bf w}_{2}}=x^{k\alpha_ {h}c_{1}+k\beta_{h}r_{21}}y^{\beta c_{2}}z^{-k\alpha_{h}r_{13}+k\beta_{h}r_{23} }\in(p_{\alpha,\beta})\subseteq M,\] by Lemma 4.7.

For \(p_{\alpha,\beta}\) with \((\alpha,\beta)\in A\) and \(\beta\geq\beta_{h}\), we write \(\beta=k\beta_{h}+\beta^{\prime}\) as before. Then we must have \(\alpha\geq k\alpha_{h}+1\). Let \(\alpha=k\alpha_{h}+\alpha^{\prime}\). If \(\beta^{\prime}=0\), we deduce that \(p_{\alpha,\beta}-f_{-k\alpha_{h}{\bf w}_{1}+k\beta_{h}{\bf w}_{2}}\in M\) as in the previous case. If \(\beta^{\prime}>0\), then we also have \(\frac{\alpha^{\prime}}{\beta^{\prime}}>\frac{r_{23}}{r_{13}}\). Hence, \(p_{\alpha,\beta}-f_{-k\alpha_{h}{\bf w}_{1}+k\beta_{h}{\bf w}_{2}}\in(p_{ \alpha^{\prime},\beta^{\prime}})\).

**Step 5.** Assume that \(\beta_{h}\leq\epsilon\). Then \(\{h\}\cup\{p_{\alpha,\beta}\mid(\alpha,\beta)\in A,\beta<\beta_{h}\}\cup\{q_{ \gamma,\sigma}\mid(\gamma,\sigma)\in B\}\) form a Gröbner basis of \(I^{*}\) with respect to the revlex order.
Since \(\text{in}(h)=y^{\alpha_{h}r_{12}+\beta_{h}c_{2}}\), it suffices to prove that for any \((\alpha,\beta)\in A\) with \(\beta<\beta_{h}\), the S-pair \(S(h,p_{\alpha,\beta})\) reduces to \(0\). We have \[S(h,p_{\alpha,\beta})=x^{\alpha_{h}c_{1}+\beta_{h}r_{21}}z^{(\alpha-\alpha_{h})r _{13}+(\beta_{h}-\beta)r_{23}}.\] If \(\alpha\geq\alpha_{h}\) then the exponent of \(z\) is at least \(r_{23}\); hence, \(S(h,p_{\alpha,\beta})\in(q_{0,1})\). Thus, we may assume that \(\alpha<\alpha_{h}\). Since \(\frac{\delta_{2}}{\delta_{1}}<\frac{r_{23}}{r_{13}}\leq\frac{\alpha}{\beta}\), we deduce that \(\frac{\alpha_{h}-\alpha}{\beta_{h}-\beta}<\frac{\delta_{2}}{\delta_{1}}\). Hence \(S(h,p_{\alpha,\beta})\in(q_{\alpha_{h}-\alpha,\beta_{h}-\beta})\). By Lemma 4.8, the conclusion follows.

**Step 6.**\(\mu(I^{*})\leq\max(r_{13},r_{23})+3\). There are two cases. **Case 1.**\(r_{23}\geq r_{13}\). Let \(r_{23}=kr_{13}+s_{13}\) with \(0\leq s_{13}<r_{13}\). If \(s_{13}=0\) then the system (4.2) has no solutions. Thus \(|A|=1\). Hence, \(\mu(I^{*})\leq 2+1+r_{23}\). Now assume that \(s_{13}>0\). If \(\frac{\delta_{2}}{\delta_{1}}\leq 1\) then the system (4.3) has no solutions. Hence \(|B|\leq 1\). Thus \(\mu(I^{*})\leq r_{13}+3.\) If \(\frac{\delta_{2}}{\delta_{1}}>1\) then \((1,1)\in B\). Since \(q_{1,1}=x^{c_{1}+r_{21}}z^{r_{23}-r_{13}}\), \(|B|\leq r_{23}-r_{13}+1\). Thus \(\mu(I^{*})\leq 2+r_{23}-r_{13}+r_{13}+1=r_{23}+3\). **Case 2.**\(r_{23}<r_{13}\). Then \((1,1)\in A\) and \(p_{1,1}=y^{r_{12}+c_{2}}z^{r_{13}-r_{23}}\). Thus \(|A|\leq r_{13}-r_{23}+1\). Hence \(\mu(I^{*})\leq 2+r_{13}-r_{23}+1+r_{23}=r_{13}+3\). That concludes the proof of the lemma.

**Example 4.10**.: Let \(H=\langle 265,280,655\rangle\). Then \(I=(x^{6}-yz^{2},y^{22}-xz^{9},x^{5}y^{21}-z^{11})\) and \(I^{*}=(yz^{2},xz^{9},z^{11},x^{7}z^{7},x^{13}z^{5},x^{19}z^{3},y^{26}-x^{25}z)\).

## 5. Bounding Betti numbers of monomial space curves

In this section, using the description of \(I^{*}_{H}\) in the previous two sections, we prove our main result. First, we recall the following general bound on the Betti numbers of a monomial ideal [12, Theorem 6.29]. Assume that \(n<r\). The cyclic polytope \(C_{n,r}\) is defined to be the convex hull of \(r\) distinct points on the moment curve \(t\to(t,t^{2},\ldots,t^{n})\). We have

**Theorem 5.1**.: _The number \(\beta_{i}(I)\) of minimal \(i\)th syzygies of any monomial ideal \(I\) with \(r\) generators in \(n\) variables is bounded above by the number \(C_{i,n,r}\) of \(i\)-dimensional faces of the cyclic \(n\)-polytope with \(r\) vertices. If \(i=n-1\) then we even have \(\beta_{i}(I)\leq C_{n-1,n,r}-1\)._

**Theorem 5.2**.: _Let \(H\) be a numerical semigroup generated by \(\langle n_{1},n_{2},n_{3}\rangle\). Let \(s=\operatorname{wd}(\tilde{H})=\min(n_{1}-1,b)\). Then_ 1. \(\beta_{0}(I^{*}_{H})\leq s+1,\)__ 2. \(\beta_{1}(I^{*}_{H})\leq 3s-3,\)__ 3. \(\beta_{2}(I^{*}_{H})\leq 2s-3.\)__ _In particular, \(\beta_{i}(I^{*}_{H})\leq\beta_{i}(I^{*}_{\tilde{H}})\) for all \(i\)._

Proof.: By [7], if \(\mu(I^{*})\leq 3\), then \(I^{*}\) is Cohen-Macaulay. Hence, \(\beta_{1}(I^{*})\leq 2\) and \(\beta_{2}(I^{*}_{H})=0\). Thus, we may assume that \(\mu(I^{*})\geq 4\). By Lemmas 3.1, 3.7, 3.9, 3.11, 3.16, 4.1, 4.4, 4.9 it remains to consider the following cases: \(r_{21}=0\); \(r_{12}=0\) and \(c_{2}>r_{21}+r_{23}\); and \(r_{ij}\neq 0\) and \(c_{2}>r_{21}+r_{23}\).
We have \(\beta_{i}(I)\leq\beta_{i}(\operatorname{in}(I))\) for any homogeneous ideal \(I\), where \(\operatorname{in}(I)\) is the initial ideal of \(I\) with respect to any monomial order. By Lemmas 3.7, 3.16, 4.9, Theorem 5.1, and the fact that \(C_{1,3,r}=3r-6\) and \(C_{2,3,r}=2r-4\), it suffices to prove that \(\mu(I^{*})\leq s+1\) in these cases. First, note that we always have \(c_{3}<n_{1}-1\), since \(c_{3}\) is the smallest positive integer such that \(c_{3}n_{3}\in\langle n_{1},n_{2}\rangle\) and every integer greater than or equal to \((n_{1}-1)(n_{2}-1)\) belongs to \(\langle n_{1},n_{2}\rangle\), the Frobenius number of \(\langle n_{1},n_{2}\rangle\) being \((n_{1}-1)(n_{2}-1)-1\). Hence, \(c_{3}n_{3}\leq(n_{1}-1)(n_{2}-1)+1\). Thus \(c_{3}<n_{1}-1\). We may now assume that \(n_{1}>b=n_{3}-n_{1}\).

**Case 1.**\(r_{21}=0\). By (3.1), \(\alpha_{h}r_{12}+\beta_{h}c_{2}=b/d\). Hence, \(c_{2}<b/d\). Thus \(c_{3}<c_{2}\leq b/d-1\). By Lemma 3.7, \(\mu(I^{*})\leq c_{3}+2\leq b\).

**Case 2.**\(r_{12}=0\) and \(c_{2}>r_{21}+r_{23}\). By (3.6), \(\alpha_{h}c_{1}+\beta_{h}r_{21}=(b-a)/d\), thus \(c_{1}<(b-a)/d\). Hence, \(c_{3}<c_{1}\leq b-2\). By Lemma 3.16, \(\mu(I^{*})\leq c_{3}+2<b\).

**Case 3.**\(r_{ij}\neq 0\) for all \(i,j\) and \(c_{2}>r_{21}+r_{23}\). By (4.1), we have \[\alpha_{h}c_{1}+\beta_{h}r_{21}=(b-a)/d\] \[\alpha_{h}r_{12}+\beta_{h}c_{2}=b/d.\] In particular, \(c_{1}<(b-a)/d\) and \(c_{2}<b/d\). Thus, \(r_{13}<c_{1}-1\leq b-3\) and \(r_{23}<c_{2}-1\leq b-2\). By Lemma 4.9, \(\mu(I^{*})\leq\max(r_{13},r_{23})+3\leq b\).

The final statement, that \(\beta_{i}(I^{*}_{H})\leq\beta_{i}(I^{*}_{\tilde{H}})\), follows immediately as the formula for \(\beta_{i}(I^{*}_{\tilde{H}})\) is given in [9, Proposition 2.5] and [4, Theorem 4.1].

**Remark 5.3**.: 1. Herzog and Stamate [9, Conjecture 2.4] only conjectured the bound for the number of minimal generators of \(I^{*}_{H}\). We believe it should hold for all Betti numbers as well. 2. In [16], Stamate proved that when \(n_{1}\geq k_{a,b}\), \(\mu(I_{H})=\mu(I^{*}_{H})\), where \[k_{a,b}=b\max(\frac{a}{d},\frac{b-a}{d}-1).\] This can be deduced from our description of \(I^{*}_{H}\) as well.
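The description of \(I^{*}_{H}\) is also easy to mechanize. The following Python sketch is ours, not the paper's: it implements \(M\) and \(\varphi_{r}\) as we read their definitions in Section 2, enumerates the sets \(A\) and \(B\) of Lemma 3.7 for the case \(r_{21}=0\), and reproduces the generators of Example 3.8. The search windows are heuristic bounds, ample for this example.

```python
from fractions import Fraction

# Case data for Example 3.8, H = <332, 345, 450> (the case r21 = 0):
# 15*332 = 4*345 + 8*450 and 30*345 = 23*450.
c1, r12, r13 = 15, 4, 8
c2, c3 = 30, 23
d1, d2 = c1 - (r12 + r13), c2 - c3            # delta_1 = 3, delta_2 = 7

def phi(r, k):
    """Smallest integer strictly greater than r*k (our reading of varphi_r)."""
    return (r.numerator * k) // r.denominator + 1

def M(lo, hi):
    """Smallest positive (eta, eps) with lo < eta/eps <= hi (our reading of M)."""
    eps = 1
    while True:
        eta = phi(lo, eps)
        if Fraction(eta, eps) <= hi:
            return eta, eps
        eps += 1

eta, eps = M(Fraction(d2, d1), Fraction(c3, r13))      # (5, 2), as in Example 3.8

# The set A from system (3.2): strictly decreasing positive values a*r13 - b*c3.
r = Fraction(c3, r13)
A, (al, be) = [(1, 0)], (1, 0)
while True:
    cand = [(phi(r, b), b) for b in range(be + 1, be + r13 * c3)
            if 0 < phi(r, b) * r13 - b * c3 < al * r13 - be * c3]
    if not cand:
        break
    al, be = min(cand, key=lambda ab: ab[1])           # smallest solution in beta
    A.append((al, be))
A = [(a0, b0) for (a0, b0) in A if b0 < eps]           # [(1, 0), (3, 1)]

# The set B from system (3.3); positivity of -g*r13 + s*c3 follows from Lemma 3.4.
rB = Fraction(d1, d2)
B, (ga, si) = [(0, 1)], (0, 1)
while True:
    cand = [(g, phi(rB, g)) for g in range(ga + 1, ga + r13 * c3)
            if -g * r13 + phi(rB, g) * c3 < -ga * r13 + si * c3]
    if not cand:
        break
    ga, si = min(cand, key=lambda gs: gs[0])           # smallest solution in gamma
    B.append((ga, si))                                 # [(0, 1), (1, 1), (2, 1)]

gens = [f"y^{eta * r12 + eps * c2}"]                                   # g = y^80
gens += [f"y^{a0*r12 + b0*c2} z^{a0*r13 - b0*c3}" for (a0, b0) in A]   # y^4 z^8, y^42 z^1
gens += [f"x^{g0*c1} z^{-g0*r13 + s0*c3}" for (g0, s0) in B]           # x^0 z^23, x^15 z^15, x^30 z^7
print(gens)  # beta_h = 3 > eps = 2, so by Lemma 3.7(1) the binomial h is not needed
```

Up to the cosmetic \(x^{0}\) and \(z^{1}\) factors, the printed monomials are exactly the generators \((z^{23},y^{4}z^{8},y^{42}z,y^{80},x^{15}z^{15},x^{30}z^{7})\) of \(I_{H}^{*}\) in Example 3.8.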
2310.17542
The Blanco DECam Bulge Survey (BDBS) VIII: Chemo-kinematics in the southern Galactic bulge from 2.3 million red clump stars with Gaia DR3 proper motions
The Blanco DECam Bulge Survey (BDBS) provides near-ultraviolet to near-infrared photometry for ~250 million unique stars. By combining BDBS photometry with the latest Gaia astrometry, we characterize the chemo-dynamics of red clump stars across the BDBS footprint, using an unprecedented sample size and sky coverage. We construct a sample of ~2.3 million red clump giants in the bulge with photometric metallicities, BDBS photometric distances, and proper motions. We study the kinematics of the red clump stars as a function of sky position and metallicity, by investigating proper motion rotation curves, velocity dispersions, and proper motion correlations across the southern Galactic bulge. We find that metal-poor red clump stars exhibit lower rotation amplitudes, at ~29 km s^{-1} kpc^{-1}. The peak of the angular velocity is ~39 km s^{-1} kpc^{-1} for [Fe/H] ~ -0.2 dex, exhibiting declining rotation at higher [Fe/H]. The velocity dispersion is higher for metal-poor stars, while metal-rich stars show a steeper gradient with Galactic latitude, with a maximum dispersion at low latitudes along the bulge minor axis. Only metal-rich stars ([Fe/H] >~ -0.5 dex) show clear signatures of the bar in their kinematics, while the metal-poor population exhibits isotropic motions with an axisymmetric pattern around Galactic longitude l = 0. This work reports the largest sample of bulge stars with distance, metallicity, and astrometry and shows clear kinematic differences with metallicity. The global kinematics over the bulge agrees with earlier studies. However, we see striking changes with increasing metallicity and, for the first time, kinematic differences for stars with [Fe/H] > -0.5, suggesting that the bar itself may have kinematics that depends on metallicity.
Tommaso Marchetti, Meridith Joyce, Christian Johnson, R. Michael Rich, William Clarkson, Andrea Kunder, Iulia T. Simion, Catherine A. Pilachowski
2023-10-26T16:38:41Z
http://arxiv.org/abs/2310.17542v1
# The Blanco DECam Bulge Survey (BDBS)

###### Abstract

Context: The inner Galaxy is a complex environment, and the relative contributions of different formation scenarios to its observed morphology and stellar properties are still debated. The different components are expected to have different spatial, kinematic, and metallicity distributions, and a combination of photometric, spectroscopic, and astrometric large-scale surveys is needed to study the formation and evolution of the Galactic bulge. Aims: The Blanco DECam Bulge Survey (BDBS) provides near-ultraviolet to near-infrared photometry for \(\sim 250\) million unique stars over \(>\)200 square degrees of the southern Galactic bulge. By combining BDBS photometry with the latest \(Gaia\) astrometry, we aim to characterize the chemo-dynamics of red clump stars across the BDBS footprint, using an unprecedented sample size and sky coverage. Methods: Our field of regard is \(|\ell|\leq 10^{\circ}\), \(-10^{\circ}\leq b\leq-3^{\circ}\). We construct a sample of \(\sim 2.3\) million red clump giants in the bulge with photometric metallicities, BDBS photometric distances, and proper motions. Photometric metallicities are derived from a \((u-i)_{0}\) vs [Fe/H] relation; astrometry including precise proper motions is from the third data release (DR3) of the ESA satellite \(Gaia\). We study the kinematics of the red clump stars as a function of sky position and metallicity, by investigating proper motion rotation curves, velocity dispersions, and proper motion correlations across the southern Galactic bulge. Results: By binning our sample into 8 metallicity bins from \(-1.5\) dex \(<[{\rm Fe/H}]<+1\) dex, we find that metal-poor red clump stars exhibit lower rotation amplitudes, at \(\sim 29\) km s\({}^{-1}\) kpc\({}^{-1}\). The peak of the angular velocity is \(\sim 39\) km s\({}^{-1}\) kpc\({}^{-1}\) for [Fe/H] \(\sim-0.2\) dex, exhibiting declining rotation at higher [Fe/H]. The velocity dispersion is higher for metal-poor stars, while metal-rich stars show a steeper gradient with Galactic latitude, with a maximum dispersion at low latitudes along the bulge minor axis. Only metal-rich stars ([Fe/H] \(\gtrsim-0.5\) dex) show clear signatures of the bar in their kinematics, while the metal-poor population exhibits isotropic motions with an axisymmetric pattern around Galactic longitude \(\ell=0\). Conclusions: This work reports the largest sample of bulge stars with distance, metallicity, and astrometry and shows clear kinematic differences with metallicity. The global kinematics over the bulge agrees with earlier studies. However, we see striking changes with increasing metallicity and, for the first time, kinematic differences for stars with [Fe/H] \(>-0.5\), suggesting that the bar itself may have kinematics that depends on metallicity.

## 1 Introduction

The last decades of observations of the central region of our Galaxy have pictured a highly complex environment, with several distinct and interacting stellar populations coexisting along the same line of sight. Studying the formation, history, and evolution of the Galactic bulge is essential to improve our knowledge of the Milky Way, which is a benchmark for understanding the evolution of disk galaxies in the Universe (e.g. Bland-Hawthorn & Gerhard 2016). The reconstruction of its evolutionary history requires precise photometric (e.g. Minniti et al. 2010; Udalski et al. 2015; Rich et al. 2020), spectroscopic (e.g. Rich et al. 2007; Kunder et al. 2012; Freeman et al. 2013; Zoccali et al.
2014; Majewski et al. 2017), and astrometric (e.g. Kuijken & Rich 2002; Clarkson et al. 2008, 2018; Gezari et al. 2022) measurements for statistically large samples of stars covering the whole Galactic bulge. Because of our location in the Milky Way plane, several studies have focused on the removal of the foreground disk population, whose presence contaminates studies of the Galactic bulge, using a mix of photometric and astrometric techniques (e.g., Kuijken & Rich, 2002; Zoccali et al., 2003; Clarkson et al., 2008, 2011; Valenti et al., 2013; Calamida et al., 2014; Clarkson et al., 2018; Surot et al., 2019; Queiroz et al., 2021; Marchetti et al., 2022). While the recent data releases of the European Space Agency satellite _Gaia_(Gaia Collaboration et al., 2016, 2018, 2021, 2023) provide geometric parallaxes for 1.5 billion stars in the Milky Way, large uncertainties and systematics prevent the precise localization of stars beyond a few kiloparsec from the Sun. The identification of stellar standard candles, whose distance can be derived using photometry, is, therefore, an important tool to map the three-dimensional structure of the Galaxy. In particular, Red Clump (RC) stars have long been used to study and constrain the morphology of the inner part of the Milky Way (e.g. Stanek et al., 1994; Stanek et al., 1997; Rattenbury et al., 2007; McWilliam & Zoccali, 2010; Saito et al., 2011; Wegg & Gerhard, 2013; Simion et al., 2017; Paterson et al., 2020; Johnson et al., 2022), and they have been critical to discovering details about the Galactic bulge's rotating bar (Binney et al., 1991; Stanek et al., 1994; Wegg & Gerhard, 2013) along with its X-shaped morphology (Weiland et al., 1994; Nataf et al., 2010; McWilliam & Zoccali, 2010; Wegg & Gerhard, 2013). As the nearest example of a spheroidal/bar population, and near enough to resolve individual stars, the Galactic bulge offers the possibility to explore the structure and formation history by an analysis of the spatial variation of abundances and kinematics, often called "chemodynamics". This effort has historically required the correlation of spectroscopically derived metallicities and radial velocities, beginning with Rich (1990) and Minniti et al. (2010), early works that showed a trend of declining velocity dispersion with increasing metallicity. Spectroscopic surveys have grown in size from \(\sim\) 10,000 stars in the Bulge Radial Velocity Assay (Rich et al., 2007; Kunder et al., 2012, BRAVA) to tens of thousands of stars with abundances and kinematics in projects such as the Abundances and Radial velocity Galactic Origins Survey (Freeman et al., 2013, ARGOS), the GIRAFFE Inner Bulge Survey (Zoccali et al., 2014, GIBS), and the Apache Point Observatory Galactic Evolution Experiment (Majewski et al., 2017, APOGEE). However, even sample sizes of \(10^{4}\) or greater can be insufficient when binned by spatial location, kinematics, and abundance. Two revolutionary developments- wide field imagers like the Dark Energy Camera on the Blanco 4m telescope (Flaugher et al., 2015) and the _Gaia_ astrometric survey (Gaia Collaboration et al., 2023) have combined to expand sample sizes into the millions. A significant recent breakthrough has been the development of a robust photometric metallicity scale, \((u-i)_{0}\) versus [Fe/H] for bulge red clump (RC) giants (Johnson et al., 2020), by correlating the photometry with [Fe/H] from GIBS survey spectroscopy (Zoccali et al., 2017). 
Combining photometric metallicities and distances for the RC with _Gaia_ kinematics potentially unlocks the exploration of chemodynamics for millions of stars, albeit with only [Fe/H] to 0.2 dex precision, and no detailed abundances (e.g., alpha elements). Here we exploit the large numbers possible by combining the two datasets, reaching numerical sizes that are of the same order of those of the two planned major bulge surveys by 4MOST and MOONS (Bensby et al., 2019; Gonzalez et al., 2020). Many investigations have pointed toward a significant break in kinematics at [Fe/H] \(\sim-0.5\), beginning with Zhao et al. (1994) and continuing with Soto et al. (2007), Zoccali et al. (2008), and Ness et al. (2013). The break at \(-0.5\) dex is clearly seen when the vertex deviation is derived (this requires the construction of the velocity ellipsoid from radial velocities and longitudinal proper motions, and is interpreted as due to the metal-rich population being in the bar). Babusiaux et al. (2010) explored Baade's window and two other fields on the bulge minor axis, combining proper motions from OGLE-II (Sumi et al., 2004) with radial velocities and metallicities from FLAMES/GIRAFFE at the VLT. They identified two distinct populations: metal-rich stars have bar-like kinematics, while the metal-poor ones are consistent with an old spheroid (or thick disk). This supported the co-existence of both a classical and a pseudo-bulge in the central region of our Galaxy (see Kormendy & Kennicutt, 2004). Subsequent studies have confirmed the different behavior of metal-poor and metal-rich stars in the inner Milky Way (see e.g. Hill et al., 2011; Dekany et al., 2013; Ness et al., 2013; Rojas-Arriagada et al., 2014; Babusiaux, 2016; Kunder et al., 2016; Athanassoula et al., 2017; Rojas-Arriagada et al., 2017; Zoccali et al., 2017; Clarkson et al., 2018; Zoccali et al., 2020; Arentsen et al., 2020; Du et al., 2020; Rojas-Arriagada et al., 2020; Simion et al., 2021; Wylie et al., 2021; Rix et al., 2022). Stellar populations with different chemical compositions will also show different kinematics due to the presence of the bar, a process called kinematic fractionation (Debattista et al., 2017). Our investigation lacks radial velocities, but we are able to explore proper motions and proper motion dispersion versus [Fe/H] over an unprecedented scale, allowing us to study the dependence of kinematics on the metallicity across the spatial extent of the Southern Galactic bulge. Future investigations will benefit from large numbers of stars with spectroscopic abundances and radial velocities, but our work aims to offer some insight into what we can expect from these surveys. There is good reason to suspect that strong chemodynamic correlations exist in our bulge dataset. Johnson et al. (2022) shows a striking trend in which RC giants of increasing metallicity are more concentrated to the Galactic plane; there is no such trend in the radial direction. This striking vertical abundance gradient that steepens at [Fe/H]\(>-0.5\) is consistent with a complex bar with properties that depend on metallicity. The age of the bulge, especially of the metal-rich population, has also been called into question. Early HST-based investigations used proper motion cleaning to find a compact old main sequence turnoff (Ortolani et al., 1993; Kuijken & Rich, 2002; Clarkson et al., 2008, 2011); these studies have been segregated by metallicity (Renzini et al., 2018). 
The apparently old age was confirmed over the wider bulge field population (e.g., Zoccali et al., 2003; Valenti et al., 2013; Surot et al., 2019). While Haywood et al. (2016) have proposed that a bulge population with a complex age distribution might fit a more vertical turnoff, the bulge turnoff morphology has been investigated in multiple fields (e.g., Valenti et al., 2013) and does not exhibit the vertical structure required in this study. A more significant case for young populations is raised by Bensby et al. (2017), who derive spectroscopic ages for microlensed dwarfs in the bulge, finding a large fraction of the suprasolar metallicity stars in the bulge to be \(<\)5 Gyr in age. A new analysis using MIST isochrones by Joyce et al. (2023) suggests the presence of a significantly smaller, but not absent, population of metal-rich stars with ages \(\leq\) 8 Gyr, but largely supports a bulge age more tightly clustered around 10-11 Gyr. Though this latter analysis, based on the same sample of microlensed dwarfs presented in Bensby et al. (2017), is in better agreement with HST-based ages, the tension between the microlensing-derived ages and the HST-/main sequence turnoff-derived ages is not fully resolved. If a substantial fraction of the bar is \(<10\) Gyr (younger than the thick disk) then one might expect the younger ages to be associated with the formation in the disk and to present a chemodynamic correlation that arises from age as well as metallicity. At present, we lack age constraints for individual RC stars, though a rough mapping between [Fe/H] and age can be constructed from the works of Bensby et al. (2017) and Joyce et al. (2023), but our study provides metallicities that can be correlated with kinematics. We can now build on this legacy to investigate the detailed relationship between stellar metallicity and kinematics by constructing a comprehensive sample of RC stars residing within the Milky Way bulge that combines the photometric metallicities and distances for \(\sim 2.6\) million RC stars from Johnson et al. (2022) with the high-quality proper motions from the third data release of _Gaia_ (DR3, Gaia Collaboration et al., 2023; Lindegren et al., 2021). The paper is organized as follows. In Section 2 we introduce the sample of RC stars with BDBS photometry, distances, and metallicities and describe the process of assigning _Gaia_ DR3 proper motions and projected velocities that will be used in this work. In Section 3 we investigate the kinematics of RC stars as a function of metallicity, presenting our results and highlighting the difference between the motion of the metal-poor and metal-rich components. Finally, in Section 4 we discuss our findings, and in Section 5 we summarize our results.

## 2 The Red Clump Star Sample

In this work, we start with the sample of 2.6 million RC stars extracted from BDBS data in Johnson et al. (2020, 2022), spanning the region in Galactic coordinates \((\ell,b)\): \(-10^{\circ}\leq\ell\leq 10^{\circ}\), \(-10^{\circ}\leq b\lesssim-3^{\circ}\) of the southern Galactic bulge1. Thanks to the sensitivity of the near-ultraviolet \(u\) band to the metallicity of the RC stars, Johnson et al. (2020, 2022) derived photometric metallicities for the whole sample, with a typical dispersion of 0.2 dex, employing the dust map constructed in Simion et al. (2017) using infrared VVV data. The distance distribution of the RC sample, derived using BDBS photometry (Johnson et al., 2020, 2022), is shown in the left panel of Fig. 1.
RC distances span a range of [6, 12] kpc, with a median value of 8.83 kpc. The metallicity of the sample, shown in the right panel of Fig. 1, covers a range going from \(-1.5\) to 1 dex, with a median value of \(\rm[Fe/H]\sim-0.04\) dex. The two peaks of the metallicity distribution are at \(\rm[Fe/H]\sim-0.15\) dex and \(\rm[Fe/H]\sim 0.2\) dex. More information on the metallicity distribution function and on the spatial distribution of the RC stars is provided in Johnson et al. (2022), including a description of the red clump color and magnitude selection functions (see their Section 2).

Footnote 1: An electronic table containing the positions, _ugi_-band photometry, extinction, distance, and [Fe/H] values for all stars used here is provided in Table 1 of Johnson et al. (2022).

We cross-match this dataset to _Gaia_ DR3 data, back-propagating _Gaia_ coordinates to the mean BDBS epoch through _Gaia_ proper motions, using a tolerance radius of 1 arcsec (see Marchetti et al., 2022, for further details on the cross-matching procedure). This results in a sample of 2593172 RC stars with BDBS photometry (and therefore photometric distances and metallicities) and _Gaia_ DR3 astrometry (parallaxes and proper motions). We further select sources with a value of the _Gaia_ DR3 Renormalised Unit Weight Error (RUWE) \(<1.4\), to avoid contamination from spurious astrometric solutions (Lindegren et al., 2021; Belokurov et al., 2020; Penoyre et al., 2020). After this selection cut, we are left with a sample of 2315197 RC stars, which will be the main focus of this paper. Thanks to the homogeneous _Gaia_ DR3 proper motions and BDBS metallicities over the large BDBS instrumental footprint, this is an ideal dataset to study the chemo-kinematics of RC stars across the whole southern Galactic bulge. While random and systematic uncertainties in _Gaia_ parallaxes are too large to provide reliable distances for individual stars in the Galactic bulge (e.g. Lindegren et al., 2021), we rely on photometric distances and _Gaia_ DR3 proper motions to investigate the kinematics of the RC sample. We convert Galactic proper motions \((\mu_{\ell*}\equiv\mu_{\ell}\cos b,\ \mu_{b})\) to Galactocentric velocities along longitude \(v_{\ell*}^{\rm GC}\) and latitude \(v_{b}^{\rm GC}\) by subtracting the contribution given by the motion of the Sun:

\[v_{\ell*}^{\rm GC}=v_{\ell*}-U_{\odot}\sin\ell\cos b+V_{\odot}\cos\ell\cos b\,, \tag{1}\]

\[v_{b}^{\rm GC}=v_{b}+W_{\odot}\cos b, \tag{2}\]

where \(v_{i}\) [\(\rm km/s\)] = \(4.74\cdot\mu_{i}\) [mas/yr] \(\cdot\,d\) [kpc] for \(i=\ell*,b\), and \(d\) is the heliocentric distance of the star, derived with BDBS photometry (Johnson et al., 2020). We assume a three-dimensional Cartesian velocity of the Sun \([U_{\odot},V_{\odot},W_{\odot}]=[12.9,245.6,7.78]\) km s\({}^{-1}\) (Reid & Brunthaler, 2004; Drimmel & Poggio, 2018; GRAVITY Collaboration et al., 2018). In the left panel of Fig. 2, we show the distribution of the sample of RC stars in Cartesian Galactocentric coordinates \((X_{\rm GC},Y_{\rm GC})\), where the black cross marks the position of the Galactic Centre, assuming a distance of 8.122 kpc (GRAVITY Collaboration et al., 2018). The \(X_{\rm GC}\) coordinate is aligned with the Sun-Galactic Centre direction, and is positive in the direction of the Galactic Centre, while the \(Y_{\rm GC}\) coordinate is positive along the direction of the Sun's circular rotation in the disk.
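For concreteness, the conversion in Equations (1)-(2) amounts to a few array operations. The following minimal sketch is ours, not code from the paper: `mu_l_star` and `mu_b` are assumed to be _Gaia_ proper motions in mas yr\({}^{-1}\) (with `mu_l_star` already including the \(\cos b\) factor), `dist_kpc` the BDBS photometric distances, and `l`, `b` Galactic coordinates in radians.

```python
import numpy as np

K = 4.74  # km/s per (mas/yr * kpc), as in the text
U_SUN, V_SUN, W_SUN = 12.9, 245.6, 7.78  # adopted solar velocity, km/s

def projected_velocities(mu_l_star, mu_b, dist_kpc, l, b):
    """Galactocentric projected velocities (km/s) following Eqs. (1)-(2)."""
    v_l = K * mu_l_star * dist_kpc  # heliocentric tangential velocities
    v_b = K * mu_b * dist_kpc
    v_l_gc = v_l - U_SUN * np.sin(l) * np.cos(b) + V_SUN * np.cos(l) * np.cos(b)
    v_b_gc = v_b + W_SUN * np.cos(b)
    return v_l_gc, v_b_gc
```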
As already shown by previous works (e.g. Wegg & Gerhard, 2013; Johnson et al., 2022), the over-density at \(Y_{\rm GC}>0\), \(X_{\rm GC}<0\) is due to the orientation of the near side of the bar, which forms an angle of \(\sim 27^{\circ}\) with the line of sight at \(\ell=0\) (Wegg & Gerhard, 2013). If we split our sample in metallicity, using the same bins in [Fe/H] adopted by Johnson et al. (2022), Fig. 3 shows that the distribution of the stars is heavily dependent on their chemical composition. Metal-poor stars ([Fe/H]\(\lesssim-0.5\) dex) are more centrally distributed around the Galactic Centre, while the morphology of more metal-rich stars ([Fe/H]\(\gtrsim-0.5\) dex) shows the typical asymmetry with Galactic longitude caused by the orientation of the bar.

Figure 1: Distance (left panel) and metallicity (right panel) distributions of the sample of 2.6 million RC stars, as derived from BDBS photometry in Johnson et al. (2020, 2022).

## 3 Chemo-kinematics of the Red Clump Stars

In this Section, we inspect the projected kinematics of the RC stars, as a function of position and metallicity, in the southern Galactic bulge. However, we caution that the spatial distributions presented here are heavily affected by our \(\sim\)10-20% distance uncertainties. As Figure 8 of Hey et al. (2023) shows, distance uncertainties of this magnitude distort two- and three-dimensional projections, particularly on the near side of the Galactic Centre, and can smear out otherwise well-defined features. In the central and right panels of Fig. 2, we plot the distribution of RC stars color-coded by Galactocentric velocity along Galactic longitude and latitude, respectively (Equations 1 and 2). The clear asymmetry in the velocity field along longitude with respect to \(Y_{\rm GC}\) is an indication of the presence of the bar, which breaks the axial symmetry of the potential in the inner Galaxy (see also Sanders et al., 2019). The orientation of the \(v_{\ell*}^{\rm GC}=0\) contour line is \(\sim 80^{\circ}\) at \(Y_{\rm GC}=0\), consistent with results from Sanders et al. (2019). As expected, the mean velocity along latitude is \(v_{b}^{\rm GC}\sim 0\), indicating no clear streaming motion above/below the Galactic plane (e.g. Reid & Brunthaler, 2004; Du et al., 2020).

Figure 4: Upper: Angular velocity \(\omega\) as a function of metallicity for the whole sample of RC stars. Uncertainties on the angular velocity are derived by the fitting procedure, while the ones on the x-axis correspond to the \(16^{\rm th}\) and \(84^{\rm th}\) percentiles of the metallicity distribution in each bin. Lower: Ages inferred from the age-metallicity relation described in Joyce et al. (2023) are assigned to the metallicity bins shown above, yielding angular velocity as a function of age. The x-axis is inverted (i.e. ages shown in reverse) for better visual alignment with the panel above.

Figure 3: Same as Fig. 2 (left panel), but for the metallicity bins used in Fig. 19 of Johnson et al. (2022).

Figure 2: Top-down views of the RC stars in Galactocentric Cartesian coordinates. The black cross marks the position of the Galactic Centre, at \((X_{\rm GC},Y_{\rm GC})=(0,0)\). Grey dashed lines correspond to constant values of Galactic longitude (\(\ell=-10^{\circ},-5^{\circ},0^{\circ},5^{\circ},10^{\circ}\)), and grey dashed arcs correspond to constant heliocentric distances of 6, 7, 8, 9, and 10 kpc. Left: Logarithmic density of stars in each bin of position. Middle: Galactic longitudinal velocity. The black contour corresponds to the line of nodes \(v_{\ell*}^{\rm GC}=0\). Right: Galactic latitudinal velocity.
### Proper motion rotation curves as a function of metallicity

_Gaia_ DR3 proper motions can be further used to construct proper motion rotation curves as a function of heliocentric distance. Longitudinal Galactic proper motions provide similar results to line-of-sight velocities in the direction of the Galactic bulge (Du et al., 2020). By plotting the median of the Galactic longitudinal velocity (as given by Eq. (1)) as a function of distance \(d\), we can compute the angular velocity \(\omega\) as the slope of the linear relation (e.g. Du et al., 2020), for different metallicity bins. Binning the stars in metallicity using the same intervals adopted by Johnson et al. (2022), in the upper panel of Fig. 4 we show the resulting \(\omega\) as a function of \(\rm[Fe/H]\) (we report the median metallicity value in each bin) for our sample of RC stars. The uncertainty on \(\omega\) is computed from the linear fit using the method of least squares, taking into account the uncertainties on the velocity (computed by dividing the velocity dispersion by the square root of the number of stars in each metallicity bin). Metal-poor RC stars rotate more slowly, with \(\omega\sim 29\) km s\({}^{-1}\) kpc\({}^{-1}\); the angular velocity then increases significantly at \(\rm[Fe/H]\sim-0.5\) dex, with a maximum value \(\omega=39\) km s\({}^{-1}\) kpc\({}^{-1}\) for \(\rm[Fe/H]\sim-0.2\) dex. The angular velocity starts decreasing again for \(\rm[Fe/H]\gtrsim-0.2\) dex, reaching a value of \(\sim 35\) km s\({}^{-1}\) kpc\({}^{-1}\) in the most metal-rich bin at \(\rm[Fe/H]\gtrsim 0.4\) dex. The lower rotation of metal-poor stars is consistent with several studies of the Galactic bulge (e.g. Ness et al., 2013; Ness & Lang, 2016; Zoccali et al., 2017; Clarkson et al., 2018). We can now use the age-metallicity relation presented in Joyce et al. (2023) to roughly estimate the dependence of the angular velocity on the age of the stars. This is shown in the lower panel of Fig. 4. We do not attempt to compute proper age uncertainties for this analysis, so no horizontal error bars are given (conservatively, one expects uncertainties of roughly \(1.5-2\) Gyr; Joyce et al., 2023). As expected from the monotonic behavior of the adopted relation, the shape of the curve reflects the chemical one. The peak of the angular velocity corresponds to a population with an age of \(\sim 11.8\) Gyr. The oldest stars in our sample have ages comparable to the age of the Universe, while the metal-rich stars are older than 8 Gyr. Thanks to the large spatial coverage of our sample in the southern Galactic bulge, and to the availability of homogeneous all-sky _Gaia_ DR3 proper motions, in Fig. 5 we plot the sky distribution in Galactic coordinates of the angular velocity \(\omega\), for RC stars of all metallicities. We see that \(\omega\) peaks at \(\sim 40\) km s\({}^{-1}\) kpc\({}^{-1}\) at low latitudes along the bulge minor axis, and it decreases to a minimum of \(\sim 25\) km s\({}^{-1}\) kpc\({}^{-1}\) for off-axis fields at higher latitudes. Typical uncertainties from the fitting procedure are \(\sim 3\) km s\({}^{-1}\) kpc\({}^{-1}\), and they do not show any significant dependence on the sky location.
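As a concrete sketch of this fitting procedure, the snippet below estimates \(\omega\) for the stars of one metallicity bin. The function and variable names, the distance bin edges, and the minimum-count guard are our own illustrative choices; the weights use the dispersion divided by \(\sqrt{n}\), here evaluated per distance bin.

```python
import numpy as np

def angular_velocity(v_l_gc, dist_kpc, d_edges=np.arange(6.0, 10.5, 0.5)):
    """Slope (km/s/kpc) of the median v_l^GC vs. heliocentric distance,
    i.e. the angular velocity omega of Section 3.1, with its fit error."""
    centers, med, err = [], [], []
    for lo, hi in zip(d_edges[:-1], d_edges[1:]):
        sel = (dist_kpc >= lo) & (dist_kpc < hi)
        if sel.sum() < 10:  # skip poorly populated distance bins
            continue
        v = v_l_gc[sel]
        centers.append(0.5 * (lo + hi))
        med.append(np.median(v))
        err.append(v.std() / np.sqrt(v.size))
    coeffs, cov = np.polyfit(centers, med, deg=1,
                             w=1.0 / np.asarray(err), cov=True)
    return coeffs[0], np.sqrt(cov[0, 0])  # omega and its uncertainty
```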
A comparison between Fig. 5 here and Fig. 16c in Johnson et al. (2022), which shows the spatial distribution of the difference in \(\rm[Fe/H]\) between the metal-rich peak and the half-power position of the metal-rich tail, indicates that the sky patterns are highly correlated. Both distributions peak along the minor axis close to the plane, and show noticeable enhancements in both values within a region encompassing approximately \(|\ell|<3^{\circ}\) and \(|b|<6^{\circ}\). This observation provides one of the few clear connections between chemistry and kinematics for bulge formation and evolution, as the regions showing the highest angular velocities are also those in which the broadest high-metallicity tails formed. Fig. 6 incorporates the spatial and chemical dependence of the Galactic longitudinal proper motion rotation curves, presenting the resulting angular velocity \(\omega\) as a function of Galactic coordinates, for several metallicity bins chosen to match those in Fig. 19 of Johnson et al. (2022). At the lowest metallicities (\(\rm[Fe/H]\lesssim-0.7\) dex), there is no evidence for a clear, coherent signal over the survey footprint, with a noisy distribution with values ranging from 20 to 50 km s\({}^{-1}\) kpc\({}^{-1}\). On the other hand, for \(\rm[Fe/H]\gtrsim-0.5\) dex, there is a clear continuous pattern over the sky, with a mean value \(\omega\sim 40\) km s\({}^{-1}\) kpc\({}^{-1}\). We observe a clear asymmetry with Galactic longitude, showing lower values of \(\omega\) for positive \(\ell\). We suspect this to be a projection effect due to the orientation of the bar. We do not observe a strong gradient with Galactic latitude. As also evident from Fig. 4, the angular velocities across the sky decrease to \(\sim 30\) km s\({}^{-1}\) kpc\({}^{-1}\) for \(\rm[Fe/H]\gtrsim 0\), confirming the slower rotation of the most metal-rich population observed in Fig. 4.

### Impact of the Galactic bar on the chemo-kinematics of red clump stars

Recent works, combining _Gaia_ DR2 and VVV infrared data, showed the impact of the Galactic bar on the transverse kinematics of bulge stars (Clarke et al., 2019; Sanders et al., 2019). The authors quantified these aspects by investigating the proper motion dispersions, the dispersion ratio, and the correlation between Galactic proper motions as a function of Galactic coordinates across the bulge. They found higher dispersions at low latitudes, a characteristic X-shape structure in the dispersion ratio, and an approximate radial alignment in the proper motions with a clear asymmetry at positive \(\ell\), due to the orientation of the near side of the bar. We can now reproduce these transverse kinematic maps using more precise and accurate _Gaia_ DR3 data, adding the further dimension provided by photometric metallicities. We bin the sample in metallicity using the same bins adopted by Johnson et al. (2022) and in Fig. 6, and in Galactic coordinates by adopting bin sizes of \(0.5^{\circ}\times 0.5^{\circ}\). In each bin, we then compute the Galactic longitudinal proper motion dispersion as:

\[\sigma_{\mu_{\ell*}}(\ell,b,{\rm[Fe/H]})=\sqrt{\langle\mu_{\ell*}^{2}\rangle-\langle\mu_{\ell*}\rangle^{2}}\,, \tag{3}\]

and the correlation between the Galactic proper motions as:

\[\rho_{\ell,b}(\ell,b,{\rm[Fe/H]})=\frac{\langle\mu_{\ell*}\mu_{b}\rangle-\langle\mu_{\ell*}\rangle\langle\mu_{b}\rangle}{\sqrt{\left(\langle\mu_{\ell*}^{2}\rangle-\langle\mu_{\ell*}\rangle^{2}\right)\left(\langle\mu_{b}^{2}\rangle-\langle\mu_{b}\rangle^{2}\right)}}\,. \tag{4}\]
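The per-cell statistics in Equations (3)-(4) reduce to moments of the proper motions falling in each \(0.5^{\circ}\times 0.5^{\circ}\) cell. The sketch below (our own minimal implementation, not code from the paper) computes them for one cell; looping it over the sky grid and metallicity bins produces the maps discussed next.

```python
import numpy as np

def dispersion_and_correlation(mu_l, mu_b):
    """Longitudinal dispersion (Eq. 3), latitudinal dispersion, and the
    proper motion correlation (Eq. 4) for one sky cell, in mas/yr."""
    sigma_l = np.sqrt(np.mean(mu_l**2) - np.mean(mu_l)**2)
    sigma_b = np.sqrt(np.mean(mu_b**2) - np.mean(mu_b)**2)
    cov_lb = np.mean(mu_l * mu_b) - np.mean(mu_l) * np.mean(mu_b)
    return sigma_l, sigma_b, cov_lb / (sigma_l * sigma_b)
```

The dispersion ratio discussed below is then simply `sigma_l / sigma_b` evaluated per cell.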
Fig. 7 shows the resulting proper motion dispersion along Galactic longitude, \(\sigma_{\mu_{\ell*}}\), as a function of sky position, for the same metallicity bins adopted in Fig. 6. We can see that the metal-poor population is kinematically hotter, \(\sigma_{\mu_{\ell*}}\sim 3\) mas yr\({}^{-1}\), which corresponds to \(\sim 115\) km s\({}^{-1}\) at a distance of 8 kpc, while the metal-rich stars show a lower velocity dispersion (e.g. Ness et al., 2013; Athanassoula et al., 2017; Zoccali et al., 2018). In addition, stars with \(\rm[Fe/H]\gtrsim-0.5\) dex exhibit a coherent pattern over the sky, reproducing results by Clarke et al. (2019) and Sanders et al. (2019). This is particularly evident at \(\rm[Fe/H]\sim 0\), where the RC population is kinematically colder at higher latitudes, \(\sigma_{\mu_{\ell*}}\sim 2\) mas yr\({}^{-1}\sim 76\) km s\({}^{-1}\), and the peak is observed on the minor axis towards the centre of the Galaxy because of the Galactic potential well (see also Rattenbury et al., 2007). We find that, for metal-rich stars, there is a strong gradient of the velocity dispersion with latitude, and on the minor axis at low latitudes it becomes higher than (or comparable to) that of the metal-poor stars (consistent with the findings of Zoccali et al. 2017 in the inner bulge). We also observe an asymmetry with respect to \(\ell=0\) for metal-rich stars, with higher values of the dispersion at positive Galactic longitudes.

Figure 5: Angular velocity \(\omega\) as a function of Galactic coordinates in the southern Galactic bulge region, computed using all the RC stars in our sample. The bins have sizes of \(0.5^{\circ}\times 0.5^{\circ}\).

Fig. 8 shows instead the velocity dispersion along Galactic latitude, \(\sigma_{\mu_{b}}\). Once again, we note that metal-poor stars are kinematically hotter, and they do not exhibit a clear coherent pattern over the sky. To better illustrate the different kinematics of stars with different chemistry, in the upper panel of Fig. 9 we plot the velocity dispersion along Galactic longitude (left panel) and Galactic latitude (right panel) as a function of metallicity, for different slices of Galactic latitude \(b\). In both cases, we clearly see that metal-rich stars are kinematically colder (see also Arentsen et al. 2020). For a given metallicity, stars closer to the Galactic plane show higher values of the velocity dispersion (except for the most metal-poor star sample, which also has the largest uncertainties). Finally, we find that the difference in velocity dispersion between the stars at low Galactic latitude and high Galactic latitude increases with metallicity. Following the approach outlined in Section 3.1, we can estimate the dependence of the proper motion dispersions on the ages of the stars, as shown in the lower panel of Fig. 9. Younger stars are kinematically colder, and the highest values of the dispersions (\(\sigma_{\mu_{\ell*}}\sim 3\) mas yr\({}^{-1}\), \(\sigma_{\mu_{b}}\sim 2.75\) mas yr\({}^{-1}\)) are obtained for stars older than 13 Gyr. The results shown in Figures 7-9 closely mirror the metallicity-dependent radial velocity dispersion variations noted by Wylie et al. (2021) in their Figure 26. Those authors found that stars with \(\rm[Fe/H]>-0.5\) had strongly peaked radial velocity dispersions near \(\ell=0\), but only for fields with \(3<|b|<6\).
Additionally, Wylie et al. (2021) showed that the difference in radial velocity dispersion between low- and high-latitude fields increases strongly with increasing \(\rm[Fe/H]\). Therefore, we confirm their result that the more metal-poor stars have flatter velocity dispersion profiles as a function of latitude.

Figure 6: Angular velocity \(\omega\) as a function of Galactic coordinates, for different metallicities of the RC sample. The metallicity bins are the same as in Fig. 3.

In Fig. 10 we plot the dispersion ratio as a function of metallicity and sky position, defined as the ratio between the velocity dispersion along longitude and the one along latitude. Similar to Figure 6 of Sanders et al. (2019), we find a significantly lower dispersion ratio for sight lines near \(0^{\circ}<\ell<+5^{\circ}\) and \(b<-6^{\circ}\) compared to adjacent fields. This feature appears to be related to the influence of the X-shape structure in the bulge and is only prominent at \(\rm[Fe/H]>0\). In Fig. 11, we plot the correlation coefficient between Galactic proper motions (Eq. 4), for the different spatial and metallicity bins. The dipole pattern (which becomes a quadrupole when having access also to the northern Galactic bulge) is a sign of radial alignment towards the Galactic Centre, with a stronger amplitude of the correlation at \(\ell>0\) due to the orientation of the bar (see Clarke et al., 2019; Sanders et al., 2019). Looking at the different metallicity bins, we see that, similarly to Fig. 6 and Fig. 7, the dipole pattern starts to become apparent for \(\rm[Fe/H]\gtrsim-0.7\) dex, with an amplitude \(|\rho_{\ell,b}|\sim 0.1\). The maximum amplitude of \(|\rho_{\ell,b}|\sim 0.2\) is attained for \(\rm[Fe/H]\gtrsim 0.1\) dex. Interestingly, the asymmetry due to geometric effects at positive Galactic longitudes is most evident for the \(\rm[Fe/H]=0.2\) bin, but the absence of BDBS fields at \(\ell>3^{\circ}\), \(b<-8^{\circ}\) hampers the possibility of studying this in more detail.

### Kinematic fractionation

In the previous sections, we noted that the striking transition features due to the presence of the bar are most evident for \(\rm[Fe/H]\gtrsim-0.5\) dex. In this section, we therefore split the sample of RC stars into two large metallicity bins, defining metal-poor stars as RC stars with \(\rm[Fe/H]<-0.5\) dex, and metal-rich RC stars as those with \(\rm[Fe/H]>-0.5\) dex. We will now make a qualitative comparison between our results and the N-body + smoothed-particle-hydrodynamics star-forming simulations of Debattista et al. (2017), following the approach outlined in Gough-Kelly et al. (2022). We note that the metrics introduced in Gough-Kelly et al. (2022) to investigate kinematic fractionation through proper motions in the bulge are defined not in terms of metallicity but of ages, for young (\(<7\) Gyr) and old (\(>9\) Gyr) stars, as ages are more natural units than \(\rm[Fe/H]\) for their simulations, due to the lack of chemical mixing.

Figure 7: Proper motion dispersion along Galactic longitude as a function of Galactic coordinates in the southern bulge, for the metallicity bins considered in this work and in Fig. 19 of Johnson et al. (2022).

To further investigate the impact of the bar on the kinematics of stars with different metallicities, in Fig. 12 we plot the mean proper motion along longitude \(\mu_{\ell*}^{\rm GC}\) (corrected for the motion of the Sun) as a function of Galactocentric Cartesian coordinates
(\(X_{\rm GC},Y_{\rm GC}\)) for metal-poor (\(\rm[Fe/H]<-0.5\) dex, top panel) and metal-rich (\(\rm[Fe/H]>-0.5\) dex, middle panel) stars. Confirming the predictions made by Gough-Kelly et al. (2022) when inspecting the kinematics of young and old stars in N-body + smoothed-particle-hydrodynamics simulations (see their figure 5), we find that the asymmetry in the velocity field around \(\pm Y_{\rm GC}\) is clearly observed in the metal-rich sample, while metal-poor stars show an axisymmetric rotation field. The non-axisymmetric pattern observed for more metal-rich stars is a clear signature of the presence of the bar, as shown by the stronger longitudinal variation of the proper motions compared to the metal-poor stars. To gain more insight into the kinematic differences between metal-rich and metal-poor stars, in the bottom panel of Fig. 12 we plot, in each bin of Galactocentric Cartesian coordinates, the quantity \(\Delta\mu_{\ell*}^{\rm GC}\):

\[\Delta\mu_{\ell*}^{\rm GC}\equiv\langle\mu_{\ell*}^{\rm GC,MR}\rangle-\langle\mu_{\ell*}^{\rm GC,MP}\rangle\,, \tag{5}\]

where \(\langle\mu_{\ell*}^{\rm GC,MR}\rangle\) and \(\langle\mu_{\ell*}^{\rm GC,MP}\rangle\) are, respectively, the mean values of the proper motions along Galactic longitude for metal-rich and metal-poor stars in each spatial bin. We observe a good qualitative agreement with the simulations from Gough-Kelly et al. (2022): the near positive peak in \(\Delta\mu_{\ell*}^{\rm GC}\) is at \((X_{\rm GC},Y_{\rm GC})\sim(-1,0)\) kpc with an extended tail towards negative values of Galactic longitude, while the negative peak around \(Y_{\rm GC}\sim 1\) kpc extends to positive longitudes. We stress here that this is only a qualitative comparison between the observed and predicted trends, and we postpone to further work a more rigorous and careful interpretation of our data in light of realistic models of the Galactic bulge. The simple adopted definitions of metal-poor and metal-rich stars might result in overlapping age distributions, which might cause the differences in the spatial distribution between our data and the clear dipole pattern shown in Gough-Kelly et al. (2022). Finally, we note that Fig. 12 does not change significantly if we consider vertical slices in \(Z_{\rm GC}\).

Figure 8: Same as Fig. 7, but showing the proper motion dispersion along Galactic latitude.

Following Gough-Kelly et al. (2022), we define the separation amplitude \(\xi\) as the integral along the line of sight of the difference in mean proper motion between the metal-rich and metal-poor stars:

\[\xi=\delta d\cdot\sum_{d=d_{1}}^{d_{2}}\Delta\mu_{\ell*}^{\rm GC}(d)\;. \tag{6}\]

To facilitate the comparison with Gough-Kelly et al. (2022), we fix the parameters \(\delta d=0.5\) kpc, \(d_{1}=6\) kpc, \(d_{2}=10\) kpc, and we bin in distance using a fixed bin size of 0.5 kpc. In the top panel of Fig. 13, we plot the separation amplitude \(\xi\) as a function of Galactic coordinates, across the BDBS footprint. The observed projected pattern qualitatively matches the one inferred from simulations (see the middle panel in figure 4 of Gough-Kelly et al. 2022): the separation amplitude \(\xi\) is negative for \(\ell\gtrsim 2.5^{\circ}\), and positive for the other lines of sight. The variations of \(\xi\) across the Galactic bulge are due to the difference in the intrinsic velocity distributions of the metal-poor and metal-rich populations, tracing different kinematic structures.
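To make Equations (5)-(6) concrete, the following sketch evaluates \(\xi\) for a single line of sight. The inputs (longitudinal proper motions and distances of the metal-rich and metal-poor stars in that sight line) and the guard against distance bins lacking either population are our own assumptions.

```python
import numpy as np

def separation_amplitude(mu_l_mr, d_mr, mu_l_mp, d_mp,
                         d1=6.0, d2=10.0, dd=0.5):
    """Separation amplitude xi (Eq. 6): the sum over distance bins of the
    metal-rich minus metal-poor mean longitudinal proper motion (Eq. 5),
    multiplied by the bin width delta d."""
    edges = np.arange(d1, d2 + dd, dd)
    xi = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mr = mu_l_mr[(d_mr >= lo) & (d_mr < hi)]
        mp = mu_l_mp[(d_mp >= lo) & (d_mp < hi)]
        if mr.size and mp.size:  # skip bins missing either population
            xi += dd * (mr.mean() - mp.mean())
    return xi
```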
The presence of negative values of \(\xi\) in our data is likely due to the absence of metal-poor stars in the closest distance bins (see Fig. 12), caused by the longitudinal dependence of the colour cuts implemented in Johnson et al. (2022) to minimize foreground contamination from the stellar disk. We compute the error on the separation amplitude, \(e_{\xi}\), using equation 3 in Gough-Kelly et al. (2022). The second panel of Fig. 13 shows this error as a function of Galactic coordinates. The values of \(e_{\xi}\) are driven by the number of stars in each bin, and we find that the overall distribution loosely traces the underlying one for the metal-rich stars. To quantify the effect of kinematic fractionation in the southern Galactic bulge, we also define the metric \(\delta\mu_{\ell*}\), following Gough-Kelly et al. (2022):

\[\delta\mu_{\ell*}=\langle\mu_{\ell*}^{\rm GC,MR}\rangle_{\rm 8\,kpc}-\langle\mu_{\ell*}^{\rm GC,MP}\rangle_{\rm 8\,kpc}\,, \tag{7}\]

where the mean of the proper motions is computed on a large distance bin along the line of sight, from 7 kpc to 9 kpc. In the third panel of Fig. 13 we plot \(\delta\mu_{\ell*}\) as a function of Galactic coordinates. The large values of \(\delta\mu_{\ell*}\) result from forbidden velocities in the rotation curves (negative longitudinal proper motions at \(\ell>0\), and positive values for \(\ell<0\)) and are deeply connected to the distribution of radial velocities of stars in the bar (Gough-Kelly et al. 2022). A purely axisymmetric distribution of velocities would not produce forbidden velocities, so their non-zero observed values are a strong signature of the presence of the bar (Gough-Kelly et al. 2022). The observed pattern in the third panel of Fig. 13 again matches qualitatively the results from Gough-Kelly et al. (2022). The uncertainty on \(\delta\mu_{\ell*}\), shown in the fourth panel of Fig. 13, follows \(e_{\xi}\), and is again driven by the underlying number density of \(\rm[Fe/H]>-0.5\) dex RC stars in our sample.

Figure 9: Upper: Proper motion dispersion along Galactic longitude (left panel) and Galactic latitude (right panel) as a function of metallicity, for different values of Galactic latitude. Lower: The same, but shown with the age-metallicity substitution from Joyce et al. (2023) described in Section 3.1.

## 4 Discussion and Conclusions

In this work, we have combined _Gaia_ DR3 proper motions with BDBS photometric distances and metallicities to investigate the chemo-kinematics of a sample of 2.6 million RC stars in the southern Galactic bulge. Our main results can be summarised as follows:

* We find that the angular velocity, defined as the slope of the longitudinal proper motion \(\mu_{\ell*}\) curve as a function of distance, is highest at \(\omega\sim 39\) km s\({}^{-1}\) kpc\({}^{-1}\) for stars with \(\rm[Fe/H]\sim-0.25\) dex (see Fig. 4). Metal-poor RC stars exhibit the lowest rotation values (\(\omega\sim 29\) km s\({}^{-1}\) kpc\({}^{-1}\)). Surprisingly, the angular velocity is not a monotonic function of metallicity: it decreases for RC stars with \(\rm[Fe/H]\gtrsim-0.25\) dex, reaching \(\omega\sim 35\) km s\({}^{-1}\) kpc\({}^{-1}\) for \(\rm[Fe/H]\sim 0.5\) dex. When integrated over all metallicities, the angular velocity peaks on the minor axis closer to the plane (Fig. 5). When plotted as a function of both metallicity and sky position (Fig.
6), the angular velocity peaks at low latitudes for stars with \(\rm[Fe/H]\gtrsim-0.5\) dex, with a clear asymmetry with Galactic longitude, possibly due to the orientation of the near side of the bar. Stars with lower metallicities do not exhibit a coherent angular velocity pattern on the sky.
* Proper motion dispersions along Galactic longitude and latitude are maximum towards the Galactic Centre for stars with \(\rm[Fe/H]\gtrsim-0.5\) dex, because of the Galactic potential well (Figs. 7 and 8, respectively). Metal-poor stars are kinematically hotter, with proper motion dispersions \(\sim 3\) mas yr\({}^{-1}\), while the mean dispersion for metal-rich stars is \(\sim 2\) mas yr\({}^{-1}\). If we plot the dispersions as a function of metallicity for different latitude bins (Fig. 9), we see that, for a given metallicity bin, RC stars are kinematically hotter closer to the plane. Also, the spread in dispersion between the stars at low latitudes and high latitudes increases with increasing metallicity.
* The correlation between Galactic proper motions clearly shows a dipole pattern (a quadrupole once the northern bulge is also covered), corresponding to radial alignment, for stars with \(\rm[Fe/H]\gtrsim-0.5\) dex (Fig. 11). The prominence of the correlation amplitude at \(\ell>0^{\circ}\) is a consequence of the orientation of the bar.
* Based on the striking difference in the sky plots, we split our sample of RC stars into metal-poor (\(\rm[Fe/H]<-0.5\) dex) and metal-rich (\(\rm[Fe/H]>-0.5\) dex) stars. We compare our results to the simulations presented in Gough-Kelly et al. (2022), which analyze the predicted proper motion trends for young and old stars in the Galactic bulge. We observe a non-axisymmetric pattern around the Galactic Centre in the proper motions along Galactic longitude only for the metal-rich sample, which is a clear sign of the bar. More generally, we find that the footprint of the bar is clearly present in the metal-rich sample, and totally absent in the metal-poor one.

Our results clearly demonstrate that RC stars with different metallicities have dramatically different kinematics. In particular, metal-rich stars are kinematically colder, and their orbits show a clear imprint of the Galactic bar. On the other hand, metal-poor RC stars have larger velocity dispersions, and exhibit random motions over the bulge, without showing any radial alignment towards the Galactic Centre, or clear asymmetries with Galactic longitude that could be due to the bar's orientation. At the moment, we are not able to assess whether the more metal-poor stars trace a weaker bar, or if they constitute the so-called classical bulge, assembled over cosmic time by galaxy mergers. A more detailed comparison with cosmological simulations is needed to assess the contribution of the different formation scenarios to the observed kinematics of the stellar populations in the Galactic bulge, and we postpone this to a future dedicated study.

Figure 10: Same as Fig. 7, but showing the dispersion ratio, defined as the ratio between the proper motion dispersion along Galactic longitude and the one along Galactic latitude.

Our results also help us interpret age-metallicity relations in the Galactic bulge. Though others have proposed a flat age-metallicity relation in the bulge (e.g., Bensby et al., 2017), the present chemodynamical results support membership of the highest-metallicity stars in the bar rather than the bulge. This hypothesis is more broadly consistent with the results of Joyce et al.
(2023), who found that, though a handful of the highest-metallicity, microlensed subdwarfs identified as belonging to the bulge region may be as young as 2 Gyr, the majority are not. These results, in combination with the kinematic-metallicity correlations presented here, suggest that the most metal-rich stars belong to the bar rather than the bulge. Further, based on Figure 11, the bar orbits become visible for stars with \(\rm[Fe/H]>-0.2\). According to Figure 4, stars in this metallicity range have ages less than 12 Gyr, with the youngest stars having ages around 8.4 Gyr. We note, however, that the bar structure itself may be younger than the stars comprising it. The values of the angular velocity we determine for RC stars are in agreement with findings for RRL stars (Du et al., 2020), but considerably lower than what was previously found for younger bulge stars (e.g. Sanders et al., 2019; Du et al., 2020). In the upcoming years, new data releases by _Gaia_ will provide more precise proper motions thanks to the improved observational baseline. In addition, a longer baseline will also improve the accuracy of the astrometric solution, especially in crowded fields, allowing better control of proper motion systematics. The Vera C. Rubin Observatory will also monitor the Milky Way bulge, and derive proper motions down to the main sequence turnoff (Gonzalez et al., 2018). In addition, large-scale multifiber spectroscopic facilities such as MOONS (Gonzalez et al., 2020) and 4MOST (de Jong et al., 2012) will measure radial velocities for millions of giants in the bulge, allowing a full three-dimensional kinematic analysis when combined with photometric and spectroscopic metallicities.

Figure 11: Correlation between Galactic proper motions as a function of Galactic coordinates in the southern bulge, for the metallicity bins considered in this work, and in Fig. 19 of Johnson et al. (2022).

###### Acknowledgements.

T.M. thanks F. Fragkoudi for interesting discussions. M.J. gratefully acknowledges funding of MATISSE: _Measuring Ages Through Isochrones, Seismology, and Stellar Evolution_, awarded through the European Commission's Widening Fellowship. This project has received funding from the European Union's Horizon 2020 research and innovation programme. Data used in this paper comes from the Blanco DECam Bulge Survey Collaboration. This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A&M University, Financiadora de Estudos e Projetos, Fundação Carlos Chagas Filho de Amparo à Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Científico e Tecnológico and the Ministério da Ciência, Tecnologia e Inovação, the Deutsche Forschungsgemeinschaft, and the Collaborating Institutions in the Dark Energy Survey.
The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgenössische Technische Hochschule (ETH) Zürich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ciències de l'Espai (IEEC/CSIC), the Institut de Física d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians-Universität München and the associated Excellence Cluster Universe, the University of Michigan, the National Optical Astronomy Observatory, the University of Nottingham, the Ohio State University, the OzDES Membership Consortium, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A&M University. Based on observations at Cerro Tololo Inter-American Observatory (2013A-0529, 2014A-0480; PI: Rich), National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. _Software used:_ numpy (Harris et al., 2020), matplotlib (Hunter, 2007), scipy (Virtanen et al., 2020), astropy (Astropy Collaboration et al., 2013, 2018), TOPCAT (Taylor, 2005, 2006)
2308.12670
Optimal data pooling for shared learning in maintenance operations
We study optimal data pooling for shared learning in two common maintenance operations: condition-based maintenance and spare parts management. We consider a set of systems subject to Poisson input -- the degradation or demand process -- that are coupled through an a-priori unknown rate. Decision problems involving these systems are high-dimensional Markov decision processes (MDPs) and hence notoriously difficult to solve. We present a decomposition result that reduces such an MDP to two-dimensional MDPs, enabling structural analyses and computations. Leveraging this decomposition, we (i) demonstrate that pooling data can lead to significant cost reductions compared to not pooling, and (ii) show that the optimal policy for the condition-based maintenance problem is a control limit policy, while for the spare parts management problem, it is an order-up-to level policy, both dependent on the pooled data.
Collin Drent, Melvin Drent, Geert-Jan van Houtum
2023-08-24T09:27:38Z
http://arxiv.org/abs/2308.12670v2
# Optimal data pooling for shared learning in maintenance operations ###### Abstract We study optimal data pooling for shared learning in two common maintenance operations: condition-based maintenance and spare parts management. We consider a set of systems subject to Poisson input - the degradation or demand process - that are coupled through an a-priori unknown rate. Decision problems involving these systems are high-dimensional Markov decision processes (MDPs) and hence notoriously difficult to solve. We present a decomposition result that reduces such an MDP to two-dimensional MDPs, enabling structural analyses and computations. Leveraging this decomposition, we (i) demonstrate that pooling data can lead to significant cost reductions compared to not pooling, and (ii) show that the optimal policy for the condition-based maintenance problem is a control limit policy, while for the spare parts management problem, it is an order-up-to level policy, both dependent on the pooled data. keywords: condition-based maintenance, data pooling, Bayesian learning, spare parts, optimal policy ## 1 Introduction Unplanned downtime of advanced technical systems such as aircraft, lithography systems, or rolling stock, is extremely costly for companies that rely on these systems in their primary processes. As such, these companies typically have agreements with maintenance service providers - external or internal - to ensure sufficiently high availability of their systems. Recent advancements in information technology along with continuous reductions in costs of sensors have led to ample opportunities for service providers to improve their maintenance operations [21]. Indeed, modern systems are now increasingly equipped with sensors that relay degradation data of critical components in real-time to maintenance decision-makers. This data is useful for inference of system degradation behavior; however, the amount of data that each such system generates to predict failures of a particular component is scarce, especially for newly introduced systems. Maintenance service providers typically maintain several systems of the same type (e.g. similar systems for the same customer at different locations, or similar systems for different customers). At the beginning of the life-cycle of a newly introduced system, the maintenance service provider thus faces a setting where (i) multiple systems of the same type generate a steady stream of degradation data, but at the same time, (ii) each such system alone has not yet generated sufficient amounts of data. A prime example of this can be found in the semiconductor industry, where the original equipment manufacturer itself is often also responsible for maintaining its lithography systems after they are sold. Upon the introduction of a new generation of lithography system in the field, many critical components in this system are also used for the first time, and hence no historical degradation data is available [11]. For the settings described above, it is evident that pooling degradation data from multiple systems can lead to cost reductions in maintenance operations. However, it remains unclear how we can precisely quantify these cost reductions, especially when we are interested in optimal decisions and the state space of the corresponding Markov decision process (MDP) thus becomes large. In this paper we address this question. 
More specifically, we consider a maintenance service provider that is responsible for maintaining multiple systems of the same type at different locations or for different customers. These systems are equipped with sensory technology that relays degradation data in real-time to the service provider. As these systems are used for the first time, there is only limited information available per system at the start of their life-cycle. We consider a single component that is present in the configuration of all systems. These components deteriorate according to a Poisson process with the same but unknown rate. As the components are critical, the systems fail whenever the component's degradation reaches a certain failure threshold. Such failures can be prevented by performing preventive maintenance, which is cheaper than corrective replacement upon failure, as the latter generally leads to costly unplanned downtime. The maintenance service provider must periodically decide, based on the state of all systems and the accumulated data, whether or not to perform preventive maintenance on each system, thereby trading off costly premature interventions against costly tardy replacements. Systems are homogeneous with respect to the unknown deterioration rate, but are otherwise heterogeneous (i.e. in costs and failure thresholds). We endow the unknown rate with a prior distribution and propose a Bayesian procedure that is able to pool all data and learn this rate jointly on-the-fly as data becomes available. We model this decision problem as a Bayesian MDP for which the optimal policy - in theory - can be computed through standard methods. However, because both the action and state space grow exponentially in the number of systems, this MDP will quickly suffer from the curse of dimensionality, making it impossible to assess the value of optimal data pooling. As a remedy, we establish a novel decomposition result that reduces this high-dimensional MDP to multiple two-dimensional MDPs that permit structural analyses and computations. When components have constant failure rates, maintenance service providers typically replace these components with new spares only correctively upon failure, i.e. they apply repair-by-replacement. The underlying spare parts inventory system responsible for supplying these spares then largely determines the availability of the technical systems. As an extension, we will show that our decomposition result also applies to such a spare parts inventory system consisting of multiple local warehouses that keep spares for the same critical component whose failure rate is unknown. Sequential Bayesian learning based on sensory data stemming from systems has been used extensively in the maintenance literature to study optimal maintenance decision-making when relevant parameters are a-priori unknown (e.g. [12; 8; 9]), but exclusively for single-component systems in isolation (we refer to [6] for a comprehensive overview of the area). This makes sense when the unknown parameter is unique to the specific system in use. However, as we argued above, in practice parameters may be the same for multiple systems of the same type. When a maintenance service provider maintains several systems of the same type, as we consider in this paper, it is natural to pool data stemming from all these systems to jointly learn the common parameter. The benefit of pooling has been extensively studied in many application domains, yet almost exclusively related to pooling of physical resources.
Notable examples include inventory pooling in inventory networks (see, e.g., [13]) and pooling of server capacity in queuing networks (see, e.g., [20]). Recently, researchers have started to investigate the benefits of pooling data, mainly driven by the benefits of pooling physical resources (see, e.g., [2; 15]). Within the maintenance literature, only two works exist on data pooling for learning parameters [7; 11]. [7] investigates the benefits of combining data from a set of heterogeneous machines in the context of time-based preventive maintenance. The authors propose a method in which limited data stemming from multiple systems can be aggregated such that it can be utilized for selecting a periodic interval at which preventive maintenance is performed for each individual system. [11] exploits the use of data pooling to determine whether a set of systems stems from a so-called weak or strong population, where the former has lifetimes that are stochastically smaller than the latter. Unlike [7], who propose a static estimation procedure based on historical pooled data, [11] build a partially observable MDP that sequentially learns as more data becomes available. They numerically show - only for small instances due to the curse of dimensionality - that data pooling can lead to savings of up to 14% compared to not pooling data. We also learn from pooled data in a dynamic, sequential way, but circumvent the resulting curse of dimensionality by leveraging our new decomposition result. Both [7] and [11] pool data to learn a time-to-failure model in a time-based maintenance setting, while we focus on learning a degradation model in a condition-based maintenance setting. The main contributions of this paper are as follows:

1. We formulate the problem of optimally maintaining \(N\) systems with a common, unknown deterioration rate over a finite lifespan \(T\) as a finite horizon Bayesian MDP in which data is pooled for shared learning. This formulation suffers from the well-known curse of dimensionality: The cardinality of both the action and state space grows exponentially in \(N\). As a remedy, we provide a new decomposition result that establishes the equivalence between the original MDP and \(N\) two-dimensional MDPs with a binary action space, each focused on an individual system.
2. Using the decomposition, we are able to show that the optimal policy of each individual system has a control limit structure, where the control limit depends on the pooled data obtained from all systems. Perhaps counterintuitively, we show that this optimal control limit is not monotone in general. Although the control limit typically decreases first, it eventually increases and converges to the failure level as the pooled data grows very large, implying that preventive maintenance is never optimal in that asymptotic regime.
3. We investigate numerically the savings that can be attained by pooling data to learn the a-priori unknown deterioration rate, while optimally maintaining the systems. We find that the savings can be significant, even for small values of \(N\), and that the exact magnitude of these savings largely depends on the magnitude of the uncertainty in the parameter (measured by the variation of the initial prior distribution). When there is high uncertainty, huge savings of close to 57% can be realized on average, while these savings become almost negligible when the uncertainty is low.
4.
We finally demonstrate the general applicability of our decomposition result by applying it to a spare parts inventory system consisting of multiple local warehouses where a common, but unknown failure rate needs to be learned. For this setting, we establish the optimality of monotone order-up-to policies, where the optimal order-up-to levels are non-decreasing in the data obtained from all local warehouses.

The remainder is organized as follows. We provide a model description in Section 2. In Section 3, we formulate the problem as a Bayesian MDP and we show that it can be decomposed into \(N\) alternative MDPs. We present some structural properties of both the expected cost and the optimal policy of the alternative MDP in Section 4. In Section 5, we report on an extensive numerical study that highlights the benefit of pooling data. In Section 6, we apply our decomposition result to a set of spare parts inventory systems. Finally, Section 7 provides concluding remarks.

## 2 Model description

We consider a set of \(N\geq 1\) systems subject to damage accumulation due to random shocks that arrive over time. Random shock degradation is a common assumption in the maintenance literature (see, e.g., [17; 19]) that has been validated by practice-based research [9]. We remark that although data pooling only has value when \(N>1\), the analysis in this paper also holds for \(N=1\). The set of all systems is denoted by \(\mathcal{N}\), i.e. \(\mathcal{N}=\{1,\ldots,N\}\). We assume that each system has a critical component such that the system breaks down whenever this component fails. The deterioration processes of these components are modeled as independent Poisson processes with the same rate \(\lambda\), denoted with \(\{X_{i}(t),t\geq 0\}\), with \(X_{i}(0)=0\), for \(i\in\mathcal{N}\). A component of system \(i\in\mathcal{N}\) deteriorates until its deterioration level reaches or crosses a finite deterministic failure threshold, denoted with \(\xi_{i}\in\mathbb{N}_{+}\), where \(\mathbb{N}_{+}\triangleq\{1,2,\ldots\}\), after which the component has failed. This failure threshold \(\xi_{i}\) is essentially the maximum physical capacity of a component to withstand the accumulated damage and under which system \(i\in\mathcal{N}\) still adequately performs its function. In most practical situations, components of the same type will have the same failure threshold \(\xi_{i}\). However, to allow for the setting in which components have different capacities to withstand deterioration - which is reasonable when some components are of better quality than others - we let the failure threshold \(\xi_{i}\) depend on system \(i\in\mathcal{N}\). Observe that the same failure threshold for each component is of course a special case of this general set-up. The deterioration levels are monitored at equally spaced decision epochs, though failure moments can happen at any point in time (i.e. not only at decision epochs). Replacing only at decision epochs is a reasonable assumption given that critical components in these systems typically have mean lifetimes ranging from 1 to 10 years, while maintenance decisions are made much more frequently, often on a daily to weekly basis [22; 18]. For convenience, we rescale time such that the time between two decision epochs equals 1. If at a decision epoch a component of system \(i\in\mathcal{N}\) has failed, it needs to be replaced correctively at cost \(c_{u}^{i}>0\).
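To fix ideas, the following minimal sketch simulates these shock-driven deterioration paths under a corrective-only policy. The parameter values (\(N\), \(\lambda\), \(T\), and the thresholds \(\xi_{i}\)) are illustrative choices of ours, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative parameters: N systems, true rate lam, horizon T epochs.
N, lam, T = 3, 1.5, 20
xi = np.array([8, 10, 10])          # per-system failure thresholds

x = np.zeros(N, dtype=int)          # deterioration levels X_i(t)
for t in range(1, T + 1):
    x += rng.poisson(lam, size=N)   # Poisson shock increments over one period
    failed = x >= xi                # components at or above their threshold
    x[failed] = 0                   # corrective replacement resets them to 0
```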
Such a failure can be prevented by performing a preventive replacement, which costs \(c_{p}^{i}>0\), with \(c_{p}^{i}<c_{u}^{i}\) for all \(i\in\mathcal{N}\). Corrective maintenance is more expensive because it includes costs caused by a component failure in addition to the costs related to the replacement (e.g. unplanned downtime costs). Both replacements involve a new component that starts deteriorating again from level 0 according to a Poisson process with rate \(\lambda\), that is, \(\{X_{i}(t),t\geq 0\}\) is reset to \(X_{i}(0)=0\). We assume that replacement times are negligible. This is a reasonable assumption given the efficiency of replacing old components with new ones, which usually takes only a few minutes to an hour - significantly shorter than the time between consecutive decision epochs. The systems have a common, finite lifespan of length \(T\in\mathbb{N}_{+}\) time units. This lifespan represents the time from their introduction until they are taken out of service, with typical durations ranging from 10 to 30 years [22]. We let this lifespan consist of \(T\) discrete time steps corresponding to the intervals between consecutive decision epochs. The maintenance service provider, responsible for maintaining the set of \(N\) systems, seeks to minimize the total expected maintenance costs - due to both corrective and preventive replacements of components - over the systems' lifespan. In dealing with this optimization problem, the maintenance service provider faces another layer of uncertainty in addition to the random shock arrivals. That is, the components used for all replacements always have the same rate, but this rate is a-priori unknown and needs to be inferred based on the observations of the deterioration processes throughout their lifespan. Since all components have the same rate, the maintenance service provider can pool and utilize all accumulated data together in real-time when inferring this unknown rate. To this end, we adopt a Bayesian approach and treat the unknown rate \(\lambda\) as a random variable denoted with \(\Lambda\). Upon the start of operating all systems, at \(t=0\), \(\Lambda\) is modeled by a Gamma distribution with shape parameter \(\alpha_{0}\) and rate parameter \(\beta_{0}\). The subscript notation reflects that this corresponds to \(t=0\); we adopt this notation in the remainder of this paper. Thus, at \(t=0\), the density function of \(\Lambda\) is equal to

\[f_{\Lambda}(\lambda;\alpha_{0},\beta_{0})=\frac{\beta_{0}^{\alpha_{0}}\,\lambda^{\alpha_{0}-1}e^{-\beta_{0}\lambda}}{\Gamma(\alpha_{0})}\quad\text{ for }\lambda>0,\quad\alpha_{0},\beta_{0}>0,\]

where \(\Gamma(\cdot)\) denotes the Gamma function. Estimation procedures are available in the literature for obtaining the parameters of this initial belief based on expert knowledge or historical data (see, e.g., [1; 9]). Suppose that at decision epoch \(t\in\mathbb{N}_{+}\), we have observed a cumulative amount of \(k\) deterioration increments from all installed components. As degradation is modeled by a Poisson process, which is a non-decreasing, integer-valued process, we know that the degradation increments are non-negative and integer-valued as well. Hence, we know that the cumulative sum of all deterioration increments from all installed components, \(k\), will always be a non-negative integer. Our choice for the Gamma distribution is not only empirically grounded (e.g. [1; 9]), but also mathematically convenient and therefore quite customary in the literature.
Indeed, it is well-known that the Gamma distribution is a conjugate prior for the Poisson distribution, which implies that the new posterior distribution describing our belief of \(\Lambda\) is again a Gamma distribution but with updated parameters (see, e.g., [14], Chapter 2):

\[\alpha_{t}=\alpha_{0}+k\quad\text{and}\quad\beta_{t}=\beta_{0}+N\cdot t. \tag{1}\]

Observe that from the updating scheme in Equation (1), it is immediately clear that the data stemming from all systems is pooled for learning the unknown rate \(\lambda\) that the systems have in common. In Bayesian terminology, \(k\) is the sufficient statistic (which is thus linear in the observations) and \(N\cdot t\) is the total number of observations at decision epoch \(t\). At each decision epoch, based on her current belief of \(\Lambda\), the maintenance service provider wishes to predict the future evolution of the deterioration of each component so that she can decide upon potential replacements. This prediction is encoded in the posterior predictive distribution. For this Gamma-Poisson model, it is well-known that the posterior predictive distribution is a Negative Binomial distribution (see, e.g., [14]). Specifically, given parameters \(\alpha_{t}\) and \(\beta_{t}\), the deterioration increment (i.e. \(X_{i}(t+1)-x_{i}(t)\), with \(x_{i}(t)\) the current deterioration at decision epoch \(t\)) of a component at system \(i\) at the next decision epoch, denoted with \(Z_{i}\), is Negative Binomially distributed with parameters

\[r=\alpha_{t}\quad\text{and}\quad p=\frac{\beta_{t}}{\beta_{t}+1}, \tag{2}\]

where \(r\) is the number of successes and \(p\) is the success probability, so that \(Z_{i}\) can be interpreted as the number of failures until the \(r^{th}\) success. In the remainder we use the notation \(Z\sim NB(r,p)\) to denote that \(Z\) is a Negative Binomially distributed random variable with parameters \(r\) and \(p\). Equation (2) together with the updating scheme in (1) can be used to construct, at each decision epoch, an updated posterior predictive distribution of the next deterioration increments in real-time based on the observed data. Since the posterior predictive distributions of the deterioration increments of each system are fully described by only the current decision epoch \(t\) and the cumulative amount of deterioration increments \(k\), the process is Markovian. This allows us to formulate the optimization problem as a finite horizon (with length \(T\)) MDP equipped with the state variable \(k\) for Bayesian inference of the unknown rate. Before doing so, we end this section with an important result that establishes a stochastic ordering property of the posterior predictive distribution \(Z\) (for brevity we drop the dependence on \(k\), \(N\) and \(t\)) in the cumulative amount of deterioration increments \(k\) when everything else is fixed.

**Lemma 1**.: _The posterior predictive random variable \(Z\) is stochastically increasing in \(k\) in the usual stochastic order._

Proof.: See Appendix A.1.

Lemma 1 implies that if the sum of observed deterioration increments increases, and all else is fixed, then the next random deterioration increments are more likely to take on higher values. This is also intuitive since the mean increment (\(\frac{\alpha_{t}}{\beta_{t}}\)) increases in \(k\), see Equation (1).
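The conjugate updating in Equations (1)-(2) is straightforward to implement. The sketch below, with our own function names and arbitrary example numbers, returns the posterior parameters and the one-period predictive distribution via `scipy.stats.nbinom`, which, consistent with the text, counts failures before the \(r^{th}\) success.

```python
from scipy.stats import nbinom

def posterior(alpha0, beta0, k, N, t):
    """Gamma posterior parameters after observing a cumulative sum of
    deterioration increments k across N systems up to epoch t (Eq. 1)."""
    return alpha0 + k, beta0 + N * t

def predictive_increment(alpha_t, beta_t):
    """One-period posterior predictive increment Z ~ NB(r, p) of Eq. (2)."""
    return nbinom(alpha_t, beta_t / (beta_t + 1.0))

# Example: prior Gamma(2, 4), N = 5 systems, k = 12 increments by t = 3.
a, b = posterior(2.0, 4.0, k=12, N=5, t=3)
Z = predictive_increment(a, b)
print(Z.mean())  # equals alpha_t / beta_t, the posterior mean rate
```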
## 3 Markov decision process formulation

We will now formulate the problem described in the previous section as an MDP. The state space of the MDP is represented by the set \(\mathcal{S}\triangleq\mathbb{N}_{0}^{N+1}\), where \(\mathbb{N}_{0}\triangleq\mathbb{N}_{+}\cup\{0\}\) denotes the set of non-negative integers. For a given state \((\mathbf{x},k)\in\mathcal{S}\), \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{N})\) represents the vector of all deterioration levels, and \(k\) denotes the sum of all deterioration increments. Recall that as we are dealing with Poisson degradation, both the deterioration levels and the sum of deterioration increments are non-negative and integer-valued.

For a given state \((\mathbf{x},k)\in\mathcal{S}\), let \(\mathcal{A}(\mathbf{x})\) denote the action space. For any action \(\mathbf{a}=(a_{1},a_{2},\ldots,a_{N})\in\mathcal{A}(\mathbf{x})\), \(a_{i}\) represents the action per system, with \(a_{i}\in\{0,1\}\) when \(x_{i}<\xi_{i}\) and \(a_{i}=1\) otherwise. Here, \(a_{i}=0\) corresponds to taking no action and \(a_{i}=1\) corresponds to performing maintenance on the component of system \(i\). This implies that if the critical component of system \(i\) has failed (i.e. \(x_{i}\geq\xi_{i}\)), then the maintenance service provider must (correctively) replace it. For all components that have not failed, the maintenance service provider can choose to either preventively replace them, or do nothing and continue to the next decision epoch.

Given the state \((\mathbf{x},k)\in\mathcal{S}\) and an action \(\mathbf{a}=(a_{1},a_{2},\ldots,a_{N})\in\mathcal{A}(\mathbf{x})\), the maintenance service provider incurs a direct cost, denoted by \(C(\mathbf{x},\mathbf{a})\), equal to \[C(\mathbf{x},\mathbf{a})\triangleq\sum_{i\in\mathcal{N}}\left(a_{i}(1-\mathbb{I}_{i}(\mathbf{x}))c_{p}^{i}+\mathbb{I}_{i}(\mathbf{x})c_{u}^{i}\right), \tag{3}\] where \(\mathbb{I}_{i}(\mathbf{x})\) is an indicator function that indicates whether the component of system \(i\) has failed in the deterioration vector \(\mathbf{x}\); that is, \(\mathbb{I}_{i}(\mathbf{x})=0\) if \(x_{i}<\xi_{i}\) and \(\mathbb{I}_{i}(\mathbf{x})=1\) otherwise.

Let \(V_{t}^{N}(\mathbf{x},k)\) denote the optimal expected total cost over decision epochs \(t,t+1,\ldots,T\), starting from state \((\mathbf{x},k)\in\mathcal{S}\), and let the terminal cost, \(V_{T}^{N}(\mathbf{x},k)\), be equal to the function \(C(\mathbf{x})\triangleq\sum_{i\in\mathcal{N}}\mathbb{I}_{i}(\mathbf{x})c_{u}^{i}\) for all \(k\). This terminal cost function assigns a corrective maintenance cost to failed components, while no costs are incurred for non-failed components. Then, by the principle of optimality, \(V_{t}^{N}(\mathbf{x},k)\) satisfies the following recursive Bellman optimality equations \[V_{t}^{N}(\mathbf{x},k)=\min_{\mathbf{a}\in\mathcal{A}(\mathbf{x})}\left\{C(\mathbf{x},\mathbf{a})+\mathbb{E}_{\mathbf{Z}}\left[V_{t+1}^{N}\left(\mathbf{x}^{\prime}+\mathbf{Z},k+\sum_{i\in\mathcal{N}}Z_{i}\right)\right]\right\}, \tag{4}\] where \(\mathbf{Z}=(Z_{1},Z_{2},\ldots,Z_{N})\) is an \(N\)-dimensional random vector with \(Z_{i}\sim NB\big(\alpha_{0}+k,\frac{\beta_{0}+N\cdot t}{\beta_{0}+N\cdot t+1}\big)\) (all \(Z_{i}\)'s are independent and identically distributed), \(\mathbb{E}_{\mathbf{Z}}\) denotes that the expectation is taken with respect to \(\mathbf{Z}\), and \(\mathbf{x}^{\prime}=\left(x_{1}^{\prime},x_{2}^{\prime},\ldots,x_{N}^{\prime}\right)\) with \[x_{i}^{\prime}=\begin{cases}x_{i}&\text{if }a_{i}=0,\\ 0&\text{if }a_{i}=1.\end{cases} \tag{5}\] We also refer to \(V_{t}^{N}(\mathbf{x},k)\) as the value function of the original MDP.
The first part between the brackets is the direct cost, while the second part is the expected future cost of taking action \(\mathbf{a}\) in state \((\mathbf{x},k)\). Specifically, each component's deterioration accumulates further according to the posterior predictive distribution that corresponds to state \((\mathbf{x},k)\), and \(k\) increases with the sum of all those increments. Systems that are maintained start with an as-good-as-new component, which is governed by the auxiliary vector \(\mathbf{x}^{\prime}\) that ensures \(x_{i}^{\prime}=0\) when \(a_{i}=1\) (see Equation (5)). The formulation in (4) shows that the learning process about the unknown rate \(\lambda\) is pooled through the evolution of the common state variable \(k\), while the future evolution of all individual deterioration processes depends on all pooled information and the parameter \(N\). The existence of an optimal policy in this setting is guaranteed, see e.g., Proposition 3.4 of [3].

Observe that the minimum total expected cost for \(N\) systems over the complete lifespan of length \(T\) is given by \(V_{0}^{N}(\mathbf{0},0)\) (\(\mathbf{0}\) denotes the \(N\)-dimensional zero vector), which can be found by solving Equation (4) via backward induction. It is however clear from the formulation in (4) that, as the number of systems grows, the problem will increasingly suffer from the curse of dimensionality: The cardinality of both the action and state space grows exponentially in \(N\). Instead of solving (4) (referred to as the original MDP) directly, we will therefore construct an alternative MDP and show that the original MDP can be decomposed into \(N\) of these alternative MDPs: One for each system \(i\in\mathcal{N}\). This decomposition is imperative as it allows us to (i) analyze the benefits of pooled learning when \(N\) is relatively large without suffering from the curse of dimensionality, and (ii) establish structural properties of the optimal policy.

To this end, let \(\tilde{V}_{t}^{N,i}(x,k)\) denote the optimal expected total cost for system \(i\in\mathcal{N}\), over decision epochs \(t,t+1,\ldots,T\), starting from state \((x,k)\in\mathbb{N}_{0}^{2}\), and let the terminal cost, \(\tilde{V}_{T}^{N,i}(x,k)\), be equal to the function \(C_{i}(x)\triangleq\mathbb{I}_{i}(x)c_{u}^{i}\) for all \(k\). Then, \(\tilde{V}_{t}^{N,i}(x,k)\) satisfies the following recursive Bellman optimality equations \[\tilde{V}_{t}^{N,i}(x,k)=\min_{a\in\mathcal{A}(x)}\left\{C_{i}(x,a)+\mathbb{E}_{(Z,K)}\bigg[\tilde{V}_{t+1}^{N,i}\Big(x\cdot(1-a)+Z,k+Z+K\Big)\bigg]\right\}, \tag{6}\] where \(Z\sim NB\big(\alpha_{0}+k,\frac{\beta_{0}+N\cdot t}{\beta_{0}+N\cdot t+1}\big)\), \(K\sim NB\big((N-1)\cdot(\alpha_{0}+k),\frac{\beta_{0}+N\cdot t}{\beta_{0}+N\cdot t+1}\big)\), \(\mathbb{E}_{(Z,K)}\) denotes that the expectation is taken with respect to \(Z\) and \(K\), and \[C_{i}(x,a)\triangleq a(1-\mathbb{I}_{i}(x))c_{p}^{i}+\mathbb{I}_{i}(x)c_{u}^{i}. \tag{7}\] The indicator functions and actions (spaces) are as defined before. It is noteworthy that the formulation in (6) in fact resembles a single-component optimization problem in isolation, where the transition probabilities depend on both the parameter \(N\) and the state variable \(k\). The evolution of the state variable \(k\) depends on the random deterioration increment of the component itself (\(Z\)), but it also accounts for the evolution of the other components through the random variable \(K\).
Below we present the decomposition result, which establishes that the value function of the original MDP is the sum of all \(N\) value functions of the alternative MDPs.

**Theorem 1**.: _For each \(t\in\{0,1,\ldots,T\}\), we have_ \[V_{t}^{N}(\mathbf{x},k)=\sum_{i\in\mathcal{N}}\tilde{V}_{t}^{N,i}(x_{i},k),\quad\text{for all }(\mathbf{x},k)\in\mathcal{S}.\]

Proof.: We prove the statement using induction on \(t\). Note that the terminal values can be decomposed: \[V_{T}^{N}(\mathbf{x},k)=C(\mathbf{x})=\sum_{i\in\mathcal{N}}\mathbb{I}_{i}(\mathbf{x})c_{u}^{i}=\sum_{i\in\mathcal{N}}\mathbb{I}_{i}(x_{i})c_{u}^{i}=\sum_{i\in\mathcal{N}}\tilde{V}_{T}^{N,i}(x_{i},k),\] so that the statement trivially holds for the base case \(T\). Assume that the statement holds for some \(t+1\), \(0<t+1\leq T\); we will show that the statement then also holds for \(t\). We have, by Equation (4), \[\begin{split} V_{t}^{N}(\mathbf{x},k)&=\min_{\mathbf{a}\in\mathcal{A}(\mathbf{x})}\left\{C(\mathbf{x},\mathbf{a})+\mathbb{E}_{\mathbf{Z}}\Big[V_{t+1}^{N}\Big(\mathbf{x}^{\prime}+\mathbf{Z},k+\sum_{j\in\mathcal{N}}Z_{j}\Big)\Big]\right\}\\ &\overset{(a)}{=}\min_{\mathbf{a}\in\mathcal{A}(\mathbf{x})}\bigg\{\sum_{i\in\mathcal{N}}C_{i}(x_{i},a_{i})+\mathbb{E}_{\mathbf{Z}}\Big[V_{t+1}^{N}\Big(\mathbf{x}^{\prime}+\mathbf{Z},k+\sum_{j\in\mathcal{N}}Z_{j}\Big)\Big]\bigg\}\\ &\overset{(b)}{=}\min_{\mathbf{a}\in\mathcal{A}(\mathbf{x})}\bigg\{\sum_{i\in\mathcal{N}}C_{i}(x_{i},a_{i})+\mathbb{E}_{\mathbf{Z}}\Big[\sum_{i\in\mathcal{N}}\tilde{V}_{t+1}^{N,i}\Big(x_{i}^{\prime}+Z_{i},k+\sum_{j\in\mathcal{N}}Z_{j}\Big)\Big]\bigg\}\\ &\overset{(c)}{=}\min_{\mathbf{a}\in\mathcal{A}(\mathbf{x})}\bigg\{\sum_{i\in\mathcal{N}}C_{i}(x_{i},a_{i})+\sum_{i\in\mathcal{N}}\mathbb{E}_{\mathbf{Z}}\Big[\tilde{V}_{t+1}^{N,i}\Big(x_{i}^{\prime}+Z_{i},k+Z_{i}+\sum_{j\in\mathcal{N}\setminus\{i\}}Z_{j}\Big)\Big]\bigg\}\\ &\overset{(d)}{=}\sum_{i\in\mathcal{N}}\min_{a_{i}\in\mathcal{A}(x_{i})}\left\{C_{i}(x_{i},a_{i})+\mathbb{E}_{(Z_{i},K)}\Big[\tilde{V}_{t+1}^{N,i}\Big(x_{i}(1-a_{i})+Z_{i},k+Z_{i}+K\Big)\Big]\right\}\\ &=\sum_{i\in\mathcal{N}}\tilde{V}_{t}^{N,i}(x_{i},k),\end{split}\] where \((a)\) is because the direct costs can be decomposed (see (3) and (7)), \((b)\) is due to the induction hypothesis, \((c)\) is because of the linearity of expectation and extracting \(Z_{i}\) from the summation, \((d)\) is because the sum of \(N-1\) independent Negative Binomially distributed random variables with \(r=\alpha_{0}+k\) and \(p=\frac{\beta_{0}+N\cdot t}{\beta_{0}+N\cdot t+1}\) is again Negative Binomially distributed with the same \(p\) but with \(r=(N-1)\cdot(\alpha_{0}+k)\) (see, e.g., [5, Chapter 6]), and the last equality follows from Equation (6).

The decomposition in Theorem 1 reduces the computational burden of solving (4) significantly. It collapses the original, high-dimensional MDP into \(N\) 2-dimensional MDPs with a binary action space, each with their own cost structure and failure threshold, while still taking into account pooled learning across the \(N\) systems. Besides reducing this computational burden, the decomposition result also eases the process of establishing structural properties on a system level, which is the topic of the next section.
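Step (d) of the proof hinges on the Negative Binomial family being closed under convolutions (for a common \(p\)). A quick Monte Carlo sanity check of this fact, with illustrative values for \(N\), \(r\), and \(p\):

```python
import numpy as np
from scipy.stats import nbinom

rng = np.random.default_rng(0)
N, r, p = 5, 4.0, 0.8
# Sum of N-1 i.i.d. NB(r, p) draws versus direct NB((N-1)*r, p) draws
sums = nbinom.rvs(r, p, size=(N - 1, 200_000), random_state=rng).sum(axis=0)
direct = nbinom.rvs((N - 1) * r, p, size=200_000, random_state=rng)
print(sums.mean(), direct.mean())    # means should agree (= (N-1) r (1-p)/p)
print(sums.var(), direct.var())      # and so should the variances
```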
In addition to the trivial conditions that the action space and cost function should be decomposable and that actions should not influence the future evolution of the deterioration processes, there are two essential conditions required for our decomposition result to hold. As our proof relies crucially on these two general conditions, we believe they can guide future research on pooled learning in MDPs. Therefore, we discuss these conditions in the remark below.

_Remark 1_.: The decomposition result can be applied to other high-dimensional Bayesian MDPs in which data is pooled to learn a common but unknown parameter. Specifically, the decomposition result necessitates two conditions related to the underlying conjugate pair of the MDP. First, the sufficient statistic in the conjugate pair for learning the parameter should be linear in the observations. This condition enables step \((c)\) in the proof where we extract \(Z_{i}\) from the summation. Secondly, the resulting posterior predictive distribution should be closed under convolutions. This enables step \((d)\) in the proof where we use the fact that the convolution of \(N-1\) Negative Binomially distributed random variables is again Negative Binomially distributed. One other conjugate pair - next to the Gamma-Poisson pair used in this paper - that satisfies these conditions and is used very often in the OM literature is the Normal-Normal pair. This pair is generally adopted when the mean of a Normal distribution with known variance is unknown and needs to be learned.

## 4 Structural properties

In this section we establish structural properties of the alternative MDP, which then carry over to the original MDP through our decomposition result. We first derive monotonicity properties of the value function, and then use these properties to establish the optimality of a control limit policy. We finally show that the control limit approaches the failure level as the pooled data increases. To that end, we first rewrite (6) into the conventional formulation for single-component optimization problems: \[\tilde{V}_{t}^{N,i}(x,k)=\begin{cases}c_{u}^{i}+\mathbb{E}_{(Z,K)}\Big[\tilde{V}_{t+1}^{N,i}\big(Z,k+Z+K\big)\Big],&\text{if }x\geq\xi_{i},\\ \min\Big\{c_{p}^{i}+\mathbb{E}_{(Z,K)}\Big[\tilde{V}_{t+1}^{N,i}\big(Z,k+Z+K\big)\Big];\ \mathbb{E}_{(Z,K)}\Big[\tilde{V}_{t+1}^{N,i}\big(x+Z,k+Z+K\big)\Big]\Big\},&\text{if }x<\xi_{i}.\end{cases} \tag{8}\] The first case in Equation (8) arises because failed components must be replaced correctively at cost \(c_{u}^{i}\). If the component's deterioration level is less than \(\xi_{i}\), we can either perform a preventive replacement, which costs \(c_{p}^{i}\), or leave the component in operation until the next decision epoch at no cost. The terminal costs are as introduced before. The next result establishes the monotonicity of the value function \(\tilde{V}_{t}^{N,i}(x,k)\) in \(x\) and \(k\).

**Proposition 1**.: _For each \(t\in\{0,1,\ldots,T\}\) and \(i\in\mathcal{N}\), the value function \(\tilde{V}_{t}^{N,i}(x,k)\) is:_

1. _non-decreasing in_ \(x\)_, and_
2. _non-decreasing in_ \(k\)_._

Proof.: See Appendix A.2.

Proposition 1 implies that we expect to incur higher costs (i) if a component is more deteriorated, or (ii) when the total amount of deterioration increments is higher. This is intuitive: A higher level of deterioration increases the probability of a costly failure and/or the need for preventive replacement, while a higher total amount of deterioration increments implies that all components are deteriorating relatively fast (i.e. \(\lambda\) is larger). By Proposition 1, we also conclude that the value function \(V_{t}^{N}(\mathbf{x},k)\) of the original MDP is non-decreasing in the standard component-wise order in \(\mathbf{x}\), and non-decreasing in \(k\). The former means that for any deterioration vectors \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\) such that \(x_{i}\leq x_{i}^{\prime}\) for all \(i\in\mathcal{N}\), we have that \(V_{t}^{N}(\mathbf{x},k)\leq V_{t}^{N}(\mathbf{x}^{\prime},k)\). The intuition behind this is similar to the intuition behind Proposition 1.
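The recursion in Equation (8) can be solved numerically by backward induction once the state space is truncated. The sketch below is a naive, expository implementation: it clips the deterioration level at \(\xi_{i}\) (so \(x=\xi_{i}\) encodes failure), truncates \(k\) and the Negative Binomial supports, and is written for clarity rather than speed; the function name and truncation levels are our own choices, not the paper's code.

```python
import numpy as np
from scipy.stats import nbinom

def solve_component(N, T, alpha0, beta0, cp, cu, xi, k_max=150, tail=1e-4):
    """Backward induction for the alternative MDP in Equation (8).
    Returns V[t, x, k] and the action table a[t, x, k] (1 = replace).
    k and the NB supports are truncated, so this is an approximation."""
    V = np.zeros((T + 1, xi + 1, k_max + 1))
    a = np.zeros((T, xi + 1, k_max + 1), dtype=int)
    V[T, xi, :] = cu                               # terminal cost: c_u iff failed
    for t in range(T - 1, -1, -1):
        p = (beta0 + N * t) / (beta0 + N * t + 1.0)
        for k in range(k_max + 1):
            r = alpha0 + k
            zs = np.arange(int(nbinom.ppf(1 - tail, r, p)) + 1)
            pz = nbinom.pmf(zs, r, p)              # own increment Z (tail mass ignored)
            if N > 1:
                Ks = np.arange(int(nbinom.ppf(1 - tail, (N - 1) * r, p)) + 1)
                pK = nbinom.pmf(Ks, (N - 1) * r, p)  # pooled increment of the others
            else:
                Ks, pK = np.array([0]), np.array([1.0])
            # g[z, x_new] = E_K[ V_{t+1}(x_new, k + z + K) ]
            g = np.empty((len(zs), xi + 1))
            for zi, z in enumerate(zs):
                kn = np.minimum(k + z + Ks, k_max)
                g[zi] = V[t + 1, :, kn] @ pK
            rows = np.arange(len(zs))
            W0 = pz @ g[rows, np.minimum(zs, xi)]  # continuation after a replacement
            V[t, xi, k] = cu + W0                  # failed: forced corrective replacement
            a[t, xi, k] = 1
            for x in range(xi):                    # working component: replace or keep
                keep = pz @ g[rows, np.minimum(x + zs, xi)]
                repl = cp + W0
                a[t, x, k] = int(repl < keep)
                V[t, x, k] = min(repl, keep)
    return V, a
```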
The next result establishes the optimality of a control limit policy for the alternative MDP that depends on the state variable \(k\).

**Proposition 2**.: _For each \(t\in\{0,1,\ldots,T-1\}\), \(k\in\mathbb{N}_{0}\), and \(i\in\mathcal{N}\), there exists a control limit \(\delta_{i}^{(k,t)}\), \(0<\delta_{i}^{(k,t)}\leq\xi_{i}\), such that the optimal action is to carry out a replacement if and only if \(x\geq\delta_{i}^{(k,t)}\)._

Proof.: See Appendix A.3.

Proposition 2 shows that the control limit of each component at each time, \(\delta_{i}^{(k,t)}\), depends in real-time on the shared learning process across all components through pooling data via the state variable \(k\). The optimality of a control limit policy is not only intuitive and convenient for the implementation of this optimal policy in practice, it can also be exploited to further decrease the computational burden of solving the original MDP. That is, existing algorithms that rely on these structural properties, such as the modified policy iteration algorithm (see [23, Section 6.5]), can be used to efficiently solve the alternative MDP, and hence the original MDP.

Conceivably, one would expect that as we learn from the pooled data that \(\lambda\) is larger (through a higher \(k\)) and everything else is fixed, we would impose a lower control limit per component, i.e. \(\delta_{i}^{(k,t)}\) would be non-increasing in \(k\). The intuition is that because it is more likely that deterioration increments will take on higher values (see Lemma 1), we should replace a component earlier. Although such a non-increasing \(\delta_{i}^{(k,t)}\) would indeed be intuitive, we found numerically that this is in general not true. Specifically, we found that the control limit usually decreases in \(k\) first, as expected, but that it always increases eventually to \(\xi_{i}\) as \(k\) grows large, in which case it is never optimal to do preventive maintenance. Figure 1 illustrates this behavior. In this figure, we use \(N=2\), \(T=50\), \(\alpha_{0}=4\), and \(\beta_{0}=4\), use the same values for the parameters of both components (\(c_{p}=1\), \(c_{u}=10\), and \(\xi=10\)), and plot the optimal control limit at three decision epochs (10, 25, and 40) as a function of \(k\).

Figure 1: The optimal control limit (value on the \(y\)-axis) as a function of \(k\) at three different decision epochs. The optimal control limit first decreases in \(k\), but then increases to \(\xi\) as \(k\) grows large.

This limiting behavior, which breaks the monotonic behavior of \(\delta_{i}^{(k,t)}\) in \(k\), is formalized in the proposition below.

**Proposition 3**.: _For each \(t\in\{0,1,\ldots,T-1\}\) and \(i\in\mathcal{N}\), we have \(\lim_{k\to\infty}\delta_{i}^{(k,t)}=\xi_{i}\)._

Proof.: See Appendix A.4.
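Reusing solve_component from the sketch above with the parameter values reported for Figure 1, one can extract the control limits \(\delta^{(k,t)}\) and observe the dip-then-rise behavior; the grid of \(k\) values printed below is arbitrary.

```python
import numpy as np

# Parameters as stated in the text for Figure 1 (symmetric components)
V, a = solve_component(N=2, T=50, alpha0=4.0, beta0=4.0, cp=1.0, cu=10.0,
                       xi=10, k_max=150)
for t in (10, 25, 40):
    # delta^{(k,t)} is the smallest x at which replacement is optimal
    deltas = [int(np.argmax(a[t, :, k])) for k in range(0, 151, 15)]
    print(f"t={t}:", deltas)   # expect a dip in k, then a rise back to xi = 10
```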
While Proposition 3 may initially appear counterintuitive, it can be heuristically explained as follows. When \(k\) grows very large and everything else is kept fixed, the random deterioration per period grows so large that any component will fail with certainty solely due to the one-period deterioration. In this case, if a component is still working at a certain decision epoch and has deterioration level \(x\) (\(<\xi_{i}\)), performing preventive maintenance will induce an extra cost of \(c_{p}^{i}\), because in the next period the component will fail anyway, regardless of the value of \(x\). To make this argument more explicit, consider a component with deterioration level \(x<\xi_{i}\) at decision epoch \(T-1\). As we only have the terminal cost at time \(T\), we can readily obtain an expression for the optimality equation from the recursive Bellman equations in (8): \[\tilde{V}_{T-1}^{N,i}(x,k)=\min\Big\{\underbrace{c_{p}^{i}+\mathbb{P}[Z\geq\xi_{i}]\cdot c_{u}^{i}+(1-\mathbb{P}[Z\geq\xi_{i}])\cdot c_{p}^{i}}_{\text{preventive maintenance}}\ ;\ \underbrace{\mathbb{P}[Z\geq\xi_{i}-x]\cdot c_{u}^{i}+(1-\mathbb{P}[Z\geq\xi_{i}-x])\cdot c_{p}^{i}}_{\text{leave in operation}}\Big\}. \tag{9}\] It is clear that preventive maintenance in this state is not optimal if and only if the following holds: \[c_{p}^{i}+\mathbb{P}[Z\geq\xi_{i}]\cdot c_{u}^{i}+(1-\mathbb{P}[Z\geq\xi_{i}])\cdot c_{p}^{i}>\mathbb{P}[Z\geq\xi_{i}-x]\cdot c_{u}^{i}+(1-\mathbb{P}[Z\geq\xi_{i}-x])\cdot c_{p}^{i}. \tag{10}\] Since the random variable \(Z\) is stochastically increasing in \(k\) (see Lemma 1), one can show that \(\mathbb{P}[Z\geq\xi_{i}]\to 1\) and \(\mathbb{P}[Z\geq\xi_{i}-x]\to 1\) for each \(x<\xi_{i}\) as \(k\to\infty\). This implies, using (9), that at decision epoch \(T-1\) as \(k\to\infty\), leaving the component in operation costs \(c_{u}^{i}\), while performing preventive maintenance costs \(c_{p}^{i}+c_{u}^{i}\) (an extra cost of \(c_{p}^{i}\)). Since \(c_{p}^{i}>0\), Equation (10) will always hold for any \(x<\xi_{i}\) at decision epoch \(T-1\) as \(k\to\infty\). In the proof of Proposition 3 in Appendix A.4, we formalize this heuristic argument and do so for each decision epoch \(t\in\{0,1,\ldots,T-1\}\).

## 5 Numerical study

This section reports the results of a comprehensive numerical study in which we assess the benefits of pooling data to learn an a-priori unknown parameter, while optimally maintaining \(N\) systems over a finite lifespan \(T\). Although the results in the previous sections hold for systems that are asymmetric in terms of costs and failure thresholds, we focus on symmetric systems in this numerical study. By doing so, the value function \(\tilde{V}_{0}^{N}(0,0)\) (we drop the index \(i\) as we consider symmetric systems) gives us the cost per system over its lifespan when the data of \(N\) systems is pooled. We can use this cost per system to assess the value of pooled learning as a function of \(N\) compared to not pooling. To this end, we define the performance measure \[\%\Delta=100\left[1-\frac{\tilde{V}_{0}^{N}(0,0)}{\tilde{V}_{0}^{1}(0,0)}\right],\] which is the percentage savings per system over the lifespan when the learning of \(N\) systems is pooled, compared to not pooling any data for those systems and learning the unknown rate independently from the other systems.

We first perform an extensive numerical study. Recall that the initial parameter uncertainty is modeled by the random variable \(\Lambda\), which has a Gamma distribution with shape \(\alpha_{0}\) and rate \(\beta_{0}\).
By fixing the mean of \(\Lambda\) and subsequently varying its coefficient of variation, we can increase or decrease the initial parameter uncertainty. We do so by solving the following set of equations for \(\alpha_{0}\) and \(\beta_{0}\): \[\mathbb{E}[\Lambda]=\frac{\alpha_{0}}{\beta_{0}},\quad\text{and}\quad cv_{\Lambda}=\frac{1}{\sqrt{\alpha_{0}}},\] where \(cv_{\Lambda}\) is the coefficient of variation of \(\Lambda\). This allows us to explicitly study the impact of the uncertainty (in terms of its mean and coefficient of variation) on the pooling effects. Our testbed consists of 2268 instances. These are obtained by permuting all parameter values in Table 1, with the corrective maintenance cost \(c_{u}\) held fixed at 10. These values are representative for the capital goods industry and are in line with the maintenance literature (see, e.g., [25] on typical maintenance costs, and [9] on initial parameter uncertainty).

\begin{table} \begin{tabular}{l l l l} \hline \hline & Input parameter & No. of choices & Values \\ \hline 1 & Number of systems, \(N\) & 7 & 1, 2, 4, 6, 8, 10, 20 \\ 2 & Failure threshold, \(\xi\) & 2 & 7, 10 \\ 3 & Length of lifespan, \(T\) & 3 & 50, 70, 90 \\ 4 & Preventive maintenance cost, \(c_{p}\) & 3 & 0.5, 1, 1.5 \\ 5 & Mean of \(\Lambda\), \(\mathbb{E}[\Lambda]\) & 3 & 0.5, 0.75, 1 \\ 6 & Coefficient of variation of \(\Lambda\), \(cv_{\Lambda}\) & 6 & 0.1, 0.25, 0.5, 1, 2, 4 \\ \hline \hline \end{tabular} \end{table} Table 1: Input parameter values for the numerical study.

For each instance of the testbed we compute the relative savings \(\%\Delta\). The results of the numerical study are summarized in Table 2. In this table, we present the average and maximum relative savings \(\%\Delta\). For each value of \(N\), we first present the average relative savings for subsets of instances with the same value for a given input parameter of Table 1 (row wise), and then present the average results for all instances with that fixed value of \(N\) (bottom row), where each average value is accompanied by the maximum value in brackets. Based on the results in Table 2, we can state the following main observations:

1. Pooling of data for learning a common, unknown parameter can lead to significant savings compared to not pooling data and learning it independently.

2. The magnitude of the savings seems to be inextricably linked with the magnitude of uncertainty in the parameter \(\lambda\), measured by the coefficient of variation of \(\Lambda\). When \(cv_{\Lambda}\) is high, savings of up to 56.9% on average (over all instances with \(cv_{\Lambda}=4\) and \(N=20\)) can be achieved, while if \(cv_{\Lambda}\) is low, savings become almost negligible (\(\leq 0.2\%\) on average). This can be explained as follows. When there is high uncertainty in the unknown parameter, pooling data allows the maintenance service provider to learn the unknown parameter faster compared to learning it from data generated by a single system. This result implies that pooling data is especially beneficial for real-life settings where there is high uncertainty in \(\lambda\) through limited knowledge, limited historical data, and/or poor estimation procedures. The opposite is also true: When there is little uncertainty in the unknown parameter, the benefit of data pooling vanishes; the maintenance service provider already has an accurate belief of the unknown parameter that needs little updating.

3. When comparing the average savings for increasing values of \(N\), we find that pooling already has a significant impact for small values of \(N\), and that the marginal savings gradually decrease as \(N\) increases.

4. The savings for each value of \(N\) tend to decrease as the ratio \(c_{u}/c_{p}\) decreases (recall that we keep \(c_{u}\) fixed and vary \(c_{p}\)). When this ratio decreases and \(N\) is fixed, maintenance decisions have less impact on the resulting costs - simply because their cost difference decreases.
Consequently, the benefits of utilizing pooled learning in such maintenance decisions also decrease when \(c_{u}/c_{p}\) decreases.

5. The savings for each value of \(N\) tend to increase as \(\mathbb{E}[\Lambda]\) increases. When \(\mathbb{E}[\Lambda]\) increases and \(N\) is fixed, the expected deterioration increment between two consecutive decision epochs is larger and, as a result, the optimal control limit will be more conservative. The results suggest that in that regime, the choice of the control limit has a higher impact on the resulting costs than when \(\mathbb{E}[\Lambda]\) is low and a less conservative control limit is chosen. Through pooled learning, one is able to better choose this control limit, and as a result, the relative savings of pooled learning also increase when \(\mathbb{E}[\Lambda]\) increases.

Observations 1-3 are also illustrated by Figure 2. In this figure we plot the relative savings (\(\%\Delta\)) as a function of \(N\) for various values of \(cv_{\Lambda}\) when \(\mathbb{E}[\Lambda]=0.75\), \(\xi=10\), \(c_{p}=0.5\), \(c_{u}=10\), and \(T=90\). The plot indeed shows that for a given level of parameter uncertainty, pooling data across a larger number of systems increases the relative savings. The rate at which the savings increase in \(N\) grows significantly in the coefficient of variation. This confirms that pooling data can lead to significant cost reductions, especially when the uncertainty surrounding an unknown parameter is high. We further clearly see that the marginal savings due to adding an extra system to the pooled systems decrease as \(N\) increases.
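For completeness, the calibration of \((\alpha_{0},\beta_{0})\) from \(\mathbb{E}[\Lambda]\) and \(cv_{\Lambda}\), and the savings measure \(\%\Delta\), can be reproduced with the earlier solver. The sketch below uses the instance class of Figure 2 with \(N=4\); note that `k_max` must grow with \(N\cdot T\cdot\mathbb{E}[\Lambda]\), so runtimes increase quickly (again expository, not the paper's code).

```python
def prior_from_mean_cv(mean, cv):
    """Solve E[Lambda] = alpha0/beta0 and cv = 1/sqrt(alpha0)."""
    alpha0 = 1.0 / cv ** 2
    return alpha0, alpha0 / mean

alpha0, beta0 = prior_from_mean_cv(mean=0.75, cv=2.0)
pars = dict(T=90, alpha0=alpha0, beta0=beta0, cp=0.5, cu=10.0, xi=10)
V1, _ = solve_component(N=1, k_max=150, **pars)    # no pooling
V4, _ = solve_component(N=4, k_max=450, **pars)    # pooled learning, N = 4
delta = 100.0 * (1.0 - V4[0, 0, 0] / V1[0, 0, 0])  # %Delta per system
print(f"%Delta for N=4: {delta:.1f}")
```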
\begin{table} \begin{tabular}{l l l l l l l l} \hline \hline Input & & \multicolumn{6}{c}{\(N\)} \\ \cline{3-8} parameter & Value & 2 & 4 & 6 & 8 & 10 & 20 \\ \hline \multirow{2}{*}{\(\xi\)} & 7 & 2.7 (16.4) & 9.8 (37.2) & 14.8 (59.2) & 18.0 (71.2) & 19.7 (77.2) & 24.5 (88.5) \\ & 10 & 3.4 (19.7) & 9.9 (36.8) & 14.5 (59.4) & 17.1 (71.8) & 19.1 (79.7) & 22.7 (89.2) \\ \multirow{3}{*}{\(T\)} & 50 & 3.0 (19.7) & 9.7 (36.3) & 14.5 (57.3) & 17.4 (70.2) & 19.3 (78.4) & 23.6 (88.4) \\ & 70 & 3.1 (19.7) & 9.8 (36.6) & 14.7 (58.6) & 17.6 (71.2) & 19.4 (79.2) & 23.6 (88.9) \\ & 90 & 3.1 (19.7) & 10.0 (37.2) & 14.8 (59.4) & 17.6 (71.8) & 19.5 (79.7) & 23.6 (89.2) \\ \multirow{3}{*}{\(c_{p}\)} & 0.5 & 4.4 (19.7) & 12.7 (37.2) & 18.3 (59.4) & 21.6 (71.8) & 23.7 (79.7) & 28.0 (89.2) \\ & 1 & 2.9 (12.7) & 9.5 (33.5) & 14.2 (53.9) & 17.1 (65.1) & 18.9 (72.5) & 22.9 (82.8) \\ & 1.5 & 1.9 (8.7) & 7.4 (29.9) & 11.5 (48.9) & 14.0 (59.4) & 15.6 (65.8) & 19.9 (77.6) \\ \multirow{3}{*}{\(\mathbb{E}[\Lambda]\)} & 0.5 & 3.0 (19.7) & 8.3 (33.5) & 12.1 (47.5) & 14.6 (60.1) & 16.2 (69.1) & 19.6 (83.6) \\ & 0.75 & 3.0 (18.7) & 10.0 (36.8) & 14.9 (54.6) & 17.8 (67.6) & 19.7 (76.3) & 23.8 (87.5) \\ & 1 & 3.1 (14.3) & 11.2 (37.2) & 16.9 (59.4) & 20.3 (71.8) & 22.4 (79.7) & 27.3 (89.2) \\ \multirow{6}{*}{\(cv_{\Lambda}\)} & 0.1 & 0.0 (0.1) & 0.0 (0.2) & 0.1 (0.2) & 0.1 (0.3) & 0.1 (0.3) & 0.2 (0.4) \\ & 0.25 & 0.2 (0.5) & 0.4 (0.8) & 0.5 (1.0) & 0.5 (1.1) & 0.6 (1.2) & 0.8 (1.4) \\ & 0.5 & 0.6 (1.3) & 1.3 (2.7) & 1.8 (3.5) & 2.1 (4.1) & 2.4 (5.3) & 3.1 (6.4) \\ & 1 & 4.1 (12.0) & 7.8 (22.5) & 9.4 (26.3) & 10.4 (28.3) & 11.0 (31.1) & 12.4 (35.7) \\ & 2 & 10.3 (19.7) & 22.7 (36.8) & 29.5 (44.8) & 33.8 (51.5) & 36.7 (57.3) & 46.0 (73.9) \\ & 4 & 13.0 (24.6) & 27.2 (37.2) & 35.3 (59.4) & 40.8 (71.8) & 44.9 (79.7) & 56.9 (89.2) \\ \hline Total & & 3.6 (24.6) & 9.8 (37.2) & 14.0 (59.4) & 16.5 (71.8) & 18.2 (79.7) & 22.3 (89.2) \\ \hline \hline \end{tabular} \end{table} Table 2: Relative savings (\(\%\Delta\)) due to pooled learning.

## 6 An application to spare parts inventory systems

In this section we illustrate that our decomposition result can also be applied to spare parts inventory systems. To make this illustration explicit, we shall redefine some notation introduced in the previous sections. We consider a maintenance service provider that operates a set of \(N\geq 1\) local spare parts warehouses, and we denote this set by \(\mathcal{N}=\{1,\ldots,N\}\). Each local warehouse stocks spare parts of the same critical component to serve an installed base of (closely located) technical systems. As is common in the spare parts inventory literature (e.g. [10]), we assume that demand for spare parts at each local warehouse follows an independent Poisson process. The rate \(\lambda\) of these Poisson processes is identical across all local warehouses. This is a reasonable assumption when the installed bases served by each local warehouse are of similar size. The maintenance service provider is concerned with stocking decisions over a finite horizon of \(T\) periods. At the start of each such period, the maintenance service provider decides how many new spare parts are transported to each local warehouse \(i\in\mathcal{N}\). Each unit has a transportation cost \(c_{v}^{i}\). Since the lead times to the local warehouses are typically much shorter than the duration of a period, we assume that these new spare parts are instantly delivered, after which the period commences.
When a component in a technical system fails during a period, the local warehouse \(i\in\mathcal{N}\) responsible for this system immediately replaces the failed component with a ready-for-use one, if it has one available. Otherwise, the part is backordered at unit cost \(c_{b}^{i}\), which reflects expensive downtime or emergency shipments from a central depot or an external supplier. Spare parts on stock that are carried over to the next period cost \(c_{h}^{i}\) per unit. We account for both backorder and holding costs at the end of each period. We assume that each period lasts 1 time unit, so that demand in each period is Poisson distributed with mean \(\lambda\).

We employ the Bayesian approach of Section 2 to infer the a-priori unknown rate \(\lambda\) based on the observed demands at all local warehouses over the entire planning horizon. Observe that in the updating scheme of this approach (cf. Equation (1)), \(k\) is now defined as the total cumulative demand at all \(N\) local warehouses up to period \(t\). Given \(k\) and \(t\), the posterior predictive \(Z_{i}\) now represents the total demand that arrives at local warehouse \(i\in\mathcal{N}\) during the next period.

The state space of the Bayesian MDP corresponding to the decision problem described above is given by \(\mathcal{S}\triangleq\mathbb{Z}^{N}\times\mathbb{N}_{0}\). For a given state \((\mathbf{x},k)\in\mathcal{S}\), \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{N})\) represents the vector of net inventory levels of all local warehouses before order placement at the start of a period, and \(k\) denotes the sum of all observed demands until that period. For a given state \((\mathbf{x},k)\in\mathcal{S}\), the action space \(\mathcal{A}(\mathbf{x})\) contains all possible net inventory levels after orders are placed and received but before demand is realized, i.e. for any action \(\mathbf{a}=(a_{1},a_{2},\ldots,a_{N})\in\mathcal{A}(\mathbf{x})\), \(a_{i}\in\{x_{i},x_{i}+1,\ldots\}\) is the net inventory level of local warehouse \(i\). As before, we let \(\mathbf{Z}=(Z_{1},Z_{2},\ldots,Z_{N})\) denote an \(N\)-dimensional random vector with \(Z_{i}\sim NB\big(\alpha_{0}+k,\frac{\beta_{0}+N\cdot t}{\beta_{0}+N\cdot t+1}\big)\).

As is customary in inventory theory, the direct cost in a given period accounts for the expected holding and backorder costs of the orders placed in that period. As such, the total transportation, holding, and backorder costs over all local spare parts warehouses, denoted \(C(\mathbf{a},\mathbf{x},k)\), is given by \[C(\mathbf{a},\mathbf{x},k)\triangleq\sum_{i\in\mathcal{N}}\left(c_{v}^{i}(a_{i}-x_{i})+c_{h}^{i}\mathbb{E}\left[(a_{i}-Z_{i})^{+}\right]+c_{b}^{i}\mathbb{E}\left[(Z_{i}-a_{i})^{+}\right]\right),\] with \(x^{+}\triangleq\max(x,0)\). While the direct cost now depends on the pooled data through the state variable \(k\), we note that it remains decomposable into \(N\) direct costs \(C_{i}(a,x,k)\triangleq c_{v}^{i}(a-x)+c_{h}^{i}\mathbb{E}[(a-Z_{i})^{+}]+c_{b}^{i}\mathbb{E}[(Z_{i}-a)^{+}]\), each associated with a local warehouse \(i\in\mathcal{N}\).
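The two expectations in \(C_{i}(a,x,k)\) can be evaluated by truncating the Negative Binomial support. A minimal sketch (function name and truncation level are our own assumptions, not the paper's code):

```python
import numpy as np
from scipy.stats import nbinom

def direct_cost(a, x, k, t, N, alpha0, beta0, c_v, c_h, c_b, tail=1e-6):
    """C_i(a, x, k): transportation plus expected holding and backorder costs
    under the NB posterior predictive (support truncated at the 1-tail quantile)."""
    r = alpha0 + k
    p = (beta0 + N * t) / (beta0 + N * t + 1.0)
    z = np.arange(int(nbinom.ppf(1 - tail, r, p)) + 1)
    pz = nbinom.pmf(z, r, p)
    holding = pz @ np.maximum(a - z, 0)        # E[(a - Z)^+]
    backorder = pz @ np.maximum(z - a, 0)      # E[(Z - a)^+]
    return c_v * (a - x) + c_h * holding + c_b * backorder
```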
Let \(V_{t}^{N}(\mathbf{x},k)\) denote the optimal expected total cost over decision epochs \(t,t+1,\ldots,T\), starting from state \((\mathbf{x},k)\in\mathcal{S}\). By the principle of optimality, \(V_{t}^{N}(\mathbf{x},k)\) satisfies the following recursive Bellman optimality equations \[V_{t}^{N}(\mathbf{x},k)=\min_{\mathbf{a}\in\mathcal{A}(\mathbf{x})}\Bigg\{C(\mathbf{a},\mathbf{x},k)+\mathbb{E}_{\mathbf{Z}}\bigg[V_{t+1}^{N}\Big(\mathbf{a}-\mathbf{Z},k+\sum_{i\in\mathcal{N}}Z_{i}\Big)\bigg]\Bigg\},\] with \(V_{T}^{N}(\cdot,\cdot)\triangleq 0\).

We now formulate the corresponding alternative MDP into which the original MDP can be decomposed. For each \(i\in\mathcal{N}\), we let \(\tilde{V}_{t}^{N,i}(x,k)\) denote the optimal expected total cost over decision epochs \(t,t+1,\ldots,T\), starting from state \((x,k)\in\mathbb{Z}\times\mathbb{N}_{0}\). Then, \(\tilde{V}_{t}^{N,i}(x,k)\) satisfies the following recursive Bellman optimality equations \[\tilde{V}_{t}^{N,i}(x,k)=\min_{a\in\mathcal{A}(x)}\Bigg\{C_{i}(a,x,k)+\mathbb{E}_{(Z,K)}\bigg[\tilde{V}_{t+1}^{N,i}\Big(a-Z,k+Z+K\Big)\bigg]\Bigg\}, \tag{11}\] where \(Z\sim NB\big(\alpha_{0}+k,\frac{\beta_{0}+N\cdot t}{\beta_{0}+N\cdot t+1}\big)\), \(K\sim NB\big((N-1)\cdot(\alpha_{0}+k),\frac{\beta_{0}+N\cdot t}{\beta_{0}+N\cdot t+1}\big)\), and \(\tilde{V}_{T}^{N,i}(\cdot,\cdot)\triangleq 0\). Observe that the alternative formulation in (11) resembles a single spare parts warehouse problem in isolation, but where the dynamics of the system depend on the learned information of all warehouses through \(k\). We are now in the position to present our decomposition result applied to the spare parts inventory setting. Its proof is almost verbatim the proof of Theorem 1 and is therefore omitted.

**Theorem 2**.: _For each \(t\in\{0,1,\ldots,T\}\), we have:_ \[V_{t}^{N}(\mathbf{x},k)=\sum_{i\in\mathcal{N}}\tilde{V}_{t}^{N,i}(x_{i},k),\quad\text{for all }(\mathbf{x},k)\in\mathcal{S}.\]

As before, the above decomposition result motivates us to establish structural properties of the alternative 2-dimensional MDP, which then carry over to the original, high-dimensional MDP. To that end, we first establish convexity of the value function in the inventory level before order placement.

**Proposition 4**.: _For each \(t\in\{0,1,\ldots,T\}\), \(k\in\mathbb{N}_{0}\), and \(i\in\mathcal{N}\), the value function \(\tilde{V}_{t}^{N,i}(x,k)\) is convex in \(x\)._

Proof.: See Appendix A.5.

This result implies that the optimal policy of the decomposed MDP is characterized by an order-up-to structure, in which we place orders such that the inventory level after ordering reaches a certain target level (if needed). Our next result formalizes the optimality of order-up-to levels, together with their non-decreasing monotonic behavior in the state variable \(k\).

**Proposition 5**.: _For each \(t\in\{0,1,\ldots,T-1\}\), \(k\in\mathbb{N}_{0}\), and \(i\in\mathcal{N}\), there exists a single target level \(\delta_{i}^{(k,t)}\in\mathbb{Z}\) such that the optimal action is \(a_{i}=\delta_{i}^{(k,t)}\) if \(x<\delta_{i}^{(k,t)}\) and \(a_{i}=x\) otherwise. The optimal target level \(\delta_{i}^{(k,t)}\) is non-decreasing in \(k\)._

Proof.: See Appendix A.6.

Proposition 5 shows that the optimal target levels depend in real-time on the shared learning process across all local warehouses via the state variable \(k\), in a monotonic way. This is intuitive: As we learn from the pooled data that \(\lambda\) is higher (through a higher \(k\)), everything else fixed, the next demand will likely take on higher values. Therefore, we should increase the target level to which we raise our spare parts inventories.
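Because the one-period cost is convex in \(a\), a simple upward scan finds its minimizer. The sketch below reuses direct_cost from the previous block and computes _myopic_ target levels only, as a cheap illustration of the monotonicity in \(k\); Proposition 5 itself concerns the full dynamic problem, not this myopic simplification.

```python
def myopic_target(x, k, t, N, alpha0, beta0, c_v, c_h, c_b, a_max=500):
    """Smallest minimizer of the convex one-period cost (myopic order-up-to level)."""
    best_a = x
    best_c = direct_cost(x, x, k, t, N, alpha0, beta0, c_v, c_h, c_b)
    for a in range(x + 1, a_max):
        c = direct_cost(a, x, k, t, N, alpha0, beta0, c_v, c_h, c_b)
        if c >= best_c:                 # convexity: the first increase means we are done
            break
        best_a, best_c = a, c
    return best_a

# Illustrative parameters: targets should be non-decreasing in k
for k in (0, 20, 40, 80):
    print(k, myopic_target(0, k, t=5, N=4, alpha0=1.0, beta0=1.0,
                           c_v=0.1, c_h=1.0, c_b=9.0))
```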
This monotonicity result stands in contrast to the limiting result for the optimal control limits in the condition-based maintenance setting, as described in Proposition 3. The key difference is that, unlike in the condition-based maintenance setting, there is a cost incentive that is proportional to the demand realizations. This cost incentive remains proportional even if the demand becomes very large because of a very large \(k\), so that ordering quantities are monotonically non-decreasing in \(k\). We conclude by noting that similar monotonicity results for Bayesian inventory systems exist in the literature, but only for single inventory systems in isolation and without any data pooling considerations (e.g., [16]).

## 7 Conclusion

We have studied the benefits of data pooling when an unknown deterioration rate that multiple systems have in common needs to be learned over a finite lifespan. We formulated this problem as a finite horizon Bayesian MDP in which learning is pooled. This formulation suffers from the well-known curse of dimensionality. As a remedy, we proved a new decomposition result that establishes the equivalence between the original, high-dimensional MDP and multiple low-dimensional MDPs, enabling both structural analyses and numerical computations. Results of a comprehensive numerical study indicated that significant savings can be attained by pooling data to learn the a-priori unknown parameter. We also illustrated how our decomposition result can be applied to other settings, notably a set of canonical spare parts inventory systems, where an unknown but common demand rate needs to be learned across this set of inventory systems.

For further research, it would be interesting to investigate the applicability of our decomposition result in other areas. One methodological area that seems particularly promising is a set of queues subject to dynamic control, each with Poisson arrivals with a common but unknown rate, and each queue with its own cost structure and action space. As many operations management problems (production-inventory systems, service systems, ride-sharing systems, to name a few) are modeled using (a set of) queues, we expect that, when extending such problems to settings where there is an unknown but common parameter (e.g., an arrival rate) that needs to be learned by pooling data (e.g., arrival data) while dynamically controlling the individual systems, our decomposition result might be useful for tractable analyses.

## Acknowledgements

The authors are grateful to the editorial team whose comments greatly improved the paper.
2303.10315
Lung segmentation with NASNet-Large-Decoder Net
Lung cancer has emerged as a severe disease that threatens human life and health. The precise segmentation of lung regions is a crucial prerequisite for localizing tumors, which can provide accurate information for lung image analysis. In this work, we first propose a lung image segmentation model using the NASNet-Large as an encoder and then followed by a decoder architecture, which is one of the most commonly used architectures in deep learning for image segmentation. The proposed NASNet-Large-decoder architecture can extract high-level information and expand the feature map to recover the segmentation map. To further improve the segmentation results, we propose a post-processing layer to remove the irrelevant portion of the segmentation map. Experimental results show that an accurate segmentation model with 0.92 dice scores outperforms state-of-the-art performance.
Youshan Zhang
2023-03-18T02:52:16Z
http://arxiv.org/abs/2303.10315v1
# Lung Segmentation with NASNet-Large-Decoder Net ###### Abstract Lung cancer has emerged as a severe disease that threatens human life and health. The precise segmentation of lung regions is a crucial prerequisite for localizing tumors, which can provide accurate information for lung image analysis. In this work, we first propose a lung image segmentation model using the NASNet-Large as an encoder, followed by a decoder architecture, which is one of the most commonly used architectures in deep learning for image segmentation. The proposed NASNet-Large-decoder architecture can extract high-level information and expand the feature map to recover the segmentation map. To further improve the segmentation results, we propose a post-processing layer to remove the irrelevant portion of the segmentation map. Experimental results show that an accurate segmentation model with a 0.92 dice score outperforms state-of-the-art performance.1 Footnote 1: This work was written before our previous paper [1]. Youshan Zhang Computer Science and Engineering, Lehigh University, Bethlehem, PA, USA [email protected] Lung segmentation, NASNet-Large Net, Encoder, Decoder

## 1 Introduction

Lung cancer is one of the malignant tumors with the fastest increase in morbidity and mortality and the greatest threat to human health and life. In the past fifty years, many countries have reported a marked increase in the incidence and mortality of lung cancer [2]. Lung-related diseases remain among the top ten causes of death in the United States [3, 4]. Therefore, the prevention of lung cancer is important. By definition, lung cancer is a malignant lung tumor characterized by uncontrollable growth in the lung tissue. Clinicians find that the cure rate of early lung cancer is over 90%. Hence, the early diagnosis of lung cancer is essential, since it could control the disease, reduce the mortality rate, and increase the patient's survival rate when the treatment is more likely to be curative.

The chest radiograph (CXR) is commonly performed as diagnostic imaging for lung cancer and pneumonia diagnosis. However, there are a number of factors, such as the positioning of the patient and the depth of inspiration, that can affect the appearance of the CXR, complicating interpretation further. In addition, clinicians are faced with reading high volumes of images every shift. It is difficult for the physician to obtain an accurate diagnosis without the help of additional tools. Therefore, it is necessary to develop segmentation tools to improve the effectiveness of the treatment. However, designing an effective lung segmentation method is a challenging problem, since the regions of interest (ROIs) are often confused with the surrounding lung tissue.

A large number of medical image analysis techniques have been proposed for lung segmentation, such as threshold methods [5, 6, 7], region-based methods [8, 9], genetic methods [10], level set methods [11, 12, 13] and artificial neural networks [14, 15, 16], etc. The threshold method is one of the most common and straightforward segmentation methods in lung segmentation. It is a region-based technique that divides the gray values into two or more intervals, chooses one or more appropriate thresholds to judge whether a region meets the threshold requirement according to the difference between the target and the background, and separates the background from the target to produce a binary image. Threshold processing has two forms: global thresholding and adaptive thresholding. The global threshold method sets only one threshold, while the adaptive threshold method sets multiple thresholds. The target and background regions are segmented by determining the threshold at the peak and valley of the gray histogram [5, 6].
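As a concrete illustration of a global-threshold baseline (not the deep model proposed below), Otsu's method in OpenCV can produce a rough lung mask from a grayscale CXR; the file name is hypothetical.

```python
import cv2

img = cv2.imread("cxr.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
blur = cv2.GaussianBlur(img, (5, 5), 0)             # suppress noise first
# Lungs (air) are darker than the surrounding tissue, hence THRESH_BINARY_INV
_, mask = cv2.threshold(blur, 0, 255,
                        cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
cv2.imwrite("lung_mask_otsu.png", mask)
```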
Level set methods are also widely used in segmentation tasks. The basic idea of the level set method for image segmentation is the continuous evolution of a moving curve: the image is searched until the target contour is found, at which point the moving curve is stopped. Curves are moved along every three-dimensional section of the image to slice different levels of the three-dimensional surface; the levels of the obtained closed curves of each layer change over time, finally yielding the corresponding extracted shape contour [11].

Deep neural networks have also been applied to segmentation, and they supersede many traditional image segmentation approaches. Garcia published a review presenting an overview of deep learning-based segmentation approaches [17]. There are several models that address the segmentation task. The fully convolutional network (FCN) [18] is one of the earliest deep learning approaches to the image segmentation problem, and it performs end-to-end segmentation. It is a convolutional network for dense prediction that does not need fully connected layers. This method makes it possible to segment images of any size effectively, and it is much faster than the patch classification method. Almost all subsequent, more advanced methods follow this architecture. However, the FCN model has several limitations: its inherent spatial invariance causes the model to ignore useful global context information, and its efficiency in high-resolution scenarios is too low for real-time segmentation. Another problem that makes CNNs difficult to use for segmentation is the existence of pooling layers. A pooling layer not only enlarges the receptive field of the subsequent convolution layer but also aggregates the background and discards part of the location information. Semantic segmentation, however, needs to produce an accurate, pixel-aligned category map, so it requires retaining the location information discarded by the pooling layers.

Later, encoder-decoder architectures became widely used in segmentation; SegNet [19] and U-Net [20] are representative encoder-decoder architectures. Such a model first selects a classification network, such as VGG-16, and removes its fully connected layers to produce low-resolution image representations or feature maps. This part of the segmentation network is usually called the encoder. The decoder is the part of the network that learns how to decode or map these low-resolution representations to pixel-level predictions. The difference between encoder-decoder architectures lies in the design of the decoder. Another type of method uses dilated convolution layers and removes the pooling layers. Chen et al. proposed the DeepLab model, which uses dilated convolutions and atrous spatial pyramid pooling (ASPP), combined with a fully connected conditional random field [21].

In this paper, we provide a lung segmentation model using one of the common encoder-decoder architectures for image segmentation with a deep learning model called NASNet-Large-decoder. This architecture can discard unnecessary information in lung images.
Our network achieves an accurate segmentation result with a 0.92 dice score. Our contributions are threefold: 1. We are the first to define a NASNet-Large-decoder net to segment chest radiography images; 2. To remove irrelevant small segmented parts, we are the first to propose a post-processing layer, which is able to filter out falsely segmented sections in prediction images; 3. Experimental results demonstrate that the proposed model achieves the highest dice and IoU scores over the state of the art. This paper is organized as follows: in Section 2, the NASNet-Large-decoder net architecture is summarized; we present the segmentation results in Section 3; in Section 4, we discuss the advantages and disadvantages of the proposed model, and we conclude in Section 5.

## 2 Method

In this section, we first introduce the proposed segmentation net and then describe a post-processing layer for filtering out irrelevant segmented sections in the prediction image.

### NASNet-Large segmentation net

The NASNet-Large segmentation net contains an encoder and a decoder, followed by a classification layer. The architecture is shown in Fig. 1.

Figure 1: The architecture of the NASNet-Large segmentation net. The encoder consists of the first 414 layers of the NASNet-Large model. There are four blocks in the decoder, and each block contains Upsampling, Conv+ReLU, and BN layers (Convolution (Conv), Batch normalization (BN), Rectified linear units (ReLU)). The final decoder output is fed into a softmax layer for pixel-wise lung prediction.

There are two significant differences between our model and SegNet, which employs the pre-trained VGG-16 network as its encoder. First, our NASNet-Large-decoder net uses the first 414 layers of the NASNet-Large net (a well-trained classification net on ImageNet) as the encoder to decompose images [22]. We do not use the pre-trained weights but retrain these layers on the new data, since our dataset is significantly different from ImageNet. Second, the decoder is different: there are no pooling indices in our model, since the NASNet-Large net can produce detailed information for the decoder. Each decoder block up-samples its input feature map to recover spatial resolution. The decoding technique is illustrated in Fig. 1. There are four blocks in the decoder. Each block first begins with an up-sampling layer, which expands the feature map; the feature maps are then passed through convolution and rectified linear units, and a batch normalization layer is applied to each of these maps. The first decoder, which is closest to the last encoder, produces a multi-channel feature map. This is similar to SegNet, whose decoders generate feature maps with sizes and channel numbers matching their encoder inputs. The final output of the last decoder is fed to a trainable soft-max classifier, and the output of this softmax layer is a \(K\)-channel image of probabilities, where \(K\) is the number of classes (two in our problem). The predicted segmentation corresponds to the class with maximum probability at each pixel.
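A decoder of the kind described (four blocks of Upsampling, Conv+ReLU, and BN, followed by a softmax head) can be sketched with the Keras functional API as follows; the filter counts and function names are illustrative assumptions, not the paper's released code.

```python
from tensorflow.keras import layers

def decoder_block(x, filters):
    """One decoder block: UpSampling -> Conv + ReLU -> BatchNorm."""
    x = layers.UpSampling2D(size=(2, 2))(x)
    x = layers.Conv2D(filters, (3, 3), padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    return x

def build_decoder(encoder_output, num_classes=2):
    """Four decoder blocks on top of the encoder output, then pixel-wise softmax."""
    x = encoder_output
    for filters in (512, 256, 128, 64):   # illustrative filter counts
        x = decoder_block(x, filters)
    return layers.Conv2D(num_classes, (1, 1), activation="softmax")(x)
```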
### Post-processing layer

However, some small parts in the prediction result are not true lung regions. We therefore propose a post-processing layer, which can filter out the irrelevant parts in the image. The first step of the post-processing layer is to partition the predicted image into several parts2; we then select the two largest areas (the lung areas) as the final segmented lung. As shown in the left image of Fig. 2, the red box marks a wrong prediction (a false-positive region). After the post-processing, the red box region is removed. Therefore, the prediction result is improved by filtering out these irrelevant parts of the image.

Footnote 2: The major step of the post-processing layer uses the connectedComponentsWithStats function in OpenCV.

Figure 2: The post-processing of the prediction result. The left image is the prediction result from the proposed net, and the right image is the post-processing result using the proposed post-processing layer. The red box marks an irrelevant feature.

## 3 Results

### Datasets and parameters

Our lung segmentation data is from the RSNA pneumonia detection dataset. The whole dataset can be downloaded from [https://www.kaggle.com/c/rsna-pneumonia-detection-challenge](https://www.kaggle.com/c/rsna-pneumonia-detection-challenge). To remove unrelated features, we focus on lung segmentation; the lungs were manually segmented3. There are 800 training images and 200 test images in this dataset. Fig. 3 shows an example of a lung image in the training dataset.

Footnote 3: The dataset is available at: [https://github.com/YoushanZhang/Lung_Segmentation](https://github.com/YoushanZhang/Lung_Segmentation).

Figure 3: An example lung image in the training dataset (left: raw image; middle: ground truth mask image; right: mask overlaid on the raw image).

Our experimentation is based on Keras, a high-level neural networks API written in Python that is able to run on top of either TensorFlow or Theano and runs seamlessly on CPU and GPU. Our network was trained on an NVIDIA TITAN Xp graphics processor equipped with 12 GB of memory in order to exploit its computational speed. Hence, we run our code on a GPU with the TensorFlow backend in order to greatly accelerate execution. The network parameters are set to: 1. Batch size: 4 2. Step size: 5 3. Number of epochs: 100

### Metrics

To evaluate the performance of our NASNet-Large segmentation net, we use the dice coefficient as a similarity metric to indicate the goodness of the segmentation results, since the dice score is currently widely used in segmentation tasks. Furthermore, we also report the IoU score. The two metrics are defined by the following formulas: \[Dice=2\times\frac{|A\cap B|}{|A|+|B|},\quad IoU=\frac{|A\cap B|}{|A\cup B|},\] where \(A\) is the ground truth mask and \(B\) is the prediction mask.

### Segmentation results

Fig. 4 compares the predicted segmented lung images with the ground truth images in the test dataset. The predictions are close to the real masks, which demonstrates the high performance of our model. We also compare the prediction results with state-of-the-art methods. Tab. 1 lists the comparison results of different models (note that the metrics are reported only on the lung area and exclude the background area). Our NASNet-Large-decoder net has higher IoU and dice scores than the other models, and we also observe that the post-processing layer yields the highest scores, which illustrates that the post-processing layer is useful in segmentation tasks.
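The post-processing layer and the two metrics are straightforward to implement. The sketch below follows the description above and the OpenCV function named in Footnote 2; the function and variable names are our own, not the paper's released code.

```python
import cv2
import numpy as np

def keep_two_largest(mask):
    """Post-processing: keep only the two largest connected components
    (the two lungs) of a binary mask."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask.astype(np.uint8))
    areas = stats[1:, cv2.CC_STAT_AREA]          # stats[0] is the background
    keep = 1 + np.argsort(areas)[::-1][:2]       # labels of the two largest parts
    return np.isin(labels, keep).astype(np.uint8)

def dice(a, b):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def iou(a, b):
    """Intersection-over-union between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()
```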
## 4 Discussion

One of the distinct advantages of our model is that it achieves higher dice and IoU scores. There are two reasons for this: the designed NASNet-Large segmentation net is well suited for image segmentation, and the post-processing layer filters out the unnecessary parts in the image, which improves the segmentation results. Although our model achieves a 0.92 dice score, it still fails in some cases. As shown in Fig. 5, the segmentation results are worse in two situations. There are two reasons for this: one is that there are no similar images in the training dataset, which leads to low performance; the second is that our model is not robust enough to deal with some unusual images. Compared with the normal training images shown in Fig. 3, the two poorly segmented raw images (in Fig. 5) are either too light (first row) or too dark (second row), and it is difficult even for a human to distinguish the lung area from other tissues. A straightforward solution is to include more similar images in the training set.

## 5 Conclusion

In this paper, we are the first to present a lung segmentation model using the NASNet-Large-decoder architecture, and we obtain an accurate segmentation with a 0.92 dice score. A post-processing layer is employed to remove the unnecessary parts of the prediction map. The proposed model can also be applied to a wide range of different medical image segmentation tasks. Our objective in the next stage is to design a more robust encoder and decoder model for application in all different cases.
2303.07669
AutoTransfer: AutoML with Knowledge Transfer -- An Application to Graph Neural Networks
AutoML has demonstrated remarkable success in finding an effective neural architecture for a given machine learning task defined by a specific dataset and an evaluation metric. However, most present AutoML techniques consider each task independently from scratch, which requires exploring many architectures, leading to high computational cost. Here we propose AutoTransfer, an AutoML solution that improves search efficiency by transferring the prior architectural design knowledge to the novel task of interest. Our key innovation includes a task-model bank that captures the model performance over a diverse set of GNN architectures and tasks, and a computationally efficient task embedding that can accurately measure the similarity among different tasks. Based on the task-model bank and the task embeddings, we estimate the design priors of desirable models of the novel task, by aggregating a similarity-weighted sum of the top-K design distributions on tasks that are similar to the task of interest. The computed design priors can be used with any AutoML search algorithm. We evaluate AutoTransfer on six datasets in the graph machine learning domain. Experiments demonstrate that (i) our proposed task embedding can be computed efficiently, and that tasks with similar embeddings have similar best-performing architectures; (ii) AutoTransfer significantly improves search efficiency with the transferred design priors, reducing the number of explored architectures by an order of magnitude. Finally, we release GNN-Bank-101, a large-scale dataset of detailed GNN training information of 120,000 task-model combinations to facilitate and inspire future research.
Kaidi Cao, Jiaxuan You, Jiaju Liu, Jure Leskovec
2023-03-14T07:23:16Z
http://arxiv.org/abs/2303.07669v1
# AutoTransfer: AutoML with Knowledge Transfer - An Application to Graph Neural Networks

###### Abstract

AutoML has demonstrated remarkable success in finding an effective neural architecture for a given machine learning task defined by a specific dataset and an evaluation metric. However, most present AutoML techniques consider each task independently from scratch, which requires exploring many architectures, leading to high computational costs. Here we propose AutoTransfer, an AutoML solution that improves search efficiency by transferring prior architectural design knowledge to the novel task of interest. Our key innovation includes a _task-model bank_ that captures the model performance over a diverse set of GNN architectures and tasks, and a computationally efficient _task embedding_ that can accurately measure the similarity among different tasks. Based on the task-model bank and the task embeddings, we estimate the design priors of desirable models for the novel task by aggregating a similarity-weighted sum of the top-K design distributions on tasks that are similar to the task of interest. The computed design priors can be used with any AutoML search algorithm. We evaluate AutoTransfer on six datasets in the graph machine learning domain. Experiments demonstrate that (i) our proposed task embedding can be computed efficiently, and that tasks with similar embeddings have similar best-performing architectures; (ii) AutoTransfer significantly improves search efficiency with the transferred design priors, reducing the number of explored architectures by _an order of magnitude_. Finally, we release GNN-Bank-101, a large-scale dataset of detailed GNN training information for 120,000 task-model combinations, to facilitate and inspire future research.

## 1 Introduction

Deep neural networks are highly modular, requiring many design decisions to be made regarding network architecture and hyperparameters. These design decisions form a search space that is nonconvex and costly to optimize over, even for experts, especially when the optimization must be repeated from scratch for each new use case. Automated machine learning (AutoML) is an active research area that aims to reduce the human effort required for architecture design, usually covering hyperparameter optimization and neural architecture search. AutoML has demonstrated success (Zoph and Le, 2016; Pham et al., 2018; Zoph et al., 2018; Cai et al., 2018; He et al., 2018; Guo et al., 2020; Erickson et al., 2020; LeDell and Poirier, 2020) in many application domains.

Finding a reasonably good model for a new learning _task_ in a computationally efficient manner is crucial for making deep learning accessible to domain experts with diverse backgrounds. Efficient AutoML is especially important in domains where the best architectures/hyperparameters are highly sensitive to the task. A notable example is the domain of graph learning. _First_, graph learning methods receive input data composed of _a variety of data types_ and optimize over tasks that span an equally _diverse set of domains and modalities_ such as recommendation (Ying et al., 2018; He et al., 2020), physical simulation (Sanchez-Gonzalez et al., 2020; Pfaff et al., 2020), and bioinformatics (Zitnik et al., 2018). This differs from computer vision and natural language processing, where the input data has a predefined, fixed structure that can be shared across different neural architectures.
_Second_, neural networks that operate on graphs come with _a rich set of design choices_ and _a large set of parameters to explore_. However, unlike other domains where a few pre-trained architectures such as ResNet (He et al., 2016) and GPT-3 (Brown et al., 2020) dominate the benchmarks, it has been shown that the best graph neural network (GNN) design is highly task-dependent (You et al., 2020).

Although AutoML as a research domain is evolving fast, existing AutoML solutions have massive computational overhead when the goal is to find a good model for a _new_ learning task. Most present AutoML techniques consider each task independently and in isolation, and therefore require redoing the search from scratch for each new task. This approach ignores the potentially valuable architectural design knowledge obtained from previous tasks, and inevitably leads to a high computational cost. The issue is especially significant in the graph learning domain (Gao et al., 2019; Zhou et al., 2019), due to the challenges of diverse task types and the huge design space discussed above.

Here we propose AutoTransfer3, an AutoML solution that drastically improves AutoML architecture search by transferring previous architectural design knowledge to the task of interest. Our key innovation is to introduce a _task-model bank_ that stores the performance of a diverse set of GNN architectures and tasks to guide the search algorithm. To enable knowledge transfer, we define a _task embedding_ space such that tasks close in the embedding space have similar corresponding top-performing architectures. The challenge here is that the task embedding needs to capture the performance rankings of different architectures on different datasets while being efficient to compute. Our innovation here is to embed a task by using the condition number of its Fisher Information Matrix of various _randomly initialized_ models, together with a learning scheme with an empirical generalization guarantee. This way we implicitly capture the properties of the learning task, while being orders of magnitude faster (within seconds). We then estimate the design prior of desirable models for the new task by aggregating design distributions on tasks that are close to the task of interest. Finally, we initiate a hyperparameter search algorithm with the computed task-informed design prior.

Footnote 3: Source code is available at https://github.com/snap-stanford/AutoTransfer.

We evaluate AutoTransfer on six datasets, including both node classification and graph classification tasks. We show that our proposed task embeddings can be computed efficiently and that the distance measured between tasks correlates highly (0.43 Kendall correlation) with model performance rankings. Furthermore, we show that AutoTransfer significantly improves search efficiency when using the transferred design prior, reducing the number of explored architectures needed to reach a target accuracy by _an order of magnitude_ compared to SOTA. Finally, we release GNN-Bank-101--the first large-scale database containing detailed performance records for 120,000 task-model combinations, trained with 16,128 GPU hours--to facilitate future research.

## 2 Related Work

In this section, we summarize related work on AutoML regarding its applications to GNNs, the common search algorithms, and pioneering work on transfer learning and task embeddings.
**AutoML for GNNs.** Neural architecture search (NAS), a unique and popular form of AutoML for deep learning, can be divided into two categories: multi-trial NAS and one-shot NAS. During multi-trial NAS, each sampled architecture is trained separately. GraphNAS (Gao et al., 2020) and Auto-GNN (Zhou et al., 2019) are typical multi-trial NAS algorithms on GNNs which adopt an RNN controller that learns to suggest better sets of configurations through reinforcement learning. One-shot NAS (_e.g._, Liu et al., 2018; Qin et al., 2021; Li et al., 2021) involves encapsulating the entire model space in one super-model, training the super-model once, and then iteratively sampling sub-models from the super-model to find the best one. In addition, there is work that explicitly studies fine-grained design choices such as data augmentation (You et al., 2021), message passing layer type (Cai et al., 2021; Ding et al., 2021; Zhao et al., 2021), and graph pooling (Wei et al., 2021). Notably, AutoTransfer is the _first_ AutoML solution for GNNs that efficiently transfers design knowledge across tasks.

**HPO Algorithms.** Hyperparameter optimization (HPO) algorithms search for the optimal model hyperparameters by iteratively suggesting a set of hyperparameters and evaluating their performance. Random search samples hyperparameters from the search space with equal probability. Despite not learning from previous trials, random search is commonly used for its simplicity and is much more efficient than grid search (Bergstra and Bengio, 2012). The TPE algorithm (Bergstra et al., 2011) builds a probabilistic model of task performance over the hyperparameter space and uses the results of past trials to choose the most promising next configuration to train, which the TPE algorithm defines as maximizing the Expected Improvement value (Jones, 2001). Evolutionary algorithms (Real et al., 2017; Jaderberg et al., 2017) train multiple models in parallel and replace poorly performing models with "mutated" copies of the current best models. AutoTransfer is a general AutoML solution and can be applied in combination with any of these HPO algorithms.

**Transfer Learning in AutoML.** Wong et al. (2018) proposed to transfer knowledge across tasks by reloading the controller of reinforcement learning search algorithms. However, this method assumes that the search space on different tasks starts with the same learned prior. Unlike AutoTransfer, it cannot address the core challenge in GNN AutoML: the best GNN design is highly task-specific. GraphGym (You et al., 2020) attempts to transfer the best architecture design directly with a metric space that measures task similarity, computed by training a set of 12 "anchor models" to convergence, which is computationally expensive. In contrast, AutoTransfer designs lightweight task embeddings requiring minimal computational overhead. Additionally, Zhao and Bilen (2021) and Li et al. (2021) propose to conduct architecture search on a proxy subset of the whole dataset and later transfer the best searched architecture to the full dataset; Jeong et al. (2021) studies a similar setting in the vision domain.

**Task Embedding.** There is prior research trying to quantify task embeddings and similarities. Similar to GraphGym, Taskonomy (Zamir et al., 2018) estimates the task affinity matrix by summarizing final losses/evaluation metrics using an Analytic Hierarchy Process (Saaty, 1987).
From a different perspective, Task2Vec (Achille et al., 2019) generates task embeddings for a given task using the Fisher Information Matrix associated with a pre-trained probe network. This probe network is shared across tasks and allows Task2Vec to estimate the Fisher Information Matrix of different image datasets. Le et al. (2022) extends a similar idea to neural architecture search. The aforementioned task embeddings cannot be directly applied to GNNs, as the inputs do not align across datasets. AutoTransfer avoids this bottleneck by using asymptotic statistics of the Fisher Information Matrix with randomly initialized weights.

## 3 Problem Formulation and Preliminaries

We first introduce formal definitions of data structures relevant to AutoTransfer.

Figure 1: **Overview of AutoTransfer. Left: We introduce GNN-Bank-101, a large database containing a diverse set of GNN architectures and hyperparameters applied to different tasks, along with their training/evaluation statistics. Middle: We introduce a task embedding space, where each point corresponds to a different task. Tasks close in the embedding space have similar corresponding top-performing models. Right: Given a new task of interest, we guide the AutoML search by referencing the design distributions of the most similar tasks in the task embedding space.**

**Definition 1** (Task): _We denote a task as \(T=(\mathcal{D},\mathcal{L}(\cdot))\), consisting of a dataset \(\mathcal{D}\) and a loss function \(\mathcal{L}(\cdot)\) related to the evaluation metric._

For each training attempt on a task \(T^{(i)}\), we can record its model architecture \(M_{j}\), hyperparameters \(H_{j}\), and corresponding value of loss \(l_{j}\), _i.e._, \((M_{j},H_{j},l_{j})\). We propose to maintain a task-model bank to facilitate knowledge transfer to future novel tasks.

**Definition 2** (Task-Model Bank): _A task-model bank \(\mathcal{B}\) is defined as a collection of tasks, each with multiple training attempts, in the form of \(\mathcal{B}=\{(T^{(i)},\{(M_{j}^{(i)},H_{j}^{(i)},l_{j}^{(i)})\})\}\)._

**AutoML with Knowledge Transfer.** Suppose we have a task-model bank \(\mathcal{B}\). Given a novel task \(T^{(n)}\) which has not been seen before, our goal is to quickly find a model that works reasonably well on the novel task by utilizing knowledge from the task-model bank. In this paper, we focus on AutoML for graph learning tasks, though our developed technique is general and can be applied to other domains.

We define the input graph as \(G=\{V,E\}\), where \(V\) is the node set and \(E\subseteq V\times V\) is the edge set. Furthermore, let \(y\) denote its output labels, which can be node-level, edge-level or graph-level. A GNN parameterized by weights \(\theta\) outputs a posterior distribution \(\mathcal{P}(G,y,\theta)\) for label predictions.

## 4 Proposed Solution: AutoTransfer

In this section, we introduce the proposed AutoTransfer solution. AutoTransfer uses the _task embedding space_ as a tool to understand the relevance of previous architectural designs to the target task. The designed task embedding captures the performance rankings of different architectures on different tasks while also being efficient to compute. We first introduce a theoretically motivated solution to extract a scale-invariant performance representation of each task-model pair. We use these representations to construct task features and further learn task embeddings. These embeddings form the task embedding space that we finally use during the AutoML search.
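As a concrete, hypothetical illustration, Definitions 1 and 2 map naturally onto simple records; the field names below are our own sketch, not the released GNN-Bank-101 schema.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class Task:
    """Definition 1: a dataset plus a loss function tied to the metric."""
    dataset: object        # the dataset D, e.g., a graph dataset
    loss_fn: Callable      # L(.)

@dataclass
class Trial:
    """One training attempt: architecture M_j, hyperparameters H_j, loss l_j."""
    arch: Dict[str, str]       # e.g., {"conv": "GCN", "act": "ReLU"}
    hparams: Dict[str, float]  # e.g., {"lr": 0.01, "dropout": 0.5}
    loss: float

@dataclass
class TaskModelBank:
    """Definition 2: a collection of tasks, each with many recorded trials."""
    entries: List[Tuple[Task, List[Trial]]] = field(default_factory=list)

    def add(self, task: Task, trials: List[Trial]) -> None:
        self.entries.append((task, trials))

    def top_k(self, i: int, k: int) -> List[Trial]:
        """Top-K trials of task i by loss; used to build design distributions."""
        return sorted(self.entries[i][1], key=lambda t: t.loss)[:k]
```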
### Basics of the Fisher Information Matrix (FIM)

Given a GNN defined above, its Fisher Information Matrix (FIM) \(F\) is defined as

\[F=\mathbb{E}_{G,y}[\nabla_{\theta}\log\mathcal{P}(G,y,\theta)\;\nabla_{\theta}\log\mathcal{P}(G,y,\theta)^{\top}],\]

which formally is the expected covariance of the scores with respect to the model parameters. There are two popular geometric views of the FIM. First, the FIM is an upper bound of the Hessian and coincides with the Hessian if the gradient is 0; thus, the FIM characterizes the local landscape of the loss function near the global minimum. Second, similar to the Hessian, the FIM models the loss landscape with respect not to the input space, but to the parameter space. In the information geometry view, if we add a small perturbation to the parameter space, we have

\[\text{KL}(\mathcal{P}(G,y,\theta)\|\mathcal{P}(G,y,\theta+d\theta))=d\theta^{\top}Fd\theta,\]

where \(\text{KL}(\cdot,\cdot)\) stands for the Kullback-Leibler divergence. This means that the parameter space of a model forms a Riemannian manifold and the FIM works as its Riemannian metric. The FIM thus allows us to quantify the importance of a model's weights in a way that is applicable to different architectures.

Figure 2: **Pipeline for extracting task embeddings. Left: To efficiently embed a task, we first extract task features by concatenating features measured from \(R\) randomly initialized anchor models. Then, we introduce a projection function \(g(\cdot)\) with learned weights to transform the task features into task embeddings. Right: Training objective for optimizing \(g(\cdot)\) with triplet supervision.**

### FIM-based Task Features

**Scale-invariant Representation of Task-Model Pairs.** We aim to find a scale-invariant representation for each task-model pair which will form the basis for constructing task features. The major challenge in using the FIM to represent GNN performance is that graph datasets do not have a universal, fixed input structure, so it is infeasible to find a single pre-trained model and extract its FIM. However, training multiple networks poses a problem, as the FIMs computed for different networks are not directly comparable. We choose to use multiple networks but additionally propose to use asymptotic statistics of the FIM associated with randomly initialized weights. The theoretical justification for the relationship between the asymptotic statistics of the FIM and the trainability of neural networks was studied in (Karakida et al., 2019; Pennington and Worah, 2018), to which we refer the readers. We hypothesize that such a measure of trainability encodes loss landscapes and generalization ability and thus correlates with final model performance on the task.

Another issue related to the input structures of graph datasets is that different models have different numbers of parameters. Aside from some specially designed architectures (_e.g._, Lee et al., 2019; Ma et al., 2019), most GNN architecture designs can be represented as a sequence of pre-processing layers, message passing layers, and post-processing layers. Pre-processing and post-processing layers are multilayer perceptron (MLP) layers, whose dimensions vary across different tasks due to different input/output structures. Message passing layers are commonly regarded as the key design for GNNs, and the number of weight parameters can remain the same across tasks.
In this light, we only consider the FIM with respect to the parameters in message passing layers so that the number of parameters considered stays the same for all datasets. We note that this formulation has its limitations, in the sense that it cannot cover all the GNN designs in the literature; we leave potential extensions with better coverage for future work. We further approximate the FIM by only considering the diagonal entries, which implicitly neglects the correlations between parameters. We note that this is common practice when analyzing the FIMs of deep neural networks, as the full FIM is massive (quadratic in the number of parameters) and infeasible to compute even on modern hardware. Similar to Pennington and Worah (2018), we consider the first two moments of the FIM,

\[m_{1}=\frac{1}{n}\text{tr}[F]\quad\text{and}\quad m_{2}=\frac{1}{n}\text{tr}[F^{2}], \tag{1}\]

and use \(\alpha=m_{2}/m_{1}^{2}\) as the scale-invariant representation. The computed \(\alpha\) is lower bounded by 1 and captures how concentrated the spectrum is. A small \(\alpha\) indicates that the loss landscape is flat, and its corresponding model design enjoys fast first-order optimization and potentially better generalization. To encode label space information into each task, we propose to train only the last linear layer of each model on a given task, which can be done efficiently. The parameters in other layers are frozen after being randomly initialized. We take the average over \(R\) initializations to estimate the average \(\bar{\alpha}\).

**Constructing Task Features.** We denote task features as measures extracted from each task that characterize its important traits. The design of task features should reflect our final objective: to use these features to identify similar tasks and transfer the best design distributions. Thus, we select \(U\) model designs as anchor models and concatenate the scale-invariant representations \(\bar{\alpha}_{u}\) of each design as task features. To retain only the relative ranking among anchor model designs, we normalize the concatenated feature vector to unit norm. We let \(\mathbf{z}_{f}\) denote the normalized task feature.
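To make Eq. 1 concrete, the diagonal-FIM moments and the scale-invariant ratio \(\alpha\) can be estimated from per-example score gradients roughly as follows. This is a sketch under the diagonal approximation; obtaining the per-example gradients of \(\log\mathcal{P}(G,y,\theta)\) with respect to the message-passing parameters of a randomly initialized GNN is framework-specific and elided here, and the function names are ours.

```python
import numpy as np

def alpha_from_score_grads(grads):
    """grads: (num_examples, num_params) array of per-example gradients of
    log P(G, y, theta) w.r.t. the message-passing parameters.
    Returns alpha = m2 / m1**2 under the diagonal FIM approximation."""
    fim_diag = (grads ** 2).mean(axis=0)   # diagonal of E[g g^T]
    n = fim_diag.size
    m1 = fim_diag.sum() / n                # (1/n) tr[F]
    m2 = (fim_diag ** 2).sum() / n         # (1/n) tr[F^2], diagonal approx.
    return m2 / m1 ** 2                    # >= 1; small => flat landscape

def task_feature(alphas_per_anchor):
    """Average alpha over the R repeats of each of the U anchor models,
    concatenate, and normalize to unit norm, giving the task feature z_f."""
    z = np.array([np.mean(a) for a in alphas_per_anchor])
    return z / np.linalg.norm(z)
```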
### From Task Features to Task Embeddings

The task feature \(\mathbf{z}_{f}\) introduced above can be regarded as a means of feature engineering: we construct the feature vector with domain knowledge, but there is no guarantee it functions as anticipated. We thus propose to learn a projection function \(g(\cdot):\mathbb{R}^{U}\rightarrow\mathbb{R}^{D}\) that maps a task feature \(\mathbf{z}_{f}\) to a final task embedding \(\mathbf{z}_{e}=g(\mathbf{z}_{f})\). We do not have any pointwise supervision that can be used as the training objective. Instead, we consider the metric space defined by GraphGym, whose distance function - computed using the Kendall rank correlation between performance rankings of anchor models trained on the two compared tasks - correlates nicely with our desired knowledge transfer objective. It is not meaningful to enforce that task embeddings mimic GraphGym's exact metric space, as GraphGym's metric space can still contain noise or may not fully align with the transfer objective. We therefore consider a surrogate loss that enforces only the rank order among tasks. To illustrate, consider tasks \(T^{(i)},T^{(j)}\), \(T^{(k)}\) and their corresponding task embeddings \(\mathbf{z}_{e}^{(i)}\), \(\mathbf{z}_{e}^{(j)}\), \(\mathbf{z}_{e}^{(k)}\). Note that \(\mathbf{z}_{e}\) is normalized to 1, so \({\mathbf{z}_{e}^{(i)}}^{\top}\mathbf{z}_{e}^{(j)}\) measures the cosine similarity between tasks \(T^{(i)}\) and \(T^{(j)}\). Let \(d_{g}(\cdot,\cdot)\) denote the distance estimated by GraphGym. We want to enforce

\[{\mathbf{z}_{e}^{(i)}}^{\top}\mathbf{z}_{e}^{(j)}>{\mathbf{z}_{e}^{(i)}}^{\top}\mathbf{z}_{e}^{(k)}\quad\text{if}\quad d_{g}(T^{(i)},T^{(j)})<d_{g}(T^{(i)},T^{(k)}).\]

To achieve this, we use the margin ranking loss as our surrogate supervised objective function:

\[\mathcal{L}_{r}(\mathbf{z}_{e}^{(i)},\mathbf{z}_{e}^{(j)},\mathbf{z}_{e}^{(k)},y)=\max(0,-y\cdot({\mathbf{z}_{e}^{(i)}}^{\top}\mathbf{z}_{e}^{(j)}-{\mathbf{z}_{e}^{(i)}}^{\top}\mathbf{z}_{e}^{(k)})+\text{margin}). \tag{2}\]

Here, if \(d_{g}(T^{(i)},T^{(j)})<d_{g}(T^{(i)},T^{(k)})\), the corresponding label is \(y=1\), and \(y=-1\) otherwise. Our final task embedding space is then a FIM-based metric space with a cosine distance function, where the distance is defined as \(d_{e}(T^{(i)},T^{(j)})=1-{\mathbf{z}_{e}^{(i)}}^{\top}\mathbf{z}_{e}^{(j)}\). Please refer to the detailed training pipeline in Algorithm 2 in the Appendix.
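As an illustration of Eq. 2, PyTorch's built-in margin ranking loss matches this surrogate objective directly. The small MLP standing in for \(g(\cdot)\), the dimensions, and the margin value below are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

U, D = 12, 16  # number of anchor models (feature dim) and embedding dim
g = nn.Sequential(nn.Linear(U, 64), nn.ReLU(), nn.Linear(64, D))
rank_loss = nn.MarginRankingLoss(margin=0.1)  # max(0, -y*(x1 - x2) + margin)

def triplet_loss(zf_i, zf_j, zf_k, y):
    """zf_*: (batch, U) task features; y: (batch,) labels in {+1, -1},
    with y = +1 iff d_g(T_i, T_j) < d_g(T_i, T_k), as in Eq. 2."""
    def embed(zf):
        ze = g(zf)
        return ze / ze.norm(dim=-1, keepdim=True)  # normalize to unit length
    zi, zj, zk = embed(zf_i), embed(zf_j), embed(zf_k)
    sim_ij = (zi * zj).sum(-1)  # cosine similarity z_i^T z_j
    sim_ik = (zi * zk).sum(-1)
    return rank_loss(sim_ij, sim_ik, y)
```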
### AutoML Search Algorithm with Task Embeddings

To transfer knowledge to a novel task, a naive idea would be to directly carry over the best model configuration from the closest task in the bank. However, even a high Kendall rank correlation between the model performance rankings of two tasks \(T^{(i)}\), \(T^{(j)}\) does not guarantee that the best model configuration on task \(T^{(i)}\) will also achieve the best performance on task \(T^{(j)}\). In addition, since task similarities are subject to noise, this naive solution may struggle when there exist multiple reference tasks that are all highly similar. To make the knowledge transfer more robust to such failure cases, we introduce the notion of design distributions that depend on the top-performing model designs, and propose to transfer design distributions rather than the best design configurations.

Formally, consider a task \(T^{(i)}\) in the task-model bank \(\mathcal{B}\), associated with its trials \(\{(M^{(i)}_{j},H^{(i)}_{j},l^{(i)}_{j})\}\). We can summarize its designs as a list of configurations \(C=\{c_{1},\ldots,c_{W}\}\), such that all potential combinations of model architectures \(M\) and hyperparameters \(H\) fall under the Cartesian product of the configurations. For example, \(c_{1}\) could be the instantiation of the aggregation layers, and \(c_{2}\) could be the initial learning rate. We then define design distributions as random variables \(\texttt{c}_{1},\texttt{c}_{2},\ldots,\texttt{c}_{W}\), each corresponding to a hyperparameter. Each random variable \(\texttt{c}\) is defined as the frequency distribution of the design choices used in the top \(K\) trials. We multiply the distributions of the individual configurations \(\{\texttt{c}_{1},\ldots,\texttt{c}_{W}\}\) to approximate the overall design distribution of the task, \(\mathcal{P}(C|T^{(i)})=\prod_{w}\mathcal{P}(\texttt{c}_{w}|T^{(i)})\).

During inference, given a novel task \(T^{(n)}\), we select a close task subset \(\mathcal{S}\) by thresholding task embedding distances, _i.e._, \(\mathcal{S}=\{T^{(i)}\,|\,d_{e}(T^{(n)},T^{(i)})\leq d_{\text{thres}}\}\). We then derive the transferred design prior \(\mathcal{P}_{t}(C|T^{(n)})\) of the novel task by weighting the design distributions from the close task subset \(\mathcal{S}\):

\[\mathcal{P}_{t}(C|T^{(n)})=\frac{\sum_{T^{(i)}\in\mathcal{S}}\frac{1}{d_{e}(T^{(n)},T^{(i)})}\mathcal{P}(C|T^{(i)})}{\sum_{T^{(i)}\in\mathcal{S}}\frac{1}{d_{e}(T^{(n)},T^{(i)})}}. \tag{3}\]

The inferred design prior for the novel task can then be used to guide various search algorithms. The most natural choice for the few-trial regime is random search: rather than sampling each design configuration from a uniform distribution, we propose to sample from the task-informed design prior \(\mathcal{P}_{t}(C|T^{(n)})\). Please refer to Appendix A for how we augment other search algorithms. For AutoTransfer, we can preprocess the task-model bank \(\mathcal{B}\) into \(\mathcal{B}_{p}=\{((\mathcal{D}^{(i)},\mathcal{L}^{(i)}(\cdot)),\mathbf{z}_{e}^{(i)},\mathcal{P}(C|T^{(i)}))\}\), as our pipeline only requires the task embedding \(\mathbf{z}_{e}^{(i)}\) and the design distribution \(\mathcal{P}(C|T^{(i)})\) rather than the detailed training trials. A detailed search pipeline is summarized in Algorithm 1.

```
Input: A processed task-model bank \(\mathcal{B}_{p}=\{((\mathcal{D}^{(i)},\mathcal{L}^{(i)}(\cdot)),\mathbf{z}_{e}^{(i)},\mathcal{P}(C|T^{(i)}))\}\); a novel task \(T^{(n)}=(\mathcal{D}^{(n)},\mathcal{L}^{(n)}(\cdot))\); \(U\) anchor models \(M_{1},\ldots,M_{U}\); number of random initializations \(R\).
1: for \(u=1\) to \(U\) do
2:   for \(r=1\) to \(R\) do
3:     Initialize weights \(\theta\) for anchor model \(M_{u}\) randomly
4:     Estimate the FIM \(F\leftarrow\mathbb{E}_{\mathcal{D}^{(n)}}[\nabla_{\theta}\log\mathcal{P}(G,y,\theta)\;\nabla_{\theta}\log\mathcal{P}(G,y,\theta)^{\top}]\)
5:     Extract the scale-invariant representation \(\alpha_{u}^{(r)}\gets m_{2}/m_{1}^{2}\) following Eq. 1
6:   end for
7:   \(\bar{\alpha}_{u}\leftarrow\text{mean}(\alpha_{u}^{(1)},\alpha_{u}^{(2)},\ldots,\alpha_{u}^{(R)})\)
8: end for
9: \(\mathbf{z}_{f}^{(n)}\leftarrow\text{concat}(\bar{\alpha}_{1},\bar{\alpha}_{2},\ldots,\bar{\alpha}_{U})\)
10: \(\mathbf{z}_{e}^{(n)}\gets g(\mathbf{z}_{f}^{(n)})\)
11: Select the close task subset \(\mathcal{S}\leftarrow\{T^{(i)}\,|\,1-{\mathbf{z}_{e}^{(n)}}^{\top}\mathbf{z}_{e}^{(i)}\leq d_{\text{thres}}\}\)
12: Get the design prior \(\mathcal{P}_{t}(C|T^{(n)})\) by aggregating the subset \(\mathcal{S}\) following Eq. 3
13: Start an HPO search algorithm with the task-informed design prior \(\mathcal{P}_{t}(C|T^{(n)})\)
```
**Algorithm 1** Summary of the AutoTransfer search pipeline
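To make steps 11-13 and Eq. 3 concrete, here is a minimal sketch of the prior aggregation and the prior-guided random sampling. The dictionary-based representation of per-hyperparameter distributions and all names are illustrative, not the released implementation.

```python
import random

def aggregate_design_prior(close_tasks):
    """close_tasks: list of (d_e distance, prior) pairs, where each prior maps
    a hyperparameter name to {choice: probability}. Implements the
    1/d_e-weighted, normalized sum of Eq. 3, per hyperparameter."""
    weights = [1.0 / d for d, _ in close_tasks]
    total = sum(weights)
    agg = {}
    for w, (_, prior) in zip(weights, close_tasks):
        for hparam, dist in prior.items():
            slot = agg.setdefault(hparam, {})
            for choice, p in dist.items():
                slot[choice] = slot.get(choice, 0.0) + w * p / total
    return agg

def sample_config(prior):
    """Task-informed random search: sample each hyperparameter independently
    from the transferred design prior instead of a uniform distribution."""
    return {h: random.choices(list(d), weights=list(d.values()))[0]
            for h, d in prior.items()}

# Hypothetical usage with two similar tasks drawn from the bank:
prior = aggregate_design_prior([
    (0.1, {"conv": {"GCN": 0.7, "GAT": 0.3}, "lr": {0.01: 0.6, 0.001: 0.4}}),
    (0.4, {"conv": {"GCN": 0.4, "GAT": 0.6}, "lr": {0.01: 0.5, 0.001: 0.5}}),
])
print(sample_config(prior))
```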
## 5 Experiments

### Experimental Setup

**Task-Model Bank: GNN-Bank-101.** To facilitate AutoML research with knowledge transfer, we collected GNN-Bank-101, the first large-scale graph database that records reproducible design configurations and detailed training performance on a variety of tasks. Specifically, GNN-Bank-101 currently includes six tasks for node classification (AmazonComputers (Shchur et al., 2018), AmazonPhoto (Shchur et al., 2018), CiteSeer (Yang et al., 2016), CoauthorCS (Shchur et al., 2018), CoauthorPhysics (Shchur et al., 2018), Cora (Yang et al., 2016)) and six tasks for graph classification (PROTEINS (Ivanov et al., 2019), BZR (Ivanov et al., 2019), COX2 (Ivanov et al., 2019), DD (Ivanov et al., 2019), ENZYMES (Ivanov et al., 2019), IMDB-M (Ivanov et al., 2019)). Our design space follows (You et al., 2020), and we extend the design space to include various commonly adopted graph convolution and activation layers. We extensively run 10,000 different models for each task, leading to 120,000 total task-model combinations, and record all training information including train/val/test loss.

**Benchmark Datasets.** We benchmark AutoTransfer on six different datasets following prior work (Qin et al., 2021). Our datasets include three standard node classification datasets (CoauthorPhysics (Shchur et al., 2018), CoraFull (Bojchevski and Gunnemann, 2017) and OGB-Arxiv (Hu et al., 2020)), as well as three standard benchmark graph classification datasets (COX2 (Ivanov et al., 2019), IMDB-M (Ivanov et al., 2019) and PROTEINS (Ivanov et al., 2019)). CoauthorPhysics and CoraFull are transductive node classification datasets, so we randomly assign nodes into train/valid/test sets following a 50%:25%:25% split (Qin et al., 2021). We randomly split graphs following an 80%:10%:10% split for the three graph classification datasets (Qin et al., 2021). We follow the default train/valid/test split for the OGB-Arxiv dataset (Hu et al., 2020). To make sure there is no information leakage, we temporarily _remove_ all records related to a benchmark task from our task-model bank if the corresponding dataset was collected in the task-model bank.

**Baselines.** We compare our method with state-of-the-art approaches for GNN AutoML. We use GCN and GAT with default architectures following their original implementations as baselines. For multi-trial NAS methods, we consider GraphNAS (Gao et al., 2020). For one-shot NAS methods, we include DARTS (Liu et al., 2018) and GASSO (Qin et al., 2021); GASSO is designed for transductive settings, so we omit it for the graph classification benchmarks. We further provide results of HPO algorithms based on our proposed search space as baselines: Random, Evolution, TPE (Bergstra et al., 2011) and HyperBand (Li et al., 2017). By default, we allow at most 30 search trials for all algorithms, _i.e._, an algorithm can train 30 different models and keep the model with the best accuracy. We use the default setting for the one-shot NAS algorithms (DARTS and GASSO), as they only train a super-model once and can efficiently evaluate different architectures. We are mostly interested in studying the few-trial regime, where most advanced search algorithms degrade to random search; thus, we additionally include a random search (3 trials) baseline where we pick the best model out of only 3 trials.

### Experiments on Search Efficiency

We evaluate AutoTransfer by reporting the average best test accuracy among all trials considered over ten runs of each algorithm in Table 1. The test accuracy for each trial is collected at the epoch with the best validation accuracy. By comparing the results of random search (3 trials) and AutoTransfer (3 trials), we show that our transferred task-informed design prior significantly improves test accuracy in the few-trial regime and can be very useful in computationally constrained environments. Even if we increase the number of search trials to 30, AutoTransfer still demonstrates a non-trivial improvement compared with TPE, indicating that our proposed pipeline has advantages even when computational resources are abundant. Notably, with only 3 search trials, AutoTransfer surpasses most of the baselines, even those that use 30 trials. To better understand the sample efficiency of AutoTransfer, we plot the best test accuracy found at each trial in Figure 3 for the OGB-Arxiv and TU-PROTEINS datasets. We notice that the advanced search algorithms (Evolution and TPE) have no advantage over random search in the few-trial regime, since the amount of prior search data is not yet sufficient to infer potentially better design configurations.
On the contrary, by sampling from the transferred design prior, AutoTransfer achieves significantly better average test accuracy in the first few trials. The best test accuracy at trial 3 of AutoTransfer surpasses its counterpart at trial 10 for every other method.

\begin{table} \begin{tabular}{c|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{Node} & \multicolumn{3}{c}{Graph} \\ Method & Physics & CoraFull & OGB-Arxiv & COX2 & IMDB-M & PROTEINS \\ \hline GCN (30 trials) & 95.88\(\pm\)0.16 & 67.12\(\pm\)0.52 & 70.46\(\pm\)0.18 & 79.23\(\pm\)2.19 & 50.40\(\pm\)3.02 & 74.84\(\pm\)2.82 \\ GAT (30 trials) & 95.71\(\pm\)0.24 & 65.92\(\pm\)0.68 & 68.82\(\pm\)0.32 & 81.56\(\pm\)4.17 & 49.67\(\pm\)4.30 & 75.30\(\pm\)3.72 \\ GraphNAS (30 trials) & 92.77\(\pm\)0.84 & 63.13\(\pm\)3.28 & 65.90\(\pm\)2.64 & 77.73\(\pm\)4.04 & 46.93\(\pm\)3.94 & 72.51\(\pm\)3.36 \\ DARTS & 95.28\(\pm\)1.67 & 65.79\(\pm\)2.85 & 69.02\(\pm\)1.18 & 79.82\(\pm\)3.15 & 50.26\(\pm\)4.08 & 75.04\(\pm\)3.81 \\ GASSO & 96.38 & 68.89 & 70.52 & - & - & - \\ \hline Random (3 trials) & 95.16\(\pm\)0.55 & 61.24\(\pm\)4.04 & 67.92\(\pm\)1.92 & 76.88\(\pm\)3.17 & 45.79\(\pm\)4.39 & 72.47\(\pm\)2.57 \\ TPE (30 trials) & 96.41\(\pm\)0.36 & 66.37\(\pm\)1.73 & 71.35\(\pm\)4.04 & 82.27\(\pm\)2.00 & 50.33\(\pm\)4.00 & 79.46\(\pm\)1.28 \\ HyperBand (30 trials) & 96.56\(\pm\)0.30 & 67.75\(\pm\)1.24 & 71.60\(\pm\)0.36 & 82.21\(\pm\)1.79 & 50.86\(\pm\)3.45 & 79.32\(\pm\)1.16 \\ \hline \hline **AutoTransfer (3 trials)** & 96.64\(\pm\)0.42 & 69.27\(\pm\)0.76 & 71.24\(\pm\)0.39 & 82.13\(\pm\)1.59 & 52.33\(\pm\)2.13 & 77.81\(\pm\)2.19 \\ **AutoTransfer (30 trials)** & 96.91\(\pm\)0.27 & 70.05\(\pm\)0.42 & 72.21\(\pm\)0.27 & 86.52\(\pm\)1.58 & 54.93\(\pm\)1.23 & 81.25\(\pm\)1.17 \\ \hline \hline \end{tabular} \end{table}

Table 1: Performance comparisons of AutoTransfer and other baselines. We report the average test accuracy and the standard deviation over ten runs. With only 3 trials, AutoTransfer already outperforms most SOTA baselines with 30 trials.

Figure 3: Performance comparisons in the few-trial regime. At trial \(t\), we plot the best test accuracy among all models searched from trial \(1\) to trial \(t\). AutoTransfer can reduce the number of trials needed to search by an order of magnitude (see also Table 4 in the Appendix).

### Analysis of Task Embeddings

**Qualitative analysis of task features.** To examine the quality of the proposed task features, we visualize the proposed task similarity matrix (Figure 4 (b)) along with the task similarity matrix (Figure 4 (a)) proposed in GraphGym. We show that our proposed task similarity matrix captures similar patterns as GraphGym's task similarity matrix while being computed much more efficiently, by omitting training. We notice that the same types of tasks, _i.e._, node classification and graph classification, share more similarities within each group. As a sanity check, we verified that the closest task in the bank with respect to CoraFull is Cora, and that the top 3 closest tasks for OGB-Arxiv are AmazonComputers, AmazonPhoto, and CoauthorPhysics, all of which are node classification tasks.

**Generalization of projection function \(g(\cdot)\).** To show that the proposed projection function \(g(\cdot)\) can generate task embeddings that generalize to novel tasks, we conduct leave-one-out cross validation with all tasks in our task-model bank.
Concretely, for each task considered as a novel task \(T^{(n)}\), we use the rest of the tasks, along with their distance metric \(d_{g}(\cdot,\cdot)\) estimated by GraphGym's exact but computationally expensive metric space, to train the projection function \(g(\cdot)\). We calculate the Kendall rank correlation over task similarities for the Task Feature (without \(g(\cdot)\)) and the Task Embedding (with \(g(\cdot)\)) against the exact task similarities. The average rank correlation and the standard deviation over ten runs are shown in Figure 4 (c). We find that with the proposed \(g(\cdot)\), our task embeddings indeed correlate better with the exact task similarities and, therefore, generalize better to novel tasks.

**Ablation study on alternative task space designs.** To demonstrate the superiority of the proposed task embedding, we further compare it with alternative task features. Following prior work (Yang et al., 2019), we use the normalized losses over the first 10 steps as the task feature. The results on OGB-Arxiv are shown in Table 2. Compared to AutoTransfer's task embedding, the task feature induced by normalized losses has a lower ranking correlation with the exact metric and yields worse performance. Table 2 further justifies the efficacy of using the Kendall rank correlation as the metric for task embedding quality, as a higher Kendall rank correlation leads to better performance.

\begin{table} \begin{tabular}{c|c c} \hline \hline & Kendall rank correlation & Test accuracy \\ \hline Alternative: Normalized Loss & -0.07\(\pm\)0.43 & 68.13\(\pm\)1.27 \\ AutoTransfer’s Task Feature & 0.18\(\pm\)0.30 & 70.67\(\pm\)0.52 \\ AutoTransfer’s Task Embedding & 0.43\(\pm\)0.22 & 71.42\(\pm\)0.39 \\ \hline \hline \end{tabular} \end{table}

Table 2: Ablation study on an alternative task space design versus AutoTransfer's task embedding. We report the average test accuracy and the standard deviation on OGB-Arxiv over ten runs.

Figure 4: (a) GraphGym's task similarity between all pairs of tasks (computed from the Kendall rank correlation between performance rankings of models trained on the two compared tasks); a higher value represents a higher similarity. (b) The proposed task similarity, computed by taking the dot product between extracted task features. (c) The Kendall rank correlation, between the proposed method and GraphGym, of the similarity rankings of the other tasks with respect to the central task.

## 6 Conclusion

In this paper, we study how to improve AutoML search efficiency by transferring existing architectural design knowledge to novel tasks of interest. We introduce a _task-model bank_ that captures the performance over a diverse set of GNN architectures and tasks. We also introduce a computationally efficient _task embedding_ that can accurately measure the similarity between different tasks. We release GNN-Bank-101, a large-scale database that records detailed GNN training information of 120,000 task-model combinations. We hope this work can facilitate and inspire future research in efficient AutoML to make deep learning more accessible to a general audience.

## Acknowledgements

We thank Xiang Lisa Li, Hongyu Ren, Yingxin Wu for discussions and for providing feedback on our manuscript. We also gratefully acknowledge the support of DARPA under Nos. HR00112190039 (TAMI), N660011924033 (MCS); ARO under Nos. W911NF-16-1-0342 (MURI), W911NF-16-1-0171 (DURIP); NSF under Nos. OAC-1835598 (CINES), OAC-1934578 (HDR), CCF-1918940 (Expeditions), NIH under No.
3U54HG010426-04S1 (HuBMAP), Stanford Data Science Initiative, Wu Tsai Neurosciences Institute, Amazon, Docomo, GSK, Hitachi, Intel, JPMorgan Chase, Juniper Networks, KDDI, NEC, and Toshiba. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding entities.
2310.07751
Wigner crystallization in Bernal bilayer graphene
In Bernal bilayer graphene (BBG), a perpendicular displacement field flattens the bottom of the conduction band and thereby facilitates the formation of strongly-correlated electron states at low electron density. Here, we focus on the Wigner crystal (WC) state, which appears in a certain regime of sufficiently large displacement field, low electron density, and low temperature. We first consider a model of BBG without trigonal warping, and we show theoretically that Berry curvature leads to a new kind of WC state in which the electrons acquire a spontaneous orbital magnetization when the displacement field exceeds a critical value. We then consider the effects of trigonal warping in BBG, and we show that they lead to an unusual ``doubly re-entrant" behavior of the WC phase as a function of density. The rotational symmetry breaking associated with trigonal warping leads to a nontrivial ``minivalley order" in the WC state, which changes abruptly at a critical value of displacement field. In both cases, we estimate the phase boundary of the WC state in terms of density, displacement field, and temperature.
Sandeep Joy, Brian Skinner
2023-10-11T18:00:00Z
http://arxiv.org/abs/2310.07751v1
# Wigner crystallization in Bernal bilayer graphene

###### Abstract

In Bernal bilayer graphene (BBG), a perpendicular displacement field flattens the bottom of the conduction band and thereby facilitates the formation of strongly-correlated electron states at low electron density. Here we focus on the Wigner crystal (WC) state, which appears in a certain regime of sufficiently large displacement field, low electron density, and low temperature. We first consider a model of BBG without trigonal warping, and we show theoretically that Berry curvature leads to a new kind of WC state in which the electrons acquire a spontaneous orbital magnetization when the displacement field exceeds a critical value. We then consider the effects of trigonal warping in BBG, and we show that they lead to an unusual "doubly re-entrant" behavior of the WC phase as a function of density. The rotational symmetry breaking associated with trigonal warping leads to a nontrivial "minivalley order" in the WC state, which changes abruptly at a critical value of displacement field. In both cases, we estimate the phase boundary of the WC state in terms of density, displacement field, and temperature.

## I Introduction

Electron systems can assume a plethora of strongly correlated phases if one can engineer a large density of states near the Fermi level. Twisted bilayer graphene (TBG) is a poster child of this approach: an engineered moire structure in TBG leads to cascades of phase transitions between correlated insulating and superconducting phases [1; 2; 3; 4; 5; 6; 7]. This dramatic recent success has motivated condensed matter physicists to revisit Bernal bilayer graphene (BBG), the untwisted AB stack of two monolayer graphene sheets, for which the application of a perpendicular displacement field opens a band gap and simultaneously creates a large density of states near the band edge. These efforts have also uncovered remarkable results. Experiments in the last couple of years have demonstrated a variety of correlated electron phases and phase transitions in BBG, including a cascade of phase transitions between isospin-polarized phases [8; 9; 10; 11] and superconductivity [8; 12]. These experiments have prompted intense theoretical interest, with most theory works focusing on either the nature of the superconducting state (see, e.g., Ref. [13] for a recent review) or on the transition between isospin-polarized states within the metallic phase [14; 15; 16; 17].

Here we focus our attention on the nature of the Wigner crystal (WC) phase in BBG. The WC is perhaps the prototypical strongly correlated electron phase, first proposed in 1934 [18]. It arises because, at low electron density, the typical Coulomb interaction between neighboring electrons (\(\sim e^{2}n^{1/2}\), where \(n\) is the two-dimensional electron concentration and \(-e\) is the electron charge) becomes much larger than the Fermi energy (\(\sim\hbar^{2}n/2m\), where \(\hbar\) is the reduced Planck constant and \(m\) is the effective mass). Thus, at sufficiently small \(n\) and low temperature, the electron gas minimizes its Coulomb energy by spontaneously breaking translation symmetry and crystallizing into a solid-like arrangement of electrons called the Wigner lattice. From a naive perspective, the flattening of the conduction band edge in BBG by a perpendicular displacement field corresponds to an increase in the effective mass of the electrons, and thereby enhances the prominence of the Wigner crystal phase.
But BBG presents a number of features that are not present in the usual context of a simple parabolic dispersion. For example, BBG has a gate-tunable band gap, strong dielectric screening associated with virtual interband excitations, and a nontrivial band structure that is not equivalent to the usual parabolic electron band. Electron bands in BBG also have significant Berry curvature, with each of the two valleys in its dispersion relation having a net Berry flux of \(2\pi\); as we show below, the Berry curvature can strongly modify the WC state.

Our goal in this paper is to consider the basic question: what becomes of the WC state in BBG? Where can it exist in the parameter space of electron density, displacement field, and temperature? And how are its properties different from those of the conventional WC phase?

Our basic results can be summarized as follows. At zero displacement field, the WC state is precluded due to the vanishing band mass at low energy and the strong interband dielectric response. Instead, the WC phase exists within a regime of a large enough displacement field, low enough density, and low enough temperature. We describe, theoretically, the phase boundary of the WC state in two ways: first, by ignoring the effects of trigonal warping, so that the low-energy dispersion relation is rotationally symmetric around the K and K' points; and second, by including the effects of trigonal warping, so that the low-energy band structure is split into either three or four low-energy "pockets" or "mini-valleys." In the former case, we show that when the displacement field is strong enough, Berry curvature drives a spontaneous orbital polarization of the WC state, which appears as a discontinuous jump in the magnetization as a function of the displacement field. When the effects of trigonal warping are included, however, the ground state of the WC involves a nontrivial spatial pattern of minivalley ordering. Trigonal warping also produces an unusual "doubly re-entrant melting" of the WC phase, in which the WC phase first melts, then freezes again, then melts again with increasing density in the limit of zero temperature.

The remainder of this paper is organized as follows. In Sec. II, we discuss the dispersion relation of BBG and present our criterion for estimating the stability of the WC phase. Section III presents analytical derivations and numeric calculations for the case without trigonal warping, and Sec. IV considers the effects of trigonal warping. We close in Sec. V with a summary and a discussion of possible experimental realizations.

## II BBG dispersion and harmonic oscillator description of the WC state

Before discussing the WC state, we briefly review the electron dispersion relation in BBG and its dependence on the displacement field. BBG consists of two parallel copies of monolayer graphene, arranged so that the A-sublattice carbon atoms of one graphene sheet are vertically atop the B-sublattice atoms of the other. Therefore, the low-energy band structure can be seen as two copies of the Dirac cone that are hybridized by inter-layer tunneling (see, e.g., Ref. [19] for a review).
In the simplest description, where one accounts only for the hopping of electrons between nearest-neighboring atoms (both in-plane and vertical), the low-energy conduction and valence band energies are given by

\[\varepsilon(p)=\pm\left(\frac{\gamma_{1}^{2}}{2}+v^{2}p^{2}-\gamma_{1}\left(\frac{\gamma_{1}^{2}}{4}+v^{2}p^{2}\right)^{1/2}\right)^{1/2}, \tag{1}\]

where \(\gamma_{1}\) is the interlayer tunneling amplitude and \(v\) is the single-layer graphene Dirac velocity [19]. Equation 1 describes a set of bands that meet at a point in momentum space (the \(K\) or \(K^{\prime}\) point) and, at low energy, disperse parabolically with the momentum \(p\) relative to that point. In the remainder of this paper, except where noted explicitly, we use dimensionless units where \(\hbar=v=\gamma_{1}=1\), so that all energies are in units of \(\gamma_{1}\approx 400\,\)meV and all densities are in units of \((\gamma_{1}/\hbar v)^{2}\approx 4\times 10^{13}\,\)cm\({}^{-2}\).

A perpendicular displacement field creates a difference \(U\) in potential energy between the top and bottom layers, and the dispersion relation for the conduction band becomes

\[\varepsilon\left(p\right)=\left(\frac{1}{2}+\frac{U^{2}}{4}+p^{2}-\left(\frac{1}{4}+p^{2}\left(1+U^{2}\right)\right)^{1/2}\right)^{1/2}. \tag{2}\]

Equation 2 describes a "Mexican hat" (MH) shape (see Fig. 1), with a ring of minima located at a certain value \(\left|\mathbf{p}\right|=p_{0}\). For momenta close to this ring of minima, the dispersion can be expanded as

\[\varepsilon\left(p\right)\simeq\frac{U}{2\sqrt{1+U^{2}}}+\frac{\left(p-p_{0}\right)^{2}}{2m}, \tag{3}\]

where

\[p_{0}=\frac{U}{2}\sqrt{\frac{2+U^{2}}{1+U^{2}}}\ \ \text{and}\ \ \ m=\frac{\left(1+U^{2}\right)^{3/2}}{2U\left(2+U^{2}\right)}. \tag{4}\]

Thus the value of \(p_{0}\) increases approximately in proportion to the interlayer potential \(U\). On the other hand, the effective mass \(m\) in the radial direction saturates to \(1/2\) at large \(U\) and diverges as \(1/U\) for small \(U\).

Figure 1: Conduction band dispersion \(\varepsilon\left(p\right)\) of BBG (Eq. 2), neglecting trigonal warping, plotted for \(U=2.0\) as a function of the radial momentum \(p\). The quantity \(p_{0}\) represents the radius of the dispersion minimum. The inset shows the full "Mexican Hat" structure as a 3D plot. The red dashed line shows the approximation in Eq. 3.

Equations 1-4 neglect the tunneling of electrons between non-neighboring atoms on opposite layers. Such skew hopping leads to "trigonal warping," in which the rotationally-symmetric low-energy band structure described by Eq. 2 loses its rotational symmetry and retains only the lower, \(C_{3}\) symmetry of the parent graphene. Specifically, the conduction band energy is given by

\[\varepsilon\left(p\right)=\left[\frac{1}{2}+\frac{U^{2}}{4}+p^{2}\left(\frac{t^{2}+2}{2}\right)-\left(2p^{3}t\cos\left(3\phi\right)+p^{2}\left(p^{2}t^{2}+U^{2}+1\right)+\frac{\left(1-p^{2}t^{2}\right)^{2}}{4}\right)^{1/2}\right]^{1/2}, \tag{5}\]

where \(\phi\) is the angle between the momentum vector \(\mathbf{p}\) and the \(p_{x}\) axis, and \(t\approx 0.12\) is the dimensionless trigonal warping scale [19]. At \(U=0\), Eq. 5 implies that the parabolic band touching point is split by trigonal warping into four low-energy Dirac cones: one "central pocket" at \(p=0\) and three "side pockets." As \(U\) is increased from zero, all four pockets acquire an energy gap. The three side pockets move toward increasingly large \(p\) and remain degenerate in energy, while the central pocket acquires both a larger mass and a larger energy than the side pockets. When the interlayer potential difference is higher than the energy scale associated with trigonal warping (\(U>t\)), the central pocket becomes a local maximum and disappears.
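As a quick numerical sanity check of Eqs. 2-4 (our own illustration, in the dimensionless units defined above, not the authors' code), one can locate the minimum of the dispersion of Eq. 2 and compare it with the closed-form \(p_{0}\) and \(m\):

```python
import numpy as np

def eps(p, U):
    """Conduction band of Eq. 2 (no trigonal warping), dimensionless units."""
    return np.sqrt(0.5 + U**2 / 4 + p**2 - np.sqrt(0.25 + p**2 * (1 + U**2)))

U = 2.0
p = np.linspace(1e-4, 3.0, 200_001)
e = eps(p, U)
i0 = e.argmin()

p0 = 0.5 * U * np.sqrt((2 + U**2) / (1 + U**2))   # Eq. 4
m = (1 + U**2)**1.5 / (2 * U * (2 + U**2))        # Eq. 4

# near the ring of minima, eps ~ const + (p - p0)^2 / (2m), so eps'' = 1/m
dp = p[1] - p[0]
curv = (e[i0 + 1] - 2 * e[i0] + e[i0 - 1]) / dp**2

print(f"numeric p_min = {p[i0]:.4f}  vs  Eq. 4 p0 = {p0:.4f}")
print(f"numeric 1/curvature = {1 / curv:.4f}  vs  Eq. 4 m = {m:.4f}")
```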
At finite \(U\), trigonal warping turns the ring of minima in the dispersion relation into three discrete minima, "mini-valleys," each with the same energy. At large \(U\gg t\), these mini-valleys are located at \(p\) close to the value of \(p_{0}\) given by Eq. 4 and at \(\phi=0,\pm 2\pi/3\). These different limits are illustrated in Fig. 2.

Figure 2: Color plots of the conduction band dispersion \(\varepsilon\left(p\right)\), including the effects of trigonal warping (see Eq. 5), presented for (a) \(U=0.02\) and (b) \(U=1.0\). The energy (color bar) is measured in units of \(\gamma_{1}\). There is a dispersion minimum at the center, which exists only when \(U<t\). At small \(U\) (specifically \(U\lesssim 9t/2\sqrt{2}\)), the side pockets are elongated toward the \(K\) point. At large \(U\) (specifically \(U\gtrsim 9t/2\sqrt{2}\)), the side pockets are elongated in the direction perpendicular to the \(K\) point. Note the difference in energy scales between the two color bars.

We now review the general criterion for the stability of the WC phase. The conventional semiclassical model for the WC in two dimensions describes individual electrons as being localized to points in a triangular lattice that minimizes the classical electrostatic energy (see Fig. 3). In this arrangement, each electron resides in a local minimum of the electrostatic potential created by all other electrons. Thus, deep in the WC phase, the primary contribution to the energy per electron is the classical electrostatic energy. To understand the lowest-order quantum correction, consider that each electron can be described as a harmonic oscillator (HO) residing in a locally parabolic potential whose strength is determined by the Coulomb interaction [20; 21]. One can therefore estimate the lowest-order quantum correction to the energy per electron as the ground state energy of the two-dimensional HO. In the conventional WC state, the HO description correctly gives the lowest-order quantum correction to the energy with an accuracy better than \(10\%\) [20].

Figure 3: Schematic representation of the WC lattice, showing the lattice constant \(a\) of the Wigner lattice and the electron wave packet width \(\sqrt{\langle r^{2}\rangle}\). The red curve denotes the harmonic confinement experienced by the corresponding electron, and the blue-shaded regions represent the HO ground state wavepackets.

The HO picture also provides a straightforward way of estimating the critical density associated with the quantum melting of the WC. For a conventional particle with mass \(m\) in a confining potential \(u(r)=kr^{2}/2\), where \(r\) is the distance from the potential minimum, the HO ground state wavefunction has a characteristic frequency \(\omega=\sqrt{k/m}\) and root-mean-squared radius \(\sqrt{\langle r^{2}\rangle}=\sqrt{\hbar/m\omega}\). Melting is associated with the ratio \(\eta=\sqrt{\langle r^{2}\rangle}/a\), where \(a=(\sqrt{3}n/2)^{-1/2}\) is the lattice constant of the Wigner lattice, becoming larger than a critical value \(\eta_{c}\) (the Lindemann criterion). Empirically, \(\eta_{c}\) is universally in the range \(0.20-0.25\) [22; 23]. While making a precise determination of the critical density associated with WC melting is, in general, very difficult, here we use the Lindemann criterion (with \(\eta_{c}=0.23\)) as a simple proxy to estimate the critical density \(n_{c}\) associated with the melting of the WC state [24]:

\[\sqrt{\langle\hat{r}^{2}\rangle}=\eta_{c}\left(\frac{2}{\sqrt{3}n_{c}}\right)^{1/2}. \tag{6}\]

In this way, our discussion of the WC state is reduced to solving a single-particle problem: that of a single-particle harmonic oscillator in a confining potential whose strength is determined by electron-electron interactions. Below we provide more discussion of the limitations of the HO picture for determining the ground state of the WC.

Let us briefly recall the structure of energy levels of the conventional 2D HO. The eigenstates of the 2D HO can be labeled by the principal and azimuthal quantum numbers \((n,\ell)\), and the eigenenergies are \(E_{n}=(n+1)\hbar\omega\), with \(n\) a non-negative integer.
The \(n^{\text{th}}\) energy level is \((n+1)\)-fold degenerate (ignoring spin degeneracy). For odd \(n\), the allowed values of \(\ell\) are odd, \(\ell\in(-n,-n+2,\ldots,n)\), whereas for even \(n\), only even values of \(\ell\) are allowed, \(\ell\in(-n,-n+2,\ldots,0,\ldots,n)\). It should be noted that the \((n,\ell)=(0,0)\) state remains the ground state even when an external magnetic field is applied [25]. A large magnetic field can bring states with finite \(\ell\) close to the ground state (the energy difference between \(\ell\neq 0\) and \(\ell=0\) is given by \(\Delta E_{\ell}\simeq|\ell|\hbar\omega^{2}/2\omega_{c}\), where \(\omega_{c}=eB/m\) is the cyclotron frequency), but the ground state always has \(\ell=0\).

When considering electrons with an arbitrary dispersion relation \(\varepsilon(p)\) in a parabolic confining potential, it is simplest to write the Hamiltonian in momentum space:

\[H=\varepsilon(p)+\frac{1}{2}k\hat{r}^{2}, \tag{7}\]

where \(\hat{r}\) is the position operator. For a band that has nonzero Berry curvature, one can arrive at the effective Hamiltonian by writing the position operator \(\hat{r}\) in momentum space and then projecting the resulting Hamiltonian onto the band of interest. The resulting Hamiltonian (see Appendix A for details) is given by

\[H=\varepsilon\left(\mathbf{p}\right)+\frac{k}{2}\left(i\mathbf{\nabla}_{\mathbf{p}}+\mathbf{A}\left(\mathbf{p}\right)\right)^{2}, \tag{8}\]
The value of \(k\) can then be found by Taylor expanding \(u(\mathbf{r})\) to second order in \(r\), which gives [24] \[k=\frac{1}{2}\sum_{i\neq 0}\left[V^{\prime\prime}\left(\left|\mathbf{R}_{i}\right|\right)+\frac{V^{\prime}\left(\left|\mathbf{R}_{i}\right|\right)}{\left|\mathbf{R}_{i}\right|}\right]. \tag{9}\] In general, \(V(r)\) should be taken as the screened Coulomb interaction, which we discuss in Sec. III.6. From our calculation of the ground state of Eq. 8, we can estimate whether the WC phase is stable at a particular value of the electron density and displacement field, and we can also assess whether the ground state of the WC has finite or zero orbital angular momentum.

## III Wigner crystal phase with a Mexican hat dispersion

In this section, we consider the properties of the WC state under the assumption that the trigonal warping term in the dispersion relation can be neglected so that the dispersion relation is rotationally symmetric around each valley. We show in Sec. IV that including the effects of trigonal warping leaves some of the features of the phase diagram intact but also changes the nature of the WC state at large \(U\). The case of no trigonal warping offers the advantage that most of its properties can be solved analytically, and it provides a clear theoretical example of a WC state with spontaneous orbital polarization. We mention that previous authors have considered the fate of the WC state when the dispersion relation has a MH shape, both in the context of electron systems with spin-orbit coupling [30; 31] and in the context of BBG [32]. These papers argue that the ground state of the WC at low density involves a spontaneous breaking of the Wigner lattice symmetry, with electrons having spatially elongated wavefunctions in real space that are condensed along one region of the minimal ring of the dispersion relation in momentum space. Our analysis here, based on the HO description, suggests instead that at low density the electron wavefunctions remain essentially rotationally symmetric. We discuss this difference in more detail in Sec. III.2, including the limitations of the HO description. However, as we demonstrate below, incorporating the influence of Berry curvature and trigonal warping leads to qualitative changes to the properties of the WC phase that facilitate symmetry breaking similar to the kind suggested in Refs. [30; 31; 32; 33].

### Without Berry curvature

In the absence of Berry curvature, the Schrodinger equation (SE) in momentum space for the harmonic oscillator potential is given by \[\left[\varepsilon\left(p\right)-\frac{k}{2}\left(\frac{1}{p}\frac{\partial}{\partial p}\left(p\frac{\partial}{\partial p}\right)+\frac{1}{p^{2}}\frac{\partial^{2}}{\partial\phi^{2}}\right)\right]\psi\left(\mathbf{p}\right)=E\psi\left(\mathbf{p}\right). \tag{10}\] Due to the rotational symmetry of the problem, the eigenstates of the Hamiltonian can be written using the separation of variables as \(\psi\left(\mathbf{p}\right)=f\left(p\right)\exp(i\ell\phi)\), where \(\ell\) is the angular momentum quantum number (throughout this paper we use \(\phi\) to denote the azimuthal angle in momentum space, while \(\theta\) denotes the azimuthal angle in real space). We can further convert the above SE into an effective one-dimensional SE by defining \(f(p)\equiv g(p)/\sqrt{p}\) and introducing a dimensionless momentum \(u\equiv p/\sigma\), \(u_{0}\equiv p_{0}/\sigma\), with \(\sigma=\sqrt{m\omega}\) being the characteristic momentum of a HO.
(As above, \(\omega=\sqrt{k/m}\) is the characteristic HO frequency.) The resulting SE becomes \[\left[-\frac{d^{2}}{du^{2}}+\frac{1}{u^{2}}\left(-\frac{1}{4}+\ell^{2}\right)+\left(u-u_{0}\right)^{2}\right]g(u)=\epsilon g(u), \tag{11}\] where we have defined \(\epsilon\equiv E/(\omega/2)\), and we have used the approximation to the dispersion \(\varepsilon(p)\) given in Eq. 3. As the harmonic confinement becomes asymptotically weak (i.e., at very low electron density), the value of the constant \(u_{0}\) becomes large, and the eigenstates \(g(u)\) are concentrated around \(u=u_{0}\). Expanding the SE in terms of \(x\equiv(u-u_{0})\ll u_{0}\) gives \[\left(-\frac{d^{2}}{dx^{2}}+x^{2}\right)g(x)=\left(\epsilon-\frac{1}{u_{0}^{2}}\left(-\frac{1}{4}+\ell^{2}\right)\right)g(x), \tag{12}\] which is precisely the usual SE for a 1D HO. (This mapping between a 2D bound state for an electron with a Mexican Hat dispersion and an equivalent 1D problem has been pointed out by previous authors, particularly in the context of a hydrogen-like bound state [34; 35; 36].)

Figure 3: Schematic representation of the WC lattice, showing the lattice constant \(a\) of the Wigner lattice and the electron wave packet width \(\sqrt{\langle r^{2}\rangle}\). The red curve denotes the harmonic confinement experienced by the corresponding electron, and the blue-shaded regions represent the HO ground state wavepackets.

We can now read off the low-energy spectrum as \[E_{\ell}\simeq\frac{\omega}{2}+\frac{\omega\ell^{2}}{2u_{0}^{2}}-\frac{\omega}{8u_{0}^{2}}. \tag{13}\] Appendix B gives more details about this derivation. Notice that Eq. 13 is qualitatively different from the energy spectrum of the conventional two-dimensional HO. Most notably, when the principal quantum number \(n=0\), there remains a large number of nearly degenerate angular momentum states \(\ell\) with energy level spacing that is proportional to \(1/u_{0}^{2}\). The normalized wavefunctions take the following form in momentum space: \[\psi_{\ell}\left(p,\phi\right)=\left(\frac{2\sqrt{\pi}}{\sigma}\right)^{1/2}\frac{\exp\left[-\frac{\left(p-p_{0}\right)^{2}}{2\sigma^{2}}\right]}{\sqrt{p}}\,\exp\left[i\ell\phi\right]. \tag{14}\] In real space, these eigenstates are given by \[\tilde{\psi}_{\ell}\left(r,\theta\right)=\left(\frac{p_{0}\sigma}{\sqrt{\pi}}\right)^{1/2}J_{\ell}\left(p_{0}r\right)\exp\left[-\frac{\sigma^{2}r^{2}}{2}\right]\exp\left[i\ell\theta\right], \tag{15}\] where \(J_{\ell}\) is the \(\ell^{th}\) order Bessel function of the first kind. The probability distributions corresponding to the ground state wavefunction in momentum space and real space are plotted in Fig. 4. Note that the wavefunction exhibits fast oscillations in real space with wavelength \(\sim 1/p_{0}\) and many nodes of density even in the ground state. This violation of the usual rule that "ground state wavefunctions don't have nodes" arises from the unique ring of minima in the dispersion relation. The width of the wavefunction in real space can be calculated to be \[\left\langle\hat{r}^{2}\right\rangle_{\ell}\simeq\frac{1}{\sigma^{2}}\left(\frac{1}{2}+\frac{\ell^{2}}{u_{0}^{2}}-\frac{1}{4u_{0}^{2}}\right). \tag{16}\] We can use this result, together with the Lindemann criterion (Eq. 6), to estimate the critical density associated with the WC phase. In the limit of large \(U\gg 1\), where the mass approaches \(m\simeq 1/2\) (see Eq. 4) and dielectric screening is unimportant (see Sec. III.6 below), this calculation yields \[n_{c}\approx 2.1\times 10^{-4}. \tag{17}\]
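As a quick consistency check on Eq. 13 (an illustrative sketch added here, not the paper's numerics), one can diagonalize the radial equation, Eq. 11, by finite differences and compare the lowest eigenvalue at each \(\ell\) with \(\epsilon\approx 1+(\ell^{2}-1/4)/u_{0}^{2}\); the value of \(u_{0}\) below is an arbitrary choice:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Finite-difference check of Eq. 13: diagonalize the radial equation, Eq. 11,
# and compare the lowest eigenvalue for each angular momentum l with
#   eps(l) ~ 1 + (l^2 - 1/4) / u0^2   (energies in units of omega/2).
u0 = 10.0                                     # illustrative value of u0 = p0/sigma
u = np.linspace(u0 - 8.0, u0 + 8.0, 4000)     # window around the dispersion ring
h = u[1] - u[0]

for ell in range(4):
    V = (ell**2 - 0.25) / u**2 + (u - u0) ** 2       # effective potential of Eq. 11
    diag = 2.0 / h**2 + V
    offdiag = np.full(len(u) - 1, -1.0 / h**2)
    eps = eigh_tridiagonal(diag, offdiag, select="i", select_range=(0, 0))[0][0]
    print(f"l={ell}:  numerics {eps:.5f}   vs   Eq. 13: {1 + (ell**2 - 0.25)/u0**2:.5f}")
```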
(We will show in the following section that including the effects of trigonal warping leads to a lower numerical estimate of \(n_{c}\).)

### Possibility of spontaneous rotational symmetry breaking of the electron wavefunction

So far, we have discussed the WC state using the HO description. Under this description, the lowest energy state of the electron is the one derived in the previous subsection, which has rotational symmetry. Other proposals for the WC state with a MH dispersion have suggested that the electron wavefunction at each site of the Wigner lattice condenses along one point of the ring of minima in the dispersion so that, effectively, the electron has a very heavy mass in one direction and a light mass in the perpendicular direction, leading to a highly anisotropic wavefunction [30; 31; 32]. Within the HO description, the radially symmetric electron wavefunction that we have derived has a lower energy than these spatially elongated wavefunctions by an amount \(\sim\omega(\sigma/p_{0})^{2}\). But it is important to note that the HO description alone cannot accurately distinguish between these candidate ground states for the WC because it fails to capture the effects of correlations in the quantum fluctuations between different electrons in the Wigner lattice. To correctly calculate the lowest-order quantum correction to the WC energy, one should, in general, calculate the dispersion relation \(\omega(\mathbf{q})\) of WC phonon modes; the ground state has one quantum of energy \(\omega(\mathbf{q})\) for each possible phonon mode \(\mathbf{q}\) [37]. The HO description provides an overestimate of the WC energy because it does not account for these correlated quantum fluctuations. Alternatively, one can say that the HO energy we derive represents the highest frequency in the WC phonon dispersion - it is the "Einstein phonon" limit. In the conventional WC, the HO description is fairly accurate: it gives an estimate for the numerical value of the quantum correction to the WC energy at low density that is only \(\approx 9\%\) larger than the value given by the phonon calculation [20].

Footnote 1: It is worth noting that the HO description gives an energy that is equivalent to calculating the Hartree energy of a trial wavefunction that consists of Gaussian wave packets at every site of the Wigner lattice [38; 39; 40; 41].

Unfortunately, in our case, the phonon calculation is difficult to carry out due to the non-monotonic dependence of the electron kinetic energy on momentum, and it cannot be reproduced by a simple extension of the canonical procedure [37]. We are therefore unable to estimate in a precise way the quantum correction to the WC energy, which would adjudicate between different candidate ground states. However, we speculate that for the MH dispersion the rotationally symmetric state we are describing is lower in energy than the symmetry-broken states described by Refs. [30; 31; 32], since in the former case the maximum frequency in the WC phonon dispersion is lower, and the electron wavefunctions are more compact in real space, leading to reduced interactions in general. However, for real BBG, this discussion is largely irrelevant because the presence of trigonal warping breaks the rotational symmetry of the dispersion and leads to anisotropic mini-valleys that naturally yield spatially elongated wavefunctions.

Figure 4: The probability density of the ground state wavefunction is plotted in (a) the momentum space (Eq. 14) and (b) the real space (Eq. 15).
So, in fact, the WC state in real BBG at low density does consist of a pattern of spatially elongated wave packets with nontrivial mini-valley ordering on the WC lattice [33], as we discuss in the following section.

### With Berry curvature

We now return to the HO description and consider the effects of Berry curvature on the WC state by including the Berry curvature in the effective single-particle Schrodinger equation. It should be noted that in BBG, the Berry curvature takes opposite signs in the \(K\) and \(K^{\prime}\) valleys. Here, for simplicity, we focus primarily on the \(K\) valley, where the Berry curvature is positive. The details of the derivation from this subsection can be found in Appendix C. One can include the effects of Berry curvature by noticing that the Berry connection enters the effective single-electron Hamiltonian (Eq. 8) via a minimal substitution to the position operator written in momentum space. The Berry connection \(\mathbf{A}\) is particularly straightforward to write in the Coulomb gauge when the Berry curvature \(\Omega\left(p\right)\) is radially symmetric. In this case \[\mathbf{A}=\frac{\tilde{\Phi}\left(p\right)}{p}\hat{\phi}, \tag{18}\] where \(\tilde{\Phi}\) is the fraction of the total \(2\pi\) Berry flux through a disk in momentum space of radius \(p\): \[\tilde{\Phi}\left(p\right)=\int_{0}^{p}dp^{\prime}\ p^{\prime}\,\Omega\left(p^{\prime}\right). \tag{19}\] The minimal substitution process of adding the Berry connection amounts to modifying the azimuthal part of the gradient operator in the following way: \[\frac{i}{p}\frac{\partial}{\partial\phi_{p}}\longrightarrow\frac{i}{p}\frac{\partial}{\partial\phi_{p}}+\frac{\tilde{\Phi}\left(p\right)}{p}. \tag{20}\] The effective one-dimensional SE in this situation can be shown to be \[\left[-\frac{d^{2}}{du^{2}}+\frac{1}{u^{2}}\left(-\frac{1}{4}+\left(-\ell+\tilde{\Phi}\left(u\sigma\right)\right)^{2}\right)+\left(u-u_{0}\right)^{2}\right]g(u)=\epsilon g(u). \tag{21}\] Taylor expanding this equation for small \(x=u-u_{0}\), as in the previous subsection, gives \[-\frac{d^{2}g}{dx^{2}}+x^{2}g(x)=\left(\epsilon-\frac{1}{u_{0}^{2}}\left(-\frac{1}{4}+\left(\ell-\tilde{\Phi}\left(p_{0}\right)\right)^{2}\right)\right)g(x), \tag{22}\] so that the eigenenergies are given by \[E_{\ell}\simeq\frac{\omega}{2}+\frac{\left(\ell-\tilde{\Phi}\left(p_{0}\right)\right)^{2}\sigma^{2}\omega}{2p_{0}^{2}}-\frac{\sigma^{2}\omega}{8p_{0}^{2}}. \tag{23}\] (Equation 23 was previously derived in Ref. [42] in the context of a spin-1/2 particle with Rashba spin-orbit coupling in harmonic confinement.) Equation 23 implies that the Berry flux enclosed by the wavefunction in momentum space plays a crucial role in determining the electron's energy spectrum. In particular, if \(\tilde{\Phi}(p_{0})>1/2\), the ground state has \(\ell=1\) rather than \(\ell=0\). Thus, a transition from zero angular momentum to a finite angular momentum of electrons in the WC can be induced by increasing the displacement field \(U\), which increases \(p_{0}\) and thereby encloses more flux within the wavefunction. (In the \(K^{\prime}\) valley, the transition is from \(\ell=0\) to \(\ell=-1\).)

Footnote 2: Notice that the standard semiclassical treatment, in which one treats the wavefunction as a compact wave packet localized around a specific point \(\mathbf{p}\) in momentum space [43], fails to capture the effect we are describing here.
In this standard treatment, the Berry curvature enters only through its value \(\Omega(\mathbf{p})\) at the wave packet's location, whereas here, due to the ring of minima in the band edge, the electron state crucially depends on the _flux_ of Berry curvature through the wavefunction.

The critical field associated with this transition to finite angular momentum can be found by the condition: \[\int_{0}^{p_{0}\left(U_{c}\right)}dp^{\prime}p^{\prime}\Omega\left(p^{\prime}\right)=\frac{1}{2}. \tag{24}\] (In the hypothetical case where the Berry curvature is constant as a function of \(p\), this condition simplifies to \(\Omega\left(p_{0}\right)p_{0}^{2}=1\).) Evaluating this condition numerically gives (again, ignoring the effects of trigonal warping) \[U_{c}\approx 1.07. \tag{25}\]

### Magnetization

A sudden change in angular momentum leads to observable effects in the magnetization of the WC state. The magnetization operator can be expressed as \[\hat{M}_{z}=\frac{e}{2}\left(\mathbf{v}_{\mathbf{p}}\times\mathbf{r}\right)_{z}, \tag{26}\] where \(\mathbf{v}_{\mathbf{p}}=\mathbf{\nabla}_{\mathbf{p}}\varepsilon(\mathbf{p})\) is the velocity operator. The expectation value of the magnetization can be written as \[\left\langle\hat{M}_{z}\right\rangle=\frac{e}{2}\int\frac{d^{2}\mathbf{p}}{\left(2\pi\right)^{2}}\ v_{p}\left(-\ell+\tilde{\Phi}\left(p\right)\right)\left|f\left(p\right)\right|^{2}. \tag{27}\] The magnetization has two contributions, one from the angular momentum of the wavefunction and the other from the underlying Berry curvature. Tuning the value of \(U\) across the \(\ell=0\) to \(\ell=1\) transition results in a jump in the magnetization. In the limit of \(u_{0}\gg 1\), we can evaluate \(\left\langle\hat{M}_{z}\right\rangle\) analytically as \[\left\langle\hat{M}_{z}\right\rangle\simeq\frac{e}{2m}\left(\frac{\sigma^{2}}{2p_{0}^{2}}\right)\left[\ell-\tilde{\Phi}\left(p_{0}\right)+\Omega\left(p_{0}\right)p_{0}^{2}\right]. \tag{28}\] Note that the magnetization in this system, for a given angular momentum, is weaker than the usual Bohr magneton \(e\hbar/2m\) by a factor \(\sigma^{2}/2p_{0}^{2}\ll 1\). One can understand this large suppression of the magnetization as arising because the group velocity is very small for momenta \(p\) close to \(p_{0}\), owing to the flattened MH dispersion, so the electron current is weak for a given angular momentum. Note also that the sign of the magnetization is opposite to the case of the usual parabolic dispersion, which implies that the sign of \(\left\langle\hat{M}_{z}\right\rangle\) for fixed \(\ell\) is inverted as \(p_{0}\) is increased from zero. This inversion of the sign of the magnetization is discussed further in Appendix E. Equation 28 implies that at the critical field \(U_{c}\), which marks the transition between the \(\ell=0\) and \(\ell=1\) phases, the magnetization has a jump of magnitude \(\left(e/2m\right)\left(\sigma^{2}/2p_{0}^{2}\right)\).

### Estimate of exchange integral: ordering temperature

Each electron in the WC phase has an Ising-like degree of freedom associated with the valley (\(K\) or \(K^{\prime}\)). Since the sign of the Berry curvature is opposite for the two valleys, ordering to a specific valley is concomitant with an orbital ferromagnetic transition in the \(\left|\ell\right|=1\) phase (ordering to \(\ell=1\) in one valley and \(\ell=-1\) in the opposite valley). At sufficiently low temperatures, valley ordering is driven by the exchange interaction between neighboring electrons.
One can arrive at a naive estimate for the temperature scale associated with valley ordering by calculating the usual exchange integral [44] \[\begin{split} J_{ab}&=\int d\mathbf{r_{1}}\int d\mathbf{r_{2}}\,\left[\psi^{*}\left(\mathbf{r}_{1}\right)\psi^{*}\left(\mathbf{r}_{2}-\mathbf{R}\right)\times\right.\\ &\left.V\left(\mathbf{r}_{1}-\mathbf{r}_{2}\right)\psi\left(\mathbf{r}_{2}\right)\psi\left(\mathbf{r}_{1}-\mathbf{R}\right)\right],\end{split} \tag{29}\] where \(J_{ab}\) is the exchange energy between two electrons (nominally described by single-electron wavefunctions \(\psi_{\ell}\left(r\right)\)): one centered at the origin and the other at a neighboring site \(\mathbf{R}\) of the Wigner lattice. We emphasize that \(J_{ab}\) is only a lower bound for the actual exchange interaction since the presence of higher-order ring exchange processes generally leads to an exponential enhancement of the exchange in the WC and favors ferromagnetic ordering [45; 46; 47; 48]. Evaluating \(J_{ab}\) using the real space wavefunction in Eq. 15 (see Appendix D for the derivation), one arrives at \[J_{ab}\approx\sqrt{\frac{8}{\pi}}\left(\frac{1}{R\sigma}\right)^{2}\left(\frac{e^{2}\sigma}{\epsilon_{r}}\right)\left(\cos\left(p_{0}R-\ell\pi-\frac{\pi}{2}\right)\right)^{2}\exp\left[-\frac{\sigma^{2}R^{2}}{2}\right]. \tag{30}\] Using this formula at the critical density (presented below) associated with melting of the WC in the \(\left|\ell\right|=1\) phase gives a value of \(J_{ab}\) that is of order \(\approx 10\) mK. (This, as mentioned above, should be considered a lower bound for the ordering temperature.) The ordering temperature decreases exponentially as the density is reduced. Note also that Eq. 30 implies an exchange temperature that oscillates as a function of the density \(n\sim 1/R^{2}\).

### Comment on the dielectric function

When the interlayer potential \(U\) is small, the gap between the conduction and valence bands is also small, so the electron-electron interaction is significantly modified by screening from virtual interband excitations. This effect is captured by the static polarization function \(\Pi(q)\), such that the Fourier-transformed Coulomb interaction is [20] \[V(q)=\frac{V_{0}(q)}{1-\Pi(q)V_{0}(q)}, \tag{31}\] where \(V_{0}(q)=2\pi e^{2}/(\epsilon_{r}q)\) is the unscreened Coulomb interaction. For gapped BBG, \(\Pi(q)\) has the following asymptotic behaviors [32]: \[\Pi\left(q\right)=\left\{\begin{array}{ll}-q/4,&q\gg 1\\ -\ln(4)/\pi,&1\gg q\gg\sqrt{U}\\ -4q^{2}/3\pi U,&q\ll\sqrt{U}\end{array}\right.. \tag{32}\] As we show below, the WC state melts at densities \(n\ll U\), so that the relevant momentum scale \(q\sim n^{1/2}\) corresponds to the final regime in Eq. 32. Thus, we can use the screened interaction: \[V\left(q\right)=\frac{2\pi e^{2}}{\epsilon_{r}(q+q^{2}/q_{0})}, \tag{33}\] where \(q_{0}=3U/(8e^{2})\). For the numerical estimates below, we have used \(\epsilon_{r}\approx 5\), corresponding to a hexagonal boron nitride substrate. Notice that at sufficiently large \(q\) (low electron density), the Coulomb potential \(V(q)\simeq 3U/(8\epsilon_{r}q^{2})\) corresponds to a logarithmic dependence of \(V(r)\) on the spatial distance \(r\). This screening significantly alters the critical density associated with WC melting [24] whenever \(n\gg U^{2}\). In the limit \(U\ll 1\) and \(U\ll q\ll\sqrt{U}\), the relevant electron dispersion is quartic, given by \(\varepsilon(p)\approx U/2+p^{4}/U\) [32].
Using the screened interaction for \(V(q)\), and calculating the confining potential strength via Eq. 9, one can show that \(k=3\pi Un/8\) [24]. The Lindemann criterion for melting then gives a critical density \[n_{c}=C_{2}\times U, \tag{34}\] where \(C_{2}\approx 173\). Thus, unlike in the conventional WC problem, in BBG, the critical density vanishes in the limit \(U\to 0\). That is, interlayer dielectric response precludes the formation of a WC state unless a displacement field is applied that provides an energy gap for virtual electron-hole pairs.

### Numerical results

We can map out the full phase diagram for the WC phase by implementing a numerical solution of the effective one-dimensional radial SE (see Eq. 21) for a given density \(n\) and interlayer potential \(U\). The stability of the WC phase is estimated by numerically calculating \(\left<r^{2}\right>\) for the ground state wavefunction and using the Lindemann criterion (Eq. 6). Whether the ground state has \(\ell=0\) or \(\ell=1\) is estimated by comparing the energies of the two solutions. We use the Numerov algorithm [49] to solve the SE numerically. The numerically calculated phase diagram is presented in Fig. 5. The red dashed lines indicate the critical values \(n_{c}\) and \(U_{c}\) associated with melting and orbital polarization of the WC state that were derived analytically in the previous subsections. The extension of the WC state toward large \(n\) at small \(U\) is associated with the effective mass at the bottom of the band becoming very heavy as \(U\) is reduced (Eq. 4). The disappearance of the WC state as \(U\to 0\) arises due to dielectric screening, which truncates the long-ranged part of the Coulomb interaction when the band gap vanishes. We can further validate the jump in magnetization discussed in Section III.4 by numerically computing the magnetization \(\langle\hat{M}_{z}\rangle\) (Eq. 27) utilizing the numerical solution for the wavefunction \(f(p)\). An example is shown in Fig. 6 for a specific value of \(n\) within the WC phase. The abrupt jump in \(\langle\hat{M}_{z}\rangle\) as a function of increasing \(U\) is associated with the transition from \(\ell=0\) to \(\ell=1\). So far, we have discussed the phase boundary of the WC state at zero temperature, for which the melting of the WC with increasing density arises from quantum fluctuations. But at finite temperatures, thermal fluctuations can also melt the WC state. At densities not too close to the critical density associated with quantum melting (at a given value of \(U\)), one can estimate the melting temperature by setting the Lindemann ratio to be \(\eta=\sqrt{\left<r^{2}\right>_{\text{thermal}}}/a\), where the amplitude of classical fluctuations is estimated using the equipartition theorem: \(k\left<r^{2}\right>_{\text{thermal}}=k_{B}T\). The maximum melting temperature can be estimated by using the value of \(k\) associated with the largest density \(n_{c}\) of the WC state (here, \(n_{c}\approx 0.0012\), see Fig. 5) and setting \(\eta=\eta_{c}\approx 0.23\). This procedure gives a maximum melting temperature on the order of \(\sim 10\,\text{K}\), with the melting temperature decreasing proportional to \(n^{1/2}\) as the density is reduced.

Figure 5: The phase diagram of the low-density electron gas in BBG in the space of electron density (\(n\)) and displacement field (\(U\)). The blue lines represent the phase boundary between Wigner Crystal (WC) and Fermi Liquid (FL), with the critical value of \(U_{c}\) associated with the orbital polarization calculated numerically. The dashed red lines correspond to the analytically calculated melting densities \(n_{c}\) in both the small and large \(U\) limits (see Eqs. 17 and 34), as well as the critical \(U_{c}\) (see Eq. 25), which distinguishes between WC states with angular momentum \(\ell=0\) and \(\ell=1\).

Figure 6: Magnetization per electron plotted as a function of displacement field \(U\) at \(n=2\times 10^{-4}\). The blue dots represent the numerical results, and the red dashed line corresponds to the analytical result from Eq. 28. At the critical value \(U=U_{c}\), the magnetization has a jump corresponding to the phase transition from \(\ell=0\) to \(\ell=1\).
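For orientation, the skeleton of these estimates is compact enough to sketch in code. The following is an illustrative sketch only, not the calculation behind Fig. 5: it replaces the Numerov solution of Eq. 21 by the analytic large-\(u_{0}\) width of Eq. 16 with \(\ell=0\), uses the unscreened Coulomb interaction in the lattice sum of Eq. 9, and adopts toy units \(\hbar=e^{2}/\epsilon_{r}=1\), \(m=1/2\), so the density at which the Lindemann ratio crosses \(\eta_{c}\) is illustrative rather than the value quoted above:

```python
import numpy as np

# Sketch of the melting estimate: Eq. 9 (spring constant as a lattice sum),
# Eq. 16 with l = 0 (wave-packet width in the HO picture), and the Lindemann
# criterion (Eq. 6). Toy units hbar = e^2/eps_r = 1, m = 1/2 are assumed, so
# the crossing density is illustrative only.
eta_c = 0.23
m = 0.5

def spring_constant(a):
    """Eq. 9 for an unscreened Coulomb interaction V(r) = 1/r on a triangular
    lattice with lattice constant a; here V'' + V'/R = 1/R^3."""
    i, j = np.mgrid[-60:61, -60:61]
    # triangular-lattice distances |i a1 + j a2|, a1 = (1, 0), a2 = (1/2, sqrt(3)/2)
    R = a * np.sqrt(i**2 + i * j + j**2).ravel()
    R = R[R > 0]                      # drop the origin
    return 0.5 * np.sum(1.0 / R**3)

for n in np.geomspace(1e-5, 1e-2, 16):
    a = np.sqrt(2.0 / (np.sqrt(3.0) * n))     # Wigner lattice constant
    k = spring_constant(a)
    omega = np.sqrt(k / m)
    r2 = 1.0 / (2.0 * m * omega)              # <r^2>, leading term of Eq. 16
    print(f"n = {n:.2e}   Lindemann ratio = {np.sqrt(r2) / a:.3f}")
```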
## IV Wigner crystal phase with trigonal warping

In the previous section, we derived the properties of the WC state - including the critical density associated with melting and the critical displacement field associated with orbital magnetization - using a description that neglects trigonal warping. While this assumption renders the problem more analytically tractable, at low electron density, the trigonal warping scale in BBG can easily become larger than the electron kinetic energy so that the electron system breaks up into small "pockets" of carriers at each valley [8; 9; 11; 16; 50]. In this section, we consider how the results from the previous section are modified by trigonal warping. We first study the phase diagram of the WC state and show that the structure of central and side pockets in the dispersion relation leads to an unusual doubly-reentrant behavior of the WC/FL phase boundary. We also show that trigonal warping precludes the possibility of the state with spontaneous orbital polarization (\(\ell=1\)) presented in the previous section and instead leads to a state with nontrivial mini-valley ordering in the Wigner lattice [33]. Throughout this section, we are able to ignore the effects of interband dielectric screening discussed in Sec. III.6, since even at \(U=0\) the trigonal warping leads to a linear rather than a parabolic dispersion near the band touching point, and in this case dielectric screening produces only a weak renormalization of the effective dielectric constant [51; 52; 24].

### Phase diagram of the WC and doubly-reentrant melting

We can estimate the boundary between the WC and FL phases using the HO description, as in the previous section. We first consider the case of a relatively low displacement field, \(U<t\). In real units, this inequality corresponds to \(U\lesssim 50\,\mathrm{meV}\), which for a boron nitride substrate translates to a displacement field smaller than \(\approx 0.5\,\mathrm{V/nm}\) [8]. When \(U<t\), the low energy conduction band comprises three side pockets and a central pocket having a dispersion given by \[\varepsilon_{\mathrm{central}}\left(\mathbf{p}\right)\simeq\frac{U}{2}+\frac{\left(t^{2}-U^{2}\right)}{U}\left(p_{x}^{2}+p_{y}^{2}\right). \tag{35}\] Notice that when \(U\) exceeds \(t\), the central pocket disappears and transforms into a dispersion maximum.
The side pockets, meanwhile, have an anisotropic dispersion, which in the limit \(U\ll t\ll 1\) can be approximated by \[\varepsilon_{\mathrm{side}}\left(\mathbf{p}\right)\simeq\frac{U}{2}-t^{2}U+\frac{t^{2}}{U}\tilde{p}_{x}^{2}+\frac{9t^{2}}{U}\tilde{p}_{y}^{2}. \tag{36}\] Here, \(\tilde{p}_{x}\) and \(\tilde{p}_{y}\) represent the components of momentum relative to the minimum of the pocket, with \(\tilde{p}_{x}\) being in the direction directly away from the \(K\) point and \(\tilde{p}_{y}\) being in the perpendicular direction. Notice that the central pocket is slightly higher in energy than the side pockets but has a heavier mass (see Fig. 7a). Thus, whether the electron wavefunction lives primarily in the central or side pockets depends on the strength of the confining potential and, therefore, on the electron density. When the electron density is low, each electron in the WC resides primarily in the lower-energy side pockets. But when the electron density is high, the wavefunction has most of its weight in the heavier central pocket, which provides a lower confinement energy \(\omega\). One can estimate the critical density \(n^{\star}\) associated with the crossover from the side pockets to the central pocket by equating the energies of the corresponding HO ground states. In the limit \(U\ll t\ll 1\), this procedure gives \[n^{\star}\simeq 0.023U^{2}. \tag{37}\] At \(n\ll n^{\star}\), the electron wavefunction resides primarily in the side pockets, while at \(n\gg n^{\star}\), the electron wavefunction resides primarily in the central pocket. One can also estimate the critical densities associated with WC melting (via the Lindemann criterion) for both the central and side pockets. These give \[n_{c}^{\mathrm{central}}=(1.32\times 10^{-5})\frac{U^{2}}{(t^{2}-U^{2})^{2}} \tag{38}\] and \[n_{c}^{\mathrm{side}}\simeq 0.004U^{2}, \tag{39}\] respectively. Notice, however, that \(n_{c}^{\mathrm{side}}<n^{\star}<n_{c}^{\mathrm{central}}\). This hierarchy suggests a scenario we refer to as "doubly-reentrant melting" (see Fig. 7b): as the density \(n\) is increased from zero, the WC (in the side pocket) first melts at \(n\approx n_{c}^{\mathrm{side}}\), then the electron system transitions to the central pocket and freezes at \(n\approx n^{\star}\), and then melts again at \(n\approx n_{c}^{\mathrm{central}}\). We confirm this scenario with a numerical solution in Fig. 8 (described below). We note that Eq. 38 describes a critical density \(n_{c}^{\mathrm{central}}\) for the central pocket that diverges when \(U\) approaches \(t\) due to the diverging band mass of the central pocket. However, when the density \(n\) is sufficiently large, the width \(\sigma\) of the associated wave packet in momentum space becomes sufficiently large that one can no longer use a simple parabolic band approximation to describe the central pocket. Instead, when \(\sigma\) becomes of the order of the distance between the central and side pockets (\(\sim t\)), the electron wavefunction leaks into the side pockets, and the WC state is eliminated. This effect truncates the divergence of \(n_{c}^{\mathrm{central}}\) and sets the maximum density \(n_{\mathrm{max}}\) associated with the WC state. Setting \(\sigma\sim t\) and using the Lindemann criterion gives \(n_{\mathrm{max}}\sim\eta_{c}^{2}t^{2}\), which is consistent with the maximum density of the WC state calculated numerically and shown in Fig. 8. The corresponding maximum melting temperature for the WC state, estimated using the procedure described in Sec. III.7, is of order \(\sim 1\,\)K.

Figure 7: (a) Schematic depiction of the side and central pocket dispersions and the corresponding energy of the HO ground state in each. In the situation depicted here, the electron resides in the side pocket despite its lighter mass. (b) Schematic line cut of the phase diagram at a fixed \(U<t\). At very low density, Wigner crystallization occurs in the side pockets due to their lower band edge energy. As the density \(n\) increases, the WC undergoes quantum melting at \(n_{c}^{\mathrm{side}}\), transitioning into a putative Fermi liquid phase in the side pockets. However, beyond a critical density \(n^{\star}\), it becomes energetically favorable for crystallization to take place in the higher-energy central pocket. This central pocket WC eventually undergoes melting as the density exceeds \(n_{c}^{\mathrm{central}}\).

On the other hand, in the limit of large \(U\), where Wigner crystallization occurs only in the side pockets, we can estimate the melting density \(n_{c}\) using the HO approximation for a single pocket. To zeroth order in \(t\) (where the trigonal warping is ignored), \(n_{c}\) remains unchanged from Eq. 17. Taking into account the correction arising from a large but finite mass along the \(\tilde{p}_{y}\) direction, the modified critical density \(n_{c}\) at large \(U\) is given by \[n_{c}\approx 2.1\times 10^{-4}\left\{1-6\sqrt{2}\sqrt{\frac{t}{U}}\right\}. \tag{40}\] We discuss the density range associated with the WC state in real units in Sec. V.

### Magnetization and orbital ordering

In Sec. III.3 we showed that, for the case where there is no trigonal warping, Berry curvature induces a jump in the magnetization as a function of increasing \(U\). This jump is associated with a transition from zero to finite angular momentum of the electron state at each Wigner lattice site. The presence of trigonal warping removes the rotational symmetry of the dispersion relation so that angular momentum is no longer a good quantum number. Nonetheless, within the HO description, there is still a jump in magnetization as a function of \(U\), which can be understood as follows. At sufficiently large \(U>t\), only the side pockets are occupied by the electron wavefunction, and in general, the electron wavefunction resides in a superposition of these three pockets. Within the HO description, where the confining potential is rotationally symmetric and in the absence of Berry curvature, the ground state represents a symmetric superposition of the three pockets. The first excited state, in this case, is doubly degenerate and corresponds to a state in which the phase of the wavefunction (in momentum space) winds as a function of the angle \(\phi\) by \(2\pi\) in either the clockwise or counterclockwise direction. These two excited states with nontrivial winding are associated with a nonzero orbital magnetization of the electron. In the presence of nonzero Berry curvature, the ordering of the ground state and first excited state is inverted if the Berry flux through the interior of the three pockets is greater than \(\pi\), such that the state with nonzero orbital magnetization aligned with the Berry flux becomes the lowest energy state. This result can be viewed as the momentum space analog of the problem of an electron hopping among sites on an equilateral triangle that is pierced by magnetic flux.
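This analogy can be made explicit with a three-site toy model (an illustrative sketch added here, not taken from the paper): a particle hopping among three sites with a Peierls phase per bond changes the winding of its ground state from 0 to \(\pm 1\) exactly when the enclosed flux passes \(\pi\):

```python
import numpy as np

# Toy model for the winding transition: one particle hopping between three
# sites (the three mini-valleys), with a Peierls phase phi per bond so the
# total flux through the triangle is Phi = 3*phi. The eigenstates have winding
# m = 0, +1, -1 and energies E_m = -2 t cos(phi + 2 pi m / 3).
t = 1.0
for Phi in (0.9 * np.pi, 1.1 * np.pi):
    phi = Phi / 3.0
    H = np.zeros((3, 3), dtype=complex)
    for i in range(3):
        H[i, (i + 1) % 3] = -t * np.exp(1j * phi)
        H[(i + 1) % 3, i] = -t * np.exp(-1j * phi)
    E, V = np.linalg.eigh(H)
    gs = V[:, 0]
    # winding number: phase advance of the ground state from site to site
    winding = np.angle(gs[1] / gs[0]) * 3 / (2 * np.pi)
    print(f"Phi/pi = {Phi/np.pi:.2f}   E0 = {E[0]:+.4f}   winding ~ {winding:+.2f}")
```

The output shows the ground state switching from zero to unit winding as the flux crosses \(\pi\), mirroring the level inversion described above.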
A numerical calculation of the corresponding value of \(U\) associated with the winding transition gives \(U_{c}\approx 1.05\). A detailed quantitative discussion of the magnetization transition within the HO description is presented in Appendix F. In practice, however, the energy scale \(\tau\) associated with the splitting between the winding and non-winding energy states is exponentially small in the inverse density: \(\log\tau\propto-(p_{0}/\sigma)^{2}\propto-1/n^{3/4}\). Consequently, the actual minivalley ordering of electrons in the WC is dominated by effects that are beyond the HO description. In particular, the correction to the WC energy arising from spatial anisotropy of the electron wavefunction is of order \(n^{3/4}\) (commonly written as \(1/r_{s}^{3/2}\), where \(r_{s}\) is the usual interaction parameter). The optimal minivalley ordering that minimizes the WC energy is a nontrivial problem, but for the case of three pockets related by \(C_{3}\) symmetry, this problem was, in fact, studied recently by Calvera et al. in Ref. [33]. These authors found that the minimal energy state of the WC is for each electron to be polarized into one of two minivalleys, with an alternating stripe pattern of minivalley polarization on the Wigner lattice. This arrangement is depicted schematically in Fig. 9. Thus, in our problem, the regime of the "side pocket WC" presumably corresponds to the nontrivial minivalley-ordered state depicted in Fig. 9. The regime of larger \(n\) and \(U<t\), for which only the central pocket is occupied, is trivially ordered. It is interesting to note that the anisotropy of the three side pockets changes sign as a function of \(U\). When \(U\) is smaller than a specific value \(U_{\rm iso}=9t/2^{3/2}\), the pockets are elongated toward the \(K\) point, while at \(U\) larger than this value the pockets are elongated in the direction perpendicular to the \(K\) point (see Fig. 2). This geometric change implies that when \(U\) is close to \(U_{\rm iso}\) the pockets become isotropic, and the system is indifferent with respect to minivalley ordering. At this point, one can expect the WC to acquire a particularly large entropy associated with the minivalley degree of freedom. In principle, this increase in entropy of the WC state should lead to an increased melting temperature of the WC state when \(U\) is close to \(U_{\rm iso}\), as in the Pomeranchuk effect [53; 54].

Figure 8: The phase diagram of the low-density electron gas in BBG in the space of electron density (\(n\)) and displacement field (\(U\)) in the presence of trigonal warping. The blue line represents the phase boundary between Wigner crystal (WC) and Fermi liquid (FL), calculated by setting the Lindemann ratio \(\eta=\eta_{c}=0.23\). The inset shows that at small \(U\) there is a doubly reentrant Wigner crystallization transition.

### Numerical results

In order to numerically calculate the phase diagram with trigonal warping, it is necessary to solve the two-dimensional Schrodinger equation (see Eq. 8) in momentum space. (Unlike in the case without trigonal warping, here we do not have rotational symmetry and therefore cannot reduce the problem to an effective 1D Schrodinger equation.) We include the effects of Berry curvature by defining a Berry connection relative to an arbitrarily chosen point in momentum space (we choose \(\mathbf{p}=0\)) via an integral of the Berry curvature. The details of this method are described in Appendix G.
We diagonalize the Schrodinger equation on a discrete triangular grid in momentum space (with a size of approximately \(200\times 200\) points). From the corresponding lowest-energy wavefunction, we calculate the Lindemann ratio, which allows us to estimate the phase diagram associated with the melting of the WC state. The result is shown in Fig. 8. The double re-entrance of the WC phase as a function of density, associated with electrons crystallizing first in the side pockets and then the central pocket, can be seen clearly at small \(U\). ## V Conclusions and comments on experiments In summary, here we have considered the WC state in BBG, providing a discussion of its ground state ordering and an estimation of its phase boundary in the space of density and displacement field. One of our more remarkable conclusions is that if one ignores trigonal warping, then the radially-symmetric low energy band structure enables a magnetization transition of the WC state as a function of the increasing displacement field, driven by the coupling between the Berry flux \(\tilde{\Phi}\) and the orbital angular momentum \(\ell\). Unfortunately, trigonal warping drastically lowers the energy scale associated with this coupling since it splits the rotationally-symmetric "Mexican hat" band structure into three discrete mini-valleys. In principle, an electron in a radially symmetric confining potential still undergoes a magnetization transition as a function of the displacement field, even when trigonal warping is present, associated with a winding transition in the momentum space wavefunction. Such a transition may be relevant for quantum dots in BBG [55]. In the WC, however, the small energy scale associated with the coupling between \(\tilde{\Phi}\) and \(\ell\) is overwhelmed by the non-radially-symmetric component of the confining potential arising from the Wigner lattice. Instead, we predict (using the results of Ref. [33]) that the ground state of the WC is associated with a nontrivial patterning of the minivalley order in space (Fig. 9). In general, the existence of a WC state can be inferred experimentally by a set of distinctive features in transport and thermodynamic measurements. WCs are insulators in terms of their temperature-dependent conductivity [56], with the I-V curve at low temperature exhibiting a sharp "pinning voltage" that is often hysteretic [57; 58; 59]. WCs also exhibit "negative compressibility" in capacitance or penetration field measurements, which is a hallmark of strong positional correlations between electrons (see, e.g., Refs. [60; 61; 62; 63; 64; 65; 66; 67]). The experiments of Ref. [8] observed negative compressibility emerging at low temperature and high displacement field but not coexisting with an insulating temperature dependence. Ref. [9] reports a state with insulating-like temperature dependence that emerges within a window of relatively high displacement field and low densities \(n\approx 1.5\times 10^{11}\,\mathrm{cm}^{-2}\), which the authors characterize as being consistent with a WC. Our calculations here, on the other hand, suggest that the maximum density associated with the WC state (occurring at displacement field \(D\approx 0.7\,\mathrm{V/nm}\)) is only of order \(1.2\times 10^{10}\,\mathrm{cm}^{-2}\) (see Fig. 8). Such low densities are typically difficult to probe experimentally since disorder in the sample introduces an energy scale that randomly modulates the electron density. 
Experiments on BBG in a displacement field all observe a strongly insulating state with positive compressibility at sufficiently low density and a large displacement field, which presumably corresponds to this disorder-dominated situation. At \(D\approx 0.7\,\mathrm{V/nm}\) the strongly insulating state occupies a range of density \(2-6\times 10^{10}\,\mathrm{cm^{-2}}\), varying from one experiment to another [8; 9; 10; 11]. Thus the WC regime that we predict seems to be only barely outside the range of current experiments. It will be interesting to see whether future experiments can confirm the re-entrant melting of the WC state that we predict. In principle, one can also infer the existence of a WC optically, using features of the exciton absorption spectrum [68; 69]. Unfortunately, such experiments are technically difficult in BBG due to the very small band gap. Direct observation of the nontrivial pattern of minivalley order depicted in Fig. 9 could, in principle, be accomplished by scanning microscopy. But a perhaps more salient feature would be to observe the associated Pomeranchuk effect that arises at \(U=U_{\mathrm{iso}}\approx 150\,\mathrm{meV}\) (\(D\approx 1.5\,\mathrm{V/nm}\)) due to the large configurational entropy of the WC when the minivalleys become isotropic. Unfortunately, this effect appears only in the lower-density regime of the WC associated with crystallization in the side pockets, which, by our estimation, is limited to very low densities \(n\lesssim 2\times 10^{9}\,\mathrm{cm^{-2}}\).

Figure 9: Illustration of the pattern of minivalley polarization in the ground state of the WC (following Ref. [33]). This pattern involves alternating stripes of two minivalleys (labeled "1"/"2" and colored red/blue, respectively). Here we depict the WC in real space (main figures) as well as the minivalleys/pockets in momentum space (insets). (a) When \(U<U_{\rm iso}\), the pockets/minivalleys are elongated toward the \(K\) point. (b) When \(U>U_{\rm iso}\), the pockets/minivalleys are elongated in the direction perpendicular to the \(K\) point.

###### Acknowledgements.

My thanks for this work go primarily to Leonid Levitov, whose ideas and suggestions were instrumental in shaping this project. The authors are grateful to Zachariah Addison, Zhiyu Dong, Liang Fu, Aaron Hui, Kyle Kawagoe, Steve Kivelson, and Jairo Velasco for useful discussions. This work was supported by the NSF under Grant No. DMR-2045742.
2301.11672
Excitons and trions with negative effective masses in two-dimensional semiconductors
We study theoretically fundamental Coulomb-correlated complexes: neutral and charged excitons, also known as trions, in transition metal dichalcogenides monolayers. We focus on the situation where one of the electrons occupies an excited, high-lying, conduction band characterized by a negative effective mass. We develop the theory of such high-lying excitons and trions with negative effective mass and demonstrate the key role of the non-parabolicity of the high-lying conduction band dispersion in the formation of the bound exciton and trion states. We present simple, accurate and physically justified trial wavefunctions for calculating the binding energies of Coulomb-bound complexes and compare the results of variational calculations with those of a fully numerical approach. Within the developed model we discuss recent experimental results on the observation of high-lying negative effective mass trions [K.-Q. Lin et al., Nat. Commun. 13, 6980 (2022)].
M. A. Semina, J. V. Mamedov, M. M. Glazov
2023-01-27T12:12:33Z
http://arxiv.org/abs/2301.11672v2
# Excitons and trions with negative effective masses in two-dimensional semiconductors

###### Abstract

We study theoretically fundamental Coulomb-correlated complexes: neutral and charged excitons, also known as trions, in transition metal dichalcogenides monolayers. We focus on the situation where one of the electrons occupies an excited, high-lying, conduction band characterized by a negative effective mass. We develop the theory of such high-lying excitons and trions with negative effective mass and demonstrate the key role of the non-parabolicity of the high-lying conduction band dispersion in the formation of the bound exciton and trion states. We present simple, accurate and physically justified trial wavefunctions for calculating the binding energies of Coulomb-bound complexes and compare the results of variational calculations with those of a fully numerical approach. Within the developed model we discuss recent experimental results on the observation of high-lying negative effective mass trions [K.-Q. Lin et al., Nat. Commun. **13**, 6980 (2022)].

transition metal dichalcogenides, exciton, trion, negative effective mass, non-parabolic dispersion, mexican hat dispersion, high-lying trions

## I Introduction

Atomically thin transition-metal dichalcogenides (TMDC) provide a versatile platform for two-dimensional (2D) materials with tailored functionalities and fascinating physical properties [1]. These semiconducting materials demonstrate outstanding optical properties - absorption, reflection, emission - due to excitons and trions, the Coulomb-correlated states of electrons and holes [2; 3; 4], see Refs. [5; 6; 7] for review. Controllable light-matter interaction [8; 9; 10], the ability to form van der Waals heterostructures [11], and the high and tunable binding energies of the excitons and trions in 2D semiconductor based systems [12; 13; 14; 15; 16; 17] make these materials prime candidates for nanophotonics applications [18; 19; 20]. Usually, excitons, bound electron-hole pairs, and trions, three-particle complexes formed of an electron and two holes or of two electrons and a hole, involve charge carriers from the bottom conduction and topmost valence band [21; 22; 23]. In specific cases, like bulk cuprous oxide, several excitonic series are observed that originate from closely-lying bands [24; 25]. In this respect, TMDC monolayers (MLs) show unique properties. In recent experiments, high-lying excitons and trions were observed [26; 27] that originate from the topmost valence band holes and electrons in the excited conduction band. The corresponding optical transitions lie in the ultraviolet spectral range and can be advantageous for various applications. Interestingly, the effective mass of the electron in this excited conduction band is negative. This makes the energy spectrum and structure of the Coulomb-correlated complexes different from those in the conventional situation where the effective masses of the involved charge carriers are positive. Such a situation calls for a dedicated investigation. Here, motivated by recent experiments [26; 27], we study the excitons and trions where one of the charge carriers, namely, the electron, has a negative effective mass. We demonstrate the importance of non-parabolic \(k^{4}\) terms in the high-lying electron dispersion and present numerical and analytical results for the binding energies and wavefunctions of excitons and trions with negative-mass electrons. The paper is organized as follows: After a brief introduction (Sec. I) we formulate the model in Sec.
II and present the results for the excitons in Sec. III and trions in Sec. IV. The main results are summarized and a brief outlook is given in Sec. V.

## II Model

We consider a simplified band structure of the TMDC monolayer that includes the topmost valence band \(vb\), the bottom conduction band \(cb\) and the high-lying conduction band \(cb+2\) in the notations of Refs. [26; 27; 28; 6]. Figure 1 shows schematics of the band structure in the vicinity of the \(\mathbf{K}_{\pm}\) points of the Brillouin zone where the direct band gap of TMDC monolayers is realized. The dispersion of the nearest conduction and valence bands (\(cb\) and \(vb\)) is taken in the isotropic parabolic form: \[E_{k}^{vb}=-E_{g}-\frac{\hbar^{2}k^{2}}{2m_{h}},\quad E_{k}^{cb}=\frac{\hbar^{2}k^{2}}{2m_{1}}, \tag{1}\] while in the dispersion of the high-lying band \(cb+2\) we also take into account a non-parabolic contribution in the form of a \(k^{4}\) term: \[E_{k}^{cb+2}=E_{g}^{\prime}+\frac{\hbar^{2}k^{2}}{2m_{2}}+Bk^{4}. \tag{2}\] Here \(k\) is the electron wavevector, \(E_{g}>0\) and \(E_{g}^{\prime}>0\) are the band gaps between \(cb\leftrightarrow vb\) and \(cb+2\leftrightarrow cb\), respectively, \(m_{1}>0\) and \(m_{2}<0\) are, respectively, the electron effective masses in the bottom conduction band and the high-lying band, and \(m_{h}>0\) is the effective mass of the valence band hole; the electron effective mass in \(vb\) is \(m_{vb}=-m_{h}<0\). The coefficient \(B>0\) describes the non-parabolic contribution to the dispersion of the high-lying electron. Note that in the absence of the \(k^{4}\) term the energy \(E_{k}^{cb+2}\) can become lower than \(E_{k}^{cb}\), making the band notations meaningless, while \(Bk^{4}\) renders the problem well-defined. Thus, the dispersions (1) and (2) with \(B>0\) represent a minimum model that allows us to have a consistent picture of the high-lying excitons and trions. As a result of the interplay of the \(k^{2}\) and \(k^{4}\) terms, the dispersion in the \(cb+2\) band has a loop (ring) of extrema at \(k_{*}=\sqrt{-\hbar^{2}/(4m_{2}B)}\), demonstrating a Mexican hat shape. In real TMDC monolayers characterized by the three-fold rotational symmetry, the dispersion of the charge carriers is anisotropic in the plane and, instead of the extrema loop, three minima can be formed. We briefly discuss the effects of anisotropy at the end of the paper. Note that a non-parabolicity in the nearest \(cb\) and \(vb\) is related to the interband \(\mathbf{k}\cdot\mathbf{p}\)-mixing [29; 30], see also Ref. [31] for a comparative study between single- and multiband approaches; we disregard such effects for simplicity. To describe the excitons and trions we need to introduce the Coulomb interaction. We use it in the Rytova-Keldysh form [32; 33] \[V_{ij}(\rho)=\frac{\pi q_{i}q_{j}}{2r_{0}\varkappa}\left[\mathbf{H}_{0}\left(\frac{\rho}{r_{0}}\right)-\mathrm{Y}_{0}\left(\frac{\rho}{r_{0}}\right)\right]. \tag{3}\] Here \(q_{i,j}\) are the charges of the corresponding carriers (\(q_{e}=e<0\) is the electron charge, \(q_{h}=-e>0\) is the hole charge), \(\varkappa\) is the effective dielectric constant of the environment, \(\rho\) is the interparticle distance, \(r_{0}\) is the dielectric screening radius, and \(\mathbf{H}_{0}(x)\) and \(\mathrm{Y}_{0}(x)\) are the Struve and Neumann functions.
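As a numerical aside (a sketch added here; the normalization \(e^{2}/\varkappa=1\) and the value of \(r_{0}\) are arbitrary choices), Eq. (3) is straightforward to evaluate with the Struve and Neumann functions available in scipy, and the output exhibits the two limiting behaviors discussed next:

```python
import numpy as np
from scipy.special import struve, y0

# Rytova-Keldysh potential, Eq. (3), for opposite charges, in units e^2/kappa = 1
# (an assumed normalization). r0 is the dielectric screening radius.
def v_rk(rho, r0=1.0):
    x = rho / r0
    return -(np.pi / (2.0 * r0)) * (struve(0, x) - y0(x))

for rho in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"rho = {rho:7.2f}   V_RK = {v_rk(rho):+.4f}   Coulomb -1/rho = {-1.0/rho:+.4f}")
# Large rho: V_RK -> -1/rho (Coulomb tail). Small rho: V_RK ~ (1/r0) ln(rho/r0),
# i.e., a weak logarithmic divergence instead of the 1/rho singularity.
```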
At large distances and/or small screening radius, \(\rho/r_{0}\gg 1\), the potential energy takes the Coulomb form \(\propto 1/\rho\), while at small distances and/or large screening radius the potential is a logarithmic function of the distance, \(\propto\ln\rho/r_{0}\). The potential energy in the form of Eq. (3) is adequate for describing the Coulomb interaction in atomically thin semiconductors, see Refs. [34; 35; 13; 17; 36] for details.

## III Excitons

We start with the theory of the two-particle bound states - high-lying excitons (HX) - formed from the valence band hole and the high-lying electron. The effective Hamiltonian describing the relative motion of the electron and hole in the HX reads \[\mathcal{H}=-\frac{\hbar^{2}}{2\mu_{2}}\Delta+B\Delta^{2}+V_{eh}(\rho), \tag{4}\] where \(\mu_{2}\) is the high-lying electron and hole reduced mass, \[\mu_{1}=\frac{m_{1}m_{h}}{m_{1}+m_{h}},\quad\mu_{2}=\frac{m_{2}m_{h}}{m_{2}+m_{h}}, \tag{5}\] and \(\Delta\) is the Laplace operator acting on a wavefunction \(\psi(\rho)\) with the relative electron-hole coordinate \(\rho\). Since the contribution \(E_{g}+E_{g}^{\prime}\) is excluded from the Hamiltonian (4), the total energy of the high-lying exciton is \(E_{g}+E_{g}^{\prime}-E_{b,\mathrm{HX}}\), where \(E_{b,\mathrm{HX}}\) is the binding energy. We recall that in the parabolic approximation, \(B=0\), the HX can be bound only if \(\mu_{2}>0\) for attractive \(V_{eh}(\rho)<0\): Indeed, the inversion of the sign of the mass can be formally considered as an inversion of the sign of the interaction potential energy [37]. Hence, for \(\mu_{2}<0\) and \(V_{eh}<0\) a bound HX state is absent. By contrast, for positive \(\mu_{2}>0\), the binding energy is given by \[E_{b,\mathrm{HX}}=\frac{2\mu_{2}e^{4}}{\varkappa^{2}\hbar^{2}}\zeta\left(\frac{r_{0}\mu_{2}e^{2}}{\varkappa\hbar^{2}}\right),\quad\mu_{2}>0, \tag{6}\] where the function \(0\leqslant\zeta(x)\leqslant 1\) takes into account the dielectric screening effect: At \(x\to 0\) the function \(\zeta(x)\to 1\), recovering the two-dimensional hydrogen model, and at \(x\to\infty\) we have \(\zeta(x)\sim\ln(x)/x\) [6; 33].

Figure 1: Schematic illustration (not to scale) of the band structure of TMDC monolayer in the vicinity of the \(\mathbf{K}_{\pm}\) points of the Brillouin zone. The topmost valence, bottom and high-lying conduction bands are denoted as \(vb\), \(cb\), and \(cb+2\), respectively. Arrows denote the electron spin orientation; states with the opposite spin orientations are not shown for clarity.

Interestingly, for a negative reduced mass a two-electron state can be bound despite the Coulomb repulsion between the electrons [37], see also Ref. [38] where electron pairing due to the spin-orbit interaction is discussed. Note that if \(\mu_{2}>0\) but \(m_{2}<0\), the HX translational mass \(m_{\text{HX}}=m_{2}+m_{h}<0\), cf. Ref. [39]. The presence of the non-parabolic contribution to the dispersion, \(B>0\), makes the HX bound for any sign and value of the reduced mass \(\mu_{2}\), and, hence, for any value of the high-lying electron effective mass \(m_{2}\), both positive and negative. To illustrate this we consider, instead of a Coulomb potential, a shallow short-range potential \(V_{0}(\rho)\) with \[V_{eh}(\rho)<V_{0}(\rho)<0. \tag{7}\] The presence of the bound state for \(V_{0}\) naturally implies the bound state for the deeper (Rytova-Keldysh) potential.
For a shallow short-range interaction potential we transform the Schrodinger equation \(\mathcal{H}\psi=\mathcal{E}\psi\) to \(k\)-space and approximate the potential energy as \[\sum_{\mathbf{k}^{\prime}}V_{0;\mathbf{k}-\mathbf{k}^{\prime}}\psi_{\mathbf{k}^{\prime}}\approx V_{0;0}\sum_{\mathbf{k}^{\prime}}\psi_{\mathbf{k}^{\prime}},\] where \(V_{0;\mathbf{q}}=\int d^{2}\rho\,V_{0}(\mathbf{\rho})\exp\left(\mathrm{i}\mathbf{q}\mathbf{\rho}\right)\) and \(\psi_{\mathbf{k}}=\int d^{2}\rho\,\psi(\mathbf{\rho})\exp\left(\mathrm{i}\mathbf{k}\mathbf{\rho}\right)\) are the Fourier components of the potential energy and the wavefunction, respectively, and the normalization area is set to unity; \(V_{0;0}=V_{0;\mathbf{q}=0}<0\). Thus, \[\psi_{k}\propto\frac{1}{\mathcal{E}-E_{k}}, \tag{8}\] and the Schrodinger equation reduces to an algebraic equation; the bound state energy is found from the self-consistency requirement (see Supplementary Materials of Ref. [27]): \[V_{0;0}\sum_{\mathbf{k}}\frac{1}{\mathcal{E}-E_{k}}=1,\quad E_{k}=Ak^{2}/2+Bk^{4}, \tag{9}\] with \(A=\hbar^{2}/\mu_{2}\). In the case of \(A>0\) we obtain the bound-state energy in the form \[\mathcal{E}=-\frac{A^{2}}{4B}\frac{1}{1-\exp\left(-A/V_{0;0}\right)}\approx-\frac{A^{2}}{4B}e^{A/V_{0;0}}, \tag{10}\] where the approximate equality holds for \(V_{0;0}\to 0\). The binding energy is \(E_{b}=-\mathcal{E}\). In this situation we recover an exponentially shallow bound state, as expected for a two-dimensional system with parabolic dispersion [40]. The non-parabolicity terms play the role of a high-momentum cut-off and determine the prefactor of the exponent in Eq. (10). At \(A<0\) (negative reduced mass) Eq. (9) can be transformed to the following form \[\arctan\frac{A}{\sqrt{-16B\mathcal{E}-A^{2}}}=\frac{\pi}{2}+\frac{\sqrt{-16B\mathcal{E}-A^{2}}}{2V_{0;0}}. \tag{11}\] The minimum of the relative motion dispersion is in this case \(E_{*}=-A^{2}/(16B)\), corresponding to \[k_{*}=\sqrt{-\frac{A}{4B}}. \tag{12}\] Thus the binding energy is \(E_{b}=E_{*}-\mathcal{E}\). One can check that Eq. (11) has solutions with \(\mathcal{E}<0\) for any relation between \(A\) and \(B\) in the reduced motion dispersion. In the important limits, \[E_{b}=\begin{cases}\frac{(\pi V_{0;0})^{2}}{4B},\quad|V_{0;0}|\ll|A|,\\ \frac{(\pi V_{0;0})^{2}}{16B}+\frac{AV_{0;0}}{4B},\quad|V_{0;0}|\gg|A|.\end{cases} \tag{13}\] For the negative reduced mass case the bound state is formed in the vicinity of the minima loop in \(k\)-space, with the relevant wavevectors \(k\approx k_{*}\), Fig. 2(a). Thus, as shown in Fig. 2(b), the relative motion wavefunction oscillates in real space. Another specific feature of the wavefunctions is their behavior at \(\rho\to 0\): \(\psi(\rho)=\text{const}+\rho^{2}\ln\rho\), owing to the presence of the \(k^{4}\) terms in the dispersion. This function is sufficiently smooth at \(\rho\to 0\), in contrast to the case of the parabolic dispersion where the wavefunction for the shallow short-range potential well diverges as \(\ln\rho\).

Figure 2: (a) Relative motion dispersion, Eq. (9) (dark red), and the wavefunction absolute value squared, Eq. (8) (dark blue), in the \(\mathbf{k}\)-space. (b) Absolute value squared of the relative motion wavefunction in real space, shown on a log-linear scale to make the oscillations more pronounced. For illustrative purposes we use arb. units. The oscillations in real space have the period of approximately \(2\pi/k_{*}\).
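The self-consistency condition (9) can also be solved numerically. In the sketch below (added for illustration), the \(k\)-sum is taken as \(\int d^{2}k/(2\pi)^{2}\), so the coupling constant matches \(V_{0;0}\) only up to a normalization convention, and the parameter values are arbitrary; the output illustrates that a bound state exists for either sign of \(A\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Numerical solution of Eq. (9) with sum_k -> int d^2k/(2 pi)^2; substituting
# s = k^2 gives  v0 * (1/4 pi) * int_0^inf ds / (E - A s/2 - B s^2) = 1.
def bound_state_energy(A, B, v0):
    e_min = 0.0 if A >= 0 else -A**2 / (16.0 * B)   # bottom of E_k = A k^2/2 + B k^4
    def mismatch(E):
        f = lambda s: 1.0 / (E - 0.5 * A * s - B * s**2)
        integral = quad(f, 0.0, 1.0, limit=500)[0] + quad(f, 1.0, np.inf, limit=500)[0]
        return v0 * integral / (4.0 * np.pi) - 1.0
    return brentq(mismatch, e_min - 100.0, e_min - 1e-4)

for A in (+1.0, -1.0):                      # positive and negative inverse mass
    E = bound_state_energy(A, B=1.0, v0=-1.0)
    e_min = 0.0 if A > 0 else -1.0 / 16.0
    print(f"A = {A:+.0f}:  E = {E:.6f},  binding energy E_b = {e_min - E:.6f}")
# A bound state exists for either sign of A: exponentially shallow for A > 0,
# and formed just below the ring minimum E_* = -A^2/(16 B) for A < 0.
```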
The analysis performed above forms a basis for calculating the excitonic states in the case of the Coulomb, \(-e^{2}/(\varkappa\rho)\), and Rytova-Keldysh potential (3), and allows us to formulate convenient trial functions to calculate the high-lying exciton binding energy. Namely, the ground state wavefunctions both for \(\mu_{2}>0\) and \(\mu_{2}<0\) should behave as \(\mathrm{const}+\rho^{2}\) at \(\rho\to 0\), otherwise a divergence occurs due to the \(k^{4}\) terms and, for \(\mu_{2}<0\), the wavefunction should oscillate in space. Naturally, the bound state wavefunctions should decay at \(\rho\to\infty\). We use the following trial functions for the HX \[\psi^{\pm}_{\mathrm{HX}}(\rho;a,b)\propto\begin{cases}\exp{(-a\sqrt{b^{2}+\rho^{2}})},&\mu_{2}>0,\\ J_{0}(a\rho)\exp{(-b\rho^{2})},&\mu_{2}<0,\end{cases} \tag{14}\] with \(a\) and \(b\) being the variational parameters and the superscript \(\pm\) corresponding to the sign of \(\mu_{2}\); hereafter the normalization factors are omitted. Both functions are smooth at \(\rho\to 0\), and the wavefunction for \(\mu_{2}<0\) oscillates as a function of \(\rho\). We used the Bessel function \(J_{0}\) as it is a convenient oscillating function with an amplitude that decays with increasing \(\rho\), which reasonably matches the oscillating behavior of the exact solution (8) in the short-range interaction model, with the variational parameter \(a\) controlling the period of oscillations; see also Ref. [41] where a detailed analytical theory of the Coulomb-bound states in two-dimensional systems with the Mexican-hat dispersion is presented. We have checked the accuracy of these trial functions by comparing the exciton energy found by minimizing the expectation value of the Hamiltonian (4) with the results of numerical diagonalization of the Hamiltonian matrix using the non-orthogonal basis of Gaussian functions \(\phi_{i}(\rho)=\exp(-\alpha_{i}\rho^{2})\). Here the parameters \(\alpha_{i}\) were taken as a geometric progression. The total number \(N\) of basis functions and the specific values of \(\alpha_{i}\) were chosen to optimize both the numerical convergence and computational costs [42; 43; 44]; typically, \(N\approx 50-100\) was sufficient for excitons, and a further increase of \(N\) did not affect the result. Note that with the chosen basis we can obtain only the exciton ground state and axially-symmetric (\(s\)-shell) excited states. In Fig. 3 the dots and solid lines show \(E_{b,\mathrm{HX}}\) as a function of \(\mu_{2}\) for different values of \(B\), calculated variationally (dots) and numerically (solid lines). Here and in what follows we use \[\mathrm{E}=\mu_{1}e^{4}/(\varkappa^{2}\hbar^{2}),\quad\mathrm{a}=\varkappa\hbar^{2}/(\mu_{1}e^{2}), \tag{15}\] as the units of energy and length. Accordingly, the non-parabolic term in the dispersion is given by the dimensionless value \(B^{*}=Be^{4}\mu_{1}^{3}/(\varkappa^{2}\hbar^{6})\). Overall, very good agreement between the two approaches is seen. The exciton state is bound for any \(\mu_{2}\) (positive or negative), in agreement with the analysis above. Figure 4 shows the HX binding energy as a function of the non-parabolicity parameter \(B\) for several values of \(\mu_{2}\): solid lines correspond to \(\mu_{2}>0\) and dashed lines to \(\mu_{2}<0\). For large \(B^{*}\) the HX binding energy approaches the asymptotic behavior \[E_{b,\mathrm{HX}}=\mathcal{C}\frac{\mathrm{E}}{(B^{*})^{1/3}}, \tag{16}\] with the numerical coefficient \(\mathcal{C}\approx 0.8\).
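A minimal sketch of the diagonalization procedure described above, restricted to the bare Coulomb case (\(r_{0}=0\)) and axially symmetric states, is given below. It uses analytic matrix elements of the Hamiltonian (4) in the non-orthogonal Gaussian basis and canonical orthogonalization to tame the near-linear dependence of a geometric-progression basis; the basis parameters are illustrative rather than the authors' exact choices.

```python
import numpy as np

def exciton_levels(mu2, Bstar, n_basis=30, a_min=1e-2, a_max=1e3):
    """s-shell eigenvalues of H = -(1/(2*mu2))*Lap + Bstar*Lap^2 - 1/rho
    (Eq. (4) with r0 = 0), in the units of Eq. (15): hbar = e = varkappa = mu_1 = 1.
    Analytic matrix elements in the basis phi_i = exp(-a_i*rho^2), s = a_i + a_j:
      <i|j> = pi/s,  <i|-Lap|j> = 4*pi*a_i*a_j/s**2,
      <i|Lap^2|j> = 32*pi*(a_i*a_j)**2/s**3,  <i|1/rho|j> = pi**1.5/sqrt(s)."""
    a = np.geomspace(a_min, a_max, n_basis)        # geometric progression
    s = a[:, None] + a[None, :]
    p = a[:, None] * a[None, :]
    S = np.pi / s                                  # overlap matrix
    H = (0.5 / mu2) * 4 * np.pi * p / s**2         # kinetic term
    H += Bstar * 32 * np.pi * p**2 / s**3          # non-parabolic term
    H -= np.pi**1.5 / np.sqrt(s)                   # Coulomb attraction
    # canonical orthogonalization: drop nearly linearly dependent combinations
    w, U = np.linalg.eigh(S)
    keep = w > 1e-10 * w.max()
    X = U[:, keep] / np.sqrt(w[keep])
    return np.linalg.eigvalsh(X.T @ H @ X)

# sanity check: mu2/mu1 = 1, B* = 0 is the 2D hydrogen atom, E0 = -2 E
print(exciton_levels(1.0, 0.0)[0])
# ground state for mu2 < 0: bound if below the dispersion minimum
# E_* = -A^2/(16 B*), with A = 1/mu2
print(exciton_levels(-1.0, 0.1)[0])
```

The convergence of the lowest eigenvalue towards \(-2\mathrm{E}\) for \(\mu_{2}/\mu_{1}=1\) and \(B^{*}=0\) provides a convenient test of the basis.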
The \(B^{-1/3}\) power-law dependence follows from dimensional arguments, taking into account that for a bound state the mean values of the kinetic and potential energies of the exciton should be of the same order of magnitude; the coefficient \(\mathcal{C}\) has been found by the variational approach with the Gaussian trial function. At small \(B\) the HX binding energy saturates: for \(\mu_{2}>0\) it reaches the value for the parabolic dispersion, Eq. (6), with a correction of the form \(\sim(\mu_{2}/\mu_{1})\mathrm{E}B^{*}\ln B^{*}\). The \(\ln B^{*}\) factor arises because, strictly speaking, the first-order perturbation theory contribution related to the quantum mechanical average of the \(Bk^{4}\) term logarithmically diverges for the hydrogenic wavefunction.

Figure 3: Exciton binding energy as a function of the high-lying electron-hole reduced mass calculated for the Coulomb potential (\(r_{0}=0\) in Eq. (3)) using the variational approach with the trial functions (14) (dots) and numerical diagonalization (solid lines).

Figure 4: Exciton binding energy as a function of the parameter \(B^{*}=Be^{4}\mu_{1}^{3}/(\varkappa^{2}\hbar^{6})\) characterizing the non-parabolicity of the dispersion.

Interestingly, for \(\mu_{2}<0\) the \(E_{b,\mathrm{HX}}\) also approaches a constant value that depends on \(\mu_{2}\). In this case, for sufficiently small \(B\) the radial motion takes place in the vicinity of the minimum in the dispersion with \(k\approx k_{*}\), where the dispersion is parabolic and does not depend on \(B\). Furthermore, the motion is essentially one-dimensional. As a result, the radial wavefunction of the exciton takes the form of the bottom line in Eq. (14) with \(a=k_{*}\) [41], resulting in spatial oscillations with the period \(\sim 2\pi/k_{*}\), see Fig. 2. Hence, \(E_{b,\mathrm{HX}}=\Lambda|\mu_{2}/\mu_{1}|\mathrm{E}\), where \(\Lambda\) is a logarithmic factor that depends on the details of the dispersion and the screening of the potential. This is in agreement with results for the Coulomb problem in the two-dimensional electron gas with strong spin-orbit coupling [45] and in bilayer graphene [41], where a similar dispersion can be realized [46]. Finally, Fig. 5 shows the binding energies of the HX ground and excited states for two values of \(\mu_{2}/\mu_{1}=\pm 1\) and three values of \(B^{*}\). The figure shows the energies of axially-symmetric (\(s\)-shell) HX states with the principal quantum numbers up to \(n=10\). The effect of non-parabolicity is clearly seen. Deviations from the 2D hydrogenic model in the case of the Coulomb potential [black squares in Fig. 5(a)] are clearly visible. In particular, for positive \(\mu_{2}\) and \(B\neq 0\) the binding energies of the excitonic states are smaller than for the parabolic dispersion: this is because the dispersion is steeper and, hence, the kinetic energy contribution, which reduces the binding energy, is larger. For negative \(\mu_{2}\) the exciton energies are higher than for the parabolic case with positive \(\mu_{2}\); this is because the dispersion for small \(k\lesssim k_{*}\) is smoother.

## IV Trions

Now we study the high-lying trions, three-particle complexes consisting of two holes occupying the topmost valence bands and one electron in the high-lying \(cb+2\) band (HX\({}^{+}\) trion), or a hole in \(vb\) and two electrons, one occupying the conduction band \(cb\) and the other occupying the high-lying band \(cb+2\) (HX\({}^{-}\) trion).
We consider here only symmetric trions, where the envelope function is symmetric with respect to permutations of the identical particles while the corresponding two-particle Bloch function is antisymmetric with respect to the permutations [15]; these states are optically active at low carrier densities. Note that antisymmetric trions can also manifest themselves in the optical response, but their oscillator strength is proportional to the second power of the free carrier density [47]. Similarly to the band-edge trions, there are two HX\({}^{-}\) states: the intravalley (or so-called singlet) and intervalley (or triplet) ones, where the two electrons are, respectively, in the same valley or in different valleys [48; 49; 15; 50], resulting in a fine structure of the HX\({}^{-}\). Since the fine structure of the high-lying negative trion is related to the short-range part of the electron-electron interaction [cf. Ref. [15]] and, consequently, the splitting between the intra- and intervalley states is by far smaller than the trion binding energy (note that this splitting has not been observed in Ref. [27]), we disregard the difference between the intra- and intervalley trions in what follows.

Figure 5: Excitonic series for the Coulomb potential (\(r_{0}=0\) in Eq. (3)), panel (a), and for the screened potential (\(r_{0}/a=1\)), panel (b). \(s\)-shell exciton binding energies as a function of the principal quantum number \(n\) are shown.

### Parabolic dispersion

It is instructive to start with the parabolic dispersion model, neglecting the \(Bk^{4}\) terms in the \(cb+2\) dispersion. Let us consider first the HX\({}^{+}\) state. The relative motion of the holes with respect to an electron is governed by the Hamiltonian \[\mathcal{H}_{\text{HX}^{+}}=-\frac{\hbar^{2}}{2\mu_{2}}\left(\Delta_{1}+\Delta_{2}+\frac{2\sigma_{2}}{\sigma_{2}+1}\mathbf{\nabla}_{1}\mathbf{\nabla}_{2}\right)\\ +V_{hh}(\mathbf{\rho}_{1}-\mathbf{\rho}_{2})+V_{eh}(\mathbf{\rho}_{1})+V_{eh}(\mathbf{\rho}_{2}), \tag{17}\] where \(\mathbf{\rho}_{i}\) are the relative coordinates of the two holes (\(i=1,2\)) with respect to the electron, \(\mathbf{\nabla}_{i}\) and \(\Delta_{i}\) are the gradient and Laplace operators acting on functions of \(\rho_{i}\), \(\mu_{2}\) is the reduced mass of the high-lying electron and a hole, Eq. (5), and \(\sigma_{2}=m_{h}/m_{2}\) is the hole-to-electron mass ratio, cf. Refs. [15; 51; 12]. We recall that for the neutral HX to be bound, \(\mu_{2}\) should be positive in the parabolic approximation, see Eq. (6). In this case HX\({}^{+}\) is bound as well, since Eq. (17) describes the positive-mass situation, see [15] for details. Its binding energy is a fraction of the high-lying exciton binding energy, \[E_{b,\text{HX}^{+}}=\chi E_{b,\text{HX}}, \tag{18}\] where the coefficient \(\chi\sim 0.1\) depends on the screening radius \(r_{0}\) and the effective masses via \(\mu_{2}\) and \(\sigma_{2}\). The situation with HX\({}^{-}\) is more involved. The relative motion Hamiltonian within the parabolic approximation takes the form \[\mathcal{H}_{\text{HX}^{-}}=-\frac{\hbar^{2}}{2\mu_{1}}\Delta_{1}-\frac{\hbar^{2}}{2\mu_{2}}\Delta_{2}-\frac{\hbar^{2}}{m_{h}}\mathbf{\nabla}_{1}\mathbf{\nabla}_{2}\\ +V_{ee}(\mathbf{\rho}_{1}-\mathbf{\rho}_{2})+V_{eh}(\mathbf{\rho}_{1})+V_{eh}(\mathbf{\rho}_{2}). \tag{19}\] In this case \(\mathbf{\rho}_{i}\) are the relative coordinates of the two electrons with respect to a hole.
Taking into account that the HX\({}^{-}\) envelope function is symmetric with respect to the permutation of the electrons, \[\psi_{\text{HX}^{-}}(\mathbf{\rho}_{1},\mathbf{\rho}_{2})=\psi_{\text{HX}^{-}}(\mathbf{\rho}_{2},\mathbf{\rho}_{1}),\] the Hamiltonian can be mapped to the symmetrized one (cf. Eq. (17) in the supplement to Ref. [27]) \[\mathcal{H}=-\frac{\hbar^{2}}{2\bar{\mu}}\left(\Delta_{1}+\Delta_{2}+\frac{2\bar{\sigma}}{\bar{\sigma}+1}\mathbf{\nabla}_{1}\mathbf{\nabla}_{2}\right)\\ +V_{ee}(\mathbf{\rho}_{1}-\mathbf{\rho}_{2})+V_{eh}(\mathbf{\rho}_{1})+V_{eh}(\mathbf{\rho}_{2}), \tag{20}\] with the renormalized values of the parameters \[\frac{1}{\bar{\mu}}=\frac{1}{2}\left(\frac{1}{\mu_{1}}+\frac{1}{\mu_{2}}\right),\quad\bar{\sigma}=\frac{\bar{\mu}}{m_{h}-\bar{\mu}}=\frac{2m_{1}m_{2}}{m_{h}(m_{1}+m_{2})}. \tag{21}\] Similarly to the case of the HX\({}^{+}\), one can find a square-integrable eigenfunction of the Hamiltonian (20). However, this does not automatically mean that the corresponding negative high-lying trion is bound, since its energy can be above the energy of a neutral HX. Formally, this is because such a trion is bound with respect to an exciton with the reduced mass \(\bar{\mu}\) [with the corresponding "effective" binding energy \(\tilde{E}_{b,\text{HX}^{-}}=2\chi\bar{\mu}e^{4}/(\hbar\varkappa)^{2}\), cf. Eq. (18)] rather than the HX with the reduced mass \(\mu_{2}\). Following the Supplementary Materials to Ref. [27] we obtain for the HX\({}^{-}\) binding energy \[E_{b,\text{HX}^{-}}=\frac{2\mu_{2}e^{4}}{\hbar^{2}\varkappa^{2}}\left[\frac{\bar{\mu}}{\mu_{2}}(1+\chi)-1\right]. \tag{22}\] The binding energy should be positive; thus, in addition to \(\mu_{2}>0\), the following conditions should hold: \[\begin{cases}0<m_{2}<m_{*}\equiv\frac{m_{1}m_{h}(1+2\chi)}{m_{h}-2\chi m_{1}},\quad\text{if}\quad m_{*}>0,\\ 0<m_{2}\quad\text{or}\quad m_{2}<m_{*},\quad\text{if}\quad m_{*}<0.\end{cases} \tag{23}\] Thus, for negative \(m_{2}\) the condition for HX\({}^{-}\) to be bound requires \(|m_{2}|\) to be sufficiently large. This condition can be understood from the following qualitative arguments: to form a bound trion state, the HX considered as a rigid particle should bind with the \(cb\)-electron. The interaction between the HX and the electron is typically attractive due to both the exchange and polarization contributions [52; 53; 54]. Hence, the corresponding reduced mass of the HX and electron should be positive, yielding \(m_{2}<-m_{1}-m_{h}<0\), where we made use of the fact that the translational mass of the HX is \(m_{\text{HX}}=m_{2}+m_{h}<0\) (for \(\mu_{2}>0\)) and \(\mu_{e-\text{HX}}=m_{1}m_{\text{HX}}/(m_{1}+m_{\text{HX}})>0\).

### HX\({}^{-}\) with non-parabolic dispersion

Next we address the effects of the \(cb+2\) band non-parabolicity on trions. We focus here mainly on the negatively charged high-lying trion, because this situation is particularly interesting due to the interplay of the exciton and trion binding for \(\mu_{2}<0\). We perform two types of calculations of the HX\({}^{-}\) ground state. The first type of calculation is variational. In the variational calculation we use symmetrized combinations of the HX trial functions, Eq.
(14) in the form \[\psi_{\text{HX}^{-}}(\rho_{1},\rho_{2};a_{1},a_{2},b_{1},b_{2})\propto\psi_{\text{HX}}^{\alpha}(\rho_{1};a_{1},b_{1})\psi_{\text{HX}}^{\beta}(\rho_{2};a_{2},b_{2})\\ +\{1\leftrightarrow 2\}, \tag{24}\] where \(a_{i}\), \(b_{i}\) (\(i=1,2\)) are the variational parameters, and \(\alpha,\beta=\pm\) determine the particular form of the high-lying exciton wavefunction in Eq. (14): for \(\mu_{2}>0\) we use \(\alpha=\beta=+\), while for \(\mu_{2}<0\) we use \(\alpha=+\) and \(\beta=-\). In the latter case this sign convention allows us to take into account that one of the electrons in the HX\({}^{-}\) (from \(cb\)) has a positive effective mass and the other one (from \(cb+2\)) has a negative mass, such that the reduced mass \(\mu_{2}<0\). We have also checked a trial function with \(\alpha=\beta=-\) for trions with the negative reduced mass, \(\mu_{2}<0\), and the resulting energies were very close to those obtained with the function with \(\alpha=+\) and \(\beta=-\). The second type of calculation is used to test the variational approach and provide a more accurate numerical framework for determining the high-lying trion states. In this calculation the HX\({}^{-}\) wavefunction is decomposed as \[\psi_{\text{HX}^{-}}(\mathbf{\rho}_{1},\mathbf{\rho}_{2})=\sum_{i,j,k}C_{ijk}\left(e^{-\alpha_{i}\rho_{1}^{2}-\beta_{j}\rho_{2}^{2}}+e^{-\alpha_{i}\rho_{2}^{2}-\beta_{j}\rho_{1}^{2}}\right)e^{-\delta_{k}|\mathbf{\rho}_{1}-\mathbf{\rho}_{2}|^{2}}, \tag{25}\] where \(\alpha_{i},\beta_{j},\delta_{k}\) are parameters whose values were taken as geometric progressions. Similarly to the calculation of the high-lying excitons presented above, the total number of basis functions and the specific values of \(\alpha_{i},\beta_{j},\delta_{k}\) were chosen for the best combination of convergence and computational cost. The coefficients \(C_{ijk}\) were determined by minimizing the total energy. The wavefunction (25) provides a rather accurate form of the radial wavefunctions for the relative motion of the electrons and, importantly, takes into account, via the factor \(\exp\left(-\delta_{k}|\mathbf{\rho}_{1}-\mathbf{\rho}_{2}|^{2}\right)\), correlations in the electron motion. Figure 6 shows the dependence of the HX\({}^{-}\) binding energy on the high-lying electron to hole reduced mass \(\mu_{2}\) calculated for several values of the non-parabolicity parameter \(B\). Solid lines show the results of the full numerical calculation, while dotted lines in Fig. 6(a) demonstrate the results of the variational approach with the trial functions (24). The variational calculation gives a reasonable estimate of the binding energy, being \(10\%\ldots 30\%\) lower than the "exact" value found using the wavefunction (25). We have also performed the numerical calculation with the function in the form of Eq. (25) but without the correlation factors, i.e., setting \(\delta_{k}\equiv 0\). These results turn out to be almost indistinguishable from the results of the variational calculation, which justifies the choice of the trial functions (24) for the variational calculation. For negative \(\mu_{2}\), the larger \(|\mu_{2}|\) is, the larger the high-lying trion binding energy. For positive \(\mu_{2}\), a maximum in the dependence of \(E_{b,\text{HX}^{-}}\) on \(\mu_{2}\) is seen for small \(B\). Qualitatively, this maximum can be understood within the framework of the analytical expression for the HX\({}^{-}\) binding energy in the parabolic approximation, Eq. (22).
It appears as a result of an interplay of two terms: the first, positive term weakly increases with increasing \(\mu_{2}\), while the absolute value of the second, negative term increases linearly with \(\mu_{2}\). This maximum becomes more pronounced in the case of the screened Rytova-Keldysh potential, Fig. 6(b).

Figure 6: HX\({}^{-}\) binding energy as a function of the high-lying electron-hole reduced mass calculated for the Coulomb potential (a) and for the screened potential (b). Dotted lines in (a) show the results of the variational calculation and solid lines [in panels (a) and (b)] show the results of the full numerical approach.

Figure 7: High-lying trion binding energy as a function of the non-parabolicity parameter \(B^{*}\).

The dependence of the high-lying trion binding energy on the non-parabolic contribution to the dispersion, characterized by the parameter \(B\), is shown in Fig. 7. For small \(B\) (\(B^{*}\ll 1\)) the HX\({}^{-}\) binding energy increases with increasing \(B\) and strongly depends on \(\mu_{2}\). Hence, the presence of \(k^{4}\) terms in the high-lying electron dispersion makes trions more stable. For a large non-parabolic term (\(B^{*}\gg 1\)) the \(E_{b,\mathrm{HX}^{-}}\) decreases with increasing \(B\) regardless of the value of \(\mu_{2}\), following the same \(B^{-1/3}\) power law as for the HX, Eq. (16), with a different coefficient, yielding a relatively large high-lying trion to high-lying exciton binding energy ratio \[\frac{E_{b,\mathrm{HX}^{-}}}{E_{b,\mathrm{HX}}}\approx 0.3. \tag{26}\] To summarize, the presence of \(k^{4}\) terms in the high-lying electron dispersion significantly expands the range of \(\mu_{2}/\mu_{1}\) where the high-lying HX\({}^{-}\) trion is bound. As for the positive trion, HX\({}^{+}\), it is bound already in the parabolic approximation, and our estimates show that it remains bound in the presence of non-parabolic contributions to the dispersion.

### Discussion of the results

Let us now briefly discuss the obtained results in view of the experimental data reported in Ref. [27]. For rough estimates we note that in TMDC monolayers the valence band hole and the conduction band (\(cb\)) electron effective masses are about the same, \(m_{h}\approx m_{1}\). The electron effective mass in the \(cb+2\) conduction band has a similar absolute value, but it is negative. Estimates based on the DFT approach presented in Ref. [26] show that for WSe\({}_{2}\) monolayers \(|m_{2}|\approx 0.46m_{0}>m_{h}\approx 0.36m_{0}\), with \(m_{0}\) being the free-electron mass, making the reduced mass \(\mu_{2}\approx 1.66m_{0}>0\) in Eq. (5). Additional evidence for \(|m_{2}|>m_{h}\) follows from the strong phonon progression of the HX observed in Ref. [26], indicating that the translational mass of the high-lying exciton is negative. Thus, the neutral high-lying exciton, HX, is bound even if the \(Bk^{4}\) terms are neglected in the \(cb+2\) dispersion. For such parameters, however, \(m_{*}>0\) in Eq. (23) (for \(m_{1}\approx m_{h}\) and a reasonable \(\chi\approx 0.2\), \(m_{*}\approx 2.3m_{1}\)). Hence, according to Eq. (23), the HX\({}^{-}\) is not bound in the parabolic approximation. Thus, we come to the conclusion that the \(Bk^{4}\) contribution should be sizeable to make HX\({}^{-}\) bound. Note that the experimentally observed HX\({}^{-}\) binding energies are \(\approx 43\) meV for the WSe\({}_{2}\) monolayer and \(\approx 21\) meV for the MoSe\({}_{2}\) monolayer [27].
In the former case it is slightly larger than the band-edge trion, X\({}^{-}\), binding energy, while in the latter case it is slightly smaller than that of X\({}^{-}\). Depending on \(\mu_{2}/\mu_{1}\) and \(B\), the HX\({}^{-}\) binding energy can be on the order of \(10\%\dots 25\%\) of the HX binding energy; thus, the somewhat increased values of \(E_{b,\mathrm{HX}^{-}}\) compared to \(E_{b,\mathrm{X}^{-}}\) can be related to (i) the enhancement of \(E_{b,\mathrm{HX}}\) due to the rather large \(\mu_{2}/\mu_{1}\approx 6\dots 10\) for the estimated parameters and (ii) the large \(B^{*}\), where, as mentioned above, the trion-to-exciton binding energy ratio turns out to be quite large, Eq. (26). In this theoretical paper we abstain from further analysis of the experimental data and a more detailed comparison of the calculations with experiment. The main reason is that the dispersion in the \(cb+2\) band is quite complicated [26; 28] and contains, in addition to the simplified Eq. (2), anisotropic terms. Also, experimental data on the high-lying exciton binding energy are not available to the best of our knowledge. Still, our theoretical results in combination with the experimental data [27] indicate the importance of the non-parabolic terms in the high-lying electron dispersion for the formation of the three-particle Coulomb complexes.

## V Conclusion and outlook

To conclude, we have developed the theory of high-lying excitons and trions in two-dimensional semiconductors. Such Coulomb-bound complexes involve one electron in the excited conduction band with a negative effective mass and a non-parabolic dispersion. We have developed (i) a variational method for calculating such complexes with simple and physically justified trial functions and (ii) an efficient and accurate numerical approach based on the decomposition of the wavefunctions over Gaussians. We have demonstrated the importance of the band non-parabolicity for the formation of the high-lying excitons and trions. In particular, for the negative reduced mass the presence of \(k^{4}\) terms in the high-lying electron dispersion makes the exciton bound and strongly enhances the range of stability of the negatively charged high-lying trions. Our estimates show that the high-lying trion binding energies can be in the range of \(10\%\dots 30\%\) of the high-lying exciton binding energy, i.e., on the order of several tens of meV for transition-metal dichalcogenide monolayers. The developed theory is not limited to the monolayer transition-metal dichalcogenides. In gapped bilayer graphene the dispersion contains the Mexican-hat features both in the conduction and valence bands [46], and tunable excitons are observed in this material [55]. In several other material platforms, including few-layer Ga- and In-monoselenides and monosulfides [56], the dispersion features a ring of extrema in the valence band. A similar situation is probably realized in two-dimensional hexagonal BN [57; 58]. In this respect, we can also mention bulk GaP with the camel's-back dispersion [59; 60]. Importantly, dispersion engineering, e.g., in moiré lattices, can be used to realize a non-parabolic dispersion with an extremum ring. In this regard, the developed theoretical approaches will be helpful for studying the fundamental quasiparticles, excitons and trions, in a wide range of material systems.

###### Acknowledgements.

The authors are grateful to Kai-Qiang Lin, Jonas D. Ziegler, Alexey Chernikov, and Boris I. Shklovskii for valuable discussions.
This work was supported by the State Task of Ioffe Institute.
2310.16884
Atomic Hydrogen Shows its True Colours: Correlations between HI and Galaxy Colour in Simulations
Intensity mapping experiments are beginning to measure the spatial distribution of neutral atomic hydrogen (HI) to constrain cosmological parameters and the large-scale distribution of matter. However, models of the behaviour of HI as a tracer of matter are complicated by galaxy evolution. In this work, we examine the clustering of HI in relation to galaxy colour, stellar mass, and HI mass in IllustrisTNG at $z$ = 0, 0.5, and 1. We compare the HI-red and HI-blue galaxy cross-power spectra, finding that HI-red has an amplitude 1.5 times higher than HI-blue at large scales. The cross-power spectra intersect at $\approx 3$ Mpc in real space and $\approx 10$ Mpc in redshift space, consistent with $z \approx 0$ observations. We show that HI clustering increases with galaxy HI mass and depends weakly on detection limits in the range $M_{\mathrm{HI}} \leq 10^8 M_\odot$. In terms of $M_\star$, we find blue galaxies in the greatest stellar mass bin cluster more than blue galaxies in other stellar mass bins. Red galaxies in the greatest stellar mass bin, however, cluster the weakest amongst red galaxies. These trends arise due to central-satellite compositions. Centrals correlate less with HI for increasing stellar mass, whereas satellites correlate more, irrespective of colour. Despite the clustering relationships with stellar mass, we find that the cross-power spectra are largely insensitive to detection limits in HI and galaxy surveys. Counter-intuitively, all auto and cross-power spectra for red and blue galaxies and HI decrease with time at all scales in IllustrisTNG. We demonstrate that processes associated with quenching contribute to this trend. The complex interplay between HI and galaxies underscores the importance of understanding baryonic effects when interpreting the large-scale clustering of HI, blue, and red galaxies at $z \leq 1$.
Calvin Osinga, Benedikt Diemer, Francisco Villaescusa-Navarro, Elena D'Onghia, Peter Timbie
2023-10-25T18:00:02Z
http://arxiv.org/abs/2310.16884v2
# Atomic Hydrogen Shows its True Colours: Correlations between Hi and Galaxy Colour in Simulations

###### Abstract

Intensity mapping experiments are beginning to measure the spatial distribution of neutral atomic hydrogen (H i) to constrain cosmological parameters and the large-scale distribution of matter. However, models of the behaviour of H i as a tracer of matter are complicated by galaxy evolution. In this work, we examine the clustering of H i in relation to galaxy colour, stellar mass, and H i mass in IllustrisTNG at \(z=0,0.5\), and 1. We compare cross-power spectra of H i and red galaxies (H i \(\times\) Red) and H i and blue galaxies (H i \(\times\) Blue), finding that the H i \(\times\) Red power spectrum has an amplitude 1.5 times higher than H i \(\times\) Blue at large scales. The cross-power spectra intersect at \(\approx 3\) Mpc in real space and \(\approx 10\) Mpc in redshift space, consistent with \(z\approx 0\) observations. We show that H i clustering increases with galaxy H i mass and depends weakly on detection limits in the range \(M_{\rm HI}\leq 10^{8}M_{\odot}\). We also find that blue galaxies in the greatest stellar mass bin cluster more than blue galaxies in other stellar mass bins; red galaxies in the greatest stellar mass bin, however, cluster the weakest amongst red galaxies. These trends arise due to central-satellite compositions. Centrals correlate less with H i for increasing stellar mass, whereas satellites correlate more, irrespective of colour. Despite the clustering relationships with stellar mass, we find that the cross-power spectra are largely insensitive to detection limits in H i and galaxy surveys. Counter-intuitively, all auto and cross-power spectra for red and blue galaxies and H i decrease with time at all scales in IllustrisTNG. We demonstrate that processes associated with quenching contribute to this trend. The complex interplay between H i and galaxies underscores the importance of understanding baryonic effects when interpreting the large-scale clustering of H i, blue, and red galaxies at \(z\leq 1\).

keywords: cosmology: large-scale structure of the universe - galaxies: haloes - galaxies: formation

## 1 Introduction

In the current paradigm of structure formation, gravity transforms small primordial density fluctuations in our nearly homogeneous Universe into the cosmic web. Within this web, overdense regions of dark matter known as haloes emerge via gravitational instabilities. Over time, baryons sink into the gravitational potential wells of haloes and form galaxies (White & Rees, 1978). Consequently, the clustering of galaxies is shaped by the cosmology responsible for the large-scale distribution of haloes (Jenkins et al., 1998; Eisenstein et al., 2005; Reddick et al., 2014) and by how galaxies occupy haloes (Zheng et al., 2007). Studies of the clustering in galaxy surveys therefore provide insight into the behaviour of dark matter and dark energy and the influence of dark matter haloes on galaxy properties. Galaxy surveys measure the large-scale distribution of galaxies via their starlight and thus focus on the stellar properties of galaxies. However, the gas properties of a galaxy are also critical in galaxy evolution, as they are tightly linked to a galaxy's star-formation rate (SFR, Schmidt, 1959; Kennicutt, 1998).
Fortunately, future experiments that map the distribution of neutral atomic hydrogen (H i) can probe the link between the gas properties of galaxies and their host halo, called the H i-galaxy-halo connection (Guo et al., 2017; Li et al., 2022). Moreover, these maps can also constrain cosmological parameters as a competitive alternative to galaxy surveys (Cosmic Visions 21 cm Collaboration et al., 2018). By sacrificing angular resolution, 21cm intensity mapping experiments can improve the signal-to-noise ratio and, in principle, observe the structure of the universe quickly and efficiently (Leo et al., 2019). Projects intended for this purpose, such as CHIME (Bandura et al., 2014), Tianlai (Wu et al., 2021), HIRAX (Newburgh et al., 2016), and SKA (Dewdney et al., 2009), are largely still in their proof-of-concept phase. Recently, Paul et al. (2023) claimed to have successfully detected the H i signal in auto-correlation. The spatial distribution of H i is shaped by the confluence of matter's large-scale structure and how H i occupies haloes. Consequently, a large quantity of work has been dedicated to understanding the H i-galaxy-halo connection via studying galaxy clustering as a function of different properties (e.g., Zehavi et al., 2005; Li et al., 2012; Anderson et al., 2018; Qin et al., 2022). One such result is the tendency for "red" galaxies to cluster more strongly than "blue" galaxies (Zehavi et al., 2011; Skibba et al., 2015; Coil et al., 2017). Colour-dependent clustering arises from the propensity for red galaxies to occupy older and more massive haloes, which also tend to be the most clustered via "assembly bias" (Gao et al., 2005). A galaxy's colour reflects its star-formation status; galaxies with low SFRs cannot maintain a substantial population of short-lived blue stars, yielding a redder colour on average (Tinsley, 1980; Madau et al., 1996). The cessation of star-formation (called quenching) follows from the depletion of cold molecular gas reservoirs (Jimenez-Donaire et al., 2023), and is thus particularly relevant to understanding the spatial distribution of H i. A galaxy's H i abundance is correlated with its SFR, so the mechanisms that eventually transform a galaxy from blue to red usually also suppress its H i content (Bigiel et al., 2008; Wang et al., 2020). Studying the spatial relationship between H i and galaxy colour furthers our understanding of the processes responsible for quenching. Past works have measured cross-correlations at nearby redshifts (\(0<z<1\)) between H i and galaxies separated into blue and red populations (Chang et al., 2010; Masui et al., 2013; Papastergis et al., 2013; Anderson et al., 2018; Wolz et al., 2021; Cunningham et al., 2023; Jiang et al., 2023). They find that the abundance of H i is suppressed in regions within anywhere from 2 Mpc to 9 Mpc of a red galaxy. Physically interpreting these results as contributions from evolving galaxy populations is a formidable analytical task; simulations offer a way to address this challenge. Moreover, the current generation of simulations now produce low-redshift galaxy populations with realistic colour and H i properties (Nelson et al., 2018; Diemer et al., 2019; Dave et al., 2020), offering the opportunity to illuminate the relationship between galaxy colour and the spatial distribution of H i. In this work, we analyse the H i auto power spectrum and H i-galaxy cross-power spectra in real and redshift space using the hydrodynamical simulation IllustrisTNG.
For each power spectrum, we characterize its dependence on scale, colour, H i mass, and stellar mass at \(z=0\), \(z=0.5\), and \(z=1\). We find that the H i distribution is colour-dependent out to surprisingly large scales. In an upcoming paper (Osinga et al. in prep.), we will analyse the implications of the large-scale colour-dependence for the cosmological interpretation of H i intensity maps at low redshift. In this work, we focus on the insight that these cross-correlations provide on galaxy formation and evolution. The paper is structured as follows. In Section 2, we describe the simulated dataset used in our analysis and various mathematical definitions. In Section 3, we study the clustering of H i and galaxy colour cross-power spectra and their relationships with redshift, stellar mass, and H i mass. We then analyse the ramifications of these results in Section 4 before concluding in Section 5. For brevity, some additional figures are referenced but are not included. These are provided on the author's website1.

Footnote 1: www.calvinosinga.com

## 2 Methods

### Simulation data

We base this work on data from the IllustrisTNG suite of cosmological magneto-hydrodynamics simulations (Nelson et al., 2018; Pillepich et al., 2018; Springel et al., 2018; Naiman et al., 2018; Marinacci et al., 2018). The suite provides simulations at three resolutions in three different volumes, with side lengths \(35\ h^{-1}\) cMpc, \(75\ h^{-1}\) cMpc, and \(205\ h^{-1}\) cMpc. The simulations adopt the Planck Collaboration et al. (2016) cosmology with \(\Omega_{\rm m}=0.3089\), \(\Omega_{\rm b}=0.0486\), \(h=0.6774\), and \(\sigma_{8}=0.8159\). The boxes are evolved using the moving-mesh code AREPO (Springel, 2010), which calculates gravity with a tree-PM method and magneto-hydrodynamics with a Godunov scheme using a Voronoi mesh. IllustrisTNG applies sub-grid models for unresolved processes such as star formation, stellar winds, gas cooling, supernovae, and active galactic nuclei (AGN, Vogelsberger et al., 2013; Weinberger et al., 2018). The parameters of these models are tuned using a small subset of observations to produce a realistic low-redshift galaxy population (Pillepich et al., 2018). A Friends-of-Friends (FoF) algorithm (Davis et al., 1985) groups particles into dark matter haloes. Overdense substructures within the haloes, called "subhaloes", are identified using the Subfind algorithm (Springel et al., 2001). We primarily use the highest-resolution \(75\ h^{-1}\) cMpc box, called TNG100, at redshifts \(z=0\), \(z=0.5\), and \(z=1\). The largest box, TNG300, offers power spectra at larger scales, but unresolved galaxies contribute a non-negligible proportion of the cosmic H i (see Villaescusa-Navarro et al. (2018) and Appendix B), and the smallest box does not reach the scales of interest for 21cm intensity mapping. We remove galaxies with \(M_{\star}<2\times 10^{8}\) M\({}_{\odot}\), as the data from galaxies with fewer than 100 stellar particles are unreliable. We then separate the galaxies into blue and red using the difference in magnitude in the g- and r-bands, as defined by the Sloan Digital Sky Survey (SDSS, Stoughton et al., 2002). The resulting colour-\(M_{\star}\) plane is shown in Fig. 1. For each redshift, we select the \(g\)-\(r\) value that corresponds to the bin with the minimum count between the two peaks in the colour distribution. Galaxies falling on or above the line are classified as red and all others as blue.
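A minimal sketch of this colour cut is given below; the helper is hypothetical (the exact binning behind Fig. 1 is not reproduced here) but follows the stated recipe of taking the histogram minimum between the two peaks of the bimodal \(g\)-\(r\) distribution.

```python
import numpy as np

def colour_split(g_minus_r, n_bins=60):
    """Split galaxies into blue/red at the histogram minimum between the two
    peaks of the bimodal g-r distribution (hypothetical helper; the exact
    binning used in the paper may differ)."""
    counts, edges = np.histogram(g_minus_r, bins=n_bins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    first = np.argmax(counts)                         # strongest peak
    far = np.abs(np.arange(n_bins) - first) > n_bins // 10
    second = np.argmax(np.where(far, counts, -1))     # second, well-separated peak
    lo, hi = sorted((first, second))
    cut = centres[lo + np.argmin(counts[lo:hi + 1])]
    return cut, g_minus_r < cut                       # True for blue (below the cut)

# toy bimodal example standing in for the TNG100 colours
rng = np.random.default_rng(0)
colours = np.concatenate([rng.normal(0.35, 0.10, 20000),   # blue cloud
                          rng.normal(0.75, 0.05, 8000)])   # red sequence
cut, is_blue = colour_split(colours)
print(cut, is_blue.mean())
```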
We find that the threshold evolves from \(g\)-\(r=0.60\) at \(z=0\) to \(g\)-\(r=0.55\) at \(z=0.5\) and \(g\)-\(r=0.50\) at \(z=1\). Nelson et al. (2018) find that IllustrisTNG's \(g\)-\(r\) distribution roughly matches observations in our redshift range. However, at high redshift (\(z>1\)) IllustrisTNG's colour distribution begins to diverge from observations. IllustrisTNG lacks dusty and star-forming red galaxies (Donnari et al., 2019), an observed phenomenon that is also missing in other simulations such as MUFASA (Dave et al., 2017). Only recently has the follow-up simulation to MUFASA, SIMBA (Dave et al., 2019), produced this population (Akins et al., 2022). We use redshifts \(z\leq 1.0\), where the star-forming red population is negligible in IllustrisTNG (online figures), such that nearly all red galaxies are quenched and blue galaxies are star-forming. We test the sensitivity of the blue and red galaxy clustering to different \(g\)-\(r\) thresholds and the effect of dust reddening according to the model from Nelson et al. (2018). The clustering of both blue and red galaxies is not appreciably impacted by any reasonably chosen colour threshold or by dust reddening (see Appendix A).

### Modelling atomic and molecular hydrogen

We adopt the same notation as Diemer et al. (2018) to distinguish between the states of hydrogen, involving a consistent set of subscripts for any physical quantity associated with the gas. If \(\Sigma\) is the surface density, then \(\Sigma_{\rm gas}\) is adopted for all gas, \(\Sigma_{\rm H}\) for all hydrogen, \(\Sigma_{\rm HI+H_{2}}\) for neutral hydrogen, \(\Sigma_{\rm HI}\) for atomic hydrogen, and \(\Sigma_{\rm H_{2}}\) for molecular hydrogen. Using this notation, the molecular fraction is defined as \[f_{\rm mol}=\frac{M_{\rm H_{2}}}{M_{\rm HI+H_{2}}}\,. \tag{1}\] Equation 1 provides just one of the necessary ingredients to compute \(M_{\rm HI}\) for each gas cell. We also require the fraction of gas that is hydrogen,
Thus \(f_{\rm mol}\) models require the gas metallicity, gas surface density, and the intensity of incident UV radiation in the Lyman-Werner band as input (Draine, 2011). We must post-process the simulation to estimate the incident Lyman-Werner UV flux on a particular gas cell because IllustrisTNG does not include radiative transfer. The UV flux on any cell originates from two sources: the cosmological UV background (Faucher-Giguere et al., 2009) and from nearby star-forming regions. The incident UV flux on gas in galaxies is dominated by nearby stars. The gas in filaments, however, receives significant UV flux from distant sources. However, radiative transfer across the entirety of the simulation volume is impractical. We employ models with two approaches to resolve the filamentary gas issue: Villaescusa-Navarro et al. (2018), which neglects stellar UV sources, and Diemer et al. (2018), which neglects gas in filaments. We will briefly describe the relevant portions of each of the two approaches; for more details see the discussed papers. Villaescusa-Navarro et al. (2018, hereafter VN18) treats star-forming and non-star-forming gas cells separately. VN18 assigns \(f_{\rm mol}=0\) in non-star-forming gas, neglecting the trace amounts of H\({}_{2}\). For star-forming gas, \(f_{\rm mol}\) is calculated using the Krumholz, McKee, & Tumlinson (KMT) model (Krumholz et al., 2008, 2009). KMT estimates \(f_{\rm mol}\) when given properties characterizing the cloud's size, metallicity, and the intensity of the photo-dissociating UV radiation in the Lyman-Werner band (Krumholz et al., 2008). VN18 neglects photo-dissociating UV radiation from local star formation, only incorporating background UV estimates from IllustrisTNG itself (Faucher-Giguere et al., 2009). Diemer et al. (2018, hereafter D18) utilize five models of three different types to compute \(f_{\rm mol}\): models based on observed correlations (Leroy et al., 2008), calibrations with simulations (Gnedin & Kravtsov, 2011; Gnedin & Draine, 2014), and analytical models (Krumholz, 2013; Sternberg et al., 2014). These models require the same three inputs as the model from VN18: the gas metallicity, density, and the incident UV intensity in the Lyman-Werner band. D18 use the SFR of nearby cells to approximate incident UV from local stellar sources, assuming an optically thin medium (see Gebek et al. (2023) for comparison to full radiative transfer prescriptions). The models used by D18 are tuned for 2D surface densities, not the 3D densities from the simulation. D18 convert from 3D to 2D in two ways: a cell-by-cell method using the Jean's length and by projecting the galaxy in a face-on orientation onto a 2D grid of pixels (see their section 2.3). Each of the five \(f_{\rm mol}\) models is then applied to the 2D quantities, except Leroy et al. (2008) which yields unphysical results when applied cell-by-cell. In total, this procedure results in nine H i distributions from D18. ### Power spectra We compute the power spectra of the overdensity \(\delta(\mathbf{x})=\rho(\mathbf{x})/\overline{\rho}-1\) with \[\langle\tilde{\delta_{i}}(\mathbf{k})\tilde{\delta_{j}}(\mathbf{k^{\prime}})\rangle=(2 \pi)^{3}P_{i\times j}(k)\delta_{D}^{3}(\mathbf{k}-\mathbf{k^{\prime}})\,. \tag{3}\] \(P_{i\times j}(k)\) is the power spectrum and \(\mathbf{k}\) is the wavenumber, with bold denoting a vector. \(\delta_{D}\) is the Dirac delta function. \(\tilde{\delta_{i}}\) represents the Fourier transform of the overdensity from position-space to \(k\)-space. 
If \(i=j\) in the above equation, \(P_{i\times j}(k)\) is called an auto power spectrum, which will be denoted with a single population as \(P_{i}(k)\). Otherwise, it is called a cross-power spectrum.

Figure 1: Distribution of galaxy colours in TNG100 at \(z=0\) (top), \(z=0.5\) (centre), and \(z=1\) (bottom). 2D histograms (right) show the colour-\(M_{\star}\) distribution. 1D colour distributions (left) sum along each \(g\)-\(r\) bin in the colour-\(M_{\star}\) plane, normalised by the number of galaxies at that redshift, \(N_{\rm tot}\). The red dotted line represents the bin with the minimum galaxy count between the peaks of the bimodal distribution, chosen to be the threshold that separates galaxies into blue (below line) and red (above line) subpopulations. The blue cloud is distributed over a wide range of \(g\)-\(r\) values and reaches a maximum at \(M_{\star}\approx 10^{11}M_{\odot}\). The red sequence has a tight \(g\)-\(r\) distribution with a larger spread in \(M_{\star}\).

The halo occupation distribution (Peacock & Smith, 2000; Berlind & Weinberg, 2002; Kravtsov et al., 2004; Tinker et al., 2005; Zheng et al., 2007; Hadzhiyska et al., 2020) provides a useful analytical framework for understanding power spectra. We can represent the power spectrum as a sum of contributions of inter- and intra-halo galaxy pairs, \[P(k)=P^{\rm 2h}(k)+P^{\rm 1h}(k)+P^{\rm SN}\,. \tag{4}\] The two-halo term reflects large-scale structure and the one-halo term structure within haloes. The shot noise term is a constant determined by the size of the sample used to measure the clustering. In the following sections, we will often refer to this framework to guide our analysis of the power spectra. We consider power spectra in both real and redshift space. Matter is placed in redshift space by displacing real-space positions along an arbitrarily chosen line of sight using their velocities, \[\mathbf{s}=\mathbf{x}+\frac{1+z}{H(z)}\mathbf{v}_{\parallel}(\mathbf{x})\,, \tag{5}\] where \(H(z)\) is the Hubble parameter, \(\mathbf{x}\) is the position in real space, and \(\mathbf{v}_{\parallel}\) is the velocity parallel to the chosen line of sight. The resulting real- and redshift-space matter distributions are placed on an \(800^{3}\) grid with bin lengths of \(\approx 95~{}h^{-1}\) ckpc, binned using a Cloud-In-Cell (CIC) interpolation scheme. The grids are then used as input for the power spectrum calculation routine provided by the Python library Pylians (Villaescusa-Navarro, 2018). Our results are converged with grid resolution at all relevant scales (online figures). We can quantify the "faithfulness" of a matter tracer using the bias \(b_{ij}(k)=\delta_{i}(\mathbf{k})/\delta_{j}(\mathbf{k})\), where \(i\) and \(j\) represent two different populations. For this paper, we are exclusively interested in the bias of matter tracers with respect to the full mass distribution, so we always take \(j\) to mean "all matter" and express the bias as \(b_{i}\). We calculate the bias as \[P_{i}(k)=b_{i}^{2}(k)P_{\rm m}(k) \tag{6}\] where \(P_{\rm m}\) is the matter power spectrum, and \(i\) represents some chosen matter tracer such as H i or galaxies. To measure the strength of the relationship between two samples, we use the correlation coefficient \[r_{i-j}=\frac{P_{i\times j}(k)}{\sqrt{P_{i}(k)P_{j}(k)}}\,. \tag{7}\] \(r_{i-j}\) takes a value of zero for completely random distributions and approaches unity for entirely dependent samples.
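The following self-contained numpy sketch implements equations 3, 5, and 7 on gridded overdensity fields; it is a minimal estimator for illustration, not the Pylians implementation actually used here, and the binning and normalization conventions are illustrative.

```python
import numpy as np

def to_redshift_space(x_los, v_los, z, H_z, box_size):
    """Equation 5: displace line-of-sight positions by the peculiar velocity
    (with a periodic wrap); H_z in km/s/Mpc, v_los in km/s, x_los in Mpc."""
    return (x_los + (1.0 + z) / H_z * v_los) % box_size

def binned_power_spectra(delta1, delta2, box_size, n_bins=25):
    """Auto and cross power spectra (equation 3) and the correlation
    coefficient r (equation 7) for two overdensity grids of shape (n, n, n).
    box_size in cMpc/h gives k in h/cMpc."""
    n = delta1.shape[0]
    kf = 2.0 * np.pi / box_size                       # fundamental mode
    d1, d2 = np.fft.rfftn(delta1), np.fft.rfftn(delta2)
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf
    kmag = np.sqrt(kx[:, None, None]**2 + kx[None, :, None]**2
                   + kz[None, None, :]**2)
    norm = box_size**3 / n**6                         # FFT -> P(k) convention
    bins = np.linspace(kf, n * kf / 2, n_bins + 1)    # up to the Nyquist mode
    idx = np.digitize(kmag.ravel(), bins) - 1
    ok = (idx >= 0) & (idx < n_bins)
    counts = np.maximum(np.bincount(idx[ok], minlength=n_bins), 1)

    def binavg(mode_power):                           # average P over each k-shell
        return np.bincount(idx[ok], weights=mode_power.ravel()[ok],
                           minlength=n_bins) / counts

    p11 = binavg(np.abs(d1)**2) * norm
    p22 = binavg(np.abs(d2)**2) * norm
    p12 = binavg((d1 * np.conj(d2)).real) * norm
    k = 0.5 * (bins[1:] + bins[:-1])
    return k, p11, p22, p12, p12 / np.sqrt(p11 * p22)
```

For identical input fields the estimator returns \(r=1\) by construction; in practice one would first grid the H i mass and galaxy counts with a CIC assignment (and correct for the assignment window), which is what the Pylians routines handle internally.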
We distinguish the correlation coefficient from the position vector by denoting the vector \(\mathbf{r}\) using a bold letter and the length scale \(R\) with a capital.

## 3 Results

In this section, we characterize the clustering of H i, blue, and red galaxies. In Section 3.1, we show that the H i distribution is not particularly sensitive to the post-processing of the simulations. We analyse how H i clusters with different galaxy samples separated by colour in Section 3.2 and examine their scale- and time-dependence in Section 3.3. We study how the cross-power spectra change as a function of \(M_{\star}\) in Section 3.4 and \(M_{\rm HI}\) in Section 3.5. In Appendix B, we show how simulation resolution affects these results.

### H i auto power spectra

Before we proceed, we must ascertain whether our results are sensitive to the post-processing models used to calculate H i distributions in IllustrisTNG. We split the H i models into three groups: _Galaxy Centres_, _Particles in Galaxies_, and _All Particles_. For models in _Galaxy Centres_, we assign all the H i in a galaxy to its centre for all nine models from D18. The four cell-by-cell models from D18 are also included in the next group, _Particles in Galaxies_, where we instead assign H i to the position of each individual host cell. Gas cells outside of galaxies are excluded in both cases (Section 2.2). For the final group, _All Particles_, we apply the one model from VN18 to every gas cell in the simulation. In this section, we compare the H i clustering for the H i models in and amongst these three groups. We show slices through the real-space H i distributions in the top row of Fig. 2 to provide insight into the H i auto power spectra in the top panel of Fig. 3. The filaments present in _All Particles_ (top left in Fig. 2) but missing in _Galaxy Centres_ and _Particles in Galaxies_ (top middle and right, respectively) reflect the trade-off described in Section 2.2 - namely, the neglect of local UV sources in VN18 and of filaments in D18. The filaments that connect galaxies supply some small mass-weighted power to all scales in between them, boosting the H i power spectrum from _All Particles_ slightly in Fig. 3. On small scales, _Galaxy Centres_ diverges from _Particles in Galaxies_, as representing a galaxy's H i as a point removes any clustering within galaxies. For all power spectra presented in this work, we exclude wavenumbers above \(k\approx 20~{}h~{}\mathrm{cMpc}^{-1}\) due to aliasing issues near the Nyquist frequency (\(k_{\rm Nyquist}=33~{}h~{}\mathrm{cMpc}^{-1}\)). Measurements of H i, however, take place in redshift space. Line-of-sight velocities distort the real-space power spectra primarily in two ways, called redshift-space distortions (RSD): the Kaiser effect (Kaiser, 1987) and fingers-of-God (FoG, Jackson, 1972). The Kaiser effect enhances each redshift-space auto power spectrum (bottom panel of Fig. 3) on large scales, as groups of galaxies moving coherently appear closer in redshift space. The second RSD, FoG, manifests due to the velocity dispersions within a halo, smearing the distribution on small scales into columns ("fingers") along the line of sight (bottom left and centre panels of Fig. 2). FoG suppress the redshift-space power spectrum compared to the real-space counterparts for each H i group (bottom of Fig. 3), although the magnitude of the suppression varies between groups. FoG manifest more weakly in _Galaxy Centres_ than in _Particles in Galaxies_ and _All Particles_.
The strength of FoG in each H i group depends on its velocity dispersion (Juszkiewicz et al., 2000). The velocity dispersion for _Particles in Galaxies_ and _All Particles_ receives contributions from the velocity dispersion of galaxies themselves _and_ their constituent particles (Zhang et al., 2020). By collapsing H i to the centre of each galaxy, _Galaxy Centres_ removes contributions from the intrinsic velocity dispersion of the particles within a galaxy, softening the net FoG in the _Galaxy Centres_ case. As a result, the redshift-space H i auto power for _Galaxy Centres_ breaks away from the other two groups at \(k\gtrsim 0.5~{}h~{}\mathrm{cMpc}^{-1}\), before approaching a constant value due to shot noise at \(k\sim 2~{}h~{}\mathrm{cMpc}^{-1}\). In the following sections, we represent H i power spectra as shaded areas encompassing the results calculated from each of the H i models from all groups and treat the contours as systematic uncertainties due to H i post-processing. However, we exclude _Galaxy Centres_ from redshift-space power spectra since their FoG are artificially suppressed.

### H i-Galaxy cross-power spectra

Previous works have established the environmental dependence of a galaxy's gas abundance, finding that cluster members have significantly lower gas fractions (Giovanelli & Haynes, 1985; Solanes et al., 2002; Brown et al., 2021). We can measure the effect of this environmental dependence on larger scales by computing cross-power spectra between H i and blue galaxies (H i \(\times\) Blue) and H i and red galaxies (H i \(\times\) Red). In Section 3.2.1 we focus on their scale- and colour-dependence in real space. In Section 3.2.2, we study the impact of RSDs on those relationships.

#### 3.2.1 Real space

The top row of Fig. 4 shows the distribution of blue and red galaxies overlaid on H i in real space. Red galaxies (top left panel) are concentrated in the densest regions while blue galaxies (top right) are broadly distributed. Massive, older, and more clustered haloes tend to host red galaxies, leading to colour-dependent clustering and a larger bias (equation 6) with respect to matter for red galaxies than for blue galaxies (Gao et al., 2005; Zehavi et al., 2005; Wechsler et al., 2006; Springel et al., 2018). The intrinsic clustering strength of red galaxies manifests in the cross-power spectra shown in Fig. 5 via their larger bias. Mathematically, we can express the cross-power spectra as \[P_{\rm HI\times Colour}(k)=b_{\rm HI}(k)b_{\rm Colour}(k)r_{\rm HI-Colour}(k)P_{\rm m}(k) \tag{8}\] to describe how they relate to the bias \(b_{\rm Colour}\), the correlation coefficient \(r_{\rm HI-Colour}\), and the matter power spectrum \(P_{\rm m}\). Equation 8 is useful for understanding the galaxy properties responsible for the relative strengths of H i \(\times\) Blue and H i \(\times\) Red: their inherent clustering strength, represented by their bias \(b_{\rm Colour}\), and their spatial relationship with H i, represented by the correlation coefficient \(r_{\rm HI-Colour}\) (equation 7). H i \(\times\) Red is greater than H i \(\times\) Blue on large scales (Fig. 5) because red galaxies cluster more strongly (\(b_{\rm Red}>b_{\rm Blue}\)). However, this trend is counteracted by the weaker spatial connection between red galaxies and H i (\(r_{\rm HI-Red}<r_{\rm HI-Blue}\), bottom row).
The disparity between the correlation coefficients is small on large scales (\(r_{\rm HI-Red}\approx r_{\rm HI-Blue}\approx 1\)) because the clustering of H i is colour-independent there, in the sense that its distribution is governed mostly by linear growth rather than small-scale effects. However, H i tends to be suppressed in the massive haloes hosting red galaxies, reducing \(r_{\rm HI-Red}\) faster than \(r_{\rm HI-Blue}\) when approaching small scales. The disparity between the correlation coefficients grows such that on small scales the H i \(\times\) Blue cross-power spectrum is greater than H i \(\times\) Red.

Figure 2: Slices through the IllustrisTNG H i mass distribution at \(z=0\) in real (top) and redshift space (bottom). Each slice sums 20% of the simulation volume along the \(z\)-axis. The redshift-space distributions are created by displacing particles from their positions in real space using their velocities along the line of sight, in this case the \(y\)-axis. The left column shows the model of VN18, which calculates the H i in all gas cells but does not account for local UV sources, whereas the remaining distributions account for local UV sources but only include gas cells within galaxies. The middle and right columns display H i distributions calculated by D18 with the Gnedin & Draine (2014) model, using the positions of individual gas cells and the centre of the host galaxy, respectively. The sizes of the points in the right column scale logarithmically with galaxy H i mass. In the bottom row, the fingers-of-God effect can be observed in all three cases. In the _Galaxy Centres_ case, however, the fingers-of-God manifest more weakly because it removes the contribution of the velocities of the cells within a galaxy along the line of sight.

We describe the clustering of H i on these scales as "colour-dependent" since the spatial relationship between H i and galaxies governs the relative strengths of the cross-power spectra more than the galaxy population's inherent clustering. In terms of equation 8, we can call those scales colour-independent where H i \(\times\) Red is greater than H i \(\times\) Blue, as \(P_{\rm HI\times Red}/P_{\rm HI\times Blue}>1\) implies \(b_{\rm Red}/b_{\rm Blue}>r_{\rm HI-Blue}/r_{\rm HI-Red}\), and therefore the intrinsic clustering of the galaxy population dictates the relative strengths of the cross-power spectra on large scales. On the other hand, colour-dependent scales occur where H i \(\times\) Blue is greater than H i \(\times\) Red, as \(P_{\rm HI\times Red}/P_{\rm HI\times Blue}<1\) implies \(b_{\rm Red}/b_{\rm Blue}<r_{\rm HI-Blue}/r_{\rm HI-Red}\), showing that the galaxy population's spatial relationship with H i is largely responsible for their relative strengths. We roughly interpret the scale at which the intersection between H i \(\times\) Red and H i \(\times\) Blue occurs (\(k_{\rm eq}\)) as the transition between the colour-independent and -dependent regimes, but emphasize that \(k_{\rm eq}\) is not a sharp transition. The intersection is easily identifiable when the ratio of H i \(\times\) Red over H i \(\times\) Blue (top right panel of Fig. 5) falls below unity at \(k_{\rm eq}\approx 3.25~{}h~{}\mathrm{cMpc}^{-1}\) (\(R_{\rm eq}=2\pi/k_{\rm eq}\approx 2.8\) Mpc).
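For concreteness, the conversion between the crossover wavenumber and the quoted physical scale, using the \(h=0.6774\) of the adopted cosmology, reads

\[R_{\rm eq}=\frac{2\pi}{k_{\rm eq}}=\frac{2\pi}{3.25}\ h^{-1}\,{\rm cMpc}\approx 1.93\ h^{-1}\,{\rm cMpc}=\frac{1.93}{0.6774}\ {\rm cMpc}\approx 2.85\ {\rm cMpc},\]

which at \(z=0\) coincides with the physical scale, matching the \(\approx 2.8\) Mpc quoted above.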
The scale at which the H i distribution becomes significantly more colour-dependent reflects the findings of previous work on the concept of "galaxy conformity". Galaxies within \(\sim 4\) Mpc of a larger red galaxy tend to also be red (Kauffmann et al., 2013), indicating that a galaxy's large-scale environment impacts its star formation (Hearin et al., 2016; Ayromlou et al., 2023). Given the link between H i and star formation (Bigiel et al., 2008), it is reasonable that the H i content of galaxies would also be suppressed within 4 Mpc of a large red galaxy. This crossover also matches the \(z\approx 0\) cross-correlation between SDSS galaxies (York et al., 2000) and ALFALFA H i maps (Giovanelli et al., 2005; Haynes et al., 2011) from Papastergis et al. (2013), where they find that H i abundance is reduced within 3 Mpc (\(k\sim 3.14\ h\) cMpc\({}^{-1}\)) of a red galaxy. The agreement between IllustrisTNG and observations is encouraging.

#### 3.2.2 Redshift space

Fig. 4 shows slices through the redshift-space distributions of red and blue galaxies overlaid on H i (bottom row), where the FoG in red galaxies stretch over longer distances than in blue galaxies (Li et al., 2006). The massive haloes that host red galaxies possess deep potential wells, causing a large velocity dispersion in the member galaxies that can stretch FoG for nearly half the length of the box. This colour-dependency in the strength of FoG alters the comparison between H i \(\times\) Blue and H i \(\times\) Red in the redshift-space cross-power spectra, which are provided in Fig. 5 (top middle panel). We quantify the strength of RSDs on the power spectra with the ratio of real- and redshift-space power spectra (bottom right panel of Fig. 5). At large scales, the redshift-space power spectra are slightly greater than their real-space counterparts until \(k\sim 0.4\ h\) cMpc\({}^{-1}\). The boost arises from the Kaiser effect (Section 3.1) and manifests similarly in both H i \(\times\) Blue and H i \(\times\) Red.

Figure 4: Slices of the red (left) and blue (right) galaxy distributions overlaid with the H i distribution from VN18. The slices show TNG100 at \(z=0\), summing 20% of the volume along the \(z\)-axis, centred on the middle of the box, in both real (top) and redshift (bottom) space. Matter was placed in redshift space by projecting its velocities along the line of sight, in this case the \(y\)-axis. Red galaxies are clearly more clustered, grouping heavily around the largest haloes, whereas the blue galaxies better trace out the H i distribution across the box.

Figure 3: H i auto power spectra in real (top) and redshift space (bottom) at \(z=0\). The contours delineate the models from D18 that are applied to galaxies as a whole (light brown, nine models) and to the individual particles within galaxies (dark brown, four models). The crimson line represents the distribution from VN18. All distributions are similar in real space, although _All Particles_ receives a small boost by including filaments. In redshift space, _Galaxy Centres_ diverges from the other auto powers at \(k\sim 0.5\ h\) cMpc\({}^{-1}\). Representing galaxies as points removes any velocity contributions from individual gas cells, tempering the fingers-of-God effect. We represent subsequent H i power spectra using shaded areas encompassing each model.
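The captions above describe how the redshift-space distributions are built by displacing matter along the line of sight. A minimal sketch of that plane-parallel mapping is given below; the helper name is ours, it assumes comoving positions in Mpc/\(h\) and peculiar velocities in km/s, and simulation-specific unit conventions (e.g. IllustrisTNG's internal velocity factors) are glossed over.

```python
import numpy as np

def to_redshift_space(pos, vel, box_size, hubble, scale_factor=1.0, axis=1):
    """Displace comoving positions along the line of sight (here the
    y-axis, axis=1) by v_los / (a * H), then re-wrap into the periodic
    box. `hubble` is H(a) in km/s per (Mpc/h)."""
    s = pos.copy()
    s[:, axis] = (s[:, axis] + vel[:, axis] / (scale_factor * hubble)) % box_size
    return s
```

Coherent infall produces the large-scale Kaiser squashing, while virial motions inside haloes produce the elongated fingers-of-God visible in the bottom rows of Figs. 2 and 4.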
Although both H i \(\times\) Blue and H i \(\times\) Red will approach the Kaiser limit at sufficiently large scales, it is unclear if they will approach the limit similarly, and it is difficult to extrapolate from the small-\(k\) regime that probes the Kaiser effect. At small scales, FoG dominate the distortion of redshift-space power spectra, reducing H i \(\times\) Blue and H i \(\times\) Red significantly within \(k>0.4\ h\) cMpc\({}^{-1}\). FoG suppress H i \(\times\) Red more strongly than H i \(\times\) Blue, with the two diverging at \(k\sim 0.5\ h\) cMpc\({}^{-1}\). The stronger FoG in H i \(\times\) Red introduce a secondary colour-dependent effect in the redshift-space cross-power spectra. Consequently, the redshift-space colour ratio (top right panel of Fig. 5) curves downward much more rapidly at \(k=0.4\ h\) cMpc\({}^{-1}\) and reaches unity at a much larger scale in redshift space (\(R_{\rm eq}\approx 10.4\) Mpc) than in real space (\(R_{\rm eq}\approx 2.8\) Mpc). At \(k>1\ h\) cMpc\({}^{-1}\), random particle velocities emerge as noise in the redshift-space power spectra--this effect is small and does not alter any conclusions made in this paper.

Below the redshift-space cross-power spectra in Fig. 5, we showcase the \(z=0\) redshift-space correlation coefficients. Similarly to their real-space counterparts, H i \(\times\) Blue has a greater correlation coefficient than H i \(\times\) Red at all scales. However, both H i \(\times\) Red and H i \(\times\) Blue decrease much more sharply at \(k\sim 1\ h\) cMpc\({}^{-1}\) than in real space, with their redshift-space correlation coefficients reaching \(\sim 0.1\) on the smallest scales. We speculate that the spatial disconnect may arise from differences in velocity dispersions between H i and galaxies. H i possesses intrinsic velocity dispersion within galaxies (Zhang et al., 2020), whereas galaxies only experience pair-wise velocity dispersion. We reserve further analysis for the following paper (Osinga et al., in prep).

The power spectra shown in Fig. 5 agree with \(z\sim 0\) observations. Anderson et al. (2018) computed cross-correlations between 2dF galaxies (Colless et al., 2001) and Parkes H i maps (Staveley-Smith et al., 1996), shown as points in the top centre panel of Fig. 5. Our results match in the range \(0.3<k<1.5\ h\) cMpc\({}^{-1}\), with both H i \(\times\) Blue and H i \(\times\) Red within their statistical uncertainties. We both find that H i \(\times\) Red is greater on large scales but drops beneath H i \(\times\) Blue at smaller scales, intersecting within the \(0.8<k<1.5\ h\) cMpc\({}^{-1}\) range. Anderson et al. (2018) measure H i suppression around red galaxies to much larger scales than Papastergis et al. (2013) (\(\sim 10\) Mpc compared to \(\sim 3\) Mpc) because of RSDs. Papastergis et al. (2013) employ a projected correlation function that measures clustering perpendicular to the line of sight, removing RSD effects. At larger scales, \(k\sim 0.1-0.3\ h\) cMpc\({}^{-1}\), Anderson et al. (2018) find anti-correlations between H i and galaxies, which we exclude from Fig. 5 due to their large uncertainties.
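For reference, the large-scale boost discussed at the start of this subsection has a simple linear-theory expectation against which the measured redshift- to real-space ratio can be compared. The one-liner below encodes the standard Kaiser (1987) monopole factor; it is a textbook formula, not a fit performed in this work.

```python
def kaiser_boost(f_growth, b):
    """Linear-theory ratio of the redshift-space to real-space power
    spectrum monopole (Kaiser 1987): 1 + 2*beta/3 + beta**2/5,
    with beta = f/b (growth rate over tracer bias)."""
    beta = f_growth / b
    return 1.0 + 2.0 * beta / 3.0 + beta ** 2 / 5.0
```

Because \(\beta=f/b\) is smaller for the more strongly biased red galaxies, the two colours need not approach the Kaiser limit identically, which is the ambiguity noted above.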
Interestingly, other intensity mapping experiments find subdued (although still positive) H i clustering at similar scales even at other redshifts (Wolz et al., 2021; Paul et al., 2023); however, we find no evidence for such behaviour in IllustrisTNG.

Figure 5: Cross-power spectra (top) and correlation coefficients (bottom) for H i \(\times\) Red and H i \(\times\) Blue at \(z=0\) in real space (left, green) and redshift space (centre, yellow), shown as contours that enclose the H i models. The cross-power spectra are calculated with respect to the total H i distribution, not only the H i in galaxies of the respective colour. We compare the observational data from Anderson et al. (2018) to the redshift-space power spectra, displayed as points. For clarity, red points are offset in \(k\). The point at which the cross-power spectra intersect is more discernible in the ratio of H i \(\times\) Red over H i \(\times\) Blue (top right) for both real (green, \(k_{\rm eq}\approx 3.25\ h\) cMpc\({}^{-1}\)) and redshift space (yellow, \(k_{\rm eq}\approx 0.9\ h\) cMpc\({}^{-1}\)). We compare the ratio of redshift-space over real-space power spectra for H i \(\times\) Red and H i \(\times\) Blue (bottom right). At small scales, H i \(\times\) Red is suppressed more than H i \(\times\) Blue because FoG are stronger in red galaxies. This adds a secondary colour-dependency, which pushes the intersection between the cross-power spectra out to \(\sim 10.4\) Mpc in redshift space. The correlation coefficients confirm that H i is more tightly linked to blue galaxies than red at all scales in both real and redshift space.

### Redshift evolution

Assuming that gravity is the dominant influence on the time evolution of power spectra, we expect the clustering of matter to increase linearly on large scales with respect to the growth factor and non-linearly on small scales (Fry, 1996; Dodelson & Schmidt, 2020). However, the behaviour of H i \(\times\) Blue and H i \(\times\) Red in Fig. 6 deviates from those expectations. From \(z=1\) to \(z=0.5\), H i \(\times\) Blue and H i \(\times\) Red experience negligible changes, and from \(z=0.5\) to \(z=0\) their power spectra even _diminish_ with time. This discrepancy prompts us to investigate the various processes responsible for the time evolution of each cross-power spectrum. Since cross-power spectra conflate changes in the member distributions and their spatial relationship (Section 3.2), we analyse the auto power spectra of the member distributions first in Fig. 7 and then apply those insights to the H i-galaxy cross-power spectra.

Throughout this section, we examine three interconnected processes that shape the redshift evolution of power spectra: galaxy quenching, gas loss, and colour transitions. While these processes are certainly linked, it is important to clarify their precise meanings in our analysis. We employ the term "quenching" to describe galaxies leaving the star-formation main sequence (Daddi et al., 2007; Noeske et al., 2007; Donnari et al., 2019). "Gas loss" refers to processes that strictly affect H i clustering by reducing a galaxy's gas reservoirs, regardless of when the gas loss occurs relative to quenching. A "colour transition" denotes a galaxy's transition from blue to red along the \(g-r\) axis, again independent of quenching. Galaxy quenching, per these definitions, is largely not responsible for the redshift evolution of the power spectra, but it is closely associated with the other two processes that do contribute to this trend.
For the remainder of the paper, we refer to quenching, gas loss, and colour transitions with these definitions in mind.

The bottom row of Fig. 7 shows that both the blue and red galaxy auto power spectra decrease with time from \(z=1\) to \(z=0\), _despite_ the all-galaxies auto power spectrum (top left panel) continuing to grow. This discrepancy implies that the decreasing auto power spectra for both blue and red galaxies is an artefact of making a colour cut. Some fraction of galaxies transition from blue to red between redshifts. These transitioning galaxies tend to be the "reddest", oldest, and most clustered members of the blue population. As a result, the average clustering of the remaining blue galaxies decreases upon losing their most clustered component (bottom left panel). Simultaneously, transitioning galaxies join the red population as its _least_ clustered members, thereby reducing the average clustering of the red galaxies and decreasing their power spectrum (bottom right panel). The transitioning population at \(z\leq 1\) is large enough to offset structure growth, particularly for red galaxies between \(z=1\) and \(z=0.5\) and blue galaxies between \(z=0.5\) and \(z=0\), when their smaller populations increase sensitivity to transitioning galaxies (Fig. 1). Importantly, it should be noted that this is not a result of moving the \(g-r\) cut with time (Section 2); these trends are amplified if the colour cut remains constant.

The effect of colour transitions on the clustering evolution of H i and galaxies has not yet been properly understood. Previous observational studies on this subject focus on red galaxies (Zehavi et al., 2005; White et al., 2007; Guo et al., 2013), as it is challenging to obtain complete samples of blue galaxies at higher redshifts (Wang et al., 2021). These studies have found that red galaxy clustering grows at a slower rate than all-galaxy clustering over \(0\leq z\leq 1\), a phenomenon attributed to mergers and disruptions.

Figure 6: Redshift evolution of H i \(\times\) Blue (left) and H i \(\times\) Red (right) in real (top) and redshift (centre) space for \(z=0\), 0.5, and 1 (darkest to lightest colour). The bottom row shows the redshift evolution of the ratio of redshift-space over real-space power spectra, measuring the strength of RSDs. For clarity, we only plot the contour with all H i models for \(z=1\) and plot the others as a line representing the contour's centre--the width of the contours does not change significantly across redshift. In real and redshift space, both cross-power spectra either change little or _decrease_ with time. These trends seem to conflict with the picture of structure growth but occur due to colour transitions and gas loss (see text). For each colour, RSDs strengthen slightly with time, amplifying the loss of clustering at later times in redshift space as compared to real space.

Figure 7: Redshift evolution of the all-galaxy (top left), H i (top right), blue galaxy (bottom left), and red galaxy (bottom right) auto power spectra in real space divided by the \(z=0\) auto power spectra. Darker colours indicate results closer to \(z=0\). The H i auto powers are displayed using the centre of the contour containing all the H i models. The only auto power that consistently increases with time is the all-galaxy auto power spectrum. This implies that the decrease in the clustering of red and blue galaxies with time arises due to colour transitions.
However, our findings suggest that colour transitions are largely responsible for suppressing the clustering growth of blue and red galaxies. Blue galaxies merge at a slower rate than red galaxies (Lin et al., 2008; Darg et al., 2010), meaning that mergers and disruptions are unlikely to induce similar effects on the redshift evolution of the blue galaxy auto power spectrum (bottom left panel of Fig. 7). We will further discuss the contributions of mergers, disruptions, and colour transitions to the time evolution of the power spectra at different scales in Section 4.

In addition to the changes in the clustering of blue and red galaxies, the time evolution of H i \(\times\) Blue and H i \(\times\) Red is influenced by the behaviour of H i clustering. The upper right panel of Fig. 7 shows that the H i auto power spectrum increases between \(z=1\) and \(z=0.5\) before decreasing between \(z=0.5\) and \(z=0\). Given the connection between blue galaxies and H i, we speculate that the gas loss associated with colour transitions plays a significant role in the loss of structure from \(z=0.5\) to \(z=0\). Notably, the cosmic abundance of H i increases from \(z=1\) to \(z=0\) in IllustrisTNG (Villaescusa-Navarro et al., 2018; Diemer et al., 2019), which would seem to contradict our finding that the clustering decreases with time. This may be because of changes in the clustering of H i as a function of halo mass; however, further investigations of this conflict are left to future work.

The redshift-space power spectra evolve with time similarly to their real-space counterparts. The strength of this trend is amplified in redshift space by the evolution of RSDs (bottom row of Fig. 6). For both colours, the RSDs suppress the redshift-space power spectra more strongly at later redshifts, albeit with relatively weak evolution of the RSDs overall. Although not explicitly visible in Fig. 6, the redshift-space intersection between H i \(\times\) Blue and H i \(\times\) Red occurs at \(R_{\rm eq}\approx 10.4\), 5.4, and 3.8 Mpc at \(z=0\), 0.5, and 1, respectively. These intersections at \(z=0\) and \(z=1\) are comparable to those reported by Anderson et al. (2018) and Wolz et al. (2021), respectively.

### Dependence on stellar mass in centrals and satellites

It is well-established that more luminous or massive galaxies cluster more strongly (Zehavi et al., 2005, 2011; Beutler et al., 2013; Guo et al., 2013, 2014; Skibba et al., 2014; Zhai et al., 2023). In this section, we examine how this relationship manifests in cross-correlations with H i and test whether the tendency for observations to detect bright objects skews the measured cross-power spectra. We compare the clustering of galaxies whose stellar masses fall within three coarse \(M_{\star}\) bins, as well as those whose stellar mass exceeds three lower detection limits with separations of \(\approx 1\) dex. The \(M_{\star}\) relationships of the cross-power spectra evolve little with redshift; for brevity, we examine only \(z=0\) results and provide similar analyses at other redshifts in the online figures.

In the top row of Fig. 8, we separate blue, red, and all galaxies into coarse \(M_{\star}\) bins and compute their cross-power spectra and correlation coefficients with H i. The correlation coefficients distinguish changes in the spatial connection of the galaxy subsample to H i from changes to the intrinsic clustering of the member distributions of the cross-power spectra (Section 3.2).
Both H i \(\times\) Blue (left column) and H i \(\times\) Galaxy (right) increase with \(M_{\star}\), showing that more massive galaxies in the blue and whole galaxy samples cluster more strongly. H i \(\times\) Red (middle), on the other hand, does not exhibit a clear trend with \(M_{\star}\); we will elaborate on this behaviour later in the section. The correlation coefficients (second row) for all galaxy colours vary negligibly with \(M_{\star}\). This lack of evolution implies that within each \(M_{\star}\) bin, the correlation between blue or red galaxies and H i does not significantly differ from that of other galaxies of the same colour but with different masses. Consequently, the \(M_{\star}\) evolution of the cross-power spectra in the top row of Fig. 8 reflects the inherently stronger clustering of massive galaxies.

In addition to \(M_{\star}\) bins, we investigate cross-power spectra and correlation coefficients for lower \(M_{\star}\) thresholds in Fig. 8, which we consider a rough analogue for detection limits in observations. The cross-power spectra for H i \(\times\) Red and H i \(\times\) Galaxy converge to the fiducial cross-power spectrum for the whole sample at \(M_{\star}=10^{10}M_{\odot}\), and H i \(\times\) Blue does the same at \(M_{\star}=10^{9}M_{\odot}\). H i-galaxy cross-power spectra measured in observations with galaxy surveys complete up to those masses should therefore be insensitive to detection limits. The correlation coefficients for each \(M_{\star}\) detection limit also converge to the fiducial sample by the same masses as the cross-power spectra for each colour.

The clustering of blue and all-galaxies in Fig. 8 increases with \(M_{\star}\). Red galaxy clustering, however, only increases slightly from \(2\times 10^{8}\leq M_{\star}/M_{\odot}<10^{9}\) to \(10^{9}\leq M_{\star}/M_{\odot}<10^{10}\), before decreasing drastically in the largest \(M_{\star}\) bin. This trend aligns with observations and previous simulation work, where the red galaxy auto power spectrum decreases in the range \(10^{8.5}\leq M_{\star}/M_{\odot}\leq 10^{10}\) before increasing in \(10^{10}\leq M_{\star}/M_{\odot}\leq 10^{11.5}\) (Zehavi et al., 2005; Guo et al., 2011; Henriques et al., 2017; Springel et al., 2018). This unique trend is tied to the composition of each \(M_{\star}\) bin with respect to centrals and satellites. The greatest \(M_{\star}\) bin predominantly consists of centrals, whereas the two smallest mass bins are composed almost entirely of satellites. Central galaxies, by definition, must inhabit different haloes whereas satellites have no such restriction. As a result, a larger proportion of red galaxies in the smallest \(M_{\star}\) bins reside in the same haloes, increasing their clustering.

To investigate the effect of central/satellite demographics on the \(M_{\star}\) evolution, we analyse the H i correlation coefficients of blue, red, and all-galaxies split into centrals and satellites in Fig. 9. H i \(\times\) Galaxy (right panel) correlation coefficients for centrals inherit their shape from H i \(\times\) Blue (left), as most centrals are blue. Similarly, most satellites are red, such that satellite correlation coefficients for H i \(\times\) Galaxy echo H i \(\times\) Red. However, blue satellites boost the corresponding H i \(\times\) Galaxy coefficients by contributing some correlations with H i.
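Returning to the detection-limit thresholds above, a convergence test of that kind can be phrased as a simple ratio criterion. The sketch below is illustrative only: the tolerance is an arbitrary choice of ours, and the inputs are assumed to be band powers of the detection-limited and fiducial samples on the same \(k\) grid.

```python
import numpy as np

def converged_to_fiducial(p_limited, p_fiducial, tol=0.05):
    """True when a detection-limited cross-power matches the fiducial
    sample to within `tol` (here 5 per cent, an arbitrary choice)
    at every k bin."""
    return bool(np.all(np.abs(p_limited / p_fiducial - 1.0) < tol))
```

Applied per threshold, such a criterion identifies the stellar mass completeness below which a survey's measured cross-power is effectively unbiased.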
Central correlation coefficients (purple contours) approach constant values in the one-halo regime (\(k\sim 0.3\ h\) cMpc\({}^{-1}\)) because only one central can occupy each halo. For all galaxy populations, satellites correlate more with H i with increasing \(M_{\star}\) whereas centrals correlate less. We can interpret these correlations by understanding the influence of environment on a galaxy's gas abundance. Various external processes increase in strength and frequency with environmental density (Brown et al., 2021; Zabel et al., 2022; Villanueva et al., 2022; Watts et al., 2023), such as starvation (Larson et al., 1980; van de Voort et al., 2017), ram pressure stripping (Gunn & Gott, 1972; Abadi et al., 1999), and gas heating via satellite-satellite interactions (Moore et al., 1996, 1998), among others. Larger central galaxies tend to occupy denser haloes that are more effective at removing gas, and thus central galaxies correlate less with H i with increasing \(M_{\star}\) in the one-halo regime. Massive red centrals in particular (centre panel of Fig. 9) are nearly completely uncorrelated with H i out to \(k\approx 3\ h\) cMpc\({}^{-1}\), or \(R=2\pi/k\approx 2.1\ h^{-1}\) cMpc, demonstrating the efficacy of external quenching mechanisms near the centres of massive haloes (Gomez et al., 2003; Balogh et al., 2004; Wolf et al., 2009; Woo et al., 2013).

In contrast to centrals, satellite galaxies correlate _more_ with H i with increasing \(M_{\star}\). The reason for this trend is relatively straightforward for blue satellites. Massive blue satellites tend to inhabit larger H i-rich haloes, resulting in stronger H i correlations. On the other hand, the largest red satellites are typically hosted by haloes that are increasingly H i-deficient at larger masses, as indicated by their low \(M_{\rm HI}/M_{h}\) (see figure 4 in Villaescusa-Navarro et al. 2018).

Figure 8: The clustering of stellar mass subsamples for H i \(\times\) Blue (left), H i \(\times\) Red (centre), and H i \(\times\) Galaxy (right) in real space at \(z=0\). Contours encompass all H i models, with darker colours representing subsamples with greater \(M_{\star}\). The cross-power spectra for \(M_{\star}\) bins (first row) and detection limits (third row) are shown as a ratio to the overall fiducial sample for that galaxy colour. H i \(\times\) Blue and H i \(\times\) Galaxy both cluster more with increasing \(M_{\star}\). H i \(\times\) Red, however, clusters the most weakly in the largest mass bin (see text for discussion). The correlation coefficients for each galaxy colour and H i (second row) evolve little with \(M_{\star}\), demonstrating that differences in the cross-power spectra arise from changes in the clustering of the galaxy population rather than their spatial connection with H i. In the bottom two rows, we display cross-power spectrum ratios and correlation coefficients for galaxy samples above certain \(M_{\star}\) thresholds, which represent a rough proxy for detection limits. H i \(\times\) Red and H i \(\times\) Galaxy converge to their fiducial cross-power with no (additional) mass cuts by \(M_{\star}=10^{10}M_{\odot}\) and H i \(\times\) Blue by \(M_{\star}=10^{9}M_{\odot}\). H i \(\times\) Blue converges at lower masses, as most galaxies in \(M_{\star}\gtrsim 10^{10.5}M_{\odot}\) are red. We exclude the \(M_{\star}\geq 10^{11}M_{\odot}\) threshold for H i \(\times\) Blue, as it is dominated by shot noise.
This characteristic should lead to a decrease in the correlation between red satellites and H i at higher \(M_{\star}\), which we do not find in Fig. 9. We speculate that this discrepancy may arise from two effects: satellite-satellite correlations and resistance to environmental effects. Satellites in massive haloes are, relative to the rest of the halo, H i-rich (see figure 7 in Villaescusa-Navarro et al. 2018). Larger haloes contain a greater number of satellites, which may mitigate their H i deficiency. The second effect, environmental resistance, may also contribute, as massive satellites possess deeper potential wells that can better withstand environmental stripping mechanisms (Marasco et al. 2016; Stevens et al. 2019; Donnari et al. 2021).

### Dependence on galaxy Hi mass

In the previous section, we studied how the H i-galaxy cross-power spectra evolve with \(M_{\star}\). Here, we present a similar analysis with \(M_{\rm HI}\). Observations have reached different conclusions about the behaviour of galaxy clustering with \(M_{\rm HI}\). For example, Basilakos et al. (2007) and Guo et al. (2017) claim that the H i auto power spectrum increases as a function of \(M_{\rm HI}\), while Meyer et al. (2007) and Papastergis et al. (2013) find no conclusive evidence of such a trend. We investigate this relationship in IllustrisTNG by separating galaxies at \(z=0\) into three coarse \(M_{\rm HI}\) bins and thresholds separated by 1 dex, and measure the clustering and correlation strength with blue and red galaxies for each \(M_{\rm HI}\) subsample in Fig. 10.

We find that galaxy clustering increases as a function of \(M_{\rm HI}\) in the top left panel of Fig. 10, supporting the conclusions of Basilakos et al. (2007) and Guo et al. (2017). In the bottom row, we show the ratio of the H i-red and H i-blue correlation coefficients, \(r_{\rm HI-Red}/r_{\rm HI-Blue}\), which describes how each \(M_{\rm HI}\) subsample correlates with blue and red galaxies. H i is expected to correlate more with blue galaxies than red galaxies; the ratio \(r_{\rm HI-Red}/r_{\rm HI-Blue}\) indicates how much more a particular \(M_{\rm HI}\) subsample correlates with blue galaxies over red galaxies. If lower \(M_{\rm HI}\) bins are preferentially occupied by smaller, H i-rich blue galaxies rather than larger but H i-poor red galaxies, then the lower \(M_{\rm HI}\) bins will exhibit a smaller \(r_{\rm HI-Red}/r_{\rm HI-Blue}\) ratio. However, the correlation coefficient ratios evolve little between each \(M_{\rm HI}\) bin, at most differing by a factor of \(\sim 1.3\) on large scales. The small differences in \(r_{\rm HI-Red}/r_{\rm HI-Blue}\) illustrate that blue and red galaxies have approximately equal influence over the clustering in each bin.

Figure 10: H i auto power spectra of galaxies within a particular \(M_{\rm HI}\) subsample at \(z=0\). For \(M_{\rm HI}\) detection limits, we display only the centre of each contour encompassing the H i models to avoid crowding the figure. Breaking up the galaxy population into \(M_{\rm HI}\) bins (top left), we find that the H i auto power grows with increasing \(M_{\rm HI}\) (darker shades). The proximity of the fiducial H i auto power spectrum (black) to the largest \(M_{\rm HI}\) bin implies that high \(M_{\rm HI}\) galaxies dominate H i clustering.
The similar auto power spectra for different \(M_{\rm HI}\) thresholds (top right) demonstrate that the H i auto power is insensitive to detection limits. We examine the influence of the colour make-up of each \(M_{\rm HI}\) subsample by providing the correlation coefficient ratio between an \(M_{\rm HI}\) bin/threshold and all red galaxies over blue galaxies. A smaller \(r\) ratio indicates that the galaxies within a particular \(M_{\rm HI}\) bin/threshold correlate more with blue galaxies than other \(M_{\rm HI}\) bins/thresholds. The similar \(r\) ratios across \(M_{\rm HI}\) subsamples indicate that these trends do not arise from evolving galaxy colour demographics but instead from the inherent clustering strength of galaxies with greater \(M_{\rm HI}\).

Figure 9: Correlation coefficients between H i and three galaxy colour populations, blue (left), red (middle), and all galaxies (right) at \(z=0\). Each galaxy colour population is separated into stellar mass bins of width \(\approx 1\) dex, with lighter colours representing less massive bins and darker more massive. Each stellar mass bin is further separated into centrals (purple) and satellites (green). H i correlates more strongly with satellites for increasing \(M_{\star}\) across galaxy colour, but this trend reverses for centrals. By definition, there is only one central galaxy per halo, thus the correlation coefficients for centrals are constant on small scales.

This lack of evolution in our \(M_{\rm HI}\) range implies that the H i auto power spectrum increasing with \(M_{\rm HI}\) arises from the tendency for galaxies to inhabit more massive haloes with increasing \(M_{\rm HI}\), rather than from changes in the colour make-up within a particular \(M_{\rm HI}\) bin. Although we conclude that galaxy clustering increases with \(M_{\rm HI}\), we emphasize that this does not necessarily contradict the findings of Meyer et al. (2007) or Papastergis et al. (2013). Both works conducted their analyses on galaxy samples in a higher \(M_{\rm HI}\) regime, where we find some evidence that the trend of increasing H i clustering may no longer hold (online figures). However, our box size is not sufficient to test the behaviour of H i clustering at higher \(M_{\rm HI}\) regimes.

We also test how different instrument detection limits will affect the measured H i auto power spectrum, using \(M_{\rm HI}\) cuts as a rough proxy. We present in the right panels of Fig. 10 the clustering for subpopulations with cuts at \(M_{\rm HI}\geq 10^{7}M_{\odot}\), \(M_{\rm HI}\geq 10^{8}M_{\odot}\), and \(M_{\rm HI}\geq 10^{9}M_{\odot}\). The \(M_{\rm HI}\)-limited auto power spectra contain negligible changes, indicating that the H i auto power spectrum is not sensitive to detection limits. The \(r\) ratios for each threshold are identical (bottom right panel of Fig. 10). We conclude from this that H i detection thresholds below \(M_{\rm HI}=10^{9}M_{\odot}\) do not cross-contaminate observations, in the sense that an instrument that detects galaxies down to that threshold will not preferentially measure H i from H i-rich blue galaxies or H i-poor red galaxies.

## 4 Discussion: What Causes the Redshift Evolution of the Power Spectra?

We have characterized the clustering of H i as a function of colour, redshift, \(M_{\star}\), and \(M_{\rm HI}\), often alluding to the influence of baryonic processes on these relationships.
In particular, we attributed much of the inverse redshift evolution of the power spectra from Section 3.3 to baryonic processes, but did not determine the precise mechanisms responsible or the scales at which they take place. Previous studies of the redshift evolution of red galaxy clustering have proposed mergers, disruptions, galaxy quenching, and combinations thereof as possible mechanisms that suppress structure growth in galaxies (Zehavi et al., 2005a; White et al., 2007; Guo et al., 2013). In our analysis, we have introduced colour transitions and gas loss as alternative suppression mechanisms. We attempt to disentangle these processes in the redshift evolution of the power spectra from Section 3.3 by considering the clustering of smaller galactic subpopulations. We first decompose the power spectra into contributions from centrals and satellites. Next, we describe how colour transitions and gas loss manifest in each component. We finish by briefly describing whether the redshift evolution of these terms can be explained with other processes, such as mergers and galaxy interactions.

We separate the blue, red, all-galaxy, and H i auto powers into contributions from centrals and satellites to provide insight into the processes that govern their time evolution, shown in Fig. 11. We use _Galaxy Centres_ for the H i distribution in order to more clearly distinguish between centrals and satellites, but this should have little effect on our conclusions here (Section 3.1). Okumura et al. (2015) provide the following framework for separating the galaxy auto power spectra into central and satellite components,

\[P(k)=(1-f_{s})^{2}P_{c\times c}(k)+2f_{s}(1-f_{s})P_{c\times s}(k)+f_{s}^{2}P_{s\times s}(k)\,, \tag{9}\]

where the contributions from centrals (c) and satellites (s) are denoted in subscripts and \(f_{s}\) represents the satellite fraction. Each term from equation 9 contributes differently at various spatial scales, depending on the shape of the term itself and the corresponding \(f_{s}\) coefficients (see figure 1 in Okumura et al., 2015). By definition, there is only one central per halo, so \(P_{c\times c}\) is dominated by shot noise in the one-halo regime. The red galaxy auto power spectrum, however, is dominated by \(P_{s\times s}\) on all scales because of the high satellite fraction for red galaxies (\(f_{s}\sim 0.8\) in TNG100). In contrast, the small satellite fraction for blue galaxies (\(f_{s}\sim 0.3\)) significantly down-weights \(P_{s\times s}\).

We now study the evolution of the terms from equation 9 in Fig. 11. Every term from the all-galaxies power spectra increases with time (third row), retaining the structure growth seen in the full power spectrum from Section 3.3, whereas the evolution in other tracers is significantly more nuanced. Each component of the blue galaxies (top row) and H i (bottom row) increases slightly between \(z=1\) and \(z=0.5\) in Fig. 11, before both of the central-dependent terms (\(P_{c\times c}\) and \(P_{c\times s}\)) decrease between \(z=0.5\) and \(z=0\). Red galaxies (second row), on the other hand, cluster less with time in every term on small scales. This effect is particularly strong in the \(P_{s\times s}\) term, where the clustering of satellites falls by a factor of 2.5 from \(z=1\) to \(z=0\). We now analyse how colour transitions affect the previously described evolutions from Fig. 11.
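Equation 9 is straightforward to apply in practice. The sketch below recombines the three measured components into the full auto power spectrum; the function name is ours, and the inputs are assumed to be \(P_{c\times c}\), \(P_{c\times s}\), and \(P_{s\times s}\) band powers together with the satellite fraction quoted above.

```python
def galaxy_auto_power(p_cc, p_cs, p_ss, f_sat):
    """Recombine central (c) and satellite (s) components into the
    full auto power spectrum (equation 9, after Okumura et al. 2015):
    P = (1 - f_s)^2 P_cc + 2 f_s (1 - f_s) P_cs + f_s^2 P_ss."""
    return ((1.0 - f_sat) ** 2 * p_cc
            + 2.0 * f_sat * (1.0 - f_sat) * p_cs
            + f_sat ** 2 * p_ss)
```

With \(f_{s}\sim 0.8\) for red galaxies the \(f_{s}^{2}\) weight makes \(P_{s\times s}\) dominate, whereas \(f_{s}\sim 0.3\) for blue galaxies down-weights it, exactly as described above.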
As described in Section 3.3, colour transitions suppress clustering by adding relatively weakly clustered galaxies to the red population and removing strongly clustered galaxies from blue. The impact of colour transitions is most visible in the suppression of the clustering of blue centrals (top right panel of Fig. 11), suggesting transitioning blue centrals are primarily responsible for the decreasing power spectrum between \(z=0.5\) and \(z=0\). The suppression in \(P_{c\times c}\) (left) is strongest at the largest scales, where transitioning blue centrals dominate because they tend to occupy the most massive blue-hosting haloes. However, in the case of \(P_{c\times s}\) (right), the massive haloes hosting transitioning centrals contribute substantially more to smaller scales because they possess the highest satellite fractions amongst blue-hosting haloes. Consequently, when all the central-satellite pairs from that halo are removed after their central transitions, \(P_{c\times s}\) decreases significantly at the boundary between the one-halo and two-halo regimes (\(k\sim 2\ h\) cMpc\({}^{-1}\)). Previous studies of galaxy quenching in IllustrisTNG indicate that AGN feedback is largely responsible for the colour transitions in these centrals, particularly when their mass reaches the sharp transition at \(M_{\star}\geq 10^{10.5}M_{\odot}\) (Zinger et al., 2020; Donnari et al., 2021). The last term, \(P_{s\times s}\) (middle), still increases with time, indicating that colour transitions are less effective at removing power from blue satellites.

In the case of the red population, transitioning galaxies influence their clustering on small scales rather than large scales. As depicted in the second row of Fig. 11, \(P_{c\times c}\) (left) is suppressed moderately at the smallest scales of the two-halo regime (\(k\sim 1\ h\) cMpc\({}^{-1}\)), while \(P_{s\times s}\) and \(P_{c\times s}\) decrease drastically almost entirely within the one-halo regime. Transitioning galaxies inhabit the smallest and youngest red-hosting haloes, which contribute most to small-scale red galaxy clustering, as large scales are dominated by the largest haloes. This is particularly true for the clustering of satellites, as new red-hosting haloes have smaller satellite fractions and satellite quenched fractions (Donnari et al., 2021).

The other baryonic process we examine is gas loss, which directly impacts only the clustering of H i, not that of galaxies. The evolution of the H i auto power spectrum components (bottom row) echoes the corresponding blue terms (top row), although with a few additional subtleties. When a galaxy transitions from blue to red, it is completely removed from the blue distribution. The associated gas loss, however, simply down-weights previous contributions of that galaxy to the H i auto power spectrum. This mitigates the inverse evolution of the \(P_{c\times c}\) (left) and \(P_{c\times s}\) (right) terms from \(z=0.5\) to \(z=0\) as compared to blue galaxies. Similarly to the effect of colour transitions on blue galaxies, we attribute the abating H i power spectrum primarily to the rapid gas loss in and around centrals. The allocation of H i between centrals and satellites in massive haloes further supports this conclusion. VN18 find that satellites contain most of the H i in \(M_{h}\gtrsim 10^{13}M_{\odot}\) haloes at \(z\leq 1\) (see their figure 7). The H i profiles of these haloes provide further evidence for gas loss in and around centrals.
The profiles of \(M_{h}\sim 10^{13}M_{\odot}\) haloes contain \(\sim 10\) kpc / \(h\) "holes" around their centres which steadily deepen and broaden with time, a characteristic also found in massive galaxies from other simulations (Bahe et al., 2016; Stevens et al., 2023). At \(z=0\), \(M_{h}\sim 10^{12}M_{\odot}\) haloes also develop similar features in their profiles (see figure 5 from VN18), showing an increased prevalence of this characteristic at later times, even in lower-mass haloes. This phenomenon supports our conclusion that gas loss near the centres of previously gas-rich haloes suppresses the growth of the H i auto power spectrum.

Colour transitions and gas loss do not comprise an exhaustive list of mechanisms that can suppress power spectra. Mergers and disruptions, for example, can also impede the rate of clustering growth in the one-halo regime by reducing the number density of satellites (Bell et al., 2004; Skelton et al., 2009; Guo et al., 2013). However, mergers are unlikely to elicit the decrease in blue galaxy clustering, as blue galaxies merge at a relatively slow rate (Lin et al., 2008; Darg et al., 2010). Red galaxies appear to follow this description in the \(P_{s\times s}\) and \(P_{c\times s}\) terms, which decrease significantly at small scales. However, the all-galaxy \(P_{s\times s}\) term _increases_ with time at \(k\sim 5\ h\) cMpc\({}^{-1}\) by a factor of \(\sim 1.5\), despite red galaxies decreasing by a factor of \(\sim 2\) at the same scale. Mergers should affect both the red and all-galaxy \(P_{s\times s}\) terms (Watson et al., 2011), particularly at \(z=0\) where red galaxies comprise \(\sim 70\%\) of the mass in satellites. However, we note that our reasoning throughout this section neglects the impact of galaxies crossing our imposed resolution limit and centrals becoming satellites between redshifts. Properly disentangling the contributions of various baryonic processes requires sophisticated modelling (Guo et al., 2013), which we leave for future work.

In summary, we find evidence that previously neglected baryonic processes, namely colour transitions and gas loss, significantly influence the clustering of blue and red galaxies and H i at \(z\leq 1\), such that the power spectra for these populations decrease with time.

Figure 11: Redshift evolution of the blue (first row), red (second), all-galaxy (third), and H i (last) auto power spectra split into contributions from centrals (\(P_{c\times c}\), left), satellites (\(P_{s\times s}\), centre), and their cross-correlation (\(P_{c\times s}\), right) according to equation 9. Power spectra at \(z=1\) (lighter colours) and \(z=0.5\) (darker) are normalised by the corresponding \(z=0\) power spectra, such that power spectra monotonically increasing from \(z=1\) to \(z=0\) approach unity from below and those decreasing from above. We employ the _Galaxy Centres_ H i models to more clearly distinguish centrals and satellites in the H i distribution, and display only the mean of the resulting power spectra for visibility. The blue galaxy and H i power spectra lose a substantial contribution from the \(P_{c\times s}\) term between \(z=0.5\) and \(z=0\), with a smaller loss from \(P_{c\times c}\). Red galaxies, on the other hand, lose considerable power on small scales from all terms, especially from \(P_{s\times s}\). These trends provide deeper insight into the processes responsible for the unexpected redshift evolutions found in Section 3 (see text).
We identify the signatures of colour transitions and gas loss in the evolution of the terms from equation 9 in Fig. 11. These processes in (previously) gas-rich and star-forming haloes appear particularly effective at suppressing the clustering of each population.

## 5 Conclusions

We have presented the first systematic investigation of the cross-power spectra of H i and galaxies split into colour subpopulations in IllustrisTNG and studied how their clustering changes with time and various galaxy properties. We find the following:

1. The clustering of simulated H i distributions exhibits only a weak dependence on the model for the transition between atomic and molecular hydrogen.
2. The H i-red galaxy cross-power spectrum (H i \(\times\) Red) is greater than H i-blue (H i \(\times\) Blue) at large scales due to red galaxies' inherent clustering strength and larger bias with respect to matter. However, processes such as AGN feedback and ram-pressure stripping suppress H i abundance in the massive haloes that red galaxies typically occupy, weakening H i \(\times\) Red on small scales and creating an intersection between H i \(\times\) Red and H i \(\times\) Blue at \(\approx 3\) Mpc at \(z=0\).
3. In redshift space, the suppression of power due to the fingers-of-God effect manifests more strongly in red galaxies than blue galaxies, introducing a secondary colour-dependency and pushing the intersection of H i \(\times\) Red and H i \(\times\) Blue to \(\approx 10\) Mpc at \(z=0\).
4. The H i, red, and blue galaxy auto power spectra and their cross-power spectra _decrease_ with cosmic time, contrary to the clustering of matter and the galaxy population as a whole. Colour transitions in galaxies and H i consumption contribute to this inverse evolution. These baryonic processes may need to be taken into account in models of the large-scale distribution of these populations at \(z<1\).
5. H i \(\times\) Blue increases as a function of \(M_{\star}\). H i \(\times\) Red also reflects this trend, until \(M_{\star}\approx 10^{10}M_{\odot}\), where the clustering is the weakest amongst the stellar mass bins examined. Red galaxies below this threshold are typically satellites that occupy massive haloes more frequently than the red centrals in larger \(M_{\star}\) bins.
6. Satellites correlate more strongly with H i with increasing \(M_{\star}\), whereas centrals correlate less strongly. These trends hold regardless of galaxy colour.
7. H i clustering increases as a function of \(M_{\rm HI}\), and galaxies with \(M_{\rm HI}\geq 10^{8}M_{\odot}\) dominate the total H i auto power spectrum. We show that H i auto power spectra should be unbiased as long as the survey captures galaxies with \(M_{\rm HI}\geq 10^{8}M_{\odot}\).

These conclusions are important for future 21cm surveys, where the first detections will likely come from cross-correlations. These results contribute to the theoretical formalism needed to extract cosmological constraints from upcoming 21cm surveys and better understand the H i-galaxy-halo connection. We have established that the baryonic processes associated with quenching can have large-scale imprints on the clustering of H i, blue, and red galaxies. Models of the bias with respect to matter of these populations rely on simplistic assumptions about their growth and scale-dependence, which may skew the interpretation of the data from 21cm surveys, as we will explore in future work (Osinga et al., in prep.).
One caveat to our results is that they were derived from a single suite of simulations, IllustrisTNG. Repeating our analysis with other simulations could illuminate whether these relationships are sensitive to the simulation's model for galaxy formation and evolution. For example, it is unclear whether or not the cosmic abundance of H i (\(\Omega_{\rm HI}\)) in IllustrisTNG is consistent with observations at the redshifts studied here (Villaescusa-Navarro et al., 2018; Diemer et al., 2019). Furthermore, mock-observing IllustrisTNG would allow for more faithful comparisons with observations and further our understanding of how observational effects manifest in the clustering relationships examined here. For example, Donnari et al. (2021) showed that quenched fractions are quite sensitive to observational effects.

## Acknowledgements

We would like to thank Alberto Bolatto and Massimo Ricotti for their guidance during the project, and Adam Stevens for the useful discussions. Research was performed in part using the compute resources and assistance of the UW-Madison Centre For High Throughput Computing (CHTC) in the Department of Computer Sciences. The CHTC is supported by UW-Madison, the Advanced Computing Initiative, the Wisconsin Alumni Research Foundation, the Wisconsin Institutes for Discovery, and the National Science Foundation. We also acknowledge the University of Maryland supercomputing resources ([http://hpc.umd.edu](http://hpc.umd.edu)) made available for conducting the research reported in this paper. Other software used in this work includes: numpy (Harris et al., 2020), matplotlib (Hunter, 2007), seaborn (Waskom, 2021), h5py (Collette et al., 2021), colossus (Diemer, 2018) and pyfftw (Gomersall, 2021).
2304.13863
Ensoul: A framework for the creation of self organizing intelligent ultra low power systems (SOULS) through evolutionary enerstatic networks
Ensoul is a framework proposed for the purpose of creating technologies that create more technologies through the combined use of networks, and nests, of energy homeostatic (enerstatic) loops and open-ended evolutionary techniques. Generative technologies developed by such an approach serve as both simple, yet insightful models of thermodynamically driven complex systems and as powerful sources of novel technologies. "Self Organizing intelligent Ultra Low power Systems" (SOULS) is a term that well describes the technologies produced by such a generative technology, as well as the generative technology itself. The term is meant to capture the abstract nature of such technologies as being independent of the substrate in which they are embedded. In other words, SOULS can be biological, artificial or hybrid in form.
Ty Roachford
2023-04-26T23:11:59Z
http://arxiv.org/abs/2304.13863v1
**Ensoul:** A framework for the creation of self organizing intelligent ultra low power systems (SOULS) through evolutionary enerstatic networks.

## Abstract

Ensoul is a framework proposed for the purpose of creating technologies that create more technologies through the combined use of networks, and nests, of energy homeostatic (enerstatic) loops and open-ended evolutionary techniques. Generative technologies developed by such an approach serve as both simple, yet insightful models of thermodynamically driven complex systems and as powerful sources of novel technologies. "Self Organizing intelligent Ultra Low power Systems" (SOULS) is a term that well describes the technologies produced by such a generative technology, as well as the generative technology itself. The term is meant to capture the abstract nature of such technologies as being independent of the substrate in which they are embedded. In other words, SOULS can be biological, artificial or hybrid in form.

## Motivation and Outline

The contents of this paper are written with the intention of connecting topics that are believed to help provide insight into fundamental questions about life and intelligence, as well as to assist in motivating the particular approach that this paper takes to creating generative technologies. It is written at a high level, and as such the precise technical details of certain topics are left out. This has been done for a number of reasons. The first is that this paper is meant to be rather concise. Many of the precise details of each topic are not necessary to understand the connection between topics. Secondly, it is meant to allow those without specialized backgrounds to more easily follow the thread of discussion. This should make the paper as accessible as possible. Readers who wish for a deeper understanding can refer to the various references provided in this text.

The main questions that have motivated an evolutionary enerstatic network approach to building generative technologies are the following:

* How do components of naturally evolved systems come together to work towards a common interest? In other words, how does self-organization work?
* How do "embedded agents", that is, agents that are made out of the same "stuff" as the environment they are in, make decisions under logical uncertainty (i.e. with limited spatiotemporal access to the environment and limited processing power)?
* How can one capture the complexity of nature in computer simulations without randomly searching the space of possible models, or solely using mathematical methods to create, tune and run highly parameterized biologically accurate models?

The major topics that this paper then synthesizes and proposes to explore are:

* Energy homeostatic (enerstatic) loops as powerful models
* The Free Energy Principle (FEP) as an informative description of such models
* Open-ended evolution through energetically closed systems of enerstatic loops
* Assembly theory as an informative description of such closed systems.

Specifically, the paper is organized into eight sections as follows:

1. The **"Teleology, TAME and SOULS"** section details why it is helpful to talk about systems as having "goals", as motivated by the free energy principle, cybernetics and the Technological Approach to Mind Everywhere (TAME) framework. In that same context, the term "SOULS" is then explained as being an ideal description of such substrate-independent phenomena.
The **"Open-Ended Evolution and Assembly Theory"** section describes what "Open-Ended Evolution" actually is, as it relates to one of the goals of the field of evolutionary programming. It further discusses open-ended evolution's relationship to assembly theory and posits a "zoomed out" look at assembly systems in general. 3. The **"IOS Illusions: Motivating Modeling With Enerstatic Networks"** section motivates enerstatic loops, networks and nests as an ideal abstraction for modeling SOULS as opposed to an "Input Output" system based approach. 4. The **"Enerstatic Networks"** section discusses how an enerstatic loop works with a simple code explanation, and also outlines how it can be specialized to model neural networks in particular whose generalization served as the birth of abstract enerstatic networks. 5. The **"Evolving Enerstatic Networks"** section is the main contribution of this paper. It describes how open-ended evolution might be achieved through the use of energetically closed systems harboring enerstatic loops which naturally interact to form enerstatic networks and nests. As in the case of the "Enerstatic Networks" section, as much detail as possible is given without prescribing particular implementation details. 6. **"Thermodynamic Computing & Enerstatic Networks"** explains what thermodynamic computing is, and posits that enerstatic loops, networks and nests may serve as an extensible general model system for their development. 7. **"Taming SOULS"** explains enerstatic networks as enacting cooperative and competitive dynamics whose consequences drive behavior and discusses approaches towards understanding and controlling such dynamics. In particular, the "Test for Controlled Variable" is discussed along with general outlines for a "Test for Variable Controllability" and a "Test for Control Switch". 8. **"Discussion: Research Directions, Speculations and Philosophy"** suggests possible research directions, speculates on the potential benefits of such research, and finally discusses the philosophical implications of deep scientific knowledge. All sections are laid out in a way that follows a more conversational and "story-like" format, rather than "just" an academic style paper. Papers with more technical details and specific implementation methodologies along with empirical data from actual experiments are underway. It was deemed important to get these connections out there before the results from such experiments were completed in order to make the ideas behind such work readily accessible to both the scientific community, and the enthusiastic layman. The following is a small, but exciting step into the future of technology, technology made with SOULS. ## Teleology, TAME and SOULS Any "thing" that exists for longer than 1 Planck length of time must be "doing something" that allows it to persist its existence. This is the most general form of the "Free Energy Principle" (FEP), which is a theory that was initially proposed by Karl Friston to explain the brain's behavior before being extended to living and non-living "things". In mathematical form, we say that a "thing" is a "dynamical system", specifically a dynamical system in a "non-equilibrium steady state" (NESS), and that what the system is "doing" allows it to keep itself in that state [12]. The "something" that the system is "doing" can generally be called "self-evidencing", a term that Carl Gustav Hempel introduced with respect to explanations for events. 
This is just what it sounds like: the verb form of something being "self-evident". An explanation is self-evidencing if the event which it explains provides key evidence for the explanation [54, 22]. It turns out that a big part of the explanation for the persistent existence of every "thing" that we can observe can be understood in terms of what it is doing. Simply put, if a "thing" wasn't engaging in this self-evidencing process to at least some degree, then it wouldn't persist. A related argument in the same vein is that for something to exist - despite not persisting its existence - it must necessarily have causal power. In other words, it must be exerting some sort of effect on its environment. This has been called the "Eleatic Principle", which was first described in Plato's dialogue "Sophist" [40]. The FEP simply extends this to suggest that when looking at anything which is able to **persist** its existence, one would find, upon measurement, that the causal power it is exerting must, on average, be self-evidencing.

* Simplified illustration of "Self-Evidencing Causal Power". An environmental fluctuation results in "too many" particles inside the boundary, resulting in one of the bound particles being ejected.
* A key phrase in the description is "biases the environment". It's not necessarily the case that the structure is capable of guaranteeing its own self-creation. It may be that the structure only increases the probability that the environment will re-create the structure for some duration of time.

By establishing a formal connection between the particular dynamics of a "thing" and variational inference, the FEP allows us to speak of "things" created via some self-organizing process teleologically [15]. That is, to speak about a "thing" with respect to its "goal". The term "goal" can be understood in a strictly cybernetic sense. In a cybernetic system, the system's goal is to regulate the value of a certain variable, just as a thermostat regulates the temperature. The system's regulatory behavior is considered to be "goal-directed". Since the primary variable that a "thing" must be regulating in order for it to persist its existence is its own particular dynamics, we can indeed speak of its behavior as being goal-directed. Thus, the FEP suggests that the primary goal of every "thing" is to self-evidence, regulate its particular dynamics, or most specifically, minimize its variational free energy through the process of variational inference. A detailed explanation of the mathematics behind how the aforementioned formal connection is made is outside the scope of this paper. Readers can refer to [15] for more information. What is important about the construction of this formal connection between the particular dynamics of a "thing" and variational inference is that it serves as a guiding principle for better understanding the behavior of any "thing" whose intelligence and cognition is related to how well it can perform such self-evidencing variational inference. In other words, all "things" exhibit some degree of intelligence and cognition. Indeed, even the fundamental units, or particles, in any theory of everything must necessarily have this "self-evidencing through inference" characteristic in order to persist their existence through time, and thus have, in the most minimal sense, a degree of intelligence and cognition.
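The cybernetic sense of a "goal" described above can be made concrete in a few lines of code. The sketch below is our own minimal illustration of a thermostat-style regulator holding an "energy" variable near a setpoint, in the spirit of an enerstatic loop; it is not the implementation discussed in this paper's "Enerstatic Networks" section, and all names and constants are illustrative.

```python
def regulate(sense, act, setpoint, gain=0.1, steps=1000):
    """Minimal cybernetic regulator: sense a variable, compare it with
    a setpoint, and act to shrink the error -- thermostat-style
    goal-directed behavior."""
    for _ in range(steps):
        error = setpoint - sense()
        act(gain * error)

# A toy environment that continually drains "energy"; the loop's
# corrective actions hold the variable near its setpoint.
state = {"energy": 5.0}

def sense():
    state["energy"] *= 0.99        # environmental dissipation
    return state["energy"]

def act(delta):
    state["energy"] += delta       # corrective, goal-directed action

regulate(sense, act, setpoint=10.0)
print(round(state["energy"], 2))   # settles near, not at, 10
```

Note that the regulated variable settles where dissipation and corrective action balance, slightly below the setpoint: a crude picture of a non-equilibrium steady state maintained by "doing something".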
There are a few terms here that deserve further discussion, namely the definitions of intelligence and cognition, as well as arguments for why this paper prefers teleology over teleonomy. The definitions of the terms "intelligence" and "cognition" have been fraught with argument, much as the definition of "consciousness" has. As such, this paper provides no strict definition of the two that everyone has agreed upon; instead, it takes an operational stance by defining them with respect to the mathematics of variational inference underwriting the FEP. When describing any "thing", the language of the FEP speaks of its "beliefs", which essentially constitute knowledge that the "thing" has about the causes of its "sensory perceptions". It further suggests that any "thing" must be "doing something", that is, taking "actions" upon its external environment, which are based on its "beliefs". In this paper, the transformation of "sensory perceptions" and "beliefs" into "actions" upon an external environment, which are on average in the service of self-evidencing, is cognition. Cognition can then be measured via the degree to which that "thing" can "sense", "believe" and "act". In other words, cognition can be measured via the answers to questions such as: "What does it sense, and to what level of detail?", "What seem to be its beliefs?", and "What are the actions available to it?", respectively. Intelligence, then, is a quality of this "sense", "believe", "act" cycle. In particular, intelligence is the adaptive capacity of a "thing" to accomplish its "goal" of existence. In other words, if we see something that exists, then the closer it can get to non-existence ("death") in some particular way (i.e. the more "stressed" it can get) and still pull itself back into a "healthy" state, the more intelligent we say that thing is. The ability of a "thing" to pull itself back into a "healthy" state from a "stressed" state necessarily entails the ability of that "thing" to update its "beliefs" and therefore change its actions. This is because it must necessarily do different things than those which resulted in that "stressed" state. Thus, when a "thing" becomes better, by some measure, at pulling itself back into a "healthy" state from a specific type of "stressed" state, we will say that it has learned. The various ways that a "thing" can be "stressed" indicate the various ways in which a system has the opportunity to demonstrate intelligence; thus, intelligence is a multi-dimensional characteristic of cognition. Teleonomy is a term that was developed to distinguish between "things" that have "real goals" (teleological systems), like humans, and "things" that only look "as if" they have "real goals" (teleonomic systems), like robots or bacteria. This paper argues that this distinction is unnecessary, based on the formal connection that the FEP made between a "thing's" particular dynamics and inference, on the application of the cybernetic sense of a "goal", and on the following argument, which focuses on the shared intrinsic characteristic of all "things": existence.

**Argument for fundamental teleology:**

1. A "goal" is the desired result of a thing's behavior.
2. Identification of something's goal given its behavior is an unsolved, and in some cases potentially intractable, problem [5, 2, 24].
3. However, any "thing" that exists must be exhibiting behavior which allows it to exist, in order to exist.
4. Thus, existence is a result of the behavior of any "thing" that exists.
5. The question is then: does this "thing" "intend" to exist; is that its "desired" goal? We want to know whether this is the case, because we can only measure intelligence by observing behavior with respect to some goal. There are only two possible answers:
   1. Existence is its goal, in which case observing its behavior allows us to measure its intelligence.
   2. Existence is NOT its goal, in which case observing its behavior allows us to measure its **incompetence**.

While both answers to the "intentional existence" question posed in point 5 are equivalent in what they allow for with respect to the measurement of a "thing's" behavior, the interpretation that each allows is different. Only the first answer's interpretation properly aligns with what we commonly understand as "intelligence". Readers may have noticed that there is a potential problem with this definition of intelligence with respect to a "thing's" existential goal. If you are observing some "thing" and it dies, does that mean it was necessarily incompetent in some way? Would that further suggest that "things" which die voluntarily are incompetent? For example, what about the case of a war hero who sacrifices himself to save his team? The answer is that one must broaden the lens through which one is observing. The definition of intelligence in this paper requires that one properly identify the "thing" of study, and in the case of a sacrificial war hero, such a sacrificial event cannot be properly understood outside of the context in which it is embedded. Instead of looking at the war hero as an individual "thing", this paper suggests that, upon observation of such a sacrificial event, he must now be looked at in the context of the team, or collective, he is embedded in, whose higher-scale intelligence now becomes the "thing" of study. In this context, the war hero would not necessarily be considered "incompetent", because he is operating as part of a larger organization, which may be the appropriate "thing" of study. An analogy can also be drawn between the protective skin cells of an organism and the organism itself: the sacrificial behavior of skin doesn't justify a judgment of incompetence. This sort of existentially-based multi-scale analysis of intelligence has consequences for what it means to be a particular individual "thing", or "self". You have a "self" when the components that make up any "thing" operate towards the same macro-level goal states [10, 49, 1]. Most importantly, these components come to operate together only through their natural physical behavior abiding by the path of least action [32]. An important part of establishing a coherent "self" is the sharing of stress. The sharing of stress enforces cooperation over competition by making defection an impossibility. For example, in a situation where components are isolated from each other in terms of stress, it is possible for component A to stress component B without increasing its own stress levels. However, if stress is shared between components via physical forces, or molecules, then component A stressing any other component would increase its own stress levels. The larger the degree to which stress is shared between components, the more of an impossibility defection becomes, and the more coherent the "self". This binding of individual "selves" through stress can lead to more resources, and therefore computational power, being utilized towards the same aim [30, 29, 11].
This might be summed up in the following phrase: "When what stresses you stresses me, we are a self". As we can see from the previous discussion, the concepts of "intelligence", "cognition", "goals", "stress", and "self" are all invariant across all "things" at every spatial scale, from fundamental particles to humans and beyond. Additionally, these concepts are completely naturalistic and operational in their definitions. In other words, not only are they useful concepts through which one can make measurements and perform engineering tasks, but they also do not require anything that is not included in the laws of physics at the most fundamental level. It is in this sense that the evolution of the universe since its inception has been one in which these fundamental properties have been scaled up through the combination of various "things". The process through which more complex "things" arise can be thought of as a generalized form of evolution, in which all "things", rather than just "living organisms" specifically, are the objects of natural selection. In this generalized evolution, we speak about the stability of "things" rather than the fitness of "things". Rather than saying "only the fittest survive", we say "only the most stable persist" [53]. In other words, "things" that are more stable "out-compete" "things" that are less stable. This process of generalized evolution is well described by assembly theory, in what is then called an "assembly process", which will be discussed in more detail in the "Open-Ended Evolution and Assembly Theory" section [36]. Given that: (1) components of "things" come to operate together through their own behavior, (2) such "things", as well as their components, can be described as having "intelligence", and (3) the behavior of all "things" only follows the path of least action, it would seem that any "thing" that is assembled during an assembly process can best be described as **self organizing**, in that its structure arises due to local interactions between its otherwise independent parts, **intelligent**, with respect to its adaptive capacity to meet its existential goal, and **ultra low power**, with respect to how little energy it requires to meet that same existential goal. This is why such structures are best described as Self Organizing intelligent Ultra Low power Systems, or SOULS. This paper focuses on what we, as humans, can do to assist in the combination of "things" such that what we build results in practical technologies that can be used for both science and engineering. As previously seen, the invariant multi-scale characteristics intrinsic to all "things" break down discrete categories by placing "things" on a spectrum, in which the experimenter or engineer must ask "how much" and "what kind" of each property the "thing" they are studying, or engineering with, has [30, 49, 29]. As such, the terminology used to refer to the "thing" in question best reflects the goals of the observer rather than the intrinsic properties of the "thing" itself. "Things" can be called "structures", "materials", or "components" in the case of engineering; "chemicals", "organisms", or "life" in the case of biology; or, as we will see in the case of Ensoul, enerstatic loops, networks and nests, which are abstractions of SOULS at different scales. Technological Approach to Mind Everywhere (TAME) is a framework being developed by Dr.
Michael Levin, which aims to provide an experimentally-grounded framework for understanding diverse bodies and minds, or SOULS. The goal of the framework is to use such invariants as a mode of thinking about how we can both create, and engineer with, structures that have various levels of intelligence and cognition. An important concept in TAME is the "axis of persuadability". Levin states: "Persuadability refers to the type of conceptual and practical tools that are optimal to rationally modify a given system's behavior." Some SOULS are more persuadable than others, and their persuadability is, in part, a function of what Levin describes as a SOULS' "cognitive light cone" [30, 49, 29]. The cognitive light cone represents the spatiotemporal extent of what a SOULS' intelligence, or problem-solving ability, can take into account. For example, simple unicellular organisms might "care" only about things like local sugar gradients, and about events that occur on the order of milliseconds to perhaps seconds. In contrast, humans can care about what's going to happen to the universe (huge space) in a few billion years (long time). Thus, the cognitive light cone of a human is much larger than that of a simple unicellular organism.

* "Figure adapted from Levin (2022), published in Frontiers in Systems Neuroscience, under the Creative Commons Attribution License (CC BY). Source: [https://www.frontiersin.org/articles/10.3389/fnsys.2022.768201/full](https://www.frontiersin.org/articles/10.3389/fnsys.2022.768201/full)"
* "Figure adapted from Levin (2019), published in Frontiers in Psychology, under the Creative Commons Attribution License (CC BY). Source: [https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02688/full](https://www.frontiersin.org/articles/10.3389/fpsyg.2019.02688/full)"

For practical purposes, it is helpful to define the axis of persuadability on a per-goal basis. Some SOULS are better suited than others to a given goal, independent of their persuadability. For example, if one's goal is for the SOULS to bake cookies, a typical computer would need to be broken down into its constituent pieces and reconfigured, perhaps into a small cookie-making robot. This would be low on the axis of persuadability, as it requires brute-force micromanagement. However, a working chef would only need to be told what to make, and perhaps be convinced with some money. In this way, the axis of persuadability with respect to some goal is anti-parallel to the amount of information required to meet that goal. In other words, the amount of energy that must necessarily be expended by the TAMEer in order to have the SOULS meet some goal decreases as the SOULS' position on the axis of persuadability moves to the right.

* Degree of persuadability vs. the amount of energy, in the form of knowledge/effort, needed to get the SOULS to align with your goal.
* "Figure adapted from Levin (2022), published in Frontiers in Systems Neuroscience, under the Creative Commons Attribution License (CC BY). Source: [https://www.frontiersin.org/articles/10.3389/fnsys.2022.768201/full](https://www.frontiersin.org/articles/10.3389/fnsys.2022.768201/full)"

It is important to recognize that the axis of persuadability is a spectrum. We covered the extreme left (micromanagement) and the extreme right (communication of cogent reasons) of the axis, but there are SOULS in the middle as well. Two examples illustrated in the axis of persuadability graphic are a thermostat and a dog.
If one's goal is to change the temperature of the room one is in, then the thermostat is a good bet, and you don't need to micromanage all of its components in order to do so; you only need to change the set point. Similarly, if one's goal is to have a dog bring you a beer from the fridge, micromanaging all of its muscles, or trying to control its neurons with a brain-computer interface, is probably not the best way to do it. Instead, one can train the dog with treats (rewards). Intelligence is truly in the eye of the beholder here. One must recognize how one's goals align with the goals of the SOULS that one is interacting with. The closer this alignment, the easier it is to "convince" the SOULS to do what you'd like. More of TAME, and other tools for identifying the higher "non-existential" goals of a particular SOULS such that it can be modified and worked with, are discussed in the "Taming SOULS" section. We first turn to other concepts related to the development of enerstatic networks as a tool for SOULS creation.

## Open-Ended Evolution and Assembly Theory

Despite its name, the field of evolutionary programming has not yet been able to capture the true open-ended evolution that we observe when we look around the universe. Dr. Kenneth Stanley suggests that the reason is that most evolutionary algorithms are still focused on optimization rather than divergence, and that the problem with optimization is that the most ambitious goals are deceptive with respect to the path to get there [28]. In other words, if your goal is to create something like Artificial General Intelligence (AGI), then the path there is not at all straightforward. In actuality, there will be many deceptive-looking sub-goals that appear to take you closer to your goal, but actually take you farther from it. We see hints of this when training a neural network to perform some task. The neural network can move into a local minimum with the "belief" that it's going in the right direction, while this is actually taking it farther away from the global minimum. A great practical example of deception is the example of training a bipedal robot to walk. You can either train it via an objective optimization approach, where the goal is to get closer and closer to walking, or you can do this with a divergent, explorative approach. Dr. Stanley employed both methods and found that the biped "trained" via divergent exploration actually ended up walking farther than the biped that was optimized towards walking specifically [28]. A more straightforward example is that of training a neural network to solve a maze. If you simply optimize for minimizing the distance between the agent and the goal, then a maze whose solution path initially leads the agent away from the goal constitutes a deceptive example. Most of technology has been focused on building machines, or programs, that have particular desired outputs. That is to say, the creation of technology has mostly been focused on convergence rather than divergence. It is this focus on convergence that makes most of today's technology "evidence of life" rather than "life" itself. In assembly theory, "evidence of life" refers to structures with an "assembly index" that is higher than a certain threshold. The assembly index is an intrinsic measure of a structure's complexity [35, 48, 37]. The idea is that high-complexity structures have a dramatically smaller chance of being created through random, unbiased processes than through biased processes, and thus constitute evidence of some stable constructor.
That stable constructor would be considered "evidence of life", and indeed a living structure, or alternatively, a SOULS lying relatively far to the right on the cognitive continuum. The more complex the structure the SOULS is capable of constructing, the more "alive" that SOULS could be said to be, thus placing the notion of "alive" on a continuum. Zooming out, in the context of assembly theory, life itself could be described as the process by which evidence of life, at potentially varying levels of complexity, is generated. In some sense, then, our universe is "alive". Since assembly theory is a physical theory, we can think about other possible universes that may be more or less "alive" than ours. One could say that the level of "aliveness", "interestingness", or "intelligence" of a universe would depend on that universe's trajectory through "assembly space". Specifically, the most alive, interesting and intelligent universe would be one whose trajectory, on average, best maximizes the number of unique objects at each possible level of complexity. Interestingly, as a consequence of the FEP's description of the informational symmetry of physical interactions, the environment (i.e. the universe) of any structure must itself be considered an agent (which we will call "U"). Furthermore, as U gains copies of the various structures within it, it is better able to predict the behavior of those structures, and therefore better able to minimize its variational free energy (VFE) [13]. If one does indeed grant that intelligence is the capacity of an agent to minimize its VFE, thereby persisting its existence, then the most intelligent universe is one in which the number of copies of each existing unique object is maximized. A more rigorous treatment of measuring the aforementioned qualities in assembly systems is outside the scope of this paper.

* Demonstrating what the trajectory of an intelligent universe through assembly space might look like. On the x-axis, we have indices associated with each unique structure. On the y-axis, we have copy numbers associated with each unique structure, i.e. how many of that structure are currently present. Each frame represents a different point in time. Each structure-index pair is ordered such that structures towards the right have a greater complexity than structures on the left. This forms a sort of "wave" where structures bootstrap the creation of more complex structures, which additionally represents resource constraints. In other words, under resource constraints, more complex structures preclude the existence of a high number of less complex structures (hence the wave). However, without resource constraints, this wave could in principle propagate forever; hence the infinity symbol-question mark pair on the bottom right side of the graph. It should further be noted that such infinite propagation would not necessitate a decrease in less complex structures, thus increasing, perhaps, the "interestingness" and "aliveness" of the system; interestingness being the diversity and amount of existing structures in general, and aliveness being the diversity and number of living structures, or stable constructors, specifically.

Life itself is thus the process through which SOULS assemble and recombine to create more SOULS of varying complexity (evidence of life that might itself be living). The suggestion here is that we should focus more on how to create technologies that create more technology.
In other words, how to create intelligent, open-ended, evolutionary life processes. This will help us avoid deceptive sub-goals, discover new technologies, and, through the study of structures created by such a process, learn more about existing structures, or SOULS. An interesting point to note here is that an assembly system itself exhibits **self-organization** with respect to the consequences of its internal dynamics, **intelligence** with respect to its ability to create and sustain high-complexity objects, and **ultra low power** in that the "running" of its dynamics (i.e. the execution of causal power by its components) requires a total of 0 energy [52]. Thus, an assembly system itself can be considered a Self Organizing intelligent Ultra Low Power System, or SOULS. The requirements for open-ended evolution have been strongly hinted at through biological experimentation and observation [49, 28], and are well described phenomenologically by assembly theory [48]. Evolution doesn't just employ random mutation, but biased mutation, through mechanisms including, but not limited to, enhanced epigenetic capacity, transposition, horizontal gene transfer, and hybridization. Such mechanisms allow organisms produced by evolution to essentially participate in their own evolution through increasingly intelligent mutation operations. This reciprocal, biased process is well described by assembly theory. According to assembly theory, there is information to be gained about a so-called "assembly space" by considering the discovery process of assembling novel objects, i.e. the rate at which new objects are discovered (\(k_d\)) over some time (the discovery time scale, \(t_d\)), and the rate of production of a specific object (\(k_p\)) over some time (the production time scale, \(t_p\)). When the ratio \(k_d/k_p\) is much greater than one, this results in an explosion of unique objects with a low number of copies (low copy number), whereas when \(k_d/k_p\) is much less than one, we observe a high abundance of some common objects with high copy number (see the toy sketch below). Either extreme leads to the absence of trajectories that grow more complex objects [48]. It can be demonstrated that the appearance of open-ended evolution in a physical system can be detected by a transition from \(k_p < k_d\) to \(k_d < k_p\); in other words, from no bias towards what gets created next, to increasingly more bias towards what gets created next.

* Graphic demonstrating the transition from unbiased to biased complexity growth that occurs under selection. The figure has been recreated from "Assembly Theory Explains and Quantifies the Emergence of Selection and Evolution", Source: [https://arxiv.org/ftp/arxiv/papers/2206/2206.02279.pdf](https://arxiv.org/ftp/arxiv/papers/2206/2206.02279.pdf) [48]

Essentially, assembly theory asserts that open-ended evolution algorithms follow this format: unbiased exploration to generate unique low-level building blocks, which potentially limit the possibility of other low-level building blocks, followed by the combination of those low-level building blocks into "higher level building blocks", some of which are capable of sustaining themselves across time through their intrinsic causal dynamics. The two questions that this raises are:

1. What are those low-level building blocks?
2. What is responsible for the combination, and selection, of those building blocks into higher level building blocks?
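As an aside, the two copy-number regimes described above can be illustrated with a toy simulation. This is an assumption-laden sketch of our own, not a procedure taken from the assembly theory papers: discovery events create brand-new unique object types at rate \(k_d\), while production events copy an existing type at rate \(k_p\).

```javascript
// Toy illustration of the kD/kP regimes. Discovery creates a new unique
// object type; production copies an existing type (chosen uniformly here).
function simulate(kD, kP, steps) {
  const copyNumbers = [1]; // start with a single unique object type
  for (let step = 0; step < steps; step++) {
    if (Math.random() < kD / (kD + kP)) {
      copyNumbers.push(1); // discovery: a brand-new type appears
    } else {
      copyNumbers[Math.floor(Math.random() * copyNumbers.length)]++;
    }
  }
  return {
    uniqueTypes: copyNumbers.length,
    maxCopyNumber: Math.max(...copyNumbers),
  };
}

console.log(simulate(10, 1, 10000)); // kD/kP >> 1: many types, low copy numbers
console.log(simulate(1, 10, 10000)); // kD/kP << 1: fewer types, high copy numbers
```

Neither extreme grows complex objects; the interesting trajectories sit in the transition between the two regimes.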
A helpful analogy is to see low-level building blocks as analogous to primitives in a programming language, and to see the entity responsible for the combination and selection of those building blocks as analogous to software engineers constrained by the syntax, or rules, of the programming language. Software engineers combine these primitives into ever more complicated functions, which are then selected based on their usefulness and used to make ever more complicated pieces of software. Importantly, the programming language being used evolves and diversifies, as do the ideas and models through which the software engineers operate. In actuality, this "analogy" is really just a subset of a larger assembly process that is the evolution of the universe. The focus of this paper is how we can model an assembly process that captures the evolution of the universe "as it could be" using enerstatic loops. But first, we start by motivating why such loops are an ideal candidate for use as our low-level building blocks, explain how they work, and show an example use case.

## IOS Illusions: Motivating Modeling With Enerstatic Loops

A consequence of the fact that all "things" that exist must be "doing something" in order to exist (i.e. self-evidencing) is that most "things" are not best described as "Input/Output Systems" (IOS), like most typical computers are. Instead, they are best described as SOULS that are regulating their own particular dynamics and, in doing so, exerting causal power on the environment, which appears to be their output in response to some input. The difference between IOS and SOULS is best illustrated by how each system solves the problems that are presented to it. IOS solve their problems using analytical solutions, whereas SOULS solve their problems using non-analytical solutions. Analytical solutions to any given problem constitute modeling that problem explicitly, where each "significant" part of the problem has a corresponding part in the model. In this case, the "thing" in question has an explicit model of the problem to be solved. Non-analytical solutions constitute a lawful correspondence between a "thing" and its problem, such that the natural dynamics of the parts that make up that "thing" solve the problem in question without explicit representation of that problem. In this case, the "thing" *is* a model of the problem to be solved. These two problem-solving methods have also been referred to as "weak anticipation" and "strong anticipation", respectively [51]. A great example of these two very different problem-solving methods is their application to the "outfielder problem", which asks: "How do outfielders catch a fly ball?"

* Illustration of the "outfielder problem" using an "analytical" solution, where one measures the initial conditions of the baseball and calculates its trajectory using equations for projectile motion.

An analytical solution to this problem might constitute the creation of a mathematical model using representative equations. For example, a "thing" could estimate the force and angle at which the ball was hit by the baseball player, and plug those values into equations that represent projectile motion. In this scenario, the "thing" must represent each input variable of the aforementioned equations, as well as the equations themselves, with corresponding physical parts. The dynamics of these parts would constitute the calculation of the output, which would then have to be used to drive behavior.
A non-analytical approach to the outfielder problem would be to choose some particular aspect of the projectile motion of the baseball and control that variable such that its measured value remains fixed, in a way that leads to one's desired result. For example, a "thing" could look at the ball flying through the air and attempt to keep the speed of the ball constant in its visual field, such that the ball does not appear to speed up or slow down. The idea is that if the ball is accelerating, then it will land behind the "thing", and if the ball is decelerating, then it will land in front of the "thing". In this scenario, the "thing" doesn't require explicit representations; the "thing" only needs to be able to measure changes in speed, and use such changes as feedback for driving behaviors that result in decreased measured change. This proposed solution is called the "Optical Acceleration Cancellation" approach, and was first put forward by Seville Chapman [7]. The distinction between an analytical and a non-analytical approach is crucial when it comes to understanding and modeling the "thing" in question. It shifts the focus from "What algorithms is this thing implementing?" to "What variables is this thing controlling?". Indeed, treating a SOULS as if it were an IOS is a categorical error that can lead to frustration, because it can lead one to believe that the behavior of the "thing" is highly variable, either due to noise present in its input or due to intrinsic non-determinism [32]. The benefits that natural systems receive from engaging in non-analytical methods over analytical methods are clear. Non-analytical methods are simpler, faster, more adaptive and less power-hungry, while still benefiting from much of the power that prediction has to offer [51]. They accomplish this by avoiding explicit representation of parts of the problem being solved, and by avoiding using energy to calculate, and re-calculate, speculative analytical solutions. This suggests that SOULS, rather than IOS, are likely to be the more prevalent solution in nature [43]. Indeed, although IOS do exist in both nature [7] and technology, the parts that make up those IOS consist of a collection of SOULS whose "highly variable behavior" is squelched as much as possible, so that they can in effect behave as an IOS. In computer engineering, this "highly variable behavior" goes under the name of "noise", and comes in many different flavors, such as thermal noise, flicker noise, shot noise and quantum noise. Combating such noise is an important part of building fast, reliable computers, and thus techniques such as filtering, shielding, grounding, the addition of noise margins, error-correcting codes and clock synchronization are all used towards that end. Indeed, combating noise due to thermal fluctuations is so important that it has been cited as the leading candidate for the end of the exponentially accelerating rate of technological progress known as Moore's law [26, 23]. This paper suggests that "highly variable behavior", or "noise", in a "thing", whether that "thing" is a biological organism or a circuit in a computer, is the observed result of a categorical error. In other words, it is the result of viewing a "thing" as an IOS when it is better viewed as a SOULS. In this sense, IOS are illusions that are approximated to arbitrary degrees of reliability by SOULS, in the same way that the "actual value" of some irrational number like \(\tau\) can only be approximated to arbitrary degrees of accuracy.
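Returning to the outfielder, the Optical Acceleration Cancellation strategy can be sketched as a simple feedback loop. This is a toy illustration under our own assumptions (the state variables, gain, and sign convention are illustrative), not Chapman's original formulation:

```javascript
// Toy sketch of Optical Acceleration Cancellation. The fielder never models
// the ball's trajectory; it only acts to null the measured change in the
// ball's optical speed.
function oacStep(fielder, opticalSpeed) {
  const opticalAcceleration = opticalSpeed - fielder.lastOpticalSpeed;
  fielder.lastOpticalSpeed = opticalSpeed;
  // Ball appears to speed up: it will land behind the fielder, so move back
  // (the positive direction here); appears to slow down: move forward.
  fielder.position += fielder.gain * opticalAcceleration;
  return fielder.position;
}

const fielder = { position: 0, lastOpticalSpeed: 0, gain: 5.0 };
// Feed in whatever the fielder's "eye" reports at each time step:
[1.0, 1.1, 1.25, 1.3, 1.3].forEach(v => console.log(oacStep(fielder, v)));
```

Note that nothing in the loop represents gravity, launch angle, or trajectory; the problem is solved by controlling a single measured variable.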
This approximation of an IOS by a SOULS has been referred to as the "behavioral illusion" under "Perceptual Control Theory" (PCT) [44, 34, 33]. Does this mean that we should abandon the idea of an IOS-based approach in favor of a SOULS-based approach? This paper suggests otherwise. Instead, just as we appreciate the idea that concepts like "intelligence", "cognition", "goals", "stress", "self" and "life" lie on a continuum, so should we appreciate the spectrum between idealized SOULS and idealized IOS. We can refer to the degree to which something modifies its own input simply as "feedback", with zero feedback representing an idealized IOS. Here, "idealized SOULS" refers to a SOULS that perfectly regulates the variable it is controlling, such that the controlled variable simply cannot be perturbed, because any possible perturbation has already been "predicted". This is akin to mathematical models of anticipatory synchronization, in which systems A and B are perfectly coupled such that A always predicts B no matter its behavior [51]. "Idealized IOS" refers to an IOS that is completely deterministic and perfectly reliable. The reference to "idealized" is an acknowledgement that there does seem to be noise in most systems, whether they are IOS or SOULS. For example, in the middle of this spectrum, we might imagine an odd thermostat. This thermostat controls the temperature of the space it is embedded in 50% of the time, while the other 50% of the time it forwards its computations to some other system that does who knows what. This perspective suggests that when it comes to trying to understand any "thing", we should first assess whether that "thing" is very much like a reliable IOS, or more like a SOULS. As with assessing the level of any "thing's" intelligence, the scale of analysis at which this assessment is made, in addition to the careful partitioning of our observed sensory data into "things", is critical to a proper understanding. A too detailed look at an IOS, no matter how reliable it is, will reveal SOULS-like behavior; in other words, one would observe a collection of "things" that are regulating their own dynamics - a failure of appropriate choice of scale. Similarly, paying attention only to particular pieces of a SOULS can result in the conclusion that it is simply a collection of noisy IOS - a failure of appropriate partitioning. Given that SOULS, and not IOS, seem to capture things at the most fundamental level, and that IOS can be approximated arbitrarily well by a collection of SOULS, this paper suggests the usage of a simple abstraction of SOULS in order to model any "thing". This simple abstraction is an energy homeostatic loop, or in other words, a cybernetic control system that is chiefly concerned with managing its own "energy level". The big questions that explorations with this abstraction hope to provide insight into are:

1. If everything is made up of one "thing" (energy), how and why do we come to observe distinct things?
2. How might a differing method of partitioning energy into "things" be helpful?
3. How do collections of energy homeostatic, or enerstatic, loops come to regulate variables at higher scales (i.e. measurable variables at larger spatial scales, like the speed of a ball)?
4. To what degree is it possible to control, and get along with, such collectives?

## Enerstatic Loops, Networks and Nests

The Ensoul approach is agnostic as to what form an enerstatic loop can take.
The key is only to create a negative feedback system, or control system, that regulates its own "energy level" in such a way that it exerts causal power on its environment; in other words, to create an enerstatic loop. The energy level is a partial representation of the state of the enerstatic loop, while the causal power is a representation of the particular energy exchange between the enerstatic loop and its environment, thus making such a loop a "thermodynamically open system". We are not concerned with how enerstatic loops come to be; instead, we make the observation that "nature builds where energy flows" and build our models accordingly. It is critical not to mistake the following specific example implementations for the final, and only, way of creating enerstatic loops. One can think of these only as thought experiments within the Ensoul framework that happen to be implementable as code on a computer. An enerstatic loop can connect to other enerstatic loops via energy exchange, and together they constitute an enerstatic network. Those same networks, or singular loops, can be present inside of larger enerstatic loops, which constitutes an enerstatic nest. This captures the multi-scale, fractal nature of reality; however, it can make the terminology confusing. Given such a multi-scale approach, we distinguish between the "main" enerstatic loop that is the subject of study/modeling and the "lower level" enerstatic loops that are nested inside of our main loop by calling such nested loops "structures". We model with enerstatic loops, networks and nests using the following guidelines:

1. There is a flow of energy into the system that depends on the actions of that system.
2. This energy flow results in the establishment of an energy distribution, where energy is higher and lower in different parts of the system, in the form of other enerstatic loops, or "structures". The establishment of this energy distribution represents the system starting out "alive", i.e. in a non-equilibrium steady state.
3. This energy flow results in the aforementioned energy distribution being modified in some way.
4. There is an inevitable loss of energy as the system generates entropy via the exertion of causal power.
5. Structures can only be assembled in areas where the energy density is high enough to support such assembly.
6. There are rules defining the space of possible structures available for assembly by our enerstatic loop. A structure is minimally composed of the following elements (a code sketch follows this list):
   1. The cost of assembling the structure.
   2. The cost of disassembling the structure.
   3. A position in space.
   4. The causal power of the structure.
   5. The cost of the causal power.
      * Causal power always has a cost, and "eats up" energy based on the particular effect that the power has at every time step. Not meeting this energy requirement results in the non-existence of the structure. The causal power of a structure consists of the following properties:
        1. Sensed input properties: the set of properties that the structure is "aware" of.
        2. Affected output properties: the set of properties that the structure exerts its causal power on, i.e. properties that are modified by the structure.
        3. The effective radius: the distance at which properties can be sensed, or affected. This may be different for different properties.

When the system is unable to perform actions that allow for an appropriate flow of energy, the system "dies", i.e. falls into equilibrium.
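As a concrete illustration of guideline 6, a minimal structure might be represented as a plain object. This is only a sketch; the field names are illustrative assumptions rather than a prescribed interface:

```javascript
// A minimal "structure" record, directly transcribing the elements listed
// above. Field names and values are illustrative.
const structure = {
  assemblyCost: 5,     // energy consumed to assemble the structure
  disassemblyCost: 2,  // energy consumed to take it apart
  position: { x: 0, y: 0 },
  causalPower: {
    sensedInputs: ["energyLevel"],     // properties the structure is "aware" of
    affectedOutputs: ["energyInflow"], // properties it modifies
    effectiveRadius: 3,                // distance at which it can sense/affect
    costPerStep: 1, // energy "eaten up" each time step; if this demand is
                    // unmet, the structure ceases to exist
  },
};
```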
To get a better idea of how to meet all these guidelines, we will quickly fashion together a minimal enerstatic network consisting of minimal enerstatic loops. To begin with, we acknowledge that an enerstatic loop is a control system, which formally consists of four components: (1) a "control center", which defines the ideal "set point", or state, of the system; (2) a "physiological variable", or a "sense" of the variable to be regulated; (3) "sensors", which are used to gain information about the environment; and (4) "effectors", which are mechanisms used to modify the environment. This takes us well on our way to meeting all our previous guidelines. The following is a minimal JavaScript sketch of such a loop, where we can see all four of these features reflected (identifiers are illustrative; a fuller working implementation is available in the repository linked below):

```javascript
// A minimal enerstatic loop: a control system that regulates its own energy.
const loop = {
  setPoint: 100,       // (1) control center: the ideal energy level
  energy: 100,         // (2) physiological variable: the regulated quantity
  sense(environment) { // (3) sensor: read the energy available around the loop
    return environment.freeEnergy;
  },
  act(environment) {   // (4) effector: exchange energy with the environment
    const error = this.setPoint - this.energy;
    // A positive error pulls energy in; a negative error pushes energy out.
    const transfer = Math.min(error, this.sense(environment));
    environment.freeEnergy -= transfer; // causal power exerted on the environment
    this.energy += transfer;
  },
};

const environment = { freeEnergy: 50 };
loop.energy = 80;      // perturb the loop below its set point
loop.act(environment); // the loop restores itself towards stasis
```

Our minimal enerstatic loop is lacking quite a few things that would make it interesting/extensible. Specifically:

1. There is only one loop in the environment.
2. Its structure (i.e. its single effector) is fixed, in that it cannot be added/removed.
3. The single effector doesn't have assembly/disassembly and causal power costs.
4. Death of the loop isn't explicitly defined.
5. It cannot take active actions, and doesn't have any sort of learning capacity.
6. The flow of energy into the system doesn't depend on the actions of that system.
7. There is no loss of energy as the loop operates.

* Example of a Minimal Enerstatic Network (MEN): loop health is represented by the color of the loop. Towards red means that a loop is close to its fatal energy threshold, whether that's too much or too little energy. The opacity/transparency of each loop moves towards clear when the loop is near its negative energy threshold. Dead loops end up invisible, but their associated energy channels do not. Each loop has 3 instances of a structure called an "Energy Channel", whose causal power shifts energy between origin and target loops based on the origin's energy level. Thus, each loop is randomly connected to 3 other loops. The network can then be perturbed in various ways to see whether or not it is able to adapt.

Addressing the specific code implementation of all these details is outside the scope of this paper, but JavaScript code implementing these aspects (used to generate the previous image) is available here: [https://github.com/Tyfoods/minimal-enerstatic-network](https://github.com/Tyfoods/minimal-enerstatic-network)

At this point, a couple of the guidelines may need a bit more clarity with respect to their motivation. The first is the idea that an enerstatic loop has a set of structures which it can assemble and disassemble, and the second is the nature of the structures themselves. The organizational nature of an enerstatic loop is meant to capture the idea from assembly theory that information about how to construct particular structures is embodied by what we would call living things. This information embodiment is what allows assembly theory to characterize life as that which produces a high copy number of structures above a particular complexity threshold, and furthermore to place different living things at different points on the "life continuum". Structures are an abstraction of enerstatic loops that allows one to specify the function of such loops without referring to the various control system details that we have previously discussed.
We are allowed to do this because these structures represent enerstatic loops that are kept consistently at their homeostatic set points and thus exert the same causal power consistently. We make the assumption that the enerstatic loop that these structures are a part of considers structures outside of homeostasis to be "defunct", and thus disassembles them. What will be discussed next is what a learning algorithm could look like in an enerstatic network.

### Learning in Enerstatic Networks

In our minimal enerstatic network, each loop was extremely sensitive to any perturbation in its energy level. In other words, any fluctuation in internal energy resulted in "action" being exercised by the loop. In our case, this action was in the form of the causal power exhibited by our single effector. There is another sort of action that can take place, and this action is with respect to the reconfiguration of the enerstatic loop. If we allow our enerstatic loop to reconfigure itself, and give it time to test the consequences of its new architecture, then we can employ some sort of learning algorithm. A great way to give enerstatic loops time to test the consequences of their actions is by implementing two "windows", or ranges, inside of their allowed energy levels. These two windows are the "stasis window" and the "action window". The stasis window is a set of numbers around the energy set point of the enerstatic loop in which the loop's architecture goes unchanged. The size of the window defines how much time, or tolerance, a loop has before it takes the action of reconfiguring its architecture. The action window is a set of numbers around the positive and negative bounds of the stasis window which indicates that the loop should attempt to reconfigure its architecture.

* Enerstatic window parsed into two separate windows: the stasis window, in which no action is taken and the loop is considered stable, and the action window, in which the loop is considered unstable and must take action to return to a stable state.

Now that we know when we will reconfigure the enerstatic loop, we are tasked with defining what particular modification/action will be made, how many of those modifications will be made, and under what circumstances. A simple approach is to define a probability associated with each action, and to allow only one action to be made per time step. In other words, when the loop is pushed into the action window, each possible action has a probability associated with it, which determines whether or not that action is the one selected on that time step. However, it is not necessarily true that an action MUST be taken on any given time step when a loop is in its action window. This can be left up to the programmer and isn't important for our conceptual discussion here. The question the programmer must ask next is: "When and how should I update my beliefs, or probabilities?" In this example, we answer both of these questions by introducing yet another window, the "Change Action Probability" (CAP) window. Collectively, these windows are known as enerstatic windows. The CAP window allows for probabilities to be changed in a sort of last-ditch effort to attain stasis.
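A minimal sketch of how these three enerstatic windows might be implemented follows; the window names match the description above, while the half-widths and function signature are illustrative assumptions:

```javascript
// Classify which enerstatic window a loop currently sits in, given its
// energy. Half-widths are arbitrary illustrative values; a real loop might
// derive them from its configuration or genes.
function classifyWindow(energy, setPoint, stasisHalfWidth, actionHalfWidth) {
  const deviation = Math.abs(energy - setPoint);
  if (deviation <= stasisHalfWidth) return "stasis"; // stable: no action taken
  if (deviation <= actionHalfWidth) return "action"; // unstable: reconfigure
  return "cap"; // change action probabilities, a last-ditch effort for stasis
}

console.log(classifyWindow(102, 100, 5, 15)); // "stasis"
console.log(classifyWindow(112, 100, 5, 15)); // "action"
console.log(classifyWindow(125, 100, 5, 15)); // "cap"
```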
Since the goal is to change action probabilities in a way that results in a return to stasis, it would be reasonable to come up with a complicated algorithm that attempts to predict which actions should be modified; however, inspired by plasticity mechanisms in neurons, we can opt for simple, associative, Hebbian-like learning. During the action window, the loop can integrate the information, or series of steps, that led to the loop being pushed into the CAP window. In particular, the loop can be made to remember the actions that were taken, as well as what the energy level was when each action was taken. Instead of looking any deeper at the details of this data, our simple associative Hebbian-like learning algorithm strengthens or weakens all actions taken in the action window, based on whether those actions led to the stasis window or to the CAP window. In this way, probabilistic sequences of actions that helped are strengthened, while probabilistic sequences of actions that hurt are weakened. Furthermore, the particular energy level at which each action was taken could be recorded to allow for even more detailed belief updating. While exploring the space of possible enerstatic networks is an interesting, and potentially valuable, task in its own right, modeling physical phenomena is a powerful way to both gain insight into existing SOULS and build new, even more heavily nature-inspired, SOULS. As an example, we extend the concept of an enerstatic network to model the function of neurons.

### Modeling Neurons: Enerstatic Neural Networks

Before conceptually detailing how one might go about modeling neurons with enerstatic networks, it is important to understand the motivation behind the approach in the context of biology, because it may apply generally to all cells. The goal of modeling cells as enerstatic networks is to investigate whether such an idea can serve as a guiding principle for understanding the behavior of cellular collectives; in the particular case of neurons, for understanding brains. Neurons were the main inspiration in the development of the enerstatic network, due to the observation that neurons are, for the most part, lifelong cells that collectively take up 20% of the body's energy budget [14, 20]. The question that this provoked was: "Given that we see neurons that exist over the lifetime of an organism, what would a neuron's behavior have to be like in order for it to do so?" This paper suggests that the answer is "behavior which regulates the neuron's energy level within a certain homeostatic window". Indeed, a neuron with low enough levels of ATP will not even be able to carry out apoptosis, and will instead undergo necrosis [57]. Conversely, high levels of ATP in a neuron can facilitate cytotoxicity and ultimately cause cell death [16], or cause disruptions in protein synthesis, which can also lead to cell death [42]. This demonstrates the importance of ATP as the "physiological variable" that the "control system" that is the neuron regulates. If the primary objective of a neuron is to control its energy level, then one could say that "information processing" in a neuron is "built on top" of the mechanisms that the neuron uses to regulate its energy level. In that case, trying to understand the way neurons process information would be frustrated by the fact that neurons are energy management systems first, and information processors second.
There is indeed evidence demonstrating an informative relationship between energy efficiency and function [39, 45]. More generally, if all cells that exist must meet this existential imperative, then it could often make sense to model and understand them with enerstatic networks. In this perspective, cells don't know what "computations" they're involved in, or what their role is in the development and operation of the organism they are embedded in. Cells only try to exist for as long as possible given their environment. The purpose of the structures that cells assemble and maintain is primarily to allow those cells to maintain their energy within a homeostatic window. This frustration is indeed one and the same as that caused by the "behavioral illusion" discussed in "IOS Illusions: Motivating Modeling With Enerstatic Loops". We now turn to an example of using enerstatic networks to model neurons. Since computational experiments with enerstatic neural networks are currently in progress, and the purpose of this section is only to inspire ways of modeling biological SOULS with enerstatic networks, only the general conceptual details of the project will be discussed. Enerstatic neural networks are just minimal enerstatic networks, except that the structures that loops in an enerstatic neural network assemble are analogous to those in neurons, to the programmer's desired level of complexity. Using the requirements for an enerstatic network, we can specify our conceptual model as follows:

1. There is a flow of energy into the system that depends on the actions of that system.
   * Neurons receive blood flow, or energy, only when they are stimulated. This is inspired by neurovascular coupling (NVC).
2. This energy flow results in the establishment of an energy distribution, where energy is higher and lower in different parts of the system, in the form of other enerstatic loops, or "structures". The establishment of this energy distribution represents the system starting out "alive", i.e. in a non-equilibrium steady state.
   * Each neuron starts out with a suite of structures such as ion channels, receptors, axons, dendrites, genes, etc. Each structure occupies a different point in space and costs a variable amount of energy to create.
3. This energy flow results in the aforementioned internal energy distribution being modified in some way.
   * Energy is captured and released by mitochondria, which, despite their motility, are unequally distributed about the neuron. This results in local pockets of high and low energy.
4. There is an inevitable loss of energy as the system generates entropy.
   * Neurons have a "causal power" cost that is defined as a function of the structures they've assembled.
5. Structures can only be assembled at sufficiently energetic points.
   * Only local areas with high enough energy are capable of assembling structures.
6. There are rules defining the space of possible structures.
   * As mentioned during our minimal enerstatic loop exercise, the costs of structure assembly, disassembly, and causal power are up to the programmer, but can be determined through empirical measurements. Similarly, the conditions under which a structure exerts its causal power can be determined either way. Thus, for brevity, and because it is beside the point for this example, specific values/conditions have been left out.
7. **Structures**
   * **Macrostructures**
     1. Genes - It is important to note that a genome doesn't necessarily specify all possible structures in the space of our program.
        * **Structural Genes** - Produce a particular "physical" structure which modifies the loop's architecture.
        * **Regulatory Genes** - Specify how gene expression (causal power exertion) in one gene affects the expression of other genes.
     2. Soma
        * Geometrically represented as a sphere, with a diameter specified by the genes.
        * Membrane potential specified by the genes.
     3. Axons
        * Geometrically represented as 2D or 3D lines.
        * Membrane potential specified by the genes.
     4. Dendrites
        * Geometrically represented as 2D or 3D lines.
        * Membrane potential specified by the genes.
   * **Microstructures**
     1. Voltage Gated Ion Channels (sketched in code after this list) - Voltage sensitivity specified by the genes. Activates when the membrane potential of the structure it is embedded in reaches a certain threshold. When activated, either depolarizes (raises) or hyperpolarizes (lowers) the membrane potential of the cell; the degree to which this happens is specified by the genes.
     2. Ligand Gated Ion Channels - Activated when in contact with some particular neurotransmitter specified by the genes. When activated, either depolarizes (raises) or hyperpolarizes (lowers) the membrane potential of the cell.
     3. ATPase Pumps - Consume a certain amount of energy per time step, thus causing local energy demand.
     4. Neurotransmitters - Upon contact, activate particular ligand gated ion channels, as specified by the genes.
     5. Mitochondria
        * Move around according to energy gradients, preferentially going where energy is low.
        * Receive energy upon receiving blood flow and output energy to the local area.
     6. mRNA - Instructions sent out by the nucleus to ribosomes, detailing which structure to assemble.
     7. Ribosomes
        * Move according to energy gradients, preferentially going where energy is high.
        * Upon contact, and given sufficient local energy, translate mRNA into a structure.
8. When the system is unable to perform actions that allow for an appropriate flow of energy, the system dies, i.e. falls into equilibrium.
   * When neurons hit their fatal energy threshold, they undergo apoptosis and shut down shop. In particular, all their associated structures are removed from the program.
   * Otherwise, it dies.

This makes enerstatic networks a great candidate for open-ended evolution. As mentioned, one can be as biophysically detailed with model parameters as one would like. This may result in important insights into the inner workings of their actual biological counterparts. However, when discussing the creation of technology, advances in artificial intelligence have shown that one needn't set all parameters and structural details as accurately as possible to yield something that is both technically useful and worthy of scientific inquiry. Indeed, despite being dramatically simpler than real biological neural networks, artificial neural networks have been used to gain insights into their behavior. Modeling physical phenomena with enerstatic networks thus potentially provides a way to both understand those phenomena and use the resulting technology to do something useful. An interesting way to use enerstatic neural network models to generate more biologically relevant results might be to map abstraction layers onto each other. For example, a Hodgkin-Huxley Neuron (HHN) is less complicated than a biological neuron (BN), but empirical data can be used to fit an HHN to a BN. Similarly, one could then use empirical data generated by the HHN to map onto an even simpler neuron model.
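Before moving on, the first microstructure in the list above can serve to show how a structure's causal power might be encoded. This is a hedged sketch: the values and field names are illustrative placeholders which, in the conceptual model, would be specified by the genes:

```javascript
// Sketch of a voltage gated ion channel as a structure. All values and field
// names are illustrative placeholders.
const voltageGatedIonChannel = {
  assemblyCost: 4,
  disassemblyCost: 1,
  position: { x: 2, y: 7 },
  activationThreshold: -55, // voltage sensitivity (mV), specified by genes
  polarizationDelta: 15,    // depolarizes (+) or hyperpolarizes (-) the membrane
  causalPowerCost: 0.5,     // energy consumed per time step while exerting power
  exertCausalPower(host) {
    // Activates when the membrane potential of the structure it is embedded
    // in reaches the threshold, then shifts that potential.
    if (host.membranePotential >= this.activationThreshold) {
      host.membranePotential += this.polarizationDelta;
    }
  },
};

const soma = { membranePotential: -50 };
voltageGatedIonChannel.exertCausalPower(soma); // soma depolarizes to -35
```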
Such abstraction layer mapping may allow one's model to be much more relevant to experimental inquiry, and it is part of the ongoing computational experiments involving enerstatic neural networks. Further ongoing experiments are in the domain of applying open-ended evolutionary techniques to enerstatic networks in general. The advantage of using enerstatic networks as a tool for building technologies is their natural symbiosis with such techniques. As previously mentioned, open-ended evolution requires building blocks with a particular kind of dynamics, which allow for an increase in complexity through the construction of ever larger building blocks, as selected for by an intelligence which is itself constructed and amplified from within the system. This selection mechanism is built into enerstatic loops by default, making them great candidates for open-ended evolution.

## Evolving Enerstatic Networks

The key properties that make enerstatic loops, and therefore enerstatic networks, symbiotic with open-ended evolutionary techniques can be summed up in the following way: enerstatic loops can only persist their existence if they are able to modify their architecture in real time in such a way that they take in more energy than they lose. If the only way that enerstatic loops can receive energy is by successfully organizing themselves in particular ways based on the environment that they are embedded in, then there is no need to specify particular rewards for particular tasks, because such rewards are intrinsic to their nature. With currently available tools, this makes it difficult to engineer an enerstatic network to do exactly what you'd like it to do, but it does open the door to evolving networks that both achieve your desired behavior and exhibit behaviors you may never have come up with yourself. The goal of open-ended evolution is to generate the seemingly never-ending complexity that we observe in our universe. The challenge of developing a system with such behavior is centered around the environment-agent dilemma. The dilemma is that, as an agent embedded inside some environment becomes more complex, or intelligent, it essentially masters the environment, thereby maxing out its own complexity and intelligence. Thus, arguments have been made for creating environments that complexify along with the agent embedded inside of them [55, 50]. While seemingly straightforward, the problem of how to continuously complexify the environment along with the agent, without periodic injections of outside intelligence (i.e. programmer intervention), has been elusive. A large part of the problem is how the encoding of the environment operates. Although there exist universal encodings for certain domains, like mazes or 2D landscapes, the problem of expanding beyond particular environmental domains persists. In their paper, the authors of Evolution through Large Models (ELM) propose that "computer programs are a general and powerful encoding for continually expanding the richness of an existing environment", and take a critical step towards demonstrating how the use of raw computer programs could ensure open-endedness indefinitely [27]. They develop a particular methodology that results in a program that is able to conditionally output computer programs, which they call an "invention pipeline". In short, the methodology consists of three steps:
1. Use any arbitrary large language model (LLM) to provide intelligent mutations to an evolutionary algorithm (in this case, the quality diversity (QD) algorithm MAP-Elites) to generate training data in a domain where little, or none, exists.
2. Pre-train the LLM with the aforementioned training data, such that it can now output similar data.
3. Use reinforcement learning, and some additional tweaking, to fine-tune the LLM such that it is able to produce data based on the condition that it is exposed to.

This methodology captures a few key elements that we believe are key to indefinite open-endedness, which are as follows:

1. Increasingly intelligent (correlated) mutations over indirect encodings, which are nonetheless just code.
2. Generation of a diverse amount of novel structures without the use of a trained neural network.
3. The creation of an agent that is able to create structures conditionally based on its "needs".

Although different in form, it is these elements that evolutionary enerstatic networks aim to achieve.

### Energetically closed systems

Fundamentally, enerstatic networks are meant to capture the idea that any "thing" that exists is an "energetically open system". An energetically open system is a system that takes as input some finite amount of energy, and outputs only a fraction of that amount of energy. Specifically, our energetically open systems consisted of an extension of these three rules:

1. There must be a source of energy flowing into the system.
2. There must be mechanisms that essentially trap energy inside the system.
3. There must be mechanisms responsible for energy flowing out of the system.

In this setting, we had a rather static environment that was not capable of becoming much more complex in response to the complexity of our enerstatic network. However, we have mentioned that, as a consequence of the information symmetry of the free energy principle, the environment must also be treated as an agent capable of, in a sense, increasing its complexity [13]. One way to realize this information symmetry is to treat the environment itself as an enerstatic loop. In this case, we now have an "energetically closed system". This paper suggests that using such an approach, along with methods like ELM, is critical to the development of truly open-ended processes. Our energetically closed system follows these rules:

* Some finite amount of energy exists inside the system
* Energy cannot be created nor destroyed
* Energetically open system rules previously specified apply to all structures within the system (we'll later see a small exception, but it's mostly correct)
* The environment is modeled as an enerstatic loop that minimizes free energy (I.E. the energy not currently trapped within a structure)
* The environmental enerstatic loop (EEL) is responsible for building structures within it

In order to get a better grip on the approach being developed here, it is best to run a sort of simulation, or thought experiment, over what we could feasibly do within the bounds of our required specifications, interspersed with motivations for why we might choose to do things in that particular way. Of course, there are many particular ways of doing this, many of which can be co-opted for application to energetically open systems. Following the theme of this paper, what follows is a thought experiment whose iterations are currently in development - consider it "food for thought" rather than a definitive "how to" guide.
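Before walking through the thought experiment, here is a minimal sketch of the bookkeeping these rules imply, assuming a simple ledger model in Python; the class, method names, and numbers are illustrative assumptions, not part of the framework. Conservation holds by construction, because every transfer debits one ledger entry and credits another:

```python
class ClosedSystem:
    """Toy energetically closed system: a free-energy pool plus trapped energy."""

    def __init__(self, total_energy: float):
        self.free_energy = total_energy   # energy not currently trapped in a structure
        self.structures = {}              # structure name -> trapped energy

    def assemble(self, name: str, cost: float) -> bool:
        """Trap free energy inside a new structure; fails if not enough is free."""
        if cost > self.free_energy:
            return False
        self.free_energy -= cost
        self.structures[name] = cost
        return True

    def disassemble(self, name: str) -> None:
        """Release a structure's trapped energy back into the free pool."""
        self.free_energy += self.structures.pop(name)

    def total(self) -> float:
        """Invariant: always equals the initial endowment (nothing created/destroyed)."""
        return self.free_energy + sum(self.structures.values())

eel = ClosedSystem(total_energy=100.0)
eel.assemble("S0", cost=12.5)
assert eel.total() == 100.0   # conservation holds after any sequence of operations
```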
#### Causal Niches

At first, we only have an environmental enerstatic loop (EEL), whose "goal" is to minimize free energy. Since all energy is free at first, the first thing the loop "wants" to do is trap energy within structures. The first structure that it generates, S₀, must necessarily have causal power such that the environment recreates it. To actually implement this, we will require that our EEL has a property which details how much energy each structure inside it should be receiving each timestep. The default amount of energy is 0. Thus, in order to exist, S₀ must sense this particular property and set it equal to its requisite free energy demand (FED), or higher. There are three scenarios here for any given structure:

1. If the structure receives less than its FED, it ceases to exist.
2. If the structure receives exactly its FED, it persists its existence.
3. If the structure receives more than its FED, and this energy is not explicitly used, then this energy remains trapped unless explicitly moved by some other "thing".
   * Importantly, the structure must receive less than its free energy limit (FEL), which places a bound on the amount of energy a structure can contain.

Recall that this dynamic is in alignment with our aforementioned reasoning with respect to structures inside of an enerstatic loop. Structures inside a loop are abstracted enerstatic loops that are kept consistently at their homeostatic set point and thus exert the same causal power consistently. We adopt the assumption that the enerstatic loop that these structures are a part of considers structures outside of homeostasis to be "defunct" and thus disassembles them.

Importantly, S₀ is **only** capable of interacting with the EEL and no other structure (Sₙ). This is due to the fact that it is not possible to explicitly specify an interaction of S₀ with the unknown properties of a yet non-existent structure. However, S₀ can interact with future structures indirectly through its effect on the environment. This restriction of only being able to interact with structures that have been previously generated remains true for every subsequent structure, which gives us a graph illustrating the causal niche of each generated structure. A causal niche is defined as the set of properties that a structure can sense and affect.

* Causal Niche Graph demonstrating that structures can only affect the environment and previous structures that have been created. Illustrated here are the arrows for structure 8 and the environment, which can always affect every structure.

The reason that structures are defined like this is that the alternative is intractable, or at minimum, much closer to the practice of inducing the laws of physics. The reasoning is as follows: In order to generate a structure that is capable of explicitly affecting a future structure, one needs to have a medium (i.e. a universal method of communication) that is able to underwrite those particular specifications. This medium is what every structure "lives in" and operates through. A good analogy for better understanding is the case of a physics engine. In a physics engine, we are typically only concerned with Newtonian physics, in other words, with the question of "What happens when object A bumps into object B?". Since the underlying physics are known, one doesn't need to worry about which object was created first in order to have the objects interact properly.
The laws of physics that detail how forces act on objects in order to move them are known. In our case, the laws of physics are not known, and thus the interaction between two "objects" needs to be explicitly defined via code which defines the object's causal power. The requirement of explicitly specified interactions is important for explainability. There have indeed been efforts to explore possible open-ended evolutionary systems using basic universal rules which constitute the system's "laws of physics". Three notable systems are n-dimensional discrete cellular automata, continuous cellular automata, and hypergraphs. The issue here is not only that we cannot guarantee open-ended evolution, but that even if we managed to achieve it, it would be difficult to understand what is happening in a way that is human-interpretable. Therefore, using explicitly specified interactions via code, despite such a restriction, will be important for understanding both what is going on, and potentially, for how to inch closer to open-ended evolution in a principled manner.

* 1D cellular automaton called "Rule 110" demonstrating the amount of complexity one can achieve from running a simple rule over a simple initial condition. Despite starting with a single black cell and a simple rule for determining how each cell changes color (it does so only based on its immediate neighbors), one ends up with unpredictable, and potentially never-ending, complexity. In other words, simple rules can give rise to arbitrarily complex behavior.

Indeed, Rule 110 was found to be capable of "universal computation"; in other words, like a typical computer, you can use Rule 110 to run any possible computer program. The problem here is the difficulty in encoding and interpreting the program. Interestingly, simply exploring the space of possible simple programs has revealed patterns that are similar, at a macro level, to the kinds that have been found in nature. As a related note, Stephen Wolfram heads a research program focused on discovering a fundamental theory of physics by exploring the space of possible simple hypergraph programs, which are essentially even simpler versions of cellular automata. Through this exploration, Wolfram Research has been able to mathematically derive essential features of special relativity from hypergraphs, such as Lorentz transformations and the speed of light as a universal speed limit [56].

* On the left, the rules of the program and the first 16 steps; on the right, the first 250 steps. We start from a single black cell on the first row and apply the rule to each cell in the row to generate the next row.
* Source: Weisstein, Eric W. "Rule 110." From MathWorld-A Wolfram Web Resource. https://mathworld.wolfram.com/Rule110.html

#### Structure specifications

As illustrated in our causal niche graph, we have a total of 9 structures generated, but they're completely abstract. All we know is that the minimum requirement for each structure is that there must be a cost of assembling/disassembling, causal power (and its associated cost), an effective radius, and a position that it occupies in space. There are other properties that one can apply to these structures, which could either be ubiquitous to all structures, or defined uniquely only to some structures. In the case of ubiquitous properties, the thought is that these should be directly computable in some way from the aforementioned properties so that they remain related to the specifications of that structure.
For example, the mass of a structure could be calculated based on the energetic cost of the structure using E = mc², rearranged as m = E/c², where E is energy, m is mass, and c is the constant speed of light, or cosmic speed limit. Furthermore, position could then be modified as a function of mass, where the position of an object requires a certain amount of energy to be perturbed. This could be calculated using W = (1/2) * m * v², where W is the work done on the object, m is the mass of the object, and v is its final velocity. The amount of energy required to perturb a property's value by "1" unit is here called the "perturbation energy" and is important for energetically grounding any property, whether it is unique or ubiquitous.

An example of a unique property might be "membrane voltage". The property should minimally have a default value and a perturbation energy; however, a third likely important characteristic is a "location" within the causal power of the structure. This will allow the causal power of a structure to be modified through perturbations of structure properties. One could imagine utilizing the current membrane voltage to decide whether to send a message, or not. For this reason, we'll refer to such properties as "causal properties". Similar to how empirically discovered equations were used to decide how to modify the position of any structure, equations which describe how difficult it is to modify membrane voltage could be implemented. Thus, such properties can assist in generating an energetically closed system that is not just interpretable, but also bears semblance to our physical reality, thus potentially serving as a powerful modeling tool. However, it must be noted that this is not without the cost of dramatically increasing the complexity of implementation. As for actually defining these properties, this is something we'd like to leave to an automated task.

#### Automatic structure generation

The causal niche graph allows us to generate structures continuously in a principled way. We can fill each causal niche with various kinds of structures. Since each causal niche restricts only the sensed input properties and affected output properties, the actual causal dynamic that implements the sensed information and affects the output properties is up to us. To offload this work, we take a hint from ELM's usage of Large Language Models (LLMs). In effect, we've created a function that has certain inputs (sensed set of properties), potential internal variables (causal properties), and outputs (affected set of properties), and we'd like to use a Large Language Model to fill in the details about how inputs and internal variables produce an output. Utilizing LLMs, the environment can create a set of structures with various causal dynamics in every niche - the limit on the number of structures within a niche is up to the programmer. As for the effective radius, the LLM can be used, or the programmer can opt to assign effective radii based on some particular algorithm or criteria. In addition to arguing that the goal of the environment is to minimize its free energy, this paper also suggests that one good way to do this is for the environment to actually test different sets of structures concurrently to see which sets of structures combine to make complex composite structures capable of minimizing VFE. Those that fail to minimize VFE well enough get outcompeted and thus end up with a lower likelihood of being created in the future.
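Here is a minimal sketch of how a niche and an LLM-filled structure could be represented, under stated assumptions: `ask_llm` is a hypothetical stand-in (a real implementation would prompt a language model for a function body and safely compile it; here it returns a canned causal dynamic), and all names and defaults are illustrative:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class CausalNiche:
    sensed: list[str]       # properties the structure may read
    affected: list[str]     # properties the structure may write

@dataclass
class Structure:
    name: str
    niche: CausalNiche
    causal_power: Callable[[dict], dict]        # sensed values -> property updates
    effective_radius: float = 1.0
    state: dict = field(default_factory=dict)   # causal properties, e.g. membrane voltage

def ask_llm(niche: CausalNiche) -> Callable[[dict], dict]:
    """Hypothetical LLM call; returns a trivial dynamic for illustration."""
    def dynamic(sensed: dict) -> dict:
        # e.g. push every affected property toward the mean of the sensed inputs
        mean = sum(sensed.values()) / max(len(sensed), 1)
        return {p: mean for p in niche.affected}
    return dynamic

niche = CausalNiche(sensed=["free_energy"], affected=["energy_request"])
s0 = Structure("S0", niche, causal_power=ask_llm(niche))
print(s0.causal_power({"free_energy": 100.0}))   # {'energy_request': 100.0}
```

The niche object enforces the only hard constraint (what may be sensed and affected); everything inside the generated function body is left open, which is exactly the degree of freedom the LLM is meant to fill.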
Before discussing how structures can mix and match in this way, it is important to discuss a mechanism for automatically assigning energy costs to such structures.

#### Automatic energy cost assignment

Previously, we assigned energy costs to structures either using intuition or empirical data from experimentation, in the service of modeling. The caveat is that one needs to be careful not to violate the constraint that energy cannot be created or destroyed. With this in mind, it is possible to do something similar to what we did before, but we can also opt for a different method that allows for automation. The benefit of assigning energy costs automatically is not just to avoid the work of doing so, but also to allow for energetic consistency throughout various created structures. In other words, similar causal powers have similar energetic costs. This provides us with a sort of "underlying physics" that our enerstatic program must obey. It should be noted that this same technique can be applied to energetically open systems of enerstatic networks as well, but may be less straightforward to apply in the context of modeling.

In order to automatically assign energy costs to structures, we first make the claim that the costs of structure assembly and disassembly are equal and opposite. Furthermore, the cost of structure assembly is equal to the cost of causal power. Since causal power is just a function whose body is code that has been generated by an LLM, we can take that code and convert it into an "Abstract Syntax Tree" (AST). An AST is a simplified representation of a piece of code that is typically constructed by a compiler or interpreter as a way of efficiently analyzing and processing source code. Technical details aside, the AST allows us to calculate the number of fundamental operations, or primitives, that are inside of the code defining our causal dynamic. With knowledge of all the possible operations in our programming language of choice, we can assign energy costs to each of these operations and then calculate the total amount of energy that our causal dynamic costs. Varying these costs within, and between, different programming languages opens up an interesting domain of study.

As for causal properties, the cost of these must be calculated using their perturbation energy function. Such a function receives the current value of the property and returns the amount of energy required to modify that value. One can then calculate the cost of the property by starting at 0 and iteratively summing the costs as it is raised to the default value. Thus, causal properties represent stored energy.

#### Structure recombination - Information, and intelligence, are physical

As mentioned in the "Enerstatic Loops, Networks and Nests" section, the concept of an enerstatic loop is meant to capture the idea that the information required to reliably produce a structure is physically embodied by such a loop. Additionally, we suggested that a structure by itself is indeed an enerstatic loop, whose possibly varying architectures have been abstracted away due to that structure being deemed "stable", or useful to the loop in which it is embedded, only when it is in one particular form of stasis. This paper argues that the dynamics of enerstatic loops are to be taken as a ubiquitous consequence of laws of physics which dictate what relationships are stable between constituent structures at every scale.
Thus, enerstatic loops are to be understood as inter-structure relationships whose stability depends upon (1) the interaction between the structures, and (2) the particular local environment that the loop is in. Here, environment refers to both the EEL and every other structure that is "sufficiently outside" of the enerstatic loop in question, in the sense that the structures inside the enerstatic loop only have sparse, weak, or otherwise unstable relationships with structures outside the loop. The way that these relationships form is simple. When one structure is within another structure's effective radius, a relationship is formed. This relationship constitutes the embodiment of information about the structures present in the relationship. The following image illustrates what such sparse relationships could look like.

* Illustration of structures that have been created inside the Environmental Enerstatic Loop (EEL) and have established relationships with each other. Red arrows show the impact of structure causal power on the environment, while black arrows indicate the impact of structure causal power on other structures. Dotted blue lines indicate the enerstatic loops, which are to be understood as relationships between structures. Structures are in a relationship only when they affect each other.
* Loop A structures have no relationship with any other structure. Loop B and Loop C have sparse relationships with structure S₈, as they are affected by S₈ but don't affect it.

As before, each enerstatic loop has an "Enerstatic Window" consisting of a "Stasis Window", "Action Window" and "Change Action Probability Window"; however, instead of limiting loops to only be capable of creating structures that they have already come into contact with, we allow loops to create any structure that has already been "invented" by the EEL. This further breaks down the distinction between loop and environment by treating each loop as a sort of mini EEL in itself. Thus, what makes each enerstatic loop sufficiently different from other loops is their differing probability distribution with respect to what actions each loop will take given its energy level. In this way, relationships break down when structures move too far away from each other, when the action taken disassembles a structure, or when the energy flowing through the loop is either insufficient to sustain its dynamics, or higher than the loop is capable of dissipating.

Defining the enerstatic window is again something that we'd like to do automatically. For this, we take another look at our abstract syntax tree (AST). Our abstract syntax tree allowed us to calculate the free energy demand (FED) of any structure, but it also allows us to calculate the free energy limit (FEL) of any structure. These define the amount of energy a structure requires to maintain its existence, and the maximum amount of energy a structure is capable of dissipating, respectively. Knowing these values for every structure in an enerstatic window, as well as their connectivity, we can create a function that allows for the calculation of the lower and upper energy bounds of our enerstatic window. Using such a calculation, our lower enerstatic bound is equivalent to the smallest amount of energy required to sustain the system, while the upper enerstatic bound is equivalent to the largest amount of energy that can be dissipated through the network. Outside of these bounds, the loop breaks down through the loss of structure.
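Here is a minimal sketch of this AST-based accounting in Python, using the standard `ast` module; the per-operation costs and the fixed FEL-to-FED headroom factor are illustrative assumptions (the framework leaves both open), and connectivity between structures is ignored for simplicity:

```python
import ast

# Hypothetical per-operation energy costs; a real system would cover the
# language's full set of primitives. All numbers here are arbitrary.
OP_COSTS = {ast.BinOp: 2.0, ast.Call: 5.0, ast.Compare: 1.5, ast.Assign: 1.0}
DEFAULT_COST = 0.5

def free_energy_demand(source: str) -> float:
    """FED: sum primitive-operation costs over the AST of a causal-power function."""
    return sum(OP_COSTS.get(type(node), DEFAULT_COST)
               for node in ast.walk(ast.parse(source)))

def free_energy_limit(source: str, headroom: float = 3.0) -> float:
    """FEL: assumed here to be a fixed multiple of the FED (a modeling choice)."""
    return headroom * free_energy_demand(source)

def enerstatic_window(sources: list[str]) -> tuple[float, float]:
    """Lower/upper energy bounds of a loop built from the given structures."""
    return (sum(free_energy_demand(s) for s in sources),
            sum(free_energy_limit(s) for s in sources))

causal_power = (
    "def dynamic(sensed):\n"
    "    return {'energy_request': sensed['free_energy'] * 0.1}\n"
)
print(enerstatic_window([causal_power]))
```

Because the same cost table is applied to every generated function, similar causal powers automatically receive similar costs, which is the "energetic consistency" the scheme is after.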
#### Summarizing

We started off with an environmental enerstatic loop (EEL), which consisted of a pool of "free energy", a large language model (LLM) used to produce the causal structures within "causal niches", an energetic mapping used to assign energetic costs (FEDs) and limits (FELs) to each structure, and finally a description of the way structures make, break and adjust relationships a la enerstatic loops. Each of these elements is flexible when it comes to the specifics of how it is implemented. The amount of free energy the EEL starts with, the way causal structures are generated, the way energy costs are mapped onto logical code, and of course, the way structures recombine and adjust their relationships, can all be varied. Specific implementations do not define the Ensoul framework; they only reflect particular ways of executing it. That being said, the core idea here is to develop energetically closed systems that generate structures which are increasingly interesting, intelligent, and "alive" as generally defined in the "Open-Ended Evolution and Assembly Theory" section.

Quantum computing demonstrates that it may not be practical, or even desirable, to physically create perfectly energetically closed systems; however, it might be that approximations to such systems would serve as powerful ways to both more easily generate intelligent technology in the physical world, and serve as models for understanding the evolution, and dynamics, of thermodynamically open systems more deeply. Before diving into creating the physical manifestations of such devices, it is important to first demonstrate their capabilities in the computer, and use such model systems to inform their creation.

What we've effectively done in this paper is to make code physical by creating an underlying "energetic physics" on top of which our code is built. In other words, "structures" are code that has been manifested into an energetically constrained virtual realm. This is in alignment with the idea that everything in the physical world can be said to be "computing", and is therefore describable via computations possible by a universal Turing machine. This has been called pancomputationalism [4]. While it is not clear that everything in the physical world is computable, it is also not clear that we have any other choice but to use mathematical and computational tools to understand it.

So far, the discussion has been about what we can do inside of a computer in order to create SOULS. These simulations are important for the development of software-based technology in its own right; however, simulations of energetically closed systems using conventional von Neumann architecture IOS severely underutilize the true computational capacity of the materials being used. This underutilized computational capacity is what we referred to in "IOS Illusions: Motivating Modeling With Enerstatic Loops" as "highly variable behavior", or "noise". This property of materials, which engineers have painstakingly avoided, can actually function as a useful computational resource if one takes the right approach. Systems that embrace noise as a computational resource have been called "Thermodynamic Computers" (TCs). These computers are SOULS in physical reality and truly bring us closer to the "Ultra Low power" part of the SOULS definition.
## Thermodynamic Computing & Enerstatic Networks

Thermodynamic computers have been envisioned as an "engineered, multi-scale, complex system that, when exposed to external potentials (inputs), spontaneously transitions between states in the short term while refining its organization in the long term, both as part of an inherent, adaptive process driven by thermodynamics" [33]. This is analogous to what we've described as the behavior of enerstatic networks, but in physical form. This paper argues that the collection and study of manifestations of this general concept will prove to be important for understanding the problem of embedded agency. In this section, we will briefly touch on thermodynamic computers, the relationship between thermodynamic computing and enerstatic networks, and finally, embedded agency.

Thermodynamics is a branch of physics that deals with the relationships between the various different forms of energy, such as heat and work. It was originally developed in order to optimize the amount of energy extracted from steam engines [9]. This application of thermodynamics focuses on states of "equilibrium", in which the state of a system is homogeneous and macroscopically unchanging, such as a pool of still water. Indeed, one of the conclusions drawn from thermodynamics is that the entire universe is evolving towards such an equilibrium state in the so-called "heat death of the universe". In spite of this conclusion, many physicists have come to believe that thermodynamics is also chiefly responsible for driving the organization of life as we know it [47, 46]. This has spurred the development of "non-equilibrium thermodynamics", which is thought to be the type of thermodynamics that life, and many other complex systems that are far from equilibrium, seem to employ. In particular, work on fluctuation-dissipation theorems and information thermodynamics has been continuing to generate insights into understanding life and the development of thermodynamic computers.

One of the issues in thermodynamic computing is the lack of a unifying general model system that can properly express thermodynamic computing. While there are existing model systems such as Hopfield Networks, Boltzmann Machines, thermodynamic neural computation models, and Thermodynamic Neural Networks, these models have been regarded as only "a small set of interesting and suggestive devices and architectures" [9]. Although there remains much work to be done with respect to this area of research, this paper suggests that enerstatic networks may be a general enough model to serve as such a unifying general model system. This kind of system is critical for the evaluation of core theoretical concepts, like fluctuation-dissipation theorems and information thermodynamics, the demonstration of problems that are best solved using such approaches, and as a helpful reference for the development of thermodynamic computers. The reasoning behind the speculation that enerstatic networks might serve as such a model is based on the fact that their components - enerstatic loops - are ultimately extremely simplified abstractions of thermodynamically open systems whose level of detail can be straightforwardly scaled up through the use of empirically measured values and integration of empirically derived mathematical equations.
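To ground the mention of these model systems, here is a minimal sketch of one of them: a Hopfield-style network relaxing under Glauber (finite-temperature) dynamics, where thermal noise itself does computational work by letting the state explore and settle into low-energy configurations. The couplings, network size, and temperature are arbitrary illustrative choices, not values from any cited work:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
J = rng.normal(size=(n, n))
J = (J + J.T) / 2            # symmetric couplings so an energy function exists
np.fill_diagonal(J, 0)
s = rng.choice([-1, 1], size=n)
T = 0.5                      # temperature: the knob controlling the noise

def energy(state):
    return -0.5 * state @ J @ state

for _ in range(5000):
    i = rng.integers(n)
    dE = 2 * s[i] * (J[i] @ s)                   # energy change if spin i flips
    if rng.random() < 1 / (1 + np.exp(dE / T)):  # Glauber acceptance probability
        s[i] = -s[i]

print("final energy:", energy(s))
```

At T = 0 this reduces to deterministic descent into the nearest local minimum; at finite T the fluctuations that conventional engineering suppresses are exactly what lets the system escape poor minima, which is the sense in which noise becomes a resource.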
Given the existence of SOULS, whether they are virtual, physical, engineered, or evolved, and the need to control, or take advantage of, said SOULS, one needs a proper framework, or way of thinking, with which to do so. This brings us back to TAME - A Technological Approach to Mind Everywhere - which provides many such ideas and beliefs in order to accomplish just that.

## Taming SOULS

While it is trivially true that the goal of any "thing" is to exist, it is apparent that "things" also seem to have "higher level" goals that very much seem to have little to nothing to do with existence in particular. For example, a person who chooses to summit Mt. Everest or to run in a 135-mile race through Death Valley in the scorching hot sun doesn't quite seem to be taking the optimal path towards a continued existence.

### Deceptive higher level goals

This paper argues that the pursuit of existence can be deceptive in a way that is perhaps analogous to the way the path to a particular goal can be deceptive, as we discussed with the training of the biped. That is to say, the path to existence appears to be divergent rather than convergent. The invariant multi-scale TAME approach suggests an interesting way of describing this seeming inconsistency. First of all, any "thing" is made up of parts whose goal is to exist. For many things, particularly those we consider "more alive", those parts are not densely connected to each other. In other words, each part is not directly causally connected to every other part. Considering that the coherence of a "self" is determined by the degree to which parts share stress, certain parts can be partitioned as belonging to different "things", or in other words, to some degree, constituting separate "selves". A relatable example is the fact that biological organisms are made up of cells (parts), which cluster together in such a way as to be called different tissues (things), which then differentially cluster to be called organs, and so on.

As previously discussed, when resources are limited, achieving the goal of existence necessarily induces resource limitations for other "things", since there is less free energy to utilize. So, the thing about not directly sharing stress with other parts is that it now becomes possible for parts of any "thing" to compete with each other over resources despite ultimately belonging to the same "thing". However, the possibility of competition does not imply competition must be happening. In other words, despite having the ability to strictly compete and perform actions that would be detrimental to the collective as a whole, persistent collectives of cells achieve consensus through a combination of cooperation and competition, which we will refer to here as "coopetition". The consensus that arises from coopetitive dynamics is rather nicely illustrated by financial markets, and companies, in which individuals, and populations of individuals, acting only according to their own local interests, are able to achieve stable market prices and can ensure that all involved balance their books. Coopetition operates at all scales of collective intelligence, from organelles, to cells, to tissues, to organs and so on. According to Dr. Stephen Grossberg's work on such coopetitive systems, there is a trade-off between how free parts are to make certain decisions, and the size of the global decision that can be made.
This restriction can result in the existence of multiple choices in a network, in which a novel choice can be preferred, and in oscillations, or "jumps", between different decisions, or parts, that are currently "winning" the competition [9, 18, 17, 19]. Both of these phenomena contribute to interesting, complex adaptive behavior as the collective in question takes actions which, in favorable cases, allow the collective to survive. Dr. Grossberg proved a theorem about this dynamic, called the "Adaptation Level System theorem", that allows for the design of such systems. As an incredibly simplistic example, one might understand this in the following way: For our climber, reaching the summit of Mt. Everest is bad for some populations of cells in the collective, but great for the survival of other populations. The populations of cells that benefit from this behavior are "winning" the competition, but this is balanced by yet other populations "winning" the competition once the mountain has been descended; thus all parties involved balance their books.

Now we have an idea behind what might in many cases be the cause of the seemingly existentially-irrelevant behavior of "things". Essentially, it is dependent on the coopetition between parts within a "thing", which can be called "parts" as a function of the degree to which their constituent pieces share stress. The task is then to figure out how one can identify the goals, or purpose, of the decisions made by the collective. Such an endeavor enables us to better characterize, engineer and work with such collectives.

### Taming SOULS and Cognitive Capacity Tests (CCTs)

Perceptual Control Theory (PCT) is a version of control theory that is dedicated to understanding naturally occurring control systems, or SOULS. PCT asserts that: "A natural control system can be organized only around the effects that its actions (or independent events) have on its inputs (broadly defined), for its inputs contain all consequences of its actions that can conceivably matter to the control system [43]". Powers describes this concept well in just a few words, saying that "behavior is the control of perception (inputs)". In other words, actions are taken in service of ensuring that the "thing" perceives what it "wants" to perceive. In PCT, this desired perception is called the "reference signal", and critically, it is thought of as being internally generated, as opposed to externally generated as it is in most artificial control systems. As an example of this externality, the value of the temperature that a thermostat is trying to regulate depends on the reference signal given to it by its user and typically is not modifiable by the thermostat itself. For a natural control system, the reference signal is always modifiable by the "thing" itself. Consider again the non-analytical solution to the outfielder problem known as "Optical Acceleration Cancellation". The natural control system, in this case the outfielder, presumably comes to choose the variable that they are controlling. That same control system can work towards controlling a number of other variables, all towards solutions of different tasks, making its position on the cognitive continuum much farther to the right than a simple thermostat's.

#### Test for Controlled Variable (TCV)

The protocol for inferring the controlled variable of any control system is the same whether it is natural or artificial. Under PCT, this protocol is called the "Test for Controlled Variable (TCV)".
TCV allows one to identify both the type of system that one is looking at (IOS vs SOULS), and, if present, the variable(s) that the SOULS is controlling. The TCV process is an iterative empirical process through which a hypothesis about what variable is being controlled is tested. The hypothesis takes the following form: "If variable X is the variable under control, then variable X should experience less than usual effect upon being disturbed". Temperature under the control of a thermostat provides a straightforward example. Provided your AC was on, opening a window on a chilly day would slowly cause a decrease in temperature, which would soon be countered by the thermostat's activation of the heating mechanism. This is not what would be expected if the thermostat was not controlling temperature as a variable, and constitutes the temperature experiencing a "less than usual effect upon being disturbed". A non-trivial application of TCV was in the identification of "control of optical velocity (COV)" over "optical acceleration cancellation (OAC)" as the better explanation for how organisms solve the outfielder problem [33]. These two variables, velocity and acceleration, are closely related; as such, it is difficult to disturb one without disturbing the other. In order to help sway interpretations in a single direction, two model control systems were made that each implement control of only one of these variables. The OAC model accounted for 75% of the variance in the path of an organism intercepting an object, while the COV model accounted for 93% of the variance. This suggests that optical velocity, rather than optical acceleration, is the variable actually being controlled during object interception.

TCV can also establish that the system we are studying is an IOS, by confirming that no variable is under the control of that system's behavior. In the case of something like our "odd thermostat", which is a thermostat that regulates temperature with its output only 50% of the time, the application of statistical methods along with TCV can allow us to gauge where on the IOS vs SOULS continuum our system lies. While TCV allows us to figure out what variable the system is controlling during some particular task, it does not tell us the entire set of variables a control system is capable of controlling, nor does it tell us how we might modify the value of the reference signal of the system's controlled variable, or how we might induce the system to switch between different controlled variables. Such inquiry might lead us to come up with a "Test for Variable Controllability (TVC)" and a "Test for Control Switches (TCS)", respectively. We'll now briefly discuss how these could work.

#### Test for Variable Controllability (TVC)

The Test for Variable Controllability (TVC) attempts to answer the question: "Which of the variables that I am capable of measuring is this system capable of sensing?". This framing makes it clear that the observer is chiefly responsible for properly inferring the system's cognitive capacity. Like TCV, TVC starts with a hypothesis about some set of measurable variables, lending a credence to each one. The testing amounts to sensor identification and subsequent inference about what sensory partitions the control system might make. In other words, identifying the location and type of sensor, as well as what these sensors could amount to sensing as a collective (i.e. light detection (small scale) vs perception of an entire object (large scale)).
A simple example is inferring that, based on having eyes, a creature might be able to detect individual objects. Although non-trivial, the task is important to mapping out the spatial dimension of a system's "cognitive light cone", which is a system's spatiotemporal computational capacity towards attaining a particular goal. The temporal dimension is intimately related to the spatial dimension. In TVC, we ultimately care about what type of measurable variable the system expends energy towards controlling, as well as about how far disturbances can be in space and time and still evoke a change in the system's actions. In short, the TVC should work towards mapping a system's cognitive light cone by starting from the theoretical maximum and conducting experimental tests to cut down on possibilities.

#### Test for Control Switch (TCS)

The Test for Control Switch (TCS) is a test that is meant to identify the method that one should use to modify the value, and/or identity, of the variable that the system is controlling. TCS is related to the "axis of persuadability" concept presented by TAME. Recall that the position of a system on the axis of persuadability represents the level of complexity at which intervention is required to get a system to do what you'd like it to do. The farther to the right the system is on the axis of persuadability, the less energy one has to expend. In other words, a good control switch is one that ultimately allows one to expend as little energy as possible. As such, TCS should test for switches at different levels of cognitive capacity, starting from the highest potential cognition whose cognitive light cone was inferred via the TVC. In short, TCS would ask:

1. Can I communicate with the system with some language through some medium?
2. Can I identify rewards for this system which I could use for training purposes?
3. Are there setpoints which I could modify using some physical means?
4. Would I really have to reduce it into pieces and refashion it together from scratch to get it to do what I want?

The emphasis here is to avoid underestimating the intelligence of a system. One excellent example demonstrating the power of a control switch is found in the demonstration of morphological control through modification of a bioelectrically embodied setpoint [31]. The lab of Dr. Michael Levin, and their collaborators, established the bioelectrical state of cells as instructive to their collective target morphology through a few key examples. First, it helps to know that cells are typically more negatively charged inside than outside. The charge, or voltage, inside a cell can be measured and visualized. This allows us to see the impact of certain drugs, or other interventions, on the collective bioelectrical state of some cells. The first of Levin's experiments that we will mention is in flatworms. Flatworms are complex organisms that exhibit bilateral symmetry and have a nervous system. They also have spectacular regenerative abilities. Flatworms can be cut in two, which causes each half to develop into a perfectly good flatworm. A piece of flatworm as small as 1/279th the size of the animal has been demonstrated to regrow into a fully viable adult flatworm [38]. By visualizing and manipulating the bioelectrical state of the cells of the "should be" tail, one can actually induce a severed planarian to grow two heads rather than one.
Two-tailed flatworms have been made in this way, as have flatworms with heads from other flatworm species, and even flatworms with heads that are presumably non-existent in flatworm nature as we've known it thus far. Importantly, once these bioelectrical manipulations are made, the flatworm remembers the state upon being severed once again. In other words, once severed, a two-headed flatworm begets two, two-headed flatworms. This is despite the genome not being modified in any way. This represents a powerful control switch for biological organisms. In the same vein is the case of tadpoles, which can be bioelectrically manipulated in order to have an eye on their tail. Not only can an eye be induced on the tadpole's tail with no genetic modification, the cells of the tadpole actually organized themselves towards making the eye functional from a relatively simple local signal [41]. Tadpoles' original eyes were blinded to demonstrate that they could still navigate with the tail-eye [6]. Further evidence of the bioelectrical cell state as a control switch was the creation of so-called Xenobots. Xenobots were created by Levin in collaboration with Dr. Josh Bongard. Xenobots are simply what one gets when the skin cells of a frog are liberated from the electrical signals they normally get from the rest of the body during development. Once liberated, these cells coagulate to form a proto-organism that reproduces itself by physically moving other cells around it into similar proto-organisms. These phenomena make it clear that the bioelectrical states of cells act as a setpoint that controls their behavior. Indeed, Levin demonstrated that certain cancers can be effectively cured by forcing cancer cells to reintegrate electrically with the collective [8].

Identifying a system's cognitive light cone and control switches are both big tasks, but the reward certainly justifies the work involved. Controlling a system by rewiring its hardware should only be resorted to when absolutely necessary. The competence of naturally occurring control systems should make it clear that the intelligence of the parts that make up natural control systems should be properly assessed in order to determine the most efficient way to understand and leverage their behavior towards a particular goal. Importantly, these tests all have uses in the understanding and development of thermodynamic computers. Making all of these tests more rigorous and formalizable is an aim of TAME and a major motivation for this paper. A model that can capture the self-organizing dynamics of thermodynamically driven complex adaptive systems would serve as the ideal test bed for theoretical calculations, for the development of empirical tests/statistics, and for the development of more biologically, or rather, thermodynamically, inspired technological advancements. We now recap the desired perception(s) driving all of this behavior and discuss future directions.

## Discussion: Research Directions, Speculations and Philosophy

### Summarizing Ensoul

"Given that we see neurons that exist over the lifetime of an organism, what must a neuron be really good at doing in order to do so?" - This paper suggested that the answer was "managing its energy level", in which case, analysis of its behavior assuming any other purpose would frustrate such analysis. However, information processing certainly seems to be just what neurons do. Thus, is it possible to build systems in which information processing, or "work", is done as a consequence only of a "thing" managing its energy level?
Might this concept relate more generally to what we observe in physical reality? - This paper answered affirmatively to both of these questions. The generalization of this concept led to the thermodynamically inspired abstraction, the enerstatic loop. The core driving force is that any "thing" which exists, and persists in its existence, must be participating in something that allows for that persistent existence. In other words, it must have causal power, and that causal power must either be embedded in the self-evidencing behavior of some other "thing", or must itself exhibit some degree of self-evidencing behavior. This scheme implies that any persistent thing is well understood as a control system in which negative feedback maintains what would otherwise be broken down, and that this dynamic is intrinsically generated by the parts which make up that system.

Following such a scheme, this paper leaned into pancomputationalism, the idea that such causal power can always be written down formally and is computable. However, actually writing such code by hand is non-trivial due to the simultaneous considerations of energy and information processing. Additionally, the findings in open-ended evolution make it clear that the path to a desired outcome isn't always best found with optimization (convergent) techniques. Fortunately, enerstatic loops are well suited to open-ended evolution due to their existential imperative and capacity for information processing. Mapping energy costs to code (a process one might call "deabstracting", or "physicalizing"), the use of large language models, the power of assembly theory as a basis for complexity measures in evolutionary systems, and the noted information symmetry posited by the free energy principle make the exploration of "energetically closed systems" a promising avenue for the understanding, and generation of, models of thermodynamically open systems based on enerstatic loops. Given the abstract, substrate-independent nature of such energetically closed systems, and their generated energetically open systems, the term "Self Organizing intelligent Ultra Low Power System (SOULS)" seemed a fitting description for both such entities.

The understanding of "things" such as SOULS illustrates the need for an existentially-based multi-scale perspective on reality, a perspective which focuses on the invariant concepts that can be consistently applied to every "thing" at every scale. The Technological Approach to Mind Everywhere (TAME) serves as an important framework for taking such a perspective. As a result, the concepts of "intelligence", "cognition", "goals", "stress", "self", "life", and "feedback" are defined on a continuum, rather than with discrete categories. We ask not if the "thing" has these properties, but of "what kind", and of "how much" [30]. This is critical for developing an understanding of complex systems, one that specifically allows us to understand how these concepts scale up, and what the macroscopic result of such scaling looks like.

It is clear that the influence of technological advancements on humanity has been increasing exponentially in its rate of development with no apparent end in sight. The results of these advancements have often been unpredictable, with both positive and negative consequences.
As such, the development of the technology described in this paper is meant to serve two purposes: (1) the generation of practical technologies that can be put to use, and (2) to serve as a test bed through which negative consequences can be better understood and mitigated. Understanding the consequences of such technology is a non-trivial task, but it is critical that we be able not just to build it, but to control it. Lack of understanding of the consequences of our current technologies has resulted in things like loss of privacy, addiction to social media and unprecedented political influence. At the same time, there are strong arguments that the overall quality of life, including for those in impoverished conditions, has significantly improved as a result of technological progress and increased global interconnectedness. This makes the development of a framework like TAME alongside the development of SOULS both an existential, and utopic, imperative. The faster the development of our understanding, the better chances we have of intentionally shaping the consequences of our technological advancements. This leads us to considering important future research directions.

### Future directions

Briefly, future directions include:

1. Using conventional von Neumann computers to implement enerstatic programs, perhaps via FPGAs, or other flexible classical computing approaches.
2. The development of specialized versions of enerstatic programs to aid, as a test bed, in the development of thermodynamic computers through the advancement of theoretical/mathematical frameworks.
3. The development of specialized versions of enerstatic programs to assist in the understanding of biological systems, an example of which was the enerstatic neural network described in this paper.
4. The development of rigorous measures of intelligence, interestingness, and "aliveness" (complexity) in energetically closed enerstatic programs, such that different programs can be compared and autonomously analyzed.
5. The development of TAME, and more specifically, the development of rigorous cognitive capacity tests (CCTs) (i.e. existential measures of intelligence/cognition, the test for controlled variable (TCV), the test for variable controllability (TVC), and the test for control switch (TCS)).
6. Finally, if deemed necessary, the creation of alternative methods to the use of Large Language Models for the generation of structures in energetically closed systems.

### Speculations and philosophy

The contents of this paper were written in an attempt to put together ideas that may be key to advancing the understanding and development of science and technology. To that end, this paper embraced substance monism, process philosophy, pancomputationalism, and an extended version of the strong life-mind continuity thesis. Philosophy is essential because the way that we think of the world influences the way that we act in that world, which importantly, includes how we conduct scientific research. As such, we now step through some of the philosophy motivating the development of Ensoul. Substance monism suggests that everything is made up of "one thing", which seems to then motivate the questions: "Why do we see distinct 'things' if everything is made up of one thing?" and "How do these things come into being?". While this paper provides no answers beyond "it is just so", it does suggest physically and logically grounded guidelines as to how to simulate the behavior of such "things", as well as how to go about evolving systems with such "things".
The following argument suggests a way forward:

1. Physics has told us that E = mc², in other words, that every "thing" is made up of energy.
2. The way that we understand the behavior of "things" is through cause and effect.
3. A "thing" cannot be said to exist if it has no effect. In other words, to exist is to be the cause of an effect (eleatic principle).
4. Similarly, a "thing" cannot be said to be persisting its existence if there is no cause(s) for its existence. In other words, to persist is to be the effect of a cause (persistence principle).
5. Therefore, we are to model "persistent things" as having persistent causes, of which they themselves may be the cause. In other words, to model their existential and self-evidencing causal powers.

This leads to a continuous vision of "things" that have various degrees of what one might call "existential feedback". This is what happens when a "thing's" effect is part of its cause. Indeed, if this is not the case, then it must be the case that something else causes it, which itself must be caused by itself, or something else. Therefore, we form the hypothesis that things which cause their own existence have a more prevalent/sustained existence. These "things" are well understood as negative feedback systems, or control systems. The interesting thing about the nature of these natural control systems is that they are regulating their own particular dynamics, or causal power, rather than any one particular measurable variable. This view is well in line with process philosophy, which suggests no "thing" is static, but is always changing. This paper uses this necessary dynamic of "changing", in combination with the eleatic principle and the principle of causality, to motivate existential circular causality, or existential feedback loops, as proper abstractions for such systems. In doing so, it leans into pancomputationalism and the continuity thesis proper.

Pancomputationalism, like process philosophy, suggests that the physical world can be understood as a process; however, it specifically suggests that this process is a computational one. That is, a cause and effect process that can be simulated on a computer. The reason this paper leans into this notion is twofold: (1) if we are to understand the nature of reality, we must use the language of cause and effect, as its automation in computational devices has helped extend our cognition in ways we could have never thought possible, and (2) this sort of approach might provide strong evidence that pancomputationalism cannot be sufficient, through the demonstration that properties which we typically describe as "emergent" truly cannot be described as being computationally attainable by the fundamental constituents of that phenomenon. In leaning towards the idea that these properties can indeed be found to be computationally attainable by the dynamics of the fundamental "stuff" that makes up the universe, this paper embraces the strong life-mind continuity thesis [25]. The strong life-mind continuity thesis suggests that the origin of life is synonymous with the origin of mind. In other words, once you have life, you have a mind. This paper attempts to operationalize familiar properties such as intelligence and cognition as dependent on a "thing's" self-evidencing causal power. It is on these grounds that the efforts encouraged by this paper rest.
The challenge at hand is to use the abstraction of an existential feedback loop as a tool to understand how such operationally defined properties scale up into the forms that we are familiar with, as well as to discover how the dynamics of such loops can be leveraged in the creation of new technologies and the understanding of life as it might be. To this end, embracing evolutionary approaches, along with traditional approaches, to the creation of enerstatic loops, networks and nests in virtual and physical spaces may prove to be a fruitful area of exploration.

A final word: one should be careful not to mistake the map for the territory. Philosophy, theories and models of reality are not equivalent to reality, despite their predictive power or believability. The situation that we find ourselves in is one in which we seemingly cannot truly know what is behind the causes of our senses [21]. Philosophy, theories and models are ultimately just stories we tell ourselves about those possible hidden causes. SOULS is a term that just so happens to fit well for any "thing" that persists its existence while abiding by the stories asserted by process philosophy and substance monism. As such, we can ask whether, taken metaphysically, the notion of SOULS can have application in giving the colloquial word "soul" a more rigorous definition. Philosophically speaking, Ensoul would seem to suggest that a soul is any causal force that is persistent over time. Thus, approaches like Ensoul, which use existential feedback loops, may be considered a sort of implementable philosophical model, one in which this philosophical story might be better understood. Stories represent our subjective beliefs about what is happening in reality, and as such, the stories that we tell ourselves guide what we do in both life and science. Thus, while it is important to acknowledge that these stories are not equivalent to reality, these stories, should we believe them, are equally important, considering the power that they have over the sorts of actions that we take in the world.

Interestingly, the word "psychology" was originally defined as "the study of souls" before becoming more commonly recognized as the "study of minds". Indeed, the word "psychology" is made from the combination of the Greek words "psyche" (meaning "breath, principle of life, life, soul") and "logia" (meaning "speech, word, reason"). One of the aims of Ensoul is to take seriously the idea that some of the properties one studies in psychology, like intelligence and cognition, are not just present in the human mind, but in all "things", or SOULS. As such, methods and ideas from psychology, such as PCT and reinforcement learning, can be useful in understanding the nature of any "thing", independent of what that "thing" is made up of. This might suggest an extension, or reinterpretation, of psychology that one might call "neopsychology", or the "new study of the SOULS", where what we are primarily interested in are these measurable, substrate-invariant, operationally defined properties of any "thing" (i.e. intelligence, cognition, stress level, etc.) and how they scale up, or in what cases, if any, certain properties can be said to "emerge". Since this paper primarily takes a modeling, or construction, approach to understanding SOULS, the framework has been named "Ensoul", which is a verb meaning "to endow with a soul".
2301.08323
Spiraling Defect Cores in Chromonic Hedgehogs
An elastic quartic twist theory has recently been proposed for chromonic liquid crystals, intended to overcome the paradoxical conclusions encountered by the classical Oseen-Frank theory when applied to droplets submerged in an isotropic fluid environment. However, available experimental data for chromonics confined to cylindrical cavities with degenerate planar anchoring on their lateral boundary can be explained equally well by both competing theories. This paper identifies a means to differentiate these theories both qualitatively and quantitatively. They are shown to predict quite different core defects for the twisted hedgehogs that chromonics generate when confined to a fixed spherical cavity with homeotropic anchoring. In the quartic twist theory, the defect core is estimated to be nearly one order of magnitude larger (tens of microns) than in the other and, correspondingly, the director field lines describe Archimedean spirals instead of logarithmic ones.
Silvia Paparini, Epifanio G. Virga
2023-01-19T21:25:29Z
http://arxiv.org/abs/2301.08323v1
# Spiraling Defect Cores in Chromonic Hedgehogs ###### Abstract An elastic _quartic twist theory_ has recently been proposed for chromonic liquid crystals, intended to overcome the paradoxical conclusions encountered by the classical Oseen-Frank theory when applied to droplets submerged in an isotropic fluid environment. However, available experimental data for chromonics confined to cylindrical cavities with degenerate planar anchoring on their lateral boundary can be explained equally well by both competing theories. This paper identifies a means to differentiate these theories both qualitatively and quantitatively. They are shown to predict quite different core defects for the _twisted_ hedgehogs that chromonics generate when confined to a fixed spherical cavity with homeotropic anchoring. In the quartic twist theory, the defect core is estimated to be nearly one order of magnitude larger (tens of microns) than in the other and, correspondingly, the director field lines describe Archimedean spirals instead of logarithmic ones. Despite the lack of _uniformity_ in the ground state of these phases,1 their curvature elasticity has been modeled by the Oseen-Frank theory, albeit with an anomalously small twist constant \(K_{22}\). To accommodate the experimental findings and justify the (double) twisted ground state, this constant has to be smaller than the saddle-splay constant \(K_{24}\), in violation of one of the inequalities Ericksen [11] had put forward to guarantee that the Oseen-Frank stored energy be bounded below. Footnote 1: The classification of the most general _uniform_ distortions, which can fill the whole three-dimensional space, is given in [10] and recalled in Sect. 2.1. Actually, as shown in [12], the violation of one of Ericksen's inequalities does not prevent the twisted ground state from being locally stable in a cylinder enforcing degenerate planar anchoring on its lateral boundary. The same conclusion was reached in [13] on different grounds. But, as shown in [14], free-boundary problems may reveal noxious consequences of violating Ericksen's inequalities. If \(K_{22}<K_{24}\), a CLC droplet, tactoidal2 in shape and surrounded by an isotropic fluid environment enforcing degenerate planar anchoring for the director at the interface, is predicted to be unstable against _shape_ perturbations: it would split indefinitely into smaller tactoids while the total free energy plummets to negative infinity (see [14], for more details). Footnote 2: _Tactoids_ are elongated, cylindrically symmetric shapes with pointed ends as poles. This prediction is in sharp contrast with the wealth of experimental observations of CLC tactoidal droplets, stable in the biphasic region of phase space, where nematic and isotropic phases coexist in equilibrium. Experiments have been carried out with a number of substances (including DSCG and SSY) stabilized by the addition of neutral (achiral) condensing agents (such as PEG and Spm) [15; 16; 17; 18; 19]. These studies have consistently reported stable twisted bipolar tactoids. To resolve this contradiction, in [20] we proposed a minimalist quartic theory for CLCs, which adds to the Oseen-Frank energy density a single quartic term in the (double) twist measure; hence the name _quartic twist_ theory. 
We showed in [20] that indeed within this theory the total free energy of chromonic droplets subject to degenerate planar interfacial anchoring remains bounded below, even if \(K_{22}<K_{24}\); we also used published data to prove consistency with experiments and estimated a phenomenological length introduced by the theory. Higher-order theories are not new in liquid crystal science. Under this name go both theories that allow for higher spatial gradients of \(\boldsymbol{n}\) in the energy and theories that allow for higher powers of the first gradient. Under the first category, which perhaps has seen its first manifestation in [21] (see also [22]), falls, for example, Dozov's theory [23] for both twist-bend and splay-bend phases predicted long ago by Meyer [24] and more recently observed in real materials [25]. Under the second category falls, for example, a simple one-dimensional model for splay-bend nematics [26], then extended to incorporate a whole class of seven modulated ground states, of which twist-bend and splay-bend are just two instances [27]. A hybrid theory was also proposed in [28], where both higher gradients of \(\boldsymbol{n}\) and higher powers of the first gradient are allowed in the stored-energy density, with spatial derivatives and their powers balanced according to a criterion motivated by a molecular model.3 Our quartic theory is much simpler than these. Footnote 3: It should also be noted that other theories are known as “quartic” (see, for example, the classical paper [29] and the more recent contribution [30]), but they owe this name to an elastic term globally quartic in de Gennes’ order tensor and its derivatives, added to the commonly considered version of the Landau de Gennes theory to resolve the splay-bend elastic constant degeneracy in the reduction to the Oseen-Frank theory. These theories serve a different purpose. In [20], we showed that both the classical Oseen-Frank theory and our quartic twist theory explain experimental data for the emergence of double twist in capillaries to a comparable degree of confidence. Here, in our quest for qualitative and quantitative features that may allow us to discriminate between these theories, we consider the case of the most common of point defects, the _hedgehog_. We imagine a CLC confined within a fixed spherical cavity enforcing homeotropic anchoring on its boundary, as in a recent experiment [31]. Since the seminal paper of Lavrentovich and Terentjev [32] we know that for \(K_{22}\) sufficiently small a radial hedgehog becomes _twisted_ and exhibits field lines spiraling about the point defect. A defect _core_ can then be easily identified; geometrically, it is delimited by an _inversion ring_ where spirals invert their winding sense. This is the feature under close scrutiny here: we want to mark the differences between quadratic and quartic theories in describing the defect core of a twisted hedgehog. We are interested in qualitative and quantitative differences alike, aiming to outline a setting that could possibly discriminate one theory from the other. The paper is organized as follows. In Sect. 2, we describe the energetics of chromonics, starting from the classical quadratic Oseen-Frank theory and then summarizing our quartic twist theory. In Sect. 3, we describe a class of director fields intended to represent twisted hedgehogs in a ball in terms of a single _twist_ angle depending on the radial coordinate only. 
We review the conditions that ensure that the radial hedgehog is unstable (according to both elastic theories) and find the energy-minimizing twist angle for the quartic twist theory; for the quadratic theory, this problem was solved in [33]. The results for the two theories are then compared and their stark differences emerge in Sect. 4. Finally, in Sect. 5, we summarize the conclusions of our work and comment on possible avenues for future research. The paper is closed by two technical appendices. In one, we collect a number of mathematical details concerning our representation of twisted hedgehogs. In the other, we describe an equivalent dynamical system, whose orbits correspond to twisted hedgehogs in equilibrium. A similar correspondence was used in [33], with the further advantage (lost here) that the equivalent dynamical system was autonomous. ## 2 Quadratic and Quartic Theories As customary in liquid crystal science, an elastic theory for chromonics is based on a free-energy functional \(\mathscr{F}\) that expresses the energy stored in a region in space \(\mathscr{B}\) containing the material as \[\mathscr{F}[\boldsymbol{n}]:=\int_{\mathscr{B}}W(\boldsymbol{n},\nabla \boldsymbol{n})\,\mathrm{d}V, \tag{1}\] where \(W\) is a function of the nematic director \(\boldsymbol{n}\) and its gradient \(\nabla\boldsymbol{n}\), which here plays the role of a local (tensorial) measure of distortion, and \(\mathrm{d}V\) is the volume element. We start by summarizing the classical quadratic theory for the elasticity of nematic liquid crystals, albeit formulated in a novel, equivalent way that serves our purpose better. It will then become easier to present the quartic twist theory proposed in [20]. ### Classical Quadratic Energy The classical elastic theory of liquid crystals goes back to the pioneering works of Oseen [34] and Frank [35].4 In this theory, the elastic free-energy density \(W\) in (1) is chosen to be the most general frame-indifferent,5 even function quadratic in \(\nabla\boldsymbol{n}\), \[\begin{split} W=W_{\mathrm{OF}}(\boldsymbol{n},\nabla \boldsymbol{n})&:=\frac{1}{2}K_{11}\left(\mathrm{div}\, \boldsymbol{n}\right)^{2}+\frac{1}{2}K_{22}\left(\boldsymbol{n}\cdot\mathrm{ curl}\,\boldsymbol{n}\right)^{2}+\frac{1}{2}K_{33}|\boldsymbol{n}\times \mathrm{curl}\,\boldsymbol{n}|^{2}\\ &+K_{24}\left[\mathrm{tr}(\nabla\boldsymbol{n})^{2}-(\mathrm{ div}\,\boldsymbol{n})^{2}\right].\end{split} \tag{2}\] Footnote 4: Also a paper by Zocher [36], mainly concerned with the effect of a magnetic field on director distortions, is often mentioned among the founding contributions. Some authors go to the extent of also naming the theory after him. Others, in contrast, name the theory only after Frank, as they deem only his contribution to be fully aware of the nature of \(\boldsymbol{n}\) as a _mesoscopic_ descriptor of molecular order. Footnote 5: This requirement amounts to assuming that \(W(\mathbf{Q}\boldsymbol{n},\mathbf{Q}(\nabla\boldsymbol{n})\mathbf{Q}^{ \intercal})=W(\boldsymbol{n},\nabla\boldsymbol{n})\), for all rotations \(\mathbf{Q}\) in three-dimensional space. Here \(K_{11}\), \(K_{22}\), \(K_{33}\), and \(K_{24}\) are elastic constants characteristic of the material. They are traditionally referred to as the _splay_, _twist_, _bend_, and _saddle-splay_ constants, respectively, after the features of four different orientation fields, each with a distortion energy proportional to a single term in (2) (see, for example, Chap. 3 of [37]). 
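As a quick numerical aside (added here for illustration, not part of the original text), the density (2) can be evaluated for any director field by finite differences. A minimal Python sketch, with placeholder elastic constants, checks that the radial hedgehog \(\mathbf{n}_{\rm R}\) introduced in Sect. 3 carries only splay and saddle-splay energy, giving \(W_{\rm OF}=2(K_{11}-K_{24})/r^{2}\):

```python
import numpy as np

def grad_n(n, x, h=1e-6):
    """Finite-difference director gradient, G[i, j] = d n_i / d x_j."""
    G = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        G[:, j] = (n(x + e) - n(x - e)) / (2.0 * h)
    return G

def W_OF(n, x, K11, K22, K33, K24):
    """Oseen-Frank density, eq. (2), from div, curl and the saddle-splay term."""
    G, nx = grad_n(n, x), n(x)
    div = np.trace(G)
    curl = np.array([G[2,1]-G[1,2], G[0,2]-G[2,0], G[1,0]-G[0,1]])
    bend = np.cross(nx, curl)                 # n x curl n
    saddle = np.trace(G @ G) - div**2         # tr(grad n)^2 - (div n)^2
    return (0.5*K11*div**2 + 0.5*K22*(nx @ curl)**2
            + 0.5*K33*(bend @ bend) + K24*saddle)

n_R = lambda x: x / np.linalg.norm(x)         # radial hedgehog, eq. (15)
x = np.array([0.2, 0.3, 0.6]); r = np.linalg.norm(x)
K11, K22, K33, K24 = 1.5, 1.0, 2.0, 1.2       # placeholder constants
print(W_OF(n_R, x, K11, K22, K33, K24))       # numerical value of (2)
print(2.0 * (K11 - K24) / r**2)               # analytic value, same number
```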
Recently, Selinger [38] has reinterpreted the classical formula (2) by decomposing the saddle-splay mode into a set of other independent modes. The starting point of this decomposition is a novel representation of \(\nabla\mathbf{n}\) (see also [39]), \[\nabla\mathbf{n}=-\mathbf{b}\otimes\mathbf{n}+\frac{1}{2}T\mathbf{W}(\mathbf{n})+\frac{1}{2}S \mathbf{P}(\mathbf{n})+\mathbf{D}, \tag{3}\] where \(\mathbf{b}:=-(\nabla\mathbf{n})\mathbf{n}=\mathbf{n}\times\operatorname{curl}\mathbf{n}\) is the _bend_ vector, \(T:=\mathbf{n}\cdot\operatorname{curl}\mathbf{n}\) is the _twist_, \(S:=\operatorname{div}\mathbf{n}\) is the _splay_, \(\mathbf{W}(\mathbf{n})\) is the skew-symmetric tensor that has \(\mathbf{n}\) as axial vector, \(\mathbf{P}(\mathbf{n}):=\mathbf{I}-\mathbf{n}\otimes\mathbf{n}\) is the projection onto the plane orthogonal to \(\mathbf{n}\), and \(\mathbf{D}\) is a symmetric tensor such that \(\mathbf{D}\mathbf{n}=\mathbf{0}\) and \(\operatorname{tr}\mathbf{D}=0\). By its own definition, \(\mathbf{D}\neq\mathbf{0}\) admits the following biaxial representation, \[\mathbf{D}=q(\mathbf{n}_{1}\otimes\mathbf{n}_{1}-\mathbf{n}_{2}\otimes\mathbf{n}_{2}), \tag{4}\] where \(q>0\) and \((\mathbf{n}_{1},\mathbf{n}_{2})\) is a pair of orthogonal unit vectors in the plane orthogonal to \(\mathbf{n}\), oriented so that \(\mathbf{n}=\mathbf{n}_{1}\times\mathbf{n}_{2}\).6 In the local frame \((\mathbf{n}_{1},\mathbf{n}_{2},\mathbf{n})\), \(\mathbf{b}\) is represented as \[\mathbf{b}=b_{1}\mathbf{n}_{1}+b_{2}\mathbf{n}_{2}. \tag{5}\] Footnote 6: It is argued in [40] that \(q\) should be given the name _tetrahedral_ splay, to which we would actually prefer _octupolar_ splay for the role played by a cubic (octupolar) potential on the unit sphere [41] in representing all scalar measures of distortion, but \(T\). By use of the following identity, \[2q^{2}=\operatorname{tr}(\nabla\mathbf{n})^{2}+\frac{1}{2}T^{2}-\frac{1}{2}S^{2}, \tag{6}\] we can easily give (2) the equivalent form \[W_{\operatorname{OF}}(\mathbf{n},\nabla\mathbf{n})=\frac{1}{2}(K_{11}-K_{24})S^{2}+ \frac{1}{2}(K_{22}-K_{24})T^{2}+\frac{1}{2}K_{33}B^{2}+2K_{24}q^{2}, \tag{7}\] where \(B^{2}:=\mathbf{b}\cdot\mathbf{b}=b_{1}^{2}+b_{2}^{2}\). Since \((S,T,b_{1},b_{2},q)\) are all independent _distortion characteristics_, it readily follows from (7) that \(W_{\operatorname{OF}}\) is positive semi-definite whenever \[K_{11} \geqq K_{24}\geqq 0, \tag{8a}\] \[K_{22} \geqq K_{24}\geqq 0,\] (8b) \[K_{33}\geqq 0, \tag{8c}\] which are the celebrated _Ericksen's inequalities_[11]. If these inequalities are satisfied in strict form, the global ground state of \(W_{\operatorname{OF}}\) is attained on the uniform director field, characterized by \[S=T=B=q=0. \tag{9}\] As already mentioned in the Introduction, inequality (8b) must be violated for the ground state of \(W_{\operatorname{OF}}\) to be different from (9), involving a non-vanishing \(T\). The class of _uniform_ distortions was defined in [10] as the one comprising all director fields for which the distortion characteristics are constant in space. Equivalently said, a uniform distortion is a director field that can _fill_ three-dimensional space. It was proven that there are two distinct families of uniform distortions, characterized by the following conditions [10], \[S=0,\quad T=\pm 2q,\quad b_{1}=\pm b_{2}=b, \tag{10}\] where \(q\) and \(b\) are arbitrary parameters. 
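Continuing the numerical aside above (again an added illustration), the distortion characteristics entering (3)-(7) can be extracted from a finite-difference \(\nabla\boldsymbol{n}\) through the identity (6). The sketch below confirms that a cholesteric-like test field, chosen here for illustration, realizes the uniform family (10) with \(b=0\), that is, a single twist with \(T=-2q\):

```python
import numpy as np

def grad_n(n, x, h=1e-6):
    G = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        G[:, j] = (n(x + e) - n(x - e)) / (2.0 * h)
    return G

def modes(n, x):
    """Distortion characteristics of the decomposition (3): S, T, b, and q via (6)."""
    G, nx = grad_n(n, x), n(x)
    S = np.trace(G)                                          # splay
    curl = np.array([G[2,1]-G[1,2], G[0,2]-G[2,0], G[1,0]-G[0,1]])
    T = nx @ curl                                            # twist
    b = -G @ nx                                              # bend vector
    q = np.sqrt(max(0.5*np.trace(G @ G) + 0.25*T**2 - 0.25*S**2, 0.0))
    return S, T, b, q

def W_modes(S, T, b, q, K11, K22, K33, K24):
    # eq. (7): each independent mode weighted by its own combination of constants
    return (0.5*(K11-K24)*S**2 + 0.5*(K22-K24)*T**2
            + 0.5*K33*(b @ b) + 2.0*K24*q**2)

# a cholesteric-like test field precessing about e_z (illustrative only)
n_test = lambda x: np.array([-np.sin(0.3*x[2]), np.cos(0.3*x[2]), 0.0])
x = np.array([0.1, -0.2, 0.5])
S, T, b, q = modes(n_test, x)
print(S, T, b, q)   # S = 0, T = -0.3, b = 0, q = 0.15: a single twist, T = -2q
print(W_modes(S, T, b, q, 1.5, 1.0, 2.0, 1.2))
```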
The general director field corresponding to (10) is the _heliconical_ ground state of twist-bend nematic phases,7 in which \(\boldsymbol{n}\) makes a fixed _cone_ angle with a given axis in space (called the _helix_ axis), around which \(\boldsymbol{n}\) precesses periodically [10].8 The special instance in which \(b=0\) corresponds to the _single twist_ that characterizes _cholesteric_ liquid crystals. Footnote 7: With opposite _chiralities_, one for each sign in (10). Footnote 8: In opposite senses, according to the sign of chirality. The distortion for which all characteristics vanish, but \(T\), is a _double twist_.9 It is _not_ uniform and cannot fill space; it can possibly be realized locally, but not everywhere. In words, we say that it is a _frustrated_ ground state. As shown in [12], a double twist is indeed attained exactly only on the symmetry axis of cylinders enforcing degenerate planar anchoring on their lateral boundary. Footnote 9: Here we adopt the terminology proposed by Selinger [40] (see also [42]) and distinguish between _single_ and _double_ twists, the former being _uniform_ and the latter not. ### Quartic Twist Energy The essential feature of the quartic twist theory proposed in [20] is to envision a double twist with two equivalent chiral variants as ground state of CLCs in three-dimensional space, \[S=0,\quad T=\pm T_{0},\quad B=0,\quad q=0. \tag{11}\] The degeneracy of the ground double twist in (11) arises from the achiral nature of the molecular aggregates that constitute these materials, which is reflected in the lack of chirality of their condensed phases. The elastic stored energy must equally penalize both ground chiral variants. Our minimalist proposal to achieve this goal was to add a _quartic twist_ term to the Oseen-Frank stored-energy density, and so take \(W=W_{\rm QT}\), with \[W_{\rm QT}(\boldsymbol{n},\nabla\boldsymbol{n}):=\frac{1}{2}(K_{11}-K_{24})S^ {2}+\frac{1}{2}(K_{22}-K_{24})T^{2}+\frac{1}{2}K_{33}B^{2}+\frac{1}{2}K_{24}( 2q)^{2}+\frac{1}{4}K_{22}a^{2}T^{4}, \tag{12}\] where \(a\) is a _characteristic length_. Unlike \(W_{\rm OF}\), \(W_{\rm QT}\) is bounded below whenever \[K_{11} \geqq K_{24}\geqq 0, \tag{13a}\] \[K_{24} \geqq K_{22}\geqq 0,\] (13b) \[K_{33} \geqq 0. \tag{13c}\] If these inequalities hold, as we shall assume here, then \(W_{\rm QT}\) is minimum at the degenerate double-twist (11) characterized by \[T_{0}:=\frac{1}{a}\sqrt{\frac{K_{24}-K_{22}}{K_{22}}}. \tag{14}\] The parameter \(a\) encodes the _bare_ length scale over which distortions would be locally stored in the ground state.10 As to the physical size of such a length scale, it may fall within a wide range. While at the lower end we may place the persistence length of the molecular order, which characterizes the flexibility of CLC aggregates,11 the upper end is hard to make definite. We expect that \(a\) would be exposed to the same indeterminacy that affects many (if not all) supramolecular structures in lyotropic systems. The most telling example is perhaps given by cholesteric liquid crystals, which give rise to a chiral structure (characterized by a single twist \(T=\pm 2q\)) starting from chiral molecules. If the macroscopic pitch were determined by the molecular chirality,12 it would come out several orders of magnitude smaller than the observed ones.13 Here, we shall treat \(a\) as a phenomenological parameter, to be determined experimentally. An estimate derived in [20] from a comparison with published data placed \(a\) in the order of microns. 
Footnote 10: In the elastic model proposed in [10] for twist-bend nematics, a quartic free energy was posited that admits as ground state either of two families of uniform heliconical fields with opposite chirality. There too, a length scale appears in the equilibrium pitch. The distortion state characterized by this length is the same everywhere. Footnote 11: The persistence length of a flexible aggregate is the shortest length over which unit vectors tangent to the aggregate’s contour lose correlation. For CLCs, it is estimated on the order of tens to hundreds of nm [43]. Footnote 12: _Via_ the naive geometric argument that represents chiral molecules as cylindrical screws and derives the pitch of their assemblies by close packing them so as to fit grooves with grooves. Footnote 13: For lyotropic cholesterics, the mismatch between microscopic and macroscopic pitches, which has recently received new experimental evidence in systems of basic living constituents [44, 45], is still debated. Interesting theories based on either molecular shape fluctuations [46, 47] or surface charge patterns [48] have met with some experimental disagreement [49]. ## 3 Twisted Hedgehog So far we have presented, mostly on equal terms, two elastic theories for chromonics, one quadratic and the other quartic in the director gradient. Here we see how these theories can be differentiated on the basis of the different structures they predict for the core of hedgehogs, the most common of nematic defects in three space dimensions. Many mathematical details needed to follow our development are collected in Appendix A. We first discuss the distortion of a trial director field within a ball of radius \(R\) enforcing homeotropic anchoring on its boundary. This is a field with a point defect at the center of the ball, potentially rich in twist, as would seem fit for a material with a small \(K_{22}\) constant. We shall then see the analytical implications and the potential experimental significance of this field. The point defects that we shall study are a special family of _hedgehogs_, which place themselves in between the most common defects in liquid crystal science, the _radial_ and the _hyperbolic_ hedgehogs. The former is represented by the director field \[\mathbf{n}_{\rm R}(\mathbf{x}):=\frac{\mathbf{x}-\mathbf{x}_{0}}{|\mathbf{x}-\mathbf{x}_{0}|}, \tag{15}\] which has a point defect at \(\mathbf{x}_{0}\), while the latter is formally obtained by the following transformation of \(\mathbf{n}_{\rm R}\), \[\mathbf{n}_{\rm H}:=\mathbf{R}(\pi)\mathbf{n}_{\rm R}, \tag{16}\] where \[\mathbf{R}(\pi):=-\mathbf{I}+2\mathbf{e}\otimes\mathbf{e} \tag{17}\] is the special orthogonal tensor describing a rotation by angle \(\pi\) about a unit vector \(\mathbf{e}\in\mathbb{S}^{2}\). Figure 1 illustrates the field lines of both \(\mathbf{n}_{\rm R}\) and \(\mathbf{n}_{\rm H}\). The _topological charge_ of a unit vector field \(\mathbf{n}\) with a point defect at \(\mathbf{x}_{0}\) is defined as \[N(\mathbf{n}):=\frac{1}{4\pi}\int_{\mathscr{S}}\mathbf{n}\cdot(\nabla_{\!\!s}\mathbf{n})^{*} \mathbf{\nu}\,\mathrm{d}A, \tag{18}\] where \(\mathscr{S}\) is any surface enclosing \(\mathbf{x}_{0}\), \(\nabla_{\!\!s}\) denotes the surface gradient on \(\mathscr{S}\), \(\mathbf{\nu}\) is the unit normal to \(\mathscr{S}\), the operation \((\cdots)^{*}\) takes the cofactor of a tensor,14 and \(\mathrm{d}A\) is the area element. 
\(N(\mathbf{n})\) is an integer in \(\mathbb{Z}\) independent of \(\mathscr{S}\), provided the latter embraces \(\mathbf{x}_{0}\), and so \(N(\mathbf{n})\) can be attributed to \(\mathbf{x}_{0}\) itself. The absolute value \(|N(\mathbf{n})|\) indicates the number of times \(\mathbf{n}\) restricted to \(\mathscr{S}\) covers the unit sphere \(\mathbb{S}^{2}\); the sign of \(N(\mathbf{n})\) tells whether \(\mathbb{S}^{2}\) is covered coherently or not with the orientation of the unit normal \(\mathbf{\nu}\). Historically, we learn from [51] that the representation in (18) for \(N(\mathbf{n})\) was first derived in [52]. Footnote 14: This is a tensor whose representative matrix is the cofactor matrix of the matrix representing the original tensor, see [50, p. 22] for a formal definition. \(N(\mathbf{n})\) is additive: if the surface \(\mathscr{S}\) encloses more than one defect, the topological charge computed on it through (18) is the algebraic sum of the topological charges computed on surfaces enclosing the single defects comprised in \(\mathscr{S}\). As pointed out in Sect. VII.E.3 of [53], a defect with topological charge \(N(\mathbf{n})\) can be transformed _continuously_ into a defect with _opposite_ topological charge, thus making \(|N(\mathbf{n})|\), and not \(N(\mathbf{n})\) itself, a topological invariant apt to classify point defects for director fields on \(\mathbb{S}^{2}\). The mapping \[\mathbf{n}_{\mathrm{R}}\mapsto\overline{\mathbf{n}_{\mathrm{R}}}:=-\mathbf{n}_{\mathrm{H}} \tag{19}\] was described in [54] as a _parity_ transformation, as it changes the sign of the topological charge,15 \[N(\overline{\mathbf{n}_{\mathrm{R}}})=-N(\mathbf{n}_{\mathrm{R}})=-1. \tag{20}\] Footnote 15: This equation follows from Appendix A.2 and the general property of (18) stating that \(N(-\mathbf{n})=-N(\mathbf{n})\), which stems from being \((\nabla_{\!\!s}\mathbf{n})^{*}\) even in \(\mathbf{n}\). This was meant to identify \(\overline{\mathbf{n}_{\mathrm{R}}}\) as an _anti_ radial hedgehog, which would neutralize the topological charge of the radial hedgehog and annihilate it when combined together in a director field on \(\mathbb{S}^{2}\) with zero total topological charge.16 Footnote 16: Actually, in [54], \(\mathbf{n}_{\mathrm{H}}\) was defined to be precisely \(\overline{\mathbf{n}_{\mathrm{R}}}\), so that, being opposite to the field in (16), it would form with it a defect-anti-defect pair, as would also be clear from Fig. 1 once the field-line orientations in panel (b) are reversed. Figure 1: Field lines of \(\mathbf{n}_{\mathrm{R}}\) and \(\mathbf{n}_{\mathrm{H}}\) in (15) and (16), representing a radial and a hyperbolic hedgehog, respectively. Field lines are drawn on the equatorial plane (in black) and on a meridian plane (in red). The whole picture is obtained by rotating these lines around the polar axis. By their definitions, both \(\mathbf{n}_{\mathrm{R}}\) and \(\mathbf{n}_{\mathrm{H}}\) share the same topological charge \(N\) as introduced in (18). Here, instead, as shown in Appendix A.2, \[N(\mathbf{n}_{\mathrm{H}})=N(\mathbf{n}_{\mathrm{R}})=+1, \tag{21}\] so that \(\mathbf{n}_{\mathrm{R}}\) and \(\mathbf{n}_{\mathrm{H}}\) not only belong to the same topological class, but also have the same topological charge. We consider as domain \(\mathscr{B}\) a ball \(\mathbb{B}_{R}(\mathbf{x}_{0})\) with radius \(R\) and center at \(\mathbf{x}_{0}\). 
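The charges (20) and (21) are easy to reproduce numerically. In the added sketch below, (18) is evaluated in its equivalent angular form as the degree of the map \(\mathbf{n}:\mathbb{S}^{2}\to\mathbb{S}^{2}\) (this rewriting, not a formula from the paper, follows from the cofactor integrand being the area Jacobian of \(\mathbf{n}\) restricted to a sphere around the defect):

```python
import numpy as np

def charge(n_of_angles, m=600):
    """Topological charge (18) as the degree (1/(4 pi)) int n.(dn/dth x dn/dph)."""
    th = np.linspace(1e-4, np.pi - 1e-4, m)
    ph = np.linspace(0.0, 2.0*np.pi, m, endpoint=False)
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    n = n_of_angles(TH, PH)                              # shape (3, m, m)
    n_th = np.gradient(n, th[1]-th[0], axis=1)
    n_ph = np.gradient(n, ph[1]-ph[0], axis=2)
    jac = np.einsum("ijk,ijk->jk", n, np.cross(n_th, n_ph, axis=0))
    return jac.sum() * (th[1]-th[0]) * (ph[1]-ph[0]) / (4.0*np.pi)

def n_R(TH, PH):    # radial hedgehog (15) restricted to the unit sphere
    return np.array([np.sin(TH)*np.cos(PH), np.sin(TH)*np.sin(PH), np.cos(TH)])

def n_H(TH, PH):    # hyperbolic hedgehog (16): rotation by pi about e_z
    x, y, z = n_R(TH, PH)
    return np.array([-x, -y, z])

print(charge(n_R))                       # approx +1
print(charge(n_H))                       # approx +1, same as n_R, eq. (21)
print(charge(lambda t, p: -n_H(t, p)))   # approx -1: the parity conjugate, eq. (20)
```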
We study a trial _twisted_ hedgehog field \(\mathbf{n}_{\mathrm{T}}\), which "interpolates" in space between \(\mathbf{n}_{\mathrm{R}}\) and \(\mathbf{n}_{\mathrm{H}}\). Formally, \(\mathbf{n}_{\mathrm{T}}\) is obtained by acting on the radial hedgehog \(\mathbf{n}_{\mathrm{R}}(\mathbf{x})\) in (15) with a rotation \(\mathbf{R}(\alpha)\) of variable angle \(\alpha=\alpha(r)\) about a fixed axis \(\mathbf{e}\in\mathbb{S}^{2}\), where \(r\) is the distance of \(\mathbf{x}\) from the defect at \(\mathbf{x}_{0}\), \[\mathbf{n}_{\mathrm{T}}(\mathbf{x}) :=\mathbf{R}(\alpha(r))\mathbf{n}_{\mathrm{R}}(\mathbf{x}), \tag{22a}\] \[\mathbf{R}(\alpha) :=\mathbf{I}+\sin\alpha\mathbf{W}(\mathbf{e})+(1-\cos\alpha)\mathbf{W} (\mathbf{e})^{2}. \tag{22b}\] In (22b), \(\mathbf{W}(\mathbf{e})\) is the skew-symmetric tensor associated with \(\mathbf{e}\), whose action on any vector \(\mathbf{v}\) is given by \(\mathbf{W}(\mathbf{e})\mathbf{v}=\mathbf{e}\times\mathbf{v}\). The field \(\mathbf{n}_{\mathrm{T}}\) reduces to the radial hedgehog \(\mathbf{n}_{\mathrm{R}}\) for \(\alpha\equiv 0\) and to the hyperbolic hedgehog \(\mathbf{n}_{\mathrm{H}}\) for \(\alpha\equiv\pi\). As shown in Appendix A.2, the topological charge of \(\mathbf{n}_{\mathrm{T}}\) equals that of both \(\mathbf{n}_{\mathrm{R}}\) and \(\mathbf{n}_{\mathrm{H}}\), irrespective of the function \(\alpha\), \[N(\mathbf{n}_{\mathrm{T}})=+1. \tag{23}\] We shall call \(\alpha\) the _twist_ angle. ### Inversion Ring A peculiar property of the field \(\mathbf{n}_{\mathrm{T}}\) is illustrated by letting \(\mathbf{e}\) be the polar axis of a system of spherical coordinates \((r,\vartheta,\varphi)\) with origin at \(\mathbf{x}_{0}\). On the equatorial plane \(\vartheta=\frac{\pi}{2}\), in the coordinate frame \((\mathbf{e}_{r},\mathbf{e}_{\vartheta},\mathbf{e}_{\varphi})\), \(\mathbf{n}_{\mathrm{T}}\) reduces to (see (A9)) \[\mathbf{n}_{\mathrm{T}}=\cos\alpha\mathbf{e}_{r}+\sin\alpha\mathbf{e}_{\varphi}, \tag{24}\] and so it lies entirely on the equatorial plane. If, for some \(r^{*}\), \(\alpha(r^{*})=\frac{\pi}{2}\), then \(\mathbf{n}_{\mathrm{T}}\) is tangent to the circle of radius \(r^{*}\) around \(\mathbf{x}_{0}\). Where \(\alpha(r)<\frac{\pi}{2}\) the field \(\mathbf{n}_{\mathrm{T}}\) spirals outward (relative to \(\mathbf{x}_{0}\)), where \(\alpha(r)>\frac{\pi}{2}\) it spirals inward. Figure 2 illustrates this feature within the ball \(\mathbb{B}_{R}(\mathbf{x}_{0})\) when the condition \[\alpha(R)=0 \tag{25}\] is enforced, so that \(\mathbf{n}_{\mathrm{T}}=\mathbf{n}_{\mathrm{R}}\) on the boundary \(\partial\mathbb{B}_{R}(\mathbf{x}_{0})\). The ring at \(r=r^{*}\) separates two opposite spiraling regimes; there, the field lines of \(\mathbf{n}_{\mathrm{T}}\) appear to coalesce in a ring, which looks like a disclination, but is instead _regular_, as it bears no discontinuity of the director. We shall call the ring at \(r=r^{*}\), if present, an _inversion_ ring, as it marks the inversion of the spiraling sense. It was perhaps in the seminal work of Lavrentovich and Terentjev [32] that the first experimental evidence of an inversion ring within a twisted hedgehog was found and documented in ordinary nematics.17 Footnote 17: In a temperature regime where the twist constant \(K_{22}\) is sufficiently small. Here, we shall use \(\mathbf{n}_{\rm T}\) as a trial field to describe the twisted distortion that replaces \(\mathbf{n}_{\rm R}\) in \(\mathbb{B}_{R}(\mathbf{x}_{0})\) when \(\mathbf{n}_{\rm R}\) becomes unstable. 
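To make (22) and (24) concrete, here is a small added sketch that builds \(\mathbf{n}_{\rm T}\) from the rotation (22b) and locates the inversion ring on the equatorial plane; the linear twist profile is only an illustrative stand-in, not a minimizer:

```python
import numpy as np
from scipy.optimize import brentq

Wz = np.array([[0.0, -1.0, 0.0],      # skew tensor W(e) for e = e_z
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 0.0]])

def n_T(x, alpha_of_r):
    """Trial twisted hedgehog (22a): rotate n_R by alpha(r) about e_z, eq. (22b)."""
    r = np.linalg.norm(x)
    a = alpha_of_r(r)
    R = np.eye(3) + np.sin(a)*Wz + (1.0 - np.cos(a))*(Wz @ Wz)
    return R @ (x / r)

# illustrative profile obeying alpha(0) = pi and the boundary condition (25)
alpha = lambda r: np.pi * (1.0 - r)

# equatorial form (24): the ring sits where alpha(r*) = pi/2
r_star = brentq(lambda r: alpha(r) - 0.5*np.pi, 1e-9, 1.0)
print(r_star)                                  # 0.5 for this profile
x = np.array([r_star, 0.0, 0.0])
print(n_T(x, alpha))                           # approx (0, 1, 0): tangent to the ring
print(n_T(np.array([0.9, 0.0, 0.0]), alpha))   # mostly radial near the boundary
```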
We shall determine the function \(\alpha\) subject to (25) that minimizes the elastic free energy \(\mathscr{F}\) in (1) with \(\mathscr{B}=\mathbb{B}_{R}(\mathbf{x}_{0})\). We shall do so for either \(W=W_{\rm OF}\) in (7) or \(W=W_{\rm QT}\) in (12) to see whether the quadratic and quartic elastic theories for chromonics recalled in Sect. 2 can be distinguished on the basis of the predictions they make about the occurrence of a twisted hedgehog and its inversion ring. We start by considering under what conditions \(\mathbf{n}_{\rm R}\) is locally stable for either theory. ### Local Stability of Radial Hedgehog First, we observe that \(\mathbf{n}_{\rm R}\) is a _universal_ solution, as it solves the equilibrium equation for all possible elastic free-energy functionals \(\mathscr{F}\) in (1) associated with a frame-indifferent density \(W=W(\mathbf{n},\nabla\mathbf{n})\), see [55]. Thus, \(\mathbf{n}_{\rm R}\) is an equilibrium configuration for \(\mathscr{F}\), irrespective of \(W\). Moreover, it was proved in [56] and [57] that, when \(W=W_{\rm OF}\), \(\mathbf{n}_{\rm R}\) is a local minimizer of \(\mathscr{F}\) in the admissible class of director fields \(\mathbf{n}\) with finite energy in \(\mathbb{B}_{R}(\mathbf{x}_{0})\) and such that \[\mathbf{n}|_{\partial\mathbb{B}_{R}(\mathbf{x}_{0})}=\mathbf{n}_{\rm R}, \tag{26}\] provided that the following inequality is satisfied,18 \[0<k_{1}<1+\frac{k_{3}}{8}. \tag{27}\] Footnote 18: A result which was independently rediscovered in [58]. Figure 2: Field lines of \(\mathbf{n}_{\rm T}\) in (22) within the ball \(\mathbb{B}_{R}(\mathbf{x}_{0})\) enforcing condition (25), so that \(\mathbf{n}_{\rm T}=\mathbf{n}_{\rm R}\) on \(\partial\mathbb{B}_{R}(\mathbf{x}_{0})\). An inversion ring is present, which is depicted in blue. Black lines are field lines lying on the equatorial plane; red lines are field lines coming out of the equatorial plane. As in Fig. 1, the whole 3D picture is obtained by rotating this drawing about the polar axis. Here and below, the elastic constants will be scaled to \(K_{22}\), \[k_{1}:=\frac{K_{11}}{K_{22}},\quad k_{3}:=\frac{K_{33}}{K_{22}},\quad\text{and} \quad k_{24}:=\frac{K_{24}}{K_{22}}\quad\text{with}\quad K_{22}>0. \tag{28}\] This local stability result is based on the study of the second variation of \(\mathscr{F}\) at \(\mathbf{n}=\mathbf{n}_{\text{R}}\); the latter is the _same_ for both \(W_{\text{OF}}\) and \(W_{\text{QT}}\), as these only differ by a quartic term that does _not_ affect the second variation of \(\mathscr{F}\) at \(\mathbf{n}_{\text{R}}\), see Appendix A.3. It is remarked in [59] that when (27) is violated the free-energy functional \(\mathscr{F}\) with \(W=W_{\text{OF}}\) subject to (26) admits a _continuum_ of minimizers, all sharing the same energy. Since the proof of this result is based on frame-indifference only, it also holds within our quartic twist theory where \(W=W_{\text{QT}}\). Figure 3 illustrates inequality (27) for \(k_{1}>1\), which is the situation that applies to chromonics, as also shown by the dot representing data for SSY. Hereafter, we assume that \[k_{1}>1+\frac{k_{3}}{8}. \tag{29}\] The special family of twisted director configurations described by \(\mathbf{n}_{\text{T}}\) is parameterized by the scalar function \(\alpha\) and the symmetry axis \(\mathbf{e}\in\mathbb{S}^{2}\). 
Once, for a given \(\mathbf{e}\), \(\alpha\) is chosen so as to minimize \(\mathscr{F}\), letting \(\mathbf{e}\) vary in \(\mathbb{S}^{2}\) potentially embodies the continuum of minimizers expected to arise when the radial hedgehog \(\mathbf{n}_{\text{R}}\) is no longer locally stable. ### Minimum Problem Here, we study the problem of minimizing the functional \(\mathscr{F}\) in (1) for \(\mathscr{B}=\mathbb{B}_{R}(\mathbf{x}_{0})\) and \(W=W_{\text{QT}}\) subject to (26). We introduce the change of variables \[r\mapsto\rho:=\frac{r}{R}, \tag{30}\] which maps \([0,R]\) onto \([0,1]\). Figure 3: Regions of interest for the local stability of the radial hedgehog \(\mathbf{n}_{\text{R}}\) in CLCs. In the pink region, \(\mathbf{n}_{\text{R}}\) is a local minimizer of \(\mathscr{F}\) subject to (26) when \(\mathscr{B}=\mathbb{B}_{R}(\mathbf{x}_{0})\), for either \(W=W_{\text{OF}}\) or \(W=W_{\text{QT}}\). In the blue region, \(\mathbf{n}_{\text{R}}\) is no longer a minimizer; there is a continuum of minimizers, all with the same energy. Bulk elastic constants of chromonics fall in the region of instability; the red dot represents data for SSY, \(k_{1}\approx 6.1\) and \(k_{3}\approx 8.7\), taken from [60]. In the new variable, (25) becomes19 \[\alpha(1)=0. \tag{31}\] Footnote 19: We shall continue to adopt the same old symbol for the function \(\alpha\), even if it is expressed in the new variable. Standard computations (deferred to Appendix A.1) show that for \(\mathscr{B}=\mathbb{B}_{R}(\mathbf{x}_{0})\) and \(W=W_{\rm QT}\) the functional \(\mathscr{F}\) in (1) can be given the following scaled form \[\mathcal{F}_{\lambda}[\alpha]:=\frac{15\mathscr{F}[\mathbf{n}_{\rm T} ]}{8\pi K_{22}R}-\mathcal{F}_{\rm R}\] \[\quad=\int_{0}^{1}\left\{g(\alpha(\rho))(\rho\alpha^{\prime}( \rho))^{2}+2(k_{1}-1)f_{0}(\alpha(\rho))+\frac{32}{7}\frac{\lambda^{2}}{\rho^{ 2}}\bigg{[}\sum_{n=1}^{4}f_{n}(\alpha(\rho))(-\rho\alpha^{\prime}(\rho))^{n} \bigg{]}\right\}\mathrm{d}\rho, \tag{32}\] where a prime \({}^{\prime}\) denotes differentiation, \(\mathcal{F}_{\rm R}:=15(k_{1}-k_{24})\) is the scaled energy of the radial hedgehog, so that \[\mathcal{F}_{\lambda}[0]=0, \tag{33}\] and the functions \(g\) and \(f_{n}\) are defined as \[g(\alpha) =2k_{1}\sin^{2}\alpha+\frac{2}{7}(1-\cos\alpha)^{2}+\frac{k_{3}}{ 14}\left(24\cos^{2}\alpha+8\cos\alpha+3\right), \tag{34a}\] \[f_{0}(\alpha) =2\cos^{2}\alpha+\cos\alpha-3,\] (34b) \[f_{1}(\alpha) =3(1-8\cos\alpha)\sin^{3}\alpha,\] (34c) \[f_{2}(\alpha) =(1-\cos\alpha)^{2}\sin^{2}\alpha,\] (34d) \[f_{3}(\alpha) =\frac{2}{11}(1-\cos\alpha)^{3}\sin\alpha,\] (34e) \[f_{4}(\alpha) =\frac{2}{143}(1-\cos\alpha)^{4}. \tag{34f}\] \(\mathcal{F}_{\lambda}[\alpha]\) is invariant under the change of \(\alpha\) into \(-\alpha\), for any \(\alpha\). Thus, every non-trivial equilibrium solution \(\alpha_{\lambda}\) would be accompanied by its parity conjugate \(-\alpha_{\lambda}\). The corresponding fields \(\mathbf{n}_{\rm T}\) differ as they have opposite chirality, but they have one and the same energy. For \(\lambda>0\), integrability of the quartic term in \(\mathcal{F}_{\lambda}\) in (32) requires that the limiting value \(\alpha(0)\) of \(\alpha\) at \(\rho=0\) be either \(0\) or \(\pi\). 
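As an added aside, the scaled functional (32) with the mode functions (34) can be evaluated directly by quadrature for any trial profile. The sketch below is a minimal implementation; the linear profile is an illustrative stand-in, not a solution of (E), and the constants are the SSY-like values quoted later in Sect. 3.3.2:

```python
import numpy as np

def F_lambda(alpha, dalpha, k1, k3, lam, m=20000):
    """Trapezoidal quadrature of the scaled energy (32) with the functions (34)."""
    rho = np.linspace(1e-7, 1.0, m)
    a, da = alpha(rho), dalpha(rho)
    c, s = np.cos(a), np.sin(a)
    g  = 2.0*k1*s**2 + (2.0/7.0)*(1.0-c)**2 + (k3/14.0)*(24.0*c**2 + 8.0*c + 3.0)
    f0 = 2.0*c**2 + c - 3.0
    f1 = 3.0*(1.0 - 8.0*c)*s**3
    f2 = (1.0-c)**2 * s**2
    f3 = (2.0/11.0)*(1.0-c)**3 * s
    f4 = (2.0/143.0)*(1.0-c)**4
    u = -rho*da                              # the expansion variable in (32)
    integrand = (g*(rho*da)**2 + 2.0*(k1-1.0)*f0
                 + (32.0/7.0)*(lam**2/rho**2)*(f1*u + f2*u**2 + f3*u**3 + f4*u**4))
    return float(np.sum(0.5*(integrand[1:] + integrand[:-1])*np.diff(rho)))

k1, k3, lam = 6.1, 8.7, 0.16                 # SSY-like values, cf. Sect. 3.3.2
trial  = lambda rho: np.pi*(1.0 - rho)       # satisfies (31) and (35); not a minimizer
dtrial = lambda rho: -np.pi*np.ones_like(rho)
print(F_lambda(trial, dtrial, k1, k3, lam))
print(F_lambda(lambda r: 0.0*r, lambda r: 0.0*r, k1, k3, lam))   # = 0, cf. (33)
```

Minimizing such a quadrature over a parameterized family of profiles would give an upper bound on the energy of the true minimizer of (E).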
Under the assumption that, to within parity conjugacy, \(\mathcal{F}_{\lambda}\) has a unique minimizer subject to (31), the choice \(\alpha(0)=0\) would lead us to \(\alpha\equiv 0\), that is, to \(\mathbf{n}_{\rm R}\), which is a contradiction since the radial hedgehog is unstable when (29) applies. Thus, we shall enforce the condition \[\alpha(0)=\pi\quad\text{for}\quad\lambda>0. \tag{35}\] For \(\lambda=0\), \(\alpha(0)\) is instead free to vary, as in (32) integrability is guaranteed by the integrability of \(\alpha^{\prime}\).20 Footnote 20: Which requires that \(\rho\alpha^{\prime}(\rho)\) be bounded as \(\rho\to 0^{+}\). #### 3.3.1 Equilibrium Solutions Here, we specialize the analysis to _positive_ solutions of the equilibrium equation for \(\mathcal{F}_{\lambda}\): we assume that \(\alpha_{\lambda}\geqq 0\) since the minimizer of \(\mathcal{F}_{\lambda}\) is not expected to change sign. Clearly, this positive branch of solutions remains associated with the conjugate negative branch, which has equal energy. The equilibrium equation is too complicated to lend itself to analytic solutions; it was symbolically manipulated and will be conventionally called (E).21 Footnote 21: It is equivalent to the equation of motion (B6) for the effective dynamical system described in Appendix B. We could establish the asymptotic behaviour of the solutions \(\alpha_{\lambda}\) of (E) near \(\rho=0\) and \(\rho=1\), for every \(\lambda>0\). As shown in Appendix A.4, for \(k_{1}>1\) \[\alpha_{\lambda}(\rho)=\pi(1-B\rho)+\mathcal{O}\left(\rho^{2}\right)\quad \text{for}\quad\rho\to 0^{+}, \tag{36}\] where \[B=\sqrt{\frac{7}{32}}\frac{1}{\lambda}\frac{\sqrt{58058}\sqrt{21k_{1}+19k_{3}- 5}}{1421\pi}. \tag{37}\] Similarly, \[\alpha_{\lambda}(\rho)\approx C\left(\frac{1}{\rho}-1\right)\quad\text{as} \quad\rho\to 1, \tag{38}\] where \(C\) is a positive constant to be determined. #### 3.3.2 Energy Minimizers Here we explore numerically the minimizers \(\alpha_{\lambda}\) of \(\mathcal{F}_{\lambda}\), focusing on the positive equilibrium branch (thus selecting one chirality for \(\boldsymbol{n}_{\text{T}}\)). For \(\lambda=0\), this problem is solved in [33] by reinterpreting \(\mathcal{F}_{0}\) as an infinite-horizon action functional associated with an equivalent autonomous dynamical system in two-dimensional phase space. For \(\lambda>0\), a similar reinterpretation for \(\mathcal{F}_{\lambda}\) is still viable, but the associated dynamical system is _not_ autonomous; it is studied numerically in Appendix B and contrasted with the autonomous system associated with \(\mathcal{F}_{0}\). The major difference between these dynamical systems resides in their equilibrium points; in the language of the twist angle \(\alpha\), this translates into two different asymptotic values at the center of the ball \(\mathbb{B}_{R}(\boldsymbol{x}_{0})\), \[\alpha_{\lambda}(0)=\begin{cases}\widehat{\alpha}_{0}:=\arccos(-1/4)&\text{ for}\quad\lambda=0,\\ \pi&\text{for}\quad\lambda>0.\end{cases} \tag{39}\] One may say that the classical quadratic theory (\(\lambda=0\)) predicts that \(\boldsymbol{n}_{\text{R}}\) and \(\boldsymbol{n}_{\text{H}}\) are _not_ completely bridged inside the confining ball \(\mathbb{B}_{R}(\boldsymbol{x}_{0})\), whereas the quartic theory (\(\lambda>0\)) predicts that they are. Hence we could possibly use hedgehogs in chromonics confined within a ball to discriminate these theories from one another. However, although this is a qualitative difference, its observation might be experimentally precluded. 
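Since (37) is fully explicit, the near-core slope in (36) can be computed at once; a small added sketch with the SSY-like values of Sect. 3.3.2:

```python
import numpy as np

def B_of(k1, k3, lam):
    """Slope coefficient (37) of the asymptotic form alpha = pi (1 - B rho)."""
    return ((np.sqrt(7.0/32.0)/lam)
            * np.sqrt(58058.0)*np.sqrt(21.0*k1 + 19.0*k3 - 5.0)/(1421.0*np.pi))

k1, k3, lam = 6.1, 8.7, 0.16           # SSY-like values, cf. Sect. 3.3.2
B = B_of(k1, k3, lam)
print(B)                               # approx 2.7: the twist angle falls off quickly
for rho in (0.0, 0.02, 0.05):
    print(rho, np.pi*(1.0 - B*rho))    # asymptotic twist (36) near the defect
```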
A further, quantitative feature must be called upon; this is the size (relative to the ball's radius \(R\)) of the inversion ring \(r^{*}\) associated with the stable twisted hedgehogs predicted by both theories, as by (39) an inversion ring is present in both cases. For definiteness, we consider a specific case, which was suggested by the experimental study in [31]. This is the case of the chromonic liquid crystal SSY in an aqueous solution (at a wt/wt concentration of 30% and a temperature of \(25\,^{\circ}\)C) confined within a spherical cavity produced inside a polymeric matrix enforcing homeotropic anchoring for the director on its boundary (see Fig. 5). Material constants are derived from [60] and deliver \(k_{1}\approx 6.1\) and \(k_{3}\approx 8.7\),22 which, as shown in Fig. 3, locate the radial hedgehog in its unstable domain. The radius of the spherical cavity in Fig. 5 is \(R\approx 40.4\mu\)m. For the same SSY solution in the same physical conditions, in [20] we estimated \(a\approx 6.4\mu\)m, thus here we take \(\lambda=0.16\). The profile of the minimizing twist angle \(\alpha_{\lambda}\) corresponding to these parameters is shown in Fig. 4 (red curve) against the minimizing profile \(\alpha_{0}\) for \(\lambda=0\) (blue curve). It is apparent that the inversion rings associated with these solutions are appreciably different. Figure 5 reproduces a spherical cavity (in a polymeric matrix) observed in [31]; there we also draw the inversion rings predicted by both the quadratic and the quartic theories. Judging from this single comparison and taking for granted that the defect shown here is indeed a twisted hedgehog, we may say that the quartic theory seems to capture better the size of the inner structure enclosed by the inversion ring. This core structure will be further detailed in Sect. 4. Letting \(\rho^{*}:=r^{*}/R\) designate the scaled radius of the inversion ring, we explored the dependence of \(\rho^{*}\) on \(\lambda\). The plot in Fig. 6 summarizes the outcomes of this analysis; it shows how \(\rho^{*}\) saturates to \(\rho^{*}_{\infty}\approx 0.82\) as \(\lambda\) grows indefinitely. Not only does the inversion ring size increase monotonically with \(\lambda\), but also the defect core inside the inversion ring is qualitatively different for \(\lambda=0\) and \(\lambda>0\). These differences will be highlighted in the following section. ## 4 Spiraling Cores Here we go into deeper details of the twisted hedgehog \(\mathbf{n}_{\rm T}\) that minimizes the elastic free energy \(\mathcal{F}_{\lambda}\); we are especially interested in the behaviour of its field lines within the defect core, which is conveniently identified with a sphere of radius \(r^{*}\), the radius of the inversion ring. We shall again study primarily the distortion afforded by the quartic theory with \(\lambda>0\); this case will also be contrasted against the case \(\lambda=0\) of the classical quadratic theory. We shall see that the differences between the two cases are both qualitative and quantitative. We split our analysis into two steps; in the first, we study the field lines of \(\mathbf{n}_{\rm T}\) on the equatorial plane of \(\mathbb{B}_{R}(\mathbf{x}_{0})\) (orthogonal to the symmetry axis); in the second, we see how these lines behave away from that plane. ### Equatorial Field Lines In a spherical coordinate system \((r,\vartheta,\varphi)\) with polar angle \(\vartheta\in[0,\pi]\), the equatorial plane is described by \(\vartheta=\frac{\pi}{2}\) and \(r\geqq 0\), \(\varphi\in[0,2\pi)\). 
Scaling lengths to the radius \(R\) of the spherical cavity and letting \(\rho\) be still defined as in (30) above, we see from (24) that the field lines of \(\mathbf{n}_{\rm T}\) on the equatorial plane are the solutions \((\varphi(\tau),\rho(\tau))\) to the differential system \[\frac{\mathrm{d}\varphi}{\mathrm{d}\tau} =1, \tag{40a}\] \[\frac{\mathrm{d}\rho}{\mathrm{d}\tau} =\frac{\rho(\tau)}{\tan(\alpha_{\lambda}(\rho(\tau)))}, \tag{40b}\] subject to \[\varphi(0)=0\quad\text{and}\quad\rho(0)=\rho_{0}\quad\text{with}\quad 0< \rho_{0}<1, \tag{40c}\] where \(\tau\) is a parameter. Figure 5: Reproduction of Fig. 5b of [31] showing a spherical cavity (in a polymeric matrix) of radius \(R\approx 40.4\mu\)m enclosing a SSY solution in water with concentration 30% (wt/wt) and temperature \(25\,^{\circ}\)C. The homeotropic anchoring on the boundary of the sphere induces a (presumably twisted) hedgehog at the center exhibiting the typical Maltese cross when observed between crossed polarizers. The larger (green) and smaller (blue) circles superimposed on the figure are the inversion rings predicted by the quartic and quadratic theories, respectively. In absolute terms, with \(a\approx 6.4\mu\)m (from [20]), that is, \(\lambda\approx 0.16\), we have \(r_{0}^{*}=\rho_{0}^{*}R\approx 1\mu\)m and \(r_{\lambda}^{*}=\rho_{\lambda}^{*}R\approx 8.1\mu\)m. The curves solving (40) may wind several times around the origin as \(\tau\to+\infty\); the appropriate solution of (40a) is then \[\varphi=\tau\mod 2\pi. \tag{41}\] It follows from (40) that every field line that starts inside or outside the inversion ring remains inside or outside that ring, respectively. The inversion ring at \(\rho=\rho^{*}\) is a field line itself, since \(\rho\equiv\rho^{*}\) is a solution of (40b). Moreover, a field line that starts from \(\rho_{0}<\rho^{*}\) keeps spiraling (clockwise) around the point defect at the origin, while a field line that starts from \(\rho_{0}>\rho^{*}\) is soon bent (anti-clockwise) towards the equator of \(\mathbb{B}_{R}(\mathbf{x}_{0})\), where it points radially away from the defect (see Fig. 7). Another qualitative feature of the field lines of \(\mathbf{n}_{\mathrm{T}}\) is revealed by (40). Given the monotonicity of \(\rho(\tau)\) both inside and outside the inversion ring, this function is invertible; a straightforward integration yields the following formula for its inverse, \[\tau(\rho)=\begin{cases}\int_{\rho_{0}}^{\rho}\frac{\tan\alpha_{\lambda}( \xi)}{\xi}\,\mathrm{d}\xi&\text{for}\quad\rho>\rho^{*},\\ \int_{\rho}^{\rho_{0}}\frac{\tan\alpha_{\lambda}(\xi)}{\xi}\,\mathrm{d}\xi& \text{for}\quad\rho<\rho^{*}.\end{cases} \tag{42}\] Two noteworthy consequences follow from (42). First, from the divergence of both integrals as \(\rho\to\rho^{*}\) (from above and from below, respectively), we see that the field lines of \(\mathbf{n}_{\mathrm{T}}\) wind infinitely many times around the inversion ring, no matter which elastic theory is employed to describe a twisted hedgehog. Second, by taking the limit as \(\rho\to 0^{+}\) in the second integral, we see that it may or may not diverge, depending on the limiting value \(\alpha_{\lambda}(0)\). Since the latter depends on whether \(\lambda=0\) or \(\lambda>0\), the two theories being compared here afford different qualitative predictions. 
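Both consequences of (42) can be seen numerically. In the added sketch below, two stand-in profiles carry the two core values of (39) (their linear shape is illustrative only, and the integral is taken in magnitude, so that it measures the winding angle accumulated while spiraling inward):

```python
import numpy as np
from scipy.integrate import quad

# stand-in twist profiles carrying the two core values of (39)
alpha_quartic   = lambda r: np.pi*(1.0 - r)               # alpha(0) = pi
alpha_quadratic = lambda r: np.arccos(-0.25)*(1.0 - r)    # alpha(0) = arccos(-1/4)

def winding_time(alpha, rho, rho0):
    """Magnitude of the second integral in (42): parameter time needed to
    spiral inward from rho0 (inside the ring) down to rho."""
    val, _ = quad(lambda x: abs(np.tan(alpha(x)))/x, rho, rho0, limit=400)
    return val

# rho0 sits inside each profile's inversion ring (ring at 0.5 and ~0.14, resp.)
for rho in (1e-2, 1e-4, 1e-6):
    print(rho,
          winding_time(alpha_quartic,   rho, 0.25),   # converges: finite winding
          winding_time(alpha_quadratic, rho, 0.05))   # grows like log(1/rho)
```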
According to the quadratic theory, for which \(\alpha_{0}(0)=\arccos(-1/4)\), the second integral in (42) diverges and the field lines of \(\mathbf{n}_{\rm T}\) wind infinitely many times around the point defect at the origin; asymptotically, they are _logarithmic_ spirals. On the contrary, according to the quartic theory, for which \(\alpha_{\lambda}(0)=\pi\) for all \(\lambda>0\), by (36), the second integral in (42) converges and the field lines of \(\mathbf{n}_{\rm T}\) wind a finite number of times around the defect; asymptotically, they are _Archimedean_ spirals. In Fig. 7, the field lines of \(\mathbf{n}_{\rm T}\) in the equatorial plane are contrasted for the two theories, when the twist angle is given by the functions \(\alpha_{0}\) and \(\alpha_{\lambda}\) whose graphs are shown in Fig. 4. In both cases the inversion ring is zoomed in to highlight the different nature of the asymptotic spirals around the point defects. ### Field Lines in Space As is easily seen from (A9), the field lines of \(\mathbf{n}_{\rm T}\) away from the equatorial plane of \(\mathbb{B}_{R}(\mathbf{x}_{0})\) are described in spherical coordinates \((r,\vartheta,\varphi)\) by the solutions to the following differential system \[\frac{\mathrm{d}\rho}{\mathrm{d}\tau} =\rho(\tau)\frac{1+(\cos\alpha_{\lambda}(\rho(\tau))-1)\sin^{2} \vartheta}{\sin\alpha_{\lambda}(\rho(\tau))}, \tag{43a}\] \[\frac{\mathrm{d}\vartheta}{\mathrm{d}\tau} =\frac{(\cos\alpha_{\lambda}(\rho(\tau))-1)\cos\vartheta\sin \vartheta}{\sin\alpha_{\lambda}(\rho(\tau))},\] (43b) \[\frac{\mathrm{d}\varphi}{\mathrm{d}\tau} =1, \tag{43c}\] where \(\tau\) is a parameter chosen again so that (41) holds. Figure 7: Field lines of \(\mathbf{n}_{\rm T}\) in the equatorial plane of \(\mathbb{B}_{R}(\mathbf{x}_{0})\) according to the two elastic theories considered here. Material constants correspond to SSY in the same conditions that apply to both Figs. 4 and 5. The flow described by (43) is mirror-symmetric with respect to the equatorial plane (\(\vartheta=\frac{\pi}{2}\)) and, as shown in Fig. 8, possesses two families of _negatively_ invariant sets, balls and circular cylinders with radii larger than the radius \(r^{*}\) of the inversion ring. This means that field lines of \(\boldsymbol{n}_{\mathrm{T}}\) may only leave the regions enclosed by these sets and never enter them. To prove this qualitative property, we denote by \(\mathbb{B}_{r}\) and \(\mathbb{C}_{r}\) these families of balls and cylinders, respectively, and by \(\boldsymbol{\nu}\) their outer unit normal. It readily follows from (A9) that \[\boldsymbol{n}_{\mathrm{T}}\cdot\boldsymbol{\nu}|_{\partial \mathbb{B}_{r}} =\sin^{2}\vartheta\cos\alpha+\cos^{2}\vartheta, \tag{44a}\] \[\boldsymbol{n}_{\mathrm{T}}\cdot\boldsymbol{\nu}|_{\partial \mathbb{C}_{r}} =\sin\vartheta\cos\alpha, \tag{44b}\] which are both non-negative for all \(\vartheta\in[0,\pi]\) whenever \(\alpha\leqq\frac{\pi}{2}\), that is, for \(r>r^{*}\). A further geometric illustration of this property is given in Fig. 9. ## 5 Conclusion In [20], we proposed a quartic twist theory for the curvature elasticity of chromonic liquid crystals, for which we have been seeking corroborating evidence. This theory introduces a phenomenological length \(a\), which in [20] was estimated to be of the order of microns by fitting published data for chromonics filling cylinders with degenerate planar anchoring on their lateral boundary. 
These data could also be interpreted by use of the classical quadratic Oseen-Frank theory [6, 7], which however would be unable to predict stable shapes for the tactoidal droplets observed in the biphasic region of these materials [14]. Figure 8: Field lines of \(\boldsymbol{n}_{\mathrm{T}}\) away from the equatorial plane of \(\mathbb{B}_{R}(\boldsymbol{x}_{0})\), for the same choice of parameters in both Figs. 4 and 5. Only the two limiting negatively invariant sets, the ball \(\mathbb{B}_{r^{*}}\) and the cylinder \(\mathbb{C}_{r^{*}}\) built on the inversion ring, are shown. Field lines are black inside \(\mathbb{C}_{r^{*}}\) and red outside. The zoomed region on the right is the ball of radius \(r^{*}\); two field lines are drawn that start near the boundary of \(\mathbb{B}_{r^{*}}\), one inside \(\mathbb{C}_{r^{*}}\) (black) and the other outside (red). We turned to hedgehog defects and their core structure to find an instance where the two theories would afford different predictions, which could serve to differentiate them. We considered a spherical cavity of radius \(R\) enforcing homeotropic anchoring on its boundary, like those produced in [31], and studied the _twisted_ hedgehogs predicted by both theories in the region in parameter space where the radial hedgehog would be unstable. The defect core of a twisted hedgehog director field \(\mathbf{n}_{\rm T}\) is delimited by an inversion ring. Two properties of the defect core are predicted in stark contrast by the two theories: one is qualitative, the other quantitative. We start with the latter. The radius \(r^{*}\) of the inversion ring depends only on the elastic anisotropy for the quadratic theory and also on the ratio \(\lambda=a/R\) for the quartic theory. For SSY in the same physical conditions as in [31], taking \(a\) from [20], we estimated \(r^{*}\) to be nearly an order of magnitude larger for the quartic theory compared to the quadratic one, \(8.1\,\mu\)m against \(1\,\mu\)m. On the qualitative side, we showed that the field lines of \(\mathbf{n}_{\rm T}\) spiral differently around the point defect according to which theory is adopted: in the quadratic theory, they are logarithmic spirals; in the quartic theory, they are instead Archimedean spirals. We may perhaps say that the defect core of twisted defects, with its distinctive quantitative and qualitative features, could be the hallmark of a quartic elastic theory for chromonics. However, such a clear distinction between quadratic and quartic theories rests on \(a\) being in the order of microns; were it much smaller, the differences highlighted here could not be appreciated. A thorough study with direct observations of the core structure of twisted hedgehogs would be desirable. Another critical issue that deserves further research concerns the splay constant \(K_{11}\). If the recent theoretical estimate for the elastic constants in [61] is to be confirmed by different, independent approaches, not only \(K_{22}\), but also \(K_{11}\) would be smaller than \(K_{24}\) for chromonics. This, as shown in [14], would ignite the instability of chromonic droplets in an isotropic fluid environment enforcing homeotropic anchoring at the interface. Figure 9: For the same field \(\mathbf{n}_{\rm T}\) in Fig. 8, the director profiles are shown on two parallel sections of \(\mathbb{B}_{R}(\mathbf{x}_{0})\) with planes parallel to the equator: one cuts the ball \(\mathbb{B}_{r^{*}}\) at mid-height, \(z=r^{*}/2\), while the other cuts the ball \(\mathbb{B}_{R}(\mathbf{x}_{0})\) at mid-height, \(z=R/2\). 
The defects studied in this paper inhabit a spherical cavity of _fixed_ shape, and so they are saved from that instability. However, should homeotropic anchoring be realistic for chromonic droplets, if \(K_{11}<K_{24}\), our quartic _twist_ theory could _not_ prevent such a shape instability, as it would be driven by a concentration of splay. Thus, were homeotropic chromonic droplets actually observed, our elastic theory would need to be amended. ## Appendix A Trial Twisted Hedgehog This Appendix contains ancillary results instrumental to our analysis in the main text. ### Useful Computations Identifying the unit vector \(\mathbf{e}\) designating in (22b) the symmetry axis of \(\mathbf{n}_{\mathrm{T}}\) as the polar axis \(\mathbf{e}_{z}\) of standard spherical coordinates \((r,\vartheta,\varphi)\), where \(\vartheta\in[0,\pi]\) is the polar angle and \(\varphi\in[0,2\pi)\) is the azimuthal angle, we represent the gradient of the trial twisted field through the formula \[\nabla\mathbf{n}_{\mathrm{T}}= \frac{1}{r}\left[\mathbf{P}_{r}+\sin\alpha\mathbf{W}_{z}+(1-\cos \alpha)\mathbf{W}_{z}^{2}\right]+\left(\alpha^{\prime}\cos\alpha-\frac{1}{r} \sin\alpha\right)\mathbf{W}_{z}\mathbf{e}_{r}\otimes\mathbf{e}_{r}+\left[\alpha^{\prime}\sin\alpha-\frac{1}{r}(1-\cos\alpha)\right] \mathbf{W}_{z}^{2}\mathbf{e}_{r}\otimes\mathbf{e}_{r}, \tag{A1}\] where \(\mathbf{P}_{r}:=\mathbf{I}-\mathbf{e}_{r}\otimes\mathbf{e}_{r}\) is the projection onto the plane orthogonal to \(\mathbf{e}_{r}\), \(\mathbf{W}_{z}\) is the skew-symmetric tensor with axial vector \(\mathbf{e}_{z}\), and a prime \({}^{\prime}\) denotes differentiation with respect to \(r\). 
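Since (A1) is used repeatedly below, an added finite-difference check is reassuring; the twist profile in this sketch is again an illustrative stand-in:

```python
import numpy as np

Wz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 0.0]])
alpha  = lambda r: np.pi*(1.0 - r)   # illustrative twist profile
dalpha = lambda r: -np.pi            # its derivative alpha'

def n_T(x):
    r = np.linalg.norm(x)
    a = alpha(r)
    R = np.eye(3) + np.sin(a)*Wz + (1.0 - np.cos(a))*(Wz @ Wz)   # (22b)
    return R @ (x / r)

def grad_analytic(x):
    """The gradient formula (A1) evaluated at x."""
    r = np.linalg.norm(x)
    er = x / r
    a, da = alpha(r), dalpha(r)
    Pr = np.eye(3) - np.outer(er, er)
    G = (Pr + np.sin(a)*Wz + (1.0 - np.cos(a))*(Wz @ Wz)) / r
    G += (da*np.cos(a) - np.sin(a)/r) * np.outer(Wz @ er, er)
    G += (da*np.sin(a) - (1.0 - np.cos(a))/r) * np.outer(Wz @ Wz @ er, er)
    return G

def grad_fd(x, h=1e-6):
    """Central finite differences of n_T, G[i, j] = d (n_T)_i / d x_j."""
    G = np.zeros((3, 3))
    for j in range(3):
        e = np.zeros(3); e[j] = h
        G[:, j] = (n_T(x + e) - n_T(x - e)) / (2.0*h)
    return G

x = np.array([0.23, 0.31, 0.17])
print(np.max(np.abs(grad_analytic(x) - grad_fd(x))))   # ~1e-9: (A1) checks out
```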
The following expressions for the traditional measures of distortion of \(\mathbf{n}_{\mathrm{T}}\) in (22a) are consequences of (A1); they are written in the local frame \((\mathbf{e}_{r},\mathbf{e}_{\vartheta},\mathbf{e}_{\varphi})\) of spherical coordinates: \[\operatorname{div}\mathbf{n}_{\mathrm{T}}=\frac{1}{r}\left[-(r\alpha^{\prime})\sin\alpha\sin^{2}\vartheta+1-(1-\cos\alpha)\cos^{2}\vartheta+\cos\alpha\right], \tag{A2a}\] \[\operatorname{curl}\mathbf{n}_{\mathrm{T}}=\frac{1}{r}\left\{2\sin\alpha\cos\vartheta\,\mathbf{e}_{r}-\sin\vartheta\left[(r\alpha^{\prime})\cos\alpha+\sin\alpha\right]\mathbf{e}_{\vartheta}+\cos\vartheta\sin\vartheta\left[-(r\alpha^{\prime})\sin\alpha+(1-\cos\alpha)\right]\mathbf{e}_{\varphi}\right\}, \tag{A2b}\] \[\mathbf{n}_{\mathrm{T}}\cdot\operatorname{curl}\mathbf{n}_{\mathrm{T}}=\frac{1}{r}\cos\vartheta\left[-(r\alpha^{\prime})(1-\cos\alpha)\sin^{2}\vartheta+2\sin\alpha\right], \tag{A2c}\] \[\begin{split}\mathbf{n}_{\mathrm{T}}\times\operatorname{curl}\mathbf{n}_{\mathrm{T}}=\frac{1}{r}\{&\sin^{2}\vartheta[(r\alpha^{\prime})\sin\alpha(1-(1-\cos\alpha)\sin^{2}\vartheta)-(1-\cos\alpha)^{2}\cos^{2}\vartheta+\sin^{2}\alpha]\mathbf{e}_{r}\\ &+\sin\vartheta[-(r\alpha^{\prime})\cos\alpha(1-(1-\cos\alpha)\sin^{2}\vartheta)+\sin\alpha(-1+(1-\cos\alpha)(1+\cos^{2}\vartheta))]\mathbf{e}_{\vartheta}\\ &+\sin\vartheta\cos\vartheta[(r\alpha^{\prime})\sin\alpha(1-(1-\cos\alpha)\sin^{2}\vartheta)-(1-\cos\alpha)(1-(1-\cos\alpha)\sin^{2}\vartheta)+2\sin^{2}\alpha]\mathbf{e}_{\varphi}\},\end{split} \tag{A2d}\] \[\operatorname{tr}(\nabla\mathbf{n}_{\mathrm{T}})^{2}-(\operatorname{div}\mathbf{n}_{\mathrm{T}})^{2}=-2(\cos^{2}\vartheta-(r\alpha^{\prime})\sin\alpha\sin^{2}\vartheta+\cos\alpha\sin^{2}\vartheta). \tag{A2e}\] Making use of (A2) and (31) in the free energy density \(W_{\mathrm{QT}}\) in (12), and integrating over \(\mathscr{B}=\mathbb{B}_{R}(\mathbf{x}_{0})\), we arrive at the following scaled form for \(\mathscr{F}\) in (1), \[\frac{15\mathscr{F}[\mathbf{n}_{\mathrm{T}}]}{8\pi K_{22}R}=:\mathcal{F}_{\lambda}[\alpha]+\mathcal{F}_{\mathrm{R}}, \tag{A3}\] where \(\mathcal{F}_{\lambda}[\alpha]\) and \(\mathcal{F}_{\mathrm{R}}\) are given by (32) and \[\mathcal{F}_{\mathrm{R}}=15(k_{1}-k_{24}), \tag{A4}\] respectively. ### Topological Charge Here we compute the topological charge \(N(\mathbf{n}_{\rm T})\) of the twisted hedgehog \(\mathbf{n}_{\rm T}\) in (22). To this end, we first note that \[\nabla_{\!\rm s}\mathbf{n}_{\rm T}=(\nabla\mathbf{n})\mathbf{P}_{r}=\frac{1}{r}[\mathbf{P}_{r}+\sin\alpha\mathbf{W}_{z}\mathbf{P}_{r}+(1-\cos\alpha)\mathbf{W}_{z}^{2}\mathbf{P}_{r}], \tag{A5}\] where use has been made of (A1). 
In the frame \((\mathbf{e}_{r},\mathbf{e}_{\vartheta},\mathbf{e}_{\varphi})\), \(\nabla_{\!\mathrm{s}}\mathbf{n}_{\mathrm{T}}\) in (A5) is also represented as \[\nabla_{\!\mathrm{s}}\mathbf{n}_{\mathrm{T}}=\frac{1}{r}\left\{(\sin^{2}\vartheta+\cos^{2}\vartheta\cos\alpha)\mathbf{e}_{\vartheta}\otimes\mathbf{e}_{\vartheta}-\cos\vartheta\sin\alpha\,\mathbf{e}_{\vartheta}\otimes\mathbf{e}_{\varphi}+\cos\vartheta\sin\alpha\,\mathbf{e}_{\varphi}\otimes\mathbf{e}_{\vartheta}+\cos\alpha\,\mathbf{e}_{\varphi}\otimes\mathbf{e}_{\varphi}-\sin\vartheta\cos\vartheta(1-\cos\alpha)\mathbf{e}_{r}\otimes\mathbf{e}_{\vartheta}-\sin\vartheta\sin\alpha\,\mathbf{e}_{r}\otimes\mathbf{e}_{\varphi}\right\}, \tag{A6}\] where we have employed the identity \[\mathbf{e}_{\vartheta}=\frac{1}{\sin\vartheta}(\cos\vartheta\,\mathbf{e}_{r}-\mathbf{e}_{z}), \tag{A7}\] having identified \(\mathbf{e}\) and \(\mathbf{e}_{z}\), as above. It is now a simple matter to compute in the basis \((\mathbf{e}_{r},\mathbf{e}_{\vartheta},\mathbf{e}_{\varphi})\) the tensor \((\nabla_{\!\mathrm{s}}\mathbf{n}_{\mathrm{T}})^{*}\), as it is represented by the cofactor matrix of the matrix representing \(\nabla_{\!\mathrm{s}}\mathbf{n}_{\mathrm{T}}\) in (A6). A tedious but simple calculation delivers \[(\nabla_{\!\mathrm{s}}\mathbf{n}_{\mathrm{T}})^{*}=\frac{1}{r^{2}}\left\{\sin\vartheta\cos\vartheta(\cos\alpha-1)\mathbf{e}_{\vartheta}\otimes\mathbf{e}_{r}+\sin\vartheta\sin\alpha\,\mathbf{e}_{\varphi}\otimes\mathbf{e}_{r}+(\sin^{2}\vartheta\cos\alpha+\cos^{2}\vartheta)\mathbf{e}_{r}\otimes\mathbf{e}_{r}\right\}. \tag{A8}\] Since it follows from (22) and (A7) that \[\mathbf{n}_{\mathrm{T}}=\cos\vartheta\sin\vartheta(\cos\alpha-1)\mathbf{e}_{\vartheta}+\sin\vartheta\sin\alpha\,\mathbf{e}_{\varphi}+(\sin^{2}\vartheta\cos\alpha+\cos^{2}\vartheta)\mathbf{e}_{r}, \tag{A9}\] it is easily concluded that \[\mathbf{n}_{\mathrm{T}}\cdot(\nabla_{\!\mathrm{s}}\mathbf{n}_{\mathrm{T}})^{*}\mathbf{e}_{r}=\frac{1}{r^{2}}. \tag{A10}\] Taking \(\mathscr{S}\) in (18) to be a sphere of radius \(r\) centered at \(\mathbf{x}_{0}\), we readily obtain that \(N(\mathbf{n}_{\mathrm{T}})=+1\), for any function \(\alpha\), which is precisely (23) in the main text. In particular, by taking \(\alpha\equiv 0\) or \(\alpha\equiv\pi\), we recover (21).
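The chain of identities (A6)-(A10) lends itself to a quick symbolic check. The sketch below is our own, not part of the original analysis: it builds the matrix of \(\nabla_{\!\mathrm{s}}\mathbf{n}_{\mathrm{T}}\) in the ordered frame \((\mathbf{e}_{r},\mathbf{e}_{\vartheta},\mathbf{e}_{\varphi})\), forms its cofactor matrix, and confirms that \(\mathbf{n}_{\mathrm{T}}\cdot(\nabla_{\!\mathrm{s}}\mathbf{n}_{\mathrm{T}})^{*}\mathbf{e}_{r}=1/r^{2}\) for an arbitrary twist angle \(\alpha\).

```python
# Symbolic check of (A6)-(A10): n_T . (grad_s n_T)* e_r = 1/r^2.
# All components are written in the ordered frame (e_r, e_theta, e_phi).
import sympy as sp

r, th, al = sp.symbols('r vartheta alpha', positive=True)
s, c, sa, ca = sp.sin(th), sp.cos(th), sp.sin(al), sp.cos(al)

# Matrix of grad_s n_T from (A6); row i, column j is the e_i (x) e_j component.
M = sp.Matrix([
    [0, -s*c*(1 - ca), -s*sa],    # e_r row
    [0, s**2 + c**2*ca, -c*sa],   # e_theta row
    [0, c*sa,           ca],      # e_phi row
]) / r

# Cofactor matrix = transpose of the classical adjugate.
cof = M.adjugate().T

# n_T in the same frame, from (A9).
nT = sp.Matrix([s**2*ca + c**2, c*s*(ca - 1), s*sa])
e_r = sp.Matrix([1, 0, 0])

print(sp.simplify(nT.dot(cof * e_r)))  # expected: r**(-2), independent of alpha
```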
### Second Variation

We let \(\mathscr{F}_{4}\) denote the quartic term contribution to \(\mathscr{F}\) in (1) arising from \(W_{\mathrm{QT}}\) in (12), \[\mathscr{F}_{4}[\mathbf{n}]:=\frac{1}{4}K_{22}a^{2}\int_{\mathscr{B}}(\mathbf{n}\cdot\operatorname{curl}\mathbf{n})^{4}\,\mathrm{d}V. \tag{A11}\] By applying to \(\mathscr{F}_{4}\) the method illustrated in [12], we readily see that the second variation \(\delta^{2}\mathscr{F}_{4}\) of \(\mathscr{F}_{4}\) at \(\mathbf{n}\) can be given the general form \[\delta^{2}\mathscr{F}_{4}(\mathbf{n})[\mathbf{v}]=K_{22}a^{2}\int_{\mathscr{B}}\Big{\{}(\mathbf{n}\cdot\operatorname{curl}\mathbf{n})^{2}\Big{[}3(\mathbf{v}\cdot\operatorname{curl}\mathbf{n}+\mathbf{n}\cdot\operatorname{curl}\mathbf{v})^{2}+2(\mathbf{n}\cdot\operatorname{curl}\mathbf{n})(v^{2}\mathbf{n}\cdot\operatorname{curl}\mathbf{n}+\mathbf{v}\cdot\operatorname{curl}\mathbf{v})\Big{]}\Big{\}}\,\mathrm{d}V, \tag{A12}\] which is a quadratic functional in the perturbation field \(\mathbf{v}\) subject to the orthogonality condition \[\mathbf{n}\cdot\mathbf{v}\equiv 0. \tag{A13}\] It is a very simple matter to check that \(\delta^{2}\mathscr{F}_{4}(\mathbf{n}_{\mathrm{R}})\equiv 0\), as \(\operatorname{curl}\mathbf{n}_{\mathrm{R}}\equiv\mathbf{0}\).

### Asymptotic Behaviours

Here we give a few details about the derivation of the asymptotic behaviours in (36) and (38) of the equilibrium solutions \(\alpha_{\lambda}\) for \(\mathcal{F}_{\lambda}\). We renounce writing the equilibrium equation of \(\mathcal{F}_{\lambda}\) explicitly, since it is too complicated; as in the main text, it will be denoted by (E) and manipulated by symbolic calculus. An equivalent form of (E) will be encountered in Appendix B below. When \(\rho\) is near \(0\), we write \(\alpha_{\lambda}\) as \[\alpha_{\lambda}(\rho)\approx\pi\left(1-B\rho^{\beta}\right), \tag{A14}\] which satisfies (35), and seek \(B>0\) and \(\beta>1/4\), assumed to exist, the latter requirement being a direct consequence of the integrability of \(\mathcal{F}_{\lambda}\). In our asymptotic method, which is an adaptation of the classical method of Frobenius (see, for example, p. 396 of [62]), we determine both \(\beta\) and \(B\) by requiring that the dominant term of (E) vanishes. The first two powers of (E) near \(\rho=0\) are as follows, \[B^{2}\pi^{2}\lambda^{2}P_{4}(\beta)\rho^{-2+3\beta}+\frac{143}{64}P_{2}(\beta)\rho^{\beta}, \tag{A15a}\] where the polynomials \(P_{2}\) and \(P_{4}\) are defined as \[P_{2}(\beta):=\left(\frac{19k_{3}}{42}+\frac{8}{21}\right)\beta^{2}+\left(\frac{19k_{3}}{42}+\frac{8}{21}\right)\beta+k_{1}-1, \tag{A15b}\] \[P_{4}(\beta):=\beta^{4}+4\beta^{3}+\frac{13}{3}\beta^{2}-\frac{143}{48}\beta-\frac{1287}{128}. \tag{A15c}\] The first power in (A15a) is dominant over the second for \(\rho\to 0\) if \(\frac{1}{4}<\beta<1\), and so \(\beta\) should be chosen as a real root of \(P_{4}\) in that interval, which however fails to exist. On the other hand, if \(\beta>1\), the second power in (A15a) becomes dominant and \(\beta\) should be chosen as a real root of \(P_{2}\) in that range, which too fails to exist whenever \(k_{1}>1\). Thus, (A14) could be the asymptotic form of \(\alpha_{\lambda}\) only if \(\beta=1\), which makes (E) take the asymptotic form \[\left(\frac{1421}{192}\pi^{2}\lambda^{2}B^{2}-\frac{2717}{672}k_{3}-\frac{143}{32}k_{1}+\frac{715}{672}\right)\rho+\mathcal{O}\left(\rho^{3}\right)=0. \tag{A16}\] Requiring the dominant power of (A16) to vanish determines \(B\) as in (37). In a similar, but perhaps more customary way, by linearizing (E) about \(\alpha=0\), we obtain that \[\alpha^{\prime}(\rho)\rho+2\alpha(\rho)\approx 0. \tag{A17}\] By solving it subject to (31), we readily arrive at (38) in the main text.
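The two root-existence claims used above are easy to confirm; the following sketch is ours: it checks numerically that \(P_{4}\) has no real root in \((1/4,1)\), and records why \(P_{2}\) has no root with \(\beta>1\) when \(k_{1}>1\).

```python
# Check the root claims behind (A15): P4 has no real root in (1/4, 1),
# and P2 has no real root with beta > 1 whenever k1 > 1.
import sympy as sp

beta = sp.symbols('beta', real=True)
P4 = (beta**4 + 4*beta**3 + sp.Rational(13, 3)*beta**2
      - sp.Rational(143, 48)*beta - sp.Rational(1287, 128))

roots = [float(r) for r in sp.real_roots(P4)]
print(roots)                                  # two real roots, both outside (1/4, 1)
print(any(0.25 < r < 1 for r in roots))       # False

# P2(beta) = c*(beta**2 + beta) + k1 - 1 with c = 19*k3/42 + 8/21 > 0.
# For beta > 1, P2 - (2*c + k1 - 1) = c*(beta - 1)*(beta + 2) > 0,
# so P2 > k1 - 1 > 0 whenever k1 > 1: no root exists in that range.
k1, k3 = sp.symbols('k1 k3', positive=True)
c = sp.Rational(19, 42)*k3 + sp.Rational(8, 21)
P2 = c*beta**2 + c*beta + k1 - 1
print(sp.simplify(P2.subs(beta, 1)))          # 19*k3/21 + 16/21 + k1 - 1
```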
## Appendix B Equivalent Dynamical System

In this Appendix we construct a dynamical analogy for the positive branch of equilibrium solutions \(\alpha_{\lambda}\) for \(\mathcal{F}_{\lambda}\) in (32) and give a phase-space representation for them. We reinterpret \(\mathcal{F}_{\lambda}\) as the action of a dynamical system by introducing the _effective time_ \[t:=-\ln\rho. \tag{B1}\] Thus, in particular, the center of the ball \(\mathbb{B}_{R}(\mathbf{x}_{0})\) at \(\rho=0\) is approached in the new variable when \(t\rightarrow+\infty\), while the initial time \(t=0\) corresponds to the boundary \(\partial\mathbb{B}_{R}(\mathbf{x}_{0})\) at \(\rho=1\). Correspondingly, the twist angle \(\alpha\) becomes a function on \([0,\infty)\), which is defined by \[a(t):=\alpha\left(e^{-t}\right) \tag{B2}\] and by (31) satisfies \[a(0)=0. \tag{B3}\] \(\mathcal{F}_{\lambda}[\alpha]\) thus acquires the form of an infinite-horizon action, \[\mathcal{A}_{\lambda}[a]:=\int_{0}^{\infty}\mathcal{L}_{\lambda}(a,\dot{a},t)\,\mathrm{d}t, \tag{B4}\] where the Lagrangian \(\mathcal{L}_{\lambda}\) is defined as \[\mathcal{L}_{\lambda}(a,\dot{a},t):=e^{-t}\left[g(a)\dot{a}^{2}+2(k_{1}-1)f_{0}(a)\right]+\frac{32}{7}\lambda^{2}e^{t}\left[\sum_{n=1}^{4}f_{n}(a)\dot{a}^{n}\right] \tag{B5}\] and a superimposed dot denotes differentiation with respect to \(t\). The _orbits_ of the system are solutions of the equation of motion for \(\mathcal{L}_{\lambda}\), \[\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial\mathcal{L}_{\lambda}}{\partial\dot{a}}-\frac{\partial\mathcal{L}_{\lambda}}{\partial a}=e^{-t}\left[-g^{\prime}(a)\dot{a}^{2}-2g(a)\ddot{a}+2g(a)\dot{a}+2(k_{1}-1)f_{0}^{\prime}(a)\right]-\frac{32}{7}\lambda^{2}e^{t}\left[\sum_{n=2}^{4}(n-1)f_{n}^{\prime}(a)\dot{a}^{n}+\ddot{a}\left(\sum_{n=2}^{4}n(n-1)f_{n}(a)\dot{a}^{n-2}\right)+\sum_{n=1}^{4}nf_{n}(a)\dot{a}^{n-1}\right]=0, \tag{B6}\] where a prime \({}^{\prime}\) denotes differentiation. We are interested in the orbits that start from the initial condition (B3) (with arbitrary \(\dot{a}(0)\)) and whose action \(\mathcal{A}_{\lambda}\) is bounded and a minimum. To this end, we first identify the critical points of the dynamical system; these are obtained when both \(\dot{a}\equiv 0\) and \(\ddot{a}\equiv 0\) in (B6), i.e., whenever \[2(k_{1}-1)f_{0}^{\prime}(a)-(\lambda e^{t})^{2}f_{1}(a)=0. \tag{B7}\] For \(\lambda>0\), they are \[a=k\pi\quad\text{with}\quad k\in\mathbb{Z}. \tag{B8}\] For \(\lambda=0\), which is the case studied in [33], they are instead \[a=k\pi\quad\text{and}\quad a=\pm\arccos(-1/4)+2k\pi\quad\text{with}\quad k\in\mathbb{Z}. \tag{B9}\] The trajectory \(a(t)\equiv 0\) represents the radial hedgehog, with action \(\mathcal{A}_{\lambda}[0]=0\). On the other hand, for \(\lambda>0\) there may be a trajectory \(a_{\lambda}=a_{\lambda}(t)\) such that \(\lim_{t\to\infty}a_{\lambda}(t)=\pi\) and the action \(\mathcal{A}_{\lambda}[a_{\lambda}]\) is finite; it then remains to be seen whether \(\mathcal{A}_{\lambda}[a_{\lambda}]<0\), to decide whether the orbit \(a_{\lambda}\) minimizes the action. Multiplying both sides of (B6) by \(\dot{a}\), we get \[2g(a)\dot{a}e^{-t}-\frac{32}{7}\lambda^{2}e^{t}\sum_{n=1}^{4}nf_{n}(a)\dot{a}^{n}=e^{t}\frac{\mathrm{d}}{\mathrm{d}t}\left[g(a)\dot{a}^{2}-2(k_{1}-1)f_{0}(a)\right]+\frac{32}{7}\lambda^{2}e^{t}\frac{\mathrm{d}}{\mathrm{d}t}\left[\sum_{n=2}^{4}(n-1)f_{n}(a)\dot{a}^{n}\right], \tag{B10}\] whose integration with respect to \(t\in[0,\infty)\) gives the following expression for the action of \(a_{\lambda}\): \[\mathcal{A}_{\lambda}[a_{\lambda}]=-g(0)\dot{a}_{\lambda}(0)^{2}+\frac{32}{7}\lambda^{2}\left[\lim_{t\to\infty}\left(e^{t}\sum_{n=2}^{4}(n-1)f_{n}(a_{\lambda})\dot{a}_{\lambda}^{n}\right)+2\int_{0}^{\infty}e^{t}\left(\sum_{n=1}^{4}f_{n}(a_{\lambda})\dot{a}_{\lambda}^{n}\right)\mathrm{d}t\right], \tag{B11}\] under the assumption that the limit exists. We are interested in _bounded_ orbits \(a_{\lambda}\) with _bounded_ action \(\mathcal{A}_{\lambda}[a_{\lambda}]\). We call these orbits _admissible_. For a solution \(a_{\lambda}\) of (B6) to be an admissible orbit, the initial value \(\dot{a}_{\lambda}(0)\) must be chosen so as to ensure convergence of the orbit \(a_{\lambda}(t)\) to \(\pi\) as \(t\to\infty\).
The asymptotic behaviour in (38) here translates into \[\dot{a}_{\lambda}(t)=a_{\lambda}(t)+C\quad\text{for}\quad t\approx 0, \tag{B12}\] where \(C>0\) is a constant to be determined. One can show that, for \(\lambda>0\) and material constants \(k_{3}\) and \(k_{1}\) chosen in the pink region of Fig. 3, there is a positive \(C\) for which an admissible orbit \(a_{\lambda}\) exists, but it has positive action \(\mathcal{A}_{\lambda}\). Thus, a twisted hedgehog \(\mathbf{n}_{\mathrm{T}}\) exists, but it has more energy than the radial hedgehog \(\mathbf{n}_{\mathrm{R}}\), which is locally stable. Hereafter we assume that (29) holds, so that the material constants are chosen in the blue region of Fig. 3. In the phase plane \((x,y)\), where \[x(t)=a(t),\quad y(t)=\dot{a}(t), \tag{B13}\] equation (B6) can be rewritten as \[\dot{x}=y, \tag{B14a}\] \[\dot{y}=\frac{-g^{\prime}(x)y^{2}+2g(x)y+2(k_{1}-1)f_{0}^{\prime}(x)-\tfrac{32}{7}(\lambda e^{t})^{2}\left(\sum_{n=2}^{4}(n-1)f_{n}^{\prime}(x)y^{n}+\sum_{n=1}^{4}nf_{n}(x)y^{n-1}\right)}{2g(x)+\tfrac{32}{7}(\lambda e^{t})^{2}\sum_{n=2}^{4}n(n-1)f_{n}(x)y^{n-2}}, \tag{B14b}\] a system which we next study in some detail.

### Asymptotically Autonomous Limit

The two-dimensional dynamical system described by (B14) is _not_ autonomous, as it depends explicitly on time, and this makes it more difficult to predict the qualitative properties of its orbits, since the standard phase-plane portraits (such as those discussed, for example, in Chapt. 2 of [63]) do not apply here. However, system (B14) has the interesting property of reducing to an autonomous system in the limit as \(t\to\infty\); for this reason, it is called _asymptotically autonomous_. Asymptotically autonomous dynamical systems have an interesting literature, recalled for example in Chapt. 17 of [64]. For orbits that do not intersect either of the axes of the \((x,y)\) phase plane, the autonomous asymptotic limit of (B14) is \[\dot{x}=y, \tag{B15a}\] \[\dot{y}=\frac{\sum_{n=2}^{4}(n-1)f_{n}^{\prime}(x)y^{n}+\sum_{n=1}^{4}nf_{n}(x)y^{n-1}}{\sum_{n=2}^{4}n(n-1)f_{n}(x)y^{n-2}}. \tag{B15b}\] For any \(\lambda\geqq 0\), its equilibrium points are \[p_{1}=(\arccos(1/8),0)\quad\text{and}\quad p_{2}=(\pi,0) \tag{B16}\] and their periodic replicas. The eigenvalues of the linear approximation of (B15) near these points are \[\Lambda_{1}^{\pm}=-\frac{1}{2}\pm\frac{5\sqrt{119}}{14}\mathrm{i},\quad\Lambda_{2}^{\pm}=\frac{59}{44}\pm\frac{\sqrt{10015}}{44}, \tag{B17}\] respectively. Thus \(p_{1}\) is a stable spiral node, while \(p_{2}\) is a saddle.
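The classification of \(p_{1}\) and \(p_{2}\) stated after (B17) follows directly from the listed eigenvalues; a short check of ours:

```python
# Verify the stability classification of p1 and p2 from the eigenvalues (B17).
import sympy as sp

L1 = -sp.Rational(1, 2) + (5 * sp.sqrt(119) / 14) * sp.I  # pair at p1 (conjugates)
L2_plus = sp.Rational(59, 44) + sp.sqrt(10015) / 44        # eigenvalues at p2
L2_minus = sp.Rational(59, 44) - sp.sqrt(10015) / 44

# p1: complex-conjugate pair with negative real part -> stable spiral.
print(sp.re(L1), sp.im(L1) != 0)            # -1/2, True

# p2: two real eigenvalues of opposite sign -> saddle point.
print(sp.N(L2_plus, 6), sp.N(L2_minus, 6))  # approx 3.615 and -0.934
```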
A phase portrait for the asymptotic limit (B15) is shown in Fig. B1, along with the equilibrium points in (B16). The correspondence between the solutions to an asymptotically autonomous system and those to its autonomous asymptotic limit is a delicate one and has not been completely characterized, even in the two-dimensional case, for which a larger number of results are available (see, for example, [65, 66, 67]). In particular, a result of Markus [65] (see his Theorem 7) applies to our system: it says that the \(\omega\)-limit set of a solution to (B14), that is, the collection of all limiting points attained by the solution on any diverging time sequence (see p. 242 of [64] for a formal definition), either contains the equilibria of (B15) or is the union of periodic orbits of (B15). Since (B15) has no periodic orbits, we conclude that the \(\omega\)-limit set of any bounded solution of (B14) must contain the equilibria of (B15). Among the latter, only \(p_{2}\) can be reached by an admissible orbit (according to our definition); this justifies our numerical search for a trajectory in phase space starting from a point \((0,y_{0})\) of the \(y\)-axis and approaching in infinite time the equilibrium point \(p_{2}\). The successful outcome of this search is shown in Fig. B2, where an admissible orbit of (B14) for \(\lambda>0\) is contrasted against that obtained in [33] for \(\lambda=0\), which is when (B14) becomes autonomous. The solutions illustrated in Fig. B2 are the same as those in Fig. 4 in the main text; actually, the latter were generated from the former by inverting the change of variables in (B1) and (B2).
2310.19209
Parametric factorization of non linear second order differential equations
In this paper the factorization method introduced by Rosu & Cornejo-Pérez for second order non linear differential equations is generalized by adding a parameter in order to obtain the general solutions for the mixed quadratic and linear Liénard type equation. The new parametric factorization is used to obtain complete analytic solutions for nonlinear second order differential equations. The parametric factorization introduced in this article reduces to the standard factorization scheme when the parameter goes to zero. As an example, we apply the parametric factorization approach to solve the generalized Fisher equation and the Israel-Stewart cosmological model. The parametric factorization presented in this paper can be used in other non linear mixed Liénard type equations.
Gabriel Gonzalez Contreras
2023-10-30T00:32:01Z
http://arxiv.org/abs/2310.19209v1
# Parametric factorization of non linear second order differential equations

###### Abstract

In this paper the factorization method introduced by Rosu & Cornejo-Pérez for second order non linear differential equations is generalized by adding a parameter in order to obtain the general solutions for the mixed quadratic and linear Liénard type equation. The new parametric factorization is used to obtain complete analytic solutions for nonlinear second order differential equations. The parametric factorization introduced in this article reduces to the standard factorization scheme when the parameter goes to zero. As an example, we apply the parametric factorization approach to solve the generalized Fisher equation and the Israel-Stewart cosmological model. The parametric factorization presented in this paper can be used in other non linear mixed Liénard type equations.

## I Introduction

Non linear second order differential equations are widely used to describe various phenomena in physics and mathematics; the vast majority of them do not have analytic solutions and are very difficult to analyze. Developing methods for finding solutions of non linear differential equations has long been a problem of interest. At present there are many methods for finding exact solutions of non linear equations, such as: the \(\tanh\)-expansion method [1], the Darboux transformation [2], the Bäcklund transformation [3], the Hirota bilinear method [4], the Painlevé truncation expansion [5; 6], the generalized Sundman transformation [7], and point and contact transformations[8]. All these approaches have yielded many interesting exact solutions of the kink and soliton type for well-known nonlinear equations. Despite the vast and rich variety of methods to solve non linear differential equations, the fundamental problem of finding explicit and exact analytic solutions to nonlinear differential equations remains an active area of research. In the process of learning to solve non linear differential equations it is convenient to begin with simple and efficient methods which provide a way to obtain exact analytic solutions. One of the most popular simple methods to solve non linear differential equations is by using travelling wave transformations and direct integration. However, the travelling wave transformation method is only suitable for certain types of non linear differential equations. Recently, Rosu & Cornejo-Pérez proposed a factorization method which allows one to obtain travelling wave solutions of reaction-diffusion equations with polynomial nonlinearities[9]. Using the factorization method, Cornejo-Pérez & Rosu obtained particular solutions of several important equations, among which are the Fisher equation, the FitzHugh-Nagumo equation and the generalized Burgers-Huxley equation, each of which maps into a second order non linear differential equation of Liénard type in the travelling coordinate frame of the form[10] \[\ddot{x}+f(x)\dot{x}+g(x)=0 \tag{1}\] where \(f(x)\) and \(g(x)\) are polynomial functions and an overdot denotes differentiation with respect to time. In this paper we extend the factorization method introduced by Rosu & Cornejo-Pérez to solve a mixed quadratic-linear Liénard type equation of the form \[\ddot{x}+\mu\frac{\dot{x}^{2}}{x}+F(x)\dot{x}+G(x)=0 \tag{2}\] where \(\mu\) is an auxiliary parameter to be determined. One can see that equation (1) is a subcase of equation (2).
In applications one often encounters differential equations in which both linear and quadratic damping terms are present. Equation (2) is a particular form of a mixed type Liénard equation which frequently appears as a mathematical model in several areas of physics; for example, equation (2) belongs to the type of second order Gambier equations when the coefficients are assumed to be constant parameters. The Gambier equation written as a second order differential equation takes the form[11] \[\ddot{x}=\frac{n-1}{n}\frac{\dot{x}^{2}}{x}+\left(\frac{n+2}{n}ax-\frac{n-2}{n}\frac{\sigma}{x}+b\right)\dot{x} \tag{3}\] \[-\frac{a^{2}}{n}x^{3}+(\dot{a}-ab)x^{2}+\left(cn-\frac{2a\sigma}{n}\right)x-b\sigma-\frac{\sigma^{2}}{nx}, \tag{4}\] where \(a\), \(b\) and \(c\) are functions of the independent variable and \(\sigma\) is a constant. Interestingly, the parametric factorization introduced in this paper is of the type of an autonomous second order Gambier equation. The importance of the second order Gambier equation is due to the fact that it is related to very important non linear differential equations, such as second order Riccati equations, the second order Kummer-Schwarz equation and the Milne-Pinney equation, to name a few[12]. More recently, Zheng and Shang[13] showed that the amplitude part of the solution, in phase-amplitude form, of the nonlinear Schrödinger equation with dual power nonlinearities satisfies a mixed Liénard type equation of the form given in equation (2). Thus, the goal of this paper is to present an extension of the factorization method with which one can obtain solutions of a mixed Liénard type equation. The parametric factorization introduced in this paper has the advantage that it allows one to obtain solutions of second order non linear differential equations with linear and quadratic damping terms, and it contains the standard factorization as a particular case. The article is organized as follows. In the first section we review the Rosu & Cornejo-Pérez factorization scheme and introduce the parametric factorization. In the second section we apply the parametric factorization to obtain particular and parametric solutions of the generalized Fisher equation. In the third section we use the parametric factorization to obtain particular and parametric solutions for the Israel-Stewart cosmological model. The conclusions are summarized in the last section.

## II Parametric factorization

An elegant procedure to solve second order non linear differential equations consists in using the factorization method, where a given non linear differential operator is factorized into two first order differential operators. In 2005, Rosu and Cornejo-Pérez [9] introduced an effective factorization of second-order ordinary differential equations with polynomial nonlinearities, taking additional advantage of the polynomial factorization of the nonlinear part. Using the factorization technique, Rosu and Cornejo-Pérez obtained particular solutions of the following Liénard type differential equation \[\ddot{q}+f(q)\dot{q}+g(q)=0, \tag{5}\] where the dot represents the derivative with respect to time. Equation (5) admits the following factorization \[\left(D-\phi_{2}(q)\right)\left(D-\phi_{1}(q)\right)q=0\,\qquad D=\frac{d}{dt}. \tag{6}\] By expanding equation (6) and comparing with equation (5), one obtains the following conditions on the functions \(\phi_{1}\) and \(\phi_{2}\): \[\phi_{1}+\phi_{2}+q\frac{d\phi_{1}}{dq}=-f(q) \tag{7}\] \[\phi_{1}\phi_{2}=\frac{g(q)}{q}. \tag{8}\]
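Conditions (7) and (8) can be recovered mechanically by expanding the factorized operator with a computer algebra system; the following sketch is ours, using sympy, and is one way to do it.

```python
# Expand (D - phi2(q)) (D - phi1(q)) q and match against
# qddot + f(q) qdot + g(q) = 0 to recover conditions (7) and (8).
import sympy as sp

t = sp.symbols('t')
q = sp.Function('q')(t)
phi1 = sp.Function('phi1')
phi2 = sp.Function('phi2')

inner = q.diff(t) - phi1(q) * q           # (D - phi1(q)) q
outer = inner.diff(t) - phi2(q) * inner   # (D - phi2(q)) applied to the result
print(sp.collect(sp.expand(outer), q.diff(t)))
# The coefficient of qdot is -(phi1 + phi2 + q*dphi1/dq) and the remaining
# term is phi1*phi2*q, which reproduces f and g exactly as in (7) and (8).
```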
To obtain a particular solution, Rosu and Cornejo-Pérez solved the first order differential equation \[\left(D-\phi_{1}(q)\right)q=0, \tag{9}\] obtaining a particular solution of (5) by one quadrature, \[t-t_{0}=\int\frac{dq}{q\phi_{1}(q)}. \tag{10}\] In this article we extend the factorization method outlined above by adding a parameter \(\mu\) in equation (6), such that the factorization is now given by \[\left(D-\varphi_{2}(x)\right)\left(D-\varphi_{1}(x)\right)x^{\mu+1}=0, \tag{11}\] where \(\mu\neq-1\). If we expand the factorization given in equation (11) we get the following non linear second order differential equation \[\ddot{x}+\mu\frac{\dot{x}^{2}}{x}-\dot{x}\left(\varphi_{1}(x)+\varphi_{2}(x)+\frac{x}{\mu+1}\frac{d\varphi_{1}}{dx}\right)+\frac{x}{\mu+1}\varphi_{1}(x)\varphi_{2}(x)=0. \tag{12}\] Equation (12) is a particular form of the mixed Liénard type equation with quadratic and linear damping terms. Note that equation (12) reduces to the standard Liénard type equation when \(\mu\to 0\). By comparing (2) and (12), one obtains the conditions of the parametric factorization on the functions \(\varphi_{1}\) and \(\varphi_{2}\): \[\varphi_{1}+\varphi_{2}+\frac{x}{\mu+1}\frac{d\varphi_{1}}{dx}=-F(x) \tag{13}\] \[\varphi_{1}\varphi_{2}=(\mu+1)\frac{G(x)}{x}\;. \tag{14}\] An interesting feature of the standard and the parametric factorizations is that one can be transformed into the other by a non trivial space transformation once written in factorized form. This result allows us to map solutions into solutions between the Liénard equation (1) and the mixed Liénard equation (2). Let us now work out a simple example to illustrate this point. Consider the following Liénard type equation, \[\ddot{q}+(2m+3)q^{2m+1}\dot{q}+q+q^{4m+3}=0, \tag{15}\] with \(m\) a non-negative integer. Equation (15) represents a class of solvable nonlinear oscillators with isochronous orbits [14], i.e. orbits whose fixed period does not depend on the amplitude. Equation (15) can be written in the following standard factorized form \[\left(D+q^{2m+1}+i\right)\left(D+q^{2m+1}-i\right)q=0. \tag{16}\] We can make the space transformation \(q^{2m+1}=x\), so that equation (16) becomes \[\left(D+x+i\right)\left(D+x-i\right)x^{1/(2m+1)}=0. \tag{17}\] Equation (17) is now written in the parametric factorization form, with \(\mu=-2m/(2m+1)\). We can expand equation (17) to get \[\ddot{x}-\left(\frac{2m}{2m+1}\right)\frac{\dot{x}^{2}}{x}+(2m+3)x\dot{x}+(2m+1)(x+x^{3})=0. \tag{18}\] Equations (16) and (17) share the same solution, given by \[q(t)=x^{1/(2m+1)}(t)=\frac{\sin(t-t_{0})}{\left(C+(2m+1)\int\sin^{2m+1}(t-t_{0})dt\right)^{1/(2m+1)}} \tag{19}\] where \(t_{0}\) and \(C\) are arbitrary constants. Therefore, the solution for the mixed Liénard equation (18) is given by \[x(t)=\frac{\sin^{2m+1}(t-t_{0})}{C-\cos(t-t_{0})\sum_{r=0}^{m}A_{mr}\sin^{2r}(t-t_{0})} \tag{20}\] with \[A_{mr}=\frac{2^{2(m-r)}(m!)^{2}(2r)!}{(2m)!(r!)^{2}}, \tag{21}\] where the condition for periodic solutions is \(|C|>A_{m0}\)[14]. The solutions of the mixed Liénard type equation (18) are shown in Figure 1 for \(m=1\) and \(m=2\), respectively. Consequently, solutions of the standard factorization scheme and their properties can be used to obtain solutions of the parametric factorization approach through a space coordinate transformation.
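The closed form (20) can be checked directly against equation (18); the sketch below is ours and evaluates the residual numerically for \(m=1\), where \(A_{10}=2\) and \(A_{11}=1\).

```python
# Numerical check that x(t) from (20) with m = 1 (A_10 = 2, A_11 = 1, t0 = 0)
# satisfies the mixed Lienard equation (18):
#   xdd - (2m/(2m+1)) xd^2/x + (2m+3) x xd + (2m+1)(x + x^3) = 0.
import sympy as sp

t = sp.symbols('t')
m, C = 1, 4                    # |C| > A_10 = 2 guarantees a periodic solution
x = sp.sin(t)**3 / (C - sp.cos(t) * (2 + sp.sin(t)**2))

residual = (x.diff(t, 2) - sp.Rational(2*m, 2*m + 1) * x.diff(t)**2 / x
            + (2*m + 3) * x * x.diff(t) + (2*m + 1) * (x + x**3))

for tv in (0.3, 1.1, 2.5):     # residual should vanish up to round-off
    print(sp.N(residual.subs(t, tv), 10))
```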
## III Generalized Fisher equation

We will now show how to apply the parametric factorization approach to obtain solutions of the generalized Fisher equation used in biology[15]. The generalized Fisher equation is a non linear partial differential equation which describes diffusion models for insects and biological invasions. From the perspective of biological invasion, the generalized Fisher equation predicts how the population of a particular species will spread via travelling waves. Let us consider the generalized Fisher equation given by[16] \[\frac{\partial u}{\partial t}=u^{p}\left(1-u^{q}\right)+\frac{\partial}{\partial x}\left(u^{m}\frac{\partial u}{\partial x}\right) \tag{22}\] where \(u\) represents the population density and \(p\), \(q\) and \(m\) are positive parameters. Solutions for equation (22) have been found for some values of the parameters \(p\), \(q\) and \(m\); in particular, the standard factorization method has been used for the case \(m=0\), \(p=q=1\), i.e. the standard Fisher equation, and for the case \(m=0\), \(p=1\) and \(q=2\), i.e. the Burgers-Huxley equation[10]. In this section we will consider the generalized Fisher equation for the case \(m\neq 0\), which can be written in the travelling reference frame \(\tau=kx-\omega t=k(x-vt)\) as the following mixed Liénard non linear differential equation \[\ddot{u}+m\frac{\dot{u}^{2}}{u}+\omega\frac{\dot{u}}{k^{2}u^{m}}+\frac{u^{p-m}}{k^{2}}-\frac{u^{p+q-m}}{k^{2}}=0 \tag{23}\] where the overdot represents \(D=\frac{d}{d\tau}\). Using the second factorization condition given in equation (14) we have \[\varphi_{1}\varphi_{2}=\frac{1+m}{k^{2}}u^{p-(1+m)}\left(1-u^{q}\right). \tag{24}\] Therefore, we can choose \[\varphi_{1}=a_{1}\frac{\sqrt{1+m}}{k}u^{(p-(1+m))/2}\left(1-u^{q/2}\right)\] \[\varphi_{2}=\frac{\sqrt{1+m}}{a_{1}k}u^{(p-(1+m))/2}\left(1+u^{q/2}\right)\] where \(a_{1}\) is a nonzero constant to be determined with the first factorization condition given in equation (13), which reads \[\varphi_{1}+\varphi_{2}+\frac{u}{\mu+1}\frac{d\varphi_{1}}{du}=\frac{a_{1}}{k}u^{(p-(1+m))/2}\left[\sqrt{1+m}+\frac{p-(1+m)}{2\sqrt{1+m}}-u^{q/2}\left(\sqrt{1+m}+\frac{q-p+(1+m)}{2\sqrt{1+m}}\right)\right]+\frac{\sqrt{1+m}}{a_{1}k}u^{(p-(1+m))/2}\left[1+u^{q/2}\right]=-\frac{\omega}{k^{2}u^{m}}.\] It follows that we have to choose \(p=1-m\) in order to satisfy the parametric factorization conditions, such that we get the following values for \(a_{1}\) and \(\omega\): \[a_{1}=\pm\sqrt{\frac{2(1+m)}{2+q}}\quad\text{and}\quad\omega=\mp\frac{k(q+4)}{\sqrt{2(q+2)}}. \tag{26}\] Therefore, the travelling wave solutions we are going to obtain move with a constant velocity \(v=\omega/k=\mp\frac{(q+4)}{\sqrt{2(q+2)}}\), which means that with increasing value of \(q\) the velocity modulus increases from \(2\) to \(\infty\).

Figure 1: The figure shows the solutions \(x(t)\) of the mixed Liénard type equation for \(m=1\) and \(m=2\), respectively.

Equation (23) admits the following parametric factorization \[\left[D\mp\sqrt{\frac{2+q}{2}}\frac{1}{ku^{2m}}\left(1+u^{q/2}\right)\right]\left[D\mp(1+m)\sqrt{\frac{2}{2+q}}\frac{1}{ku^{2m}}\left(1-u^{q/2}\right)\right]u^{m+1}=0. \tag{27}\]
If one wants to find a particular solution of equation (23), one only has to solve the compatible first order differential equation \[\left[D\mp(1+m)\sqrt{\frac{2}{2+q}}\frac{1}{ku^{2m}}\left(1-u^{q/2}\right)\right]u^{m+1}=0, \tag{28}\] which has the following implicit solution \[u^{2m}{}_{2}F_{1}\left[1,\frac{4m}{q},1+\frac{4m}{q},u^{q/2}\right]=\pm 2m\sqrt{\frac{2}{2+q}}(\tau-\tau_{0}) \tag{29}\] where \({}_{2}F_{1}\) is the hypergeometric function and \(\tau_{0}\) is an integration constant. In Figure 2 we show a travelling wave solution for the generalized Fisher equation obtained with the parametric factorization approach. It is possible to solve the generalized Fisher equation given by \[\frac{\partial u}{\partial t}=u^{1-m}\left(1-u^{q}\right)+\frac{\partial}{\partial x}\left(u^{m}\frac{\partial u}{\partial x}\right) \tag{30}\] in a different way by using the parametric factorization conditions given in equations (13) and (14) [17]. Solving for \(\varphi_{2}\) from the first condition and substituting into the second, we have \[-F(u)\varphi_{1}-\varphi_{1}\left(\varphi_{1}+\frac{u}{\mu+1}\frac{d\varphi_{1}}{du}\right)=(\mu+1)\frac{G(u)}{u}, \tag{31}\] which is transformed into an Abel equation of the second kind, \[ww^{\prime}=F(u)u^{\mu}w-G(u)u^{2\mu}, \tag{32}\] by using the substitution \[w(u)=-\frac{u^{\mu+1}\varphi_{1}(u)}{\mu+1}. \tag{33}\]

Figure 2: The figure shows the travelling wave solution \(u(\tau)\) of the generalized Fisher equation for \(m=1/4\), \(p=3/4\) and \(q=2\), moving with a constant velocity of \(v=3/\sqrt{2}\).

The Abel equation given in (32) admits exact parametric solutions in special cases. For our particular case, we have \(\mu=m\), \(F(u)=\omega u^{-m}/k^{2}\) and \(G(u)=u^{p-m}(1-u^{q})/k^{2}\); therefore we need to solve the Abel equation \[ww^{\prime}-\frac{\omega}{k^{2}}w=-\frac{u^{p-m}}{k^{2}}+\frac{u^{p-m+q}}{k^{2}}, \tag{34}\] which for the case when \(k^{-2}=2(2+q)/(4+q)^{2}\) and \(\omega=(4+q)^{2}/(4+2q)\) has the following solution in parametric form[18] \[u=\frac{(q+4)}{q}a\xi E_{1+q}^{2/q},\quad\text{and}\quad w=aE_{1+q}^{2/q}\left(R_{1+q}E_{1+q}+\frac{2}{q}\xi\right) \tag{35}\] where \[E_{1+q}=\int(1\pm\xi^{2+q})^{-1/2}d\xi+C,\quad R_{1+q}=\sqrt{1\pm\xi^{2+q}}\quad\text{and}\quad a=\frac{4+q}{q}\left(\frac{2}{q}\right)^{2/q}, \tag{36}\] where \(C\) is an integration constant. Therefore, we have two different methods to obtain solutions of the non linear differential equation: one gives a particular solution, and the other gives a parametric solution.
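The implicit solution (29) is straightforward to evaluate numerically; the sketch below is ours and tabulates \(\tau(u)\) for the parameters of Figure 2, using mpmath for the hypergeometric function.

```python
# Evaluate the implicit travelling-wave relation (29) of the generalized
# Fisher equation for m = 1/4, q = 2 (so p = 1 - m = 3/4), with tau0 = 0
# and the upper sign, tabulating tau as a function of u on (0, 1).
import numpy as np
from mpmath import hyp2f1

m, q = 0.25, 2.0
scale = 2*m*np.sqrt(2.0/(2.0 + q))

for u in np.linspace(0.05, 0.95, 7):
    lhs = u**(2*m) * float(hyp2f1(1.0, 4*m/q, 1.0 + 4*m/q, u**(q/2)))
    print(f"u = {u:.2f}  ->  tau = {lhs/scale:.4f}")
```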
## IV Israel-Stewart cosmological model

A description of the relativistic thermodynamics of non-perfect fluids is given by the so-called Israel-Stewart cosmological model. For the case when the bulk viscosity coefficient \(\xi\) is given as a power law function of the energy density by \(\xi=\xi_{0}\rho^{1/2}\), a cosmological solution of the polynomial type given by \(H\propto(t+const.)^{-1}\) has been found by applying the standard factorization method [19; 20], where \(H\) denotes the Hubble rate function. The nonlinear differential equation for the Hubble function is given as the following mixed Liénard equation[21] \[\ddot{H}+\alpha_{1}\frac{\dot{H}^{2}}{H}+\alpha_{2}H\dot{H}+\alpha_{3}H^{3}=0 \tag{37}\] where \[\alpha_{1}=-\frac{3}{2\delta} \tag{38}\] \[\alpha_{2}=\frac{3}{2}+3(1+\omega)-\frac{9}{4\delta}(1+\omega)+\frac{\sqrt{3}\epsilon(1-\omega^{2})}{\xi_{0}} \tag{39}\] \[\alpha_{3}=\frac{9}{4}(1+\omega)+\frac{9}{2}\epsilon(1-\omega^{2})\left[\frac{1+\omega}{\sqrt{3}\xi_{0}}-1\right] \tag{40}\] \[\delta(\omega)\equiv\frac{3}{4}\left(\frac{1+\omega}{1/2+\omega}\right), \tag{41}\] are constant coefficients and \(0\leq\omega<1\). In Ref.[19], Cruz _et al._ factorized equation (37) in the following form \[\left(D-\phi_{1}(H)\dot{H}-\phi_{2}(H)\right)\left(D-\phi_{3}(H)\right)H=0, \tag{42}\] where, after some algebra, they found the following factorization \[\left(D+\alpha_{1}\frac{\dot{H}}{H}-a_{1}^{-1}H\right)\left(D-a_{1}\alpha_{3}H\right)H=0, \tag{43}\] where \[a_{1}=\frac{-\alpha_{2}\pm\sqrt{\alpha_{2}^{2}-4\alpha_{3}(2+\alpha_{1})}}{2\alpha_{3}(2+\alpha_{1})}. \tag{44}\] Note that the factorization given in equation (42) uses three functions and is therefore more difficult to apply than the parametric factorization introduced in this paper. A particular solution is obtained after solving the first order differential equation given by \[\dot{H}-a_{1}\alpha_{3}H^{2}=0. \tag{45}\] In this section we are going to obtain the Hubble function by means of the parametric factorization. By assuming that the functions have the form \(\varphi_{1}=\tilde{a}_{1}\sqrt{(1+\alpha_{1})\alpha_{3}}H\) and \(\varphi_{2}=\tilde{a}_{1}^{-1}\sqrt{(1+\alpha_{1})\alpha_{3}}H\) and using the parametric factorization conditions given in equations (13) and (14), we obtain \[\varphi_{1}(H)=(1+\alpha_{1})\left[\frac{-\alpha_{2}\pm\sqrt{\alpha_{2}^{2}-4\alpha_{3}(2+\alpha_{1})}}{2(2+\alpha_{1})}\right]H \tag{46}\] \[\varphi_{2}(H)=\frac{2\alpha_{3}(2+\alpha_{1})}{-\alpha_{2}\pm\sqrt{\alpha_{2}^{2}-4\alpha_{3}(2+\alpha_{1})}}H. \tag{47}\] Therefore, equation (37) admits the following parametric factorization \[\left(D-\frac{2\alpha_{3}(2+\alpha_{1})}{-\alpha_{2}\pm\sqrt{\alpha_{2}^{2}-4\alpha_{3}(2+\alpha_{1})}}H\right)\left(D-(1+\alpha_{1})\left(\frac{-\alpha_{2}\pm\sqrt{\alpha_{2}^{2}-4\alpha_{3}(2+\alpha_{1})}}{2(2+\alpha_{1})}\right)H\right)H^{1+\alpha_{1}}=0. \tag{48}\] To obtain a particular solution we have to solve the first order differential equation \[\left(D-(1+\alpha_{1})\left(\frac{-\alpha_{2}\pm\sqrt{\alpha_{2}^{2}-4\alpha_{3}(2+\alpha_{1})}}{2(2+\alpha_{1})}\right)H\right)H^{1+\alpha_{1}}=H^{\alpha_{1}}(1+\alpha_{1})\left[\dot{H}-a_{1}\alpha_{3}H^{2}\right]=0, \tag{49}\] which has the same solution as the previous standard factorization, given by \[H(t)=\frac{A_{\pm}}{t-\left(t_{0}-\frac{A_{\pm}}{H_{0}}\right)}, \tag{50}\] where \(A_{\pm}=-(a_{1}\alpha_{3})^{-1}\) and \(H_{0}=H(t_{0})\) is the Hubble constant.
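As a consistency check of (50), one can verify symbolically that \(H(t)=A/(t+k)\) solves (37) precisely when \(A\) is a root of the quadratic \(\alpha_{3}A^{2}-\alpha_{2}A+(2+\alpha_{1})=0\), which coincides with \(A_{\pm}=-(a_{1}\alpha_{3})^{-1}\) for \(a_{1}\) as in (44). A sketch of ours:

```python
# Check that H(t) = A/(t + k) solves (37) iff alpha3*A^2 - alpha2*A + (alpha1 + 2) = 0,
# whose roots reproduce A_pm = -1/(a1 * alpha3) with a1 from (44).
import sympy as sp

t, k, A, al1, al2, al3 = sp.symbols('t k A alpha1 alpha2 alpha3')
H = A / (t + k)

residual = H.diff(t, 2) + al1*H.diff(t)**2/H + al2*H*H.diff(t) + al3*H**3
print(sp.factor(sp.simplify(residual * (t + k)**3)))
# -> A*(A**2*alpha3 - A*alpha2 + alpha1 + 2): the residual vanishes iff A is a root.

print(sp.solve(al3*A**2 - al2*A + (al1 + 2), A))
# -> (alpha2 -/+ sqrt(alpha2**2 - 4*alpha3*(alpha1 + 2)))/(2*alpha3)
```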
We are now going to solve equation (37) by changing it into an Abel equation, as we did in the previous section. By using equation (33) we arrive at the following Abel equation of the second kind \[w\frac{dw}{dH}=\alpha_{2}H^{1+\alpha_{1}}w-\alpha_{3}H^{3+2\alpha_{1}}=\alpha_{2}H^{1+\alpha_{1}}\left(w-\frac{\alpha_{3}}{\alpha_{2}}H^{2+\alpha_{1}}\right). \tag{51}\] By making the transformation \(\eta=(\alpha_{2}/(2+\alpha_{1}))H^{2+\alpha_{1}}\), equation (51) becomes the Abel equation in canonical form \[w\frac{dw}{d\eta}=w-\alpha_{3}\left(\frac{2+\alpha_{1}}{\alpha_{2}^{2}}\right)\eta, \tag{52}\] which has a solution in parametric form given by[18] \[\eta(\tau)=\frac{\alpha_{2}}{2+\alpha_{1}}H^{2+\alpha_{1}}(\tau)=C\exp\left(-\int\frac{\tau}{\tau^{2}-\tau-A}d\tau\right),\quad w(\tau)=C\tau\exp\left(-\int\frac{\tau}{\tau^{2}-\tau-A}d\tau\right), \tag{53}\] where \(C\) is a constant and \(A=\alpha_{3}(2+\alpha_{1})/\alpha_{2}^{2}\). It is interesting to note that the particular and parametric solutions satisfy the following dynamic equation \[\frac{dH^{1+\alpha_{1}}}{dt}=-\left(1+\alpha_{1}\right)w\left(H(t)\right) \tag{54}\] where \[w(H)=-\frac{H^{1+\alpha_{1}}}{1+\alpha_{1}}\varphi_{1}(H) \tag{55}\] is the transformation used to obtain Abel's equation. Therefore, using equation (54) and comparing with equation (48), we can express the particular solution as the dynamic equation \[\frac{dH^{1+\alpha_{1}}}{dt}=-\left(1+\alpha_{1}\right)\left(1\pm\sqrt{1-\frac{4\alpha_{3}(2+\alpha_{1})}{\alpha_{2}^{2}}}\right)\eta. \tag{56}\] In Figure 3 we show the graph of \(w\) vs. \(\eta\) for the particular solution for several values of \(0\leq\omega<1\). In Figure 4 we compare the particular and parametric solutions, where we have taken \(\alpha_{3}(2+\alpha_{1})/\alpha_{2}^{2}=1/5\). Note that the particular solutions are the upper and lower curves of the figure.

## V Conclusions

In summary, compared with the factorization proposed by Rosu & Cornejo-Pérez, the parametric factorization presented in this paper is more general and contains the standard factorization as a special case. The parametric factorization can deal with a larger class of non linear second order differential equations, and one can also use the factorization conditions to obtain parametric solutions of a given non linear differential equation. Using the parametric factorization technique we have obtained a particular solution and a parametric solution of the generalized Fisher equation and of the Israel-Stewart cosmological model in order to illustrate this approach. The parametric factorization presented in this paper can be used in other non linear mixed Liénard type equations.

Figure 3: The figure shows the particular solution for the Hubble rate function given in terms of \(w\) as a function of \(\eta\) for different values of \(0\leq\omega<1\). We have taken \(\epsilon=\xi_{0}=3/4\).

Figure 4: The figure shows the particular (blue and orange lines) and parametric solutions (green lines) given as a function of \(\eta\) for the case when \(\alpha_{3}(2+\alpha_{1})/\alpha_{2}^{2}=1/5\) and \(\epsilon=\xi_{0}=3/4\). For the parametric solution we have used different values of the integration constant \(C\). The upper and lower lines correspond to the particular solution.
2302.01657
Made to measure: an introduction to quantification in microscopy data
Images are at the core of most modern biological experiments and are used as a major source of quantitative information. Numerous algorithms are available to process images and make them more amenable to be measured. Yet the nature of the quantitative output that is useful for a given biological experiment is uniquely dependent upon the question being investigated. Here, we discuss the 3 main types of visual information that can be extracted from microscopy data: intensity, morphology, and object counts or categorical labels. For each, we describe where they come from, how they can be measured, and what may affect the relevance of these measurements in downstream data analysis. Acknowledging that what makes a measurement "good" is ultimately down to the biological question being investigated, this review aims at providing readers with a toolkit to challenge how they quantify their own data and be critical of conclusions drawn from quantitative bioimage analysis experiments.
Siân Culley, Alicia Cuber Caballero, Jemima J Burden, Virginie Uhlmann
2023-02-03T11:14:13Z
http://arxiv.org/abs/2302.01657v1
# Made to measure: an introduction to quantification in microscopy data ###### Abstract Images are at the core of most modern biological experiments and are used as a major source of quantitative information. Numerous algorithms are available to process images and make them more amenable to be measured. Yet the nature of the quantitative output that is useful for a given biological experiment is uniquely dependent upon the question being investigated. Here, we discuss the 3 main types of visual information that can be extracted from microscopy data: intensity, morphology, and object counts or categorical labels. For each, we describe where they come from, how they can be measured, and what may affect the relevance of these measurements in downstream data analysis. Acknowledging that what makes a measurement "good" is ultimately down to the biological question being investigated, this review aims at providing readers with a toolkit to challenge how they quantify their own data and be critical of conclusions drawn from quantitative bioimage analysis experiments. ## Introduction Quantification is at the heart of modern biological research, and image data is one of the major sources of quantitative information[1, 2, 3]. The past decade has seen an explosion of software tools dedicated to bioimage analysis[4]. These methods most often adapt computer vision algorithms to the specificities of biological image data, such as poor contrast and the presence of complex objects, with the aim to speed up and streamline the quantification process[5]. Nowadays, visual information held in microscopy images can be processed and quantified by computer programs in a largely automated manner, opening up the possibility to analyse data at scale, in a reproducible and objective manner[6]. While greatly enabling biological research, the democratisation of quantitative image analysis tools also poses some challenges. One of the biggest challenges is identifying which of the dozens of quantifications that can be generated by these tools are most informative and relevant to the biological question being studied. To navigate this, a helpful strategy is to scrutinise all steps from experimental design to image acquisition and ultimately data processing and analysis. This process allows each experimental stage to be tailored to best inform the hypothesis being investigated, but can also highlight aspects that may adversely influence the quantitative output. The first step consists of defining the goal of the imaging experiment (what quantitative property is at the centre of the question being investigated) prior to acquiring data, and using this information to guide experimental design[7]. Once data starts being generated, the focus shifts to the many experimental factors and acquisition parameters affecting the quality of imaging. These can include sample and labelling properties, such as photobleaching and cross-talk, as well as hardware specifics[8, 9]. Whenever appropriate imaging parameters are identified and used throughout an experiment, ensuring that they are appropriately recorded as metadata is crucial to ensure that downstream analysis is carried out in a meaningful way[10]. The same considerations apply to any image processing applied post-acquisition, such as methods to remove noise or improve contrast/resolution. After optimising experimental design, acquisition parameters, post-processing, and recording metadata, one is left with analysing the acquired image data. 
At this point, the last remaining step is to outline the best way to pull out the desired measurements. This raises questions about which aspects of visual data can be measured, which methods are available to do so, and what their limitations are. Several excellent review papers assess the performance of image analysis methods in a benchmarking setting, meaning that methods are evaluated on the general task they were designed to solve. General tasks, such as segmentation[11, 12] and denoising[13], are however generally not the end goal of an experiment. Indeed, the quantitative output needed to explore a biological question is rarely an improved or partitioned image. Instead, segmentation and denoising are examples of operations that enable the final quantification task, which is usually to measure a specific phenotype. To complement the existing literature on image analysis algorithms, this review focuses on the quantification problem. We review common categories of quantitative readouts that can be extracted from visual data and the appropriate metrics to do so. We identify three main categories of quantitative information: image intensity, morphology, and counts or labels. For each of these categories, we describe the process that leads to this information being available in image data. We then review how that type of quantitative information can be extracted and interpreted, and discuss aspects impacting its quantification. We close the paper with a discussion on quality control and confidence. Considering the breadth of the topic, we choose to limit the scope of this paper to individual 2D images and 3D volumes, acquired with either fluorescence microscopy or electron microscopy (EM). We therefore do not cover measurements that are specific to time-series, multi-channel imaging, or to data obtained with specialised imaging modalities such as super-resolution microscopy or single particle electron microscopy. In order to be able to formulate what one _wants_ to measure, one must first understand what _can_ be measured. By providing an overview of the main categories of quantitative measurements in image analysis, the goal of this review is to provide life science researchers with a framework to appreciate and scrutinise their own image data. We also aim to give insights on aspects of an experiment that impact the relevance of measurements in downstream analysis, and thereby enable readers to be critical about whether the conclusions of studies involving image quantification are meaningful or not.

### Image Intensity

When acquiring microscopy data, one of the first things a researcher checks, either via visual assessment of the image or digital inspection of pixel values, is the intensity. But what do the pixel intensity values in microscopy images actually mean, and where do they come from?

#### Generating contrast and capturing intensity

The image generation process fundamentally differs across imaging modalities. Contrast is most often induced by biochemical labels that are artificially introduced in samples specifically for imaging. Contrast can also be obtained in a label-free manner with dedicated optical components, as is the case for phase contrast and Differential Interference Contrast (DIC) imaging[14]. In fluorescence microscopy, molecules of interest are labelled with fluorescent species such as organic dyes and fluorescent proteins.
Regardless of the microscope used and downstream analysis performed, the manner in which labelling is performed must be taken into account to provide context to any quantification. For example, if non-endogenous fluorescent protein fusion constructs are being used, how closely do measured intensities reflect endogenous protein distributions[15]? If immunofluorescence techniques are being used, what factors are impacting the ratio between the number of epitopes of interest and the number of fluorophores (e.g. primary and secondary antibody concentrations, antigen masking, affinity and clonality of antibodies)[16]? Most fluorescence intensity quantifications made from images are of the fluorophores themselves, not directly of the biological molecules of interest. This should be taken into consideration when translating any results from fluorescence intensity quantification into biological conclusions. The microscope modality also affects how much out-of-focus fluorescence contaminates the in-focus fluorescence measurements (**Figure 1a-d**). With EM imaging, the mechanism of contrast generation varies significantly depending upon the type of experiment performed, as well as the type of microscope and detectors used (**Figure 1e-h**). For the majority of the different types of EM experiments, contrast is introduced into the sample as part of sample preparation. Sample contrast is enhanced by the introduction of heavy metals (e.g. osmium, lead, uranium, gold, silver) which bind to lipids, proteins, carbohydrates, etc. via chemical reactions, whereby the method, sequential order of addition, temperature, time of incubation, etc. can impact the process by which the contrast is incorporated into the sample[17]. The entire purpose of any microscope is to transmit the biological information contained within the sample to the detector. The pixel intensities in the acquired images will depend on a number of factors in the imaging process, and understanding these factors is important for contextualising quantitative intensity measurements and assessing their accuracy. In fluorescence microscopy, the intensities in an image represent the number of photons emitted by excited fluorophores. The absolute number of photons emitted by fluorescent molecules in the sample is primarily determined by the intensity of the excitation illumination incident on the sample. Depending on the imaging modality used in fluorescence microscopy, intensity information may also arise from fluorescent sources in the sample other than the labelled structures in the focal plane. In techniques capable of optical sectioning, such as confocal microscopy (**Figure 1b**), two-photon microscopy, and TIRF (see Glossary in **Table 1**) (**Figure 1d**), out-of-focus fluorescence does not reach the detector, whereas images acquired using widefield microscopy will contain out-of-focus intensities (**Figure 1a**). All fluorescence microscopy images may also contain intensity contributions from autofluorescence (endogenous fluorescent species present within the sample in the absence of intentional labelling). Confocal z-stacks are frequently projected into a single 2D image for visualisation and analysis (**Figure 1c**); a 'sum slices' or average intensity projection will retain intensity information, whereas a maximum intensity projection will produce sharper images but with intensities that do not correspond to the total amount of fluorescence below each pixel, and thus shouldn't be used for intensity quantification.
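The difference between intensity-preserving and non-preserving projections is easy to demonstrate; the short sketch below is ours, using synthetic numpy stacks: a sum projection distinguishes a structure spanning one slice from one spanning ten, while a maximum projection cannot.

```python
# Sum vs maximum projection of a z-stack (axes: z, y, x).  The same structure
# placed in 1 slice or spread over 10 slices has 10x more total fluorescence,
# which a sum projection reports and a maximum projection hides.
import numpy as np

thin = np.zeros((15, 64, 64))
thin[7, 20:40, 20:40] = 100.0          # structure present in a single slice

thick = np.zeros((15, 64, 64))
thick[3:13, 20:40, 20:40] = 100.0      # same structure spanning ten slices

for name, stack in (("thin", thin), ("thick", thick)):
    print(name,
          "sum projection peak:", stack.sum(axis=0).max(),   # 100 vs 1000
          "max projection peak:", stack.max(axis=0).max())   # 100 vs 100
```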
Increasing illumination intensity, regardless of the source, usually results in an increase in the number of photons emitted via fluorescence. However, increasing excitation intensities can lead to non-linear saturation of emitted fluorescence (**Figure 2a, 2b**), and increase the rate at which permanent photobleaching of fluorophores occurs (**Figure 2c**). When fluorescence intensity measurements are important, inhomogeneity of the illumination across the field-of-view can also create unwanted variability. The 'flatness' of the illumination can be characterised and corrected for before further quantitative measurements; this can be done using a homogeneously fluorescent test sample[8, 18] (**Figure 2d**), or via computational methods[19, 20] without a test sample. For EM, the sample preparation protocol, the type of experiment, microscope and detectors used all impact the information captured within the images obtained. In conventional transmission electron microscopy (TEM), images are collected of ultrathin sections (50-100nm thick) of a resin embedded, contrasted sample. The image is created by the detection of electrons that pass through the sample and reach the detection mechanism. Contrast is generated by localised heavy metals, introduced during the sample preparation steps, scattering electrons and preventing their transmission through the sample (**Figure 1e**). Scanning electron microscopes (SEM) are another common type of electron microscope used for biological investigations, where a focused beam of electrons is scanned across the sample and resulting secondary and/or backscattered electrons or x-rays are collected to generate an image. The most common approach is the collection of secondary electrons that have interacted with the surface of a sample that has been fixed, dehydrated, dried and the surface coated in a thin layer of metal. The resulting image is a view of the surface topology of the sample from a particular view point (unless rotational imaging and photometry is applied), with a large depth of field. Contrast is provided by the differential angles of the detector, source and how the electron beam interacts with the limiting shape of the sample (**Figure 1f**). Back-scattered electrons (BSE) can also be separately collected and mapped onto the sample and provide information about the sample's elemental composition (**Figure 1g**). BSE imaging has recently been exploited in the development of a collection of volume EM (vEM) techniques, where either arrayed sections or blocks of resin embedded, fixed, contrasted samples are automatically imaged (**Figure 1h**), generating a large 3D volume of ultrastructural data at nm resolution, across scales of 10's-100's of microns[21]. Regardless of the detection technique, it is important to be aware that working distance, magnification, accelerating voltage and probe size and current are just some of the parameters that impact the resulting images in terms of resolution, depth of field, focus and contrast. The final acquired images are formed by binning (see Glossary in **Table 1**) the detected photons (fluorescence microscopy) or electrons (EM) into pixels.
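Before turning to spatial sampling, here is a practical illustration of the flat-field correction mentioned above. The sketch is ours (function and variable names are our own): it divides an image by the normalised illumination profile estimated from a homogeneously fluorescent test image.

```python
# Flat-field correction: divide by the normalised illumination profile
# estimated from an image of a homogeneously fluorescent test sample.
import numpy as np

def flat_field_correct(image, flat, dark=None):
    """Return the illumination-corrected image as float."""
    image = np.asarray(image, dtype=float)
    flat = np.asarray(flat, dtype=float)
    if dark is not None:              # optional camera offset/dark frame
        image = image - dark
        flat = flat - dark
    gain = flat / flat.mean()         # relative illumination profile
    return image / gain

# Demo on a synthetic vignetted field (bright centre, dim edges):
yy, xx = np.mgrid[-1:1:128j, -1:1:128j]
profile = 1.0 - 0.5 * (xx**2 + yy**2)
corrected = flat_field_correct(500.0 * profile, 1000.0 * profile)
print(corrected.std() / corrected.mean())   # ~0: uniform after correction
```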
The pixel size in a microscopy image plays a critical role in determining what quantitative information can be retrieved. The physical distance that each pixel represents (the pixel size) is primarily determined by properties of the detection path (for cameras, the physical size of the pixels on the chip, and for point detectors, the scanning parameters), the magnification, and the numerical aperture of the microscope. As a result, the accuracy of both intensity and morphology measurements from the same biological structures varies depending on the magnification and numerical aperture (NA) of the microscope objective, as shown in **Figure 3**. Per the Nyquist-Shannon sampling theorem, retaining the resolution of a continuous signal (i.e. the spatially varying distribution of photons/electrons incident on the detector) in discrete digital space (i.e. the pixels in the acquired image) requires sampling at at least double the frequency of the smallest resolvable feature[22]. When the Nyquist-Shannon theorem is applied to the two-dimensional nature of images, the theoretical pixel size for adequate sampling should in fact be ~2.8 times smaller than the resolving power of the microscope[23]. This sampling should be observed if very fine structures within the sample are to be measured and quantified, as larger pixel sizes will lead to a loss of information due to undersampling. In addition to spatially binning detected photons or electrons into pixels, detectors also convert the measured intensity into an integer number. This value depends on the intensity of emitted fluorescence or scattered electrons, as well as detector settings such as gains and offsets (see Glossary in **Table 1**). However, it also depends on the bit depth of the acquired images. Bit depth determines the range of values that can be digitally stored within a pixel; most microscopy data is acquired at 8-, 12-, or 16-bit depth. A pixel can only store a number within the range 0 \(\rightarrow\) (2\({}^{\text{N}}\)-1), where N is the bit depth. The effect of bit depth on intensity information is somewhat analogous to the effect of pixel size on spatial information; higher bit depths provide higher 'sampling' of intensities, which can provide higher precision for quantitative measurements. Critically, measurements should not be made from any pixels having either the minimum (0) or maximum (2\({}^{\text{N}}\)-1) value, as this is likely to represent incomplete or 'clipped' information in the image; unless the image was acquired in a very low fluorescence regime, pixels of value 0 may in fact represent a range of different 'real' fluorescence intensities that are below the range of the detector settings, and pixels of value 2\({}^{\text{N}}\)-1 may represent a range of real fluorescence intensities that are saturating the detector.
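Both of these acquisition checks, adequate spatial sampling and absence of clipped pixels, are simple to automate; the sketch below is ours (the Abbe limit is used here as a stand-in for the microscope's measured resolution):

```python
# Two quick acquisition checks before quantifying intensities.
import numpy as np

def nyquist_pixel_size_nm(wavelength_nm, numerical_aperture):
    """Pixel size needed to sample the lateral resolution ~2.8x (2D Nyquist)."""
    resolution_nm = wavelength_nm / (2.0 * numerical_aperture)  # Abbe limit
    return resolution_nm / 2.8

def clipped_fraction(image, bit_depth):
    """Fraction of pixels stuck at the extremes of the bit-depth range."""
    lo, hi = 0, 2**bit_depth - 1
    image = np.asarray(image)
    return float(np.mean((image == lo) | (image == hi)))

print(nyquist_pixel_size_nm(510, 1.4))      # ~65 nm for green emission at 1.4 NA

img = np.random.default_rng(1).integers(0, 2**16, size=(256, 256))
print(clipped_fraction(img, bit_depth=16))  # a nonzero value flags clipping
```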
#### Extracting and interpreting intensity measurements

When it comes to extracting an intensity-based measurement from a fluorescence microscopy image, be that from the raw data or after processing, it is important to remember that fluorescence intensity measurements are always comparative. Standalone measurements of pixel or object intensity in images are often meaningless; they must be reported in the context of some baseline condition such as the background intensity, or the intensity of a comparable object under a different biological condition. For such comparisons to be made accurately, it is critical that acquisition parameters such as illumination intensity, magnification, pixel dwell time (point detectors) or exposure time (cameras), and detector gains are recorded and ideally kept consistent between different images. Any image processing pipelines should be applied equivalently to each image, including the ones that do not look like they 'need' it. For some biological measurements, it makes sense to work with the absolute fluorescence values in images, such as monitoring the expression of a GFP-tagged protein during successive cell divisions[24]. However, relative fluorescence intensities are often used so that results from different images can be aggregated. On the whole, any absolute measurements of intensity from fluorescence microscopy modalities such as widefield and confocal microscopy are critically dependent on labelling, acquisition settings, and post-processing. If comparisons of fluorescence intensity are to be made between different images, then these parameters should be as identical as possible in each case. In EM, contrast intensity in images is rarely absolutely quantified, as routinely controlling the contrast incorporation process and calibrating the detection process is fraught with challenges.

#### Factors impacting intensity information

A vast range of image processing operations can be applied to raw images following acquisition. If quantitative intensity information is to be extracted following image processing, it is then important to understand how processing affects image intensity (**Figure 1c**, **Figure 4**). During image processing, there are occasions where the bit depth of an image is changed. This is typically when a mathematical operation is performed on the image that generates values that are beyond the range of the bit depth (for example, a negative number) or have a non-integer component. Intensity quantification is still valid when performed on images after increasing the bit depth, but no intensity quantifications should be made from images following conversion to a lower bit depth. This is because conversion to a lower bit depth requires a rescaling of pixel values so that they fit within the smaller range, which results in a loss of information from the image. If intensity measurements are to be made following image processing, then it is important that the processed values are still linearly related to the number of fluorescent molecules present in a given region of the image (**Figure 4D**). Iterative deconvolution (see Glossary in **Table 1**) methods have been shown experimentally to be largely linear with respect to intensity, although this can be microscope-dependent[25] (**Figure 4**, **'Deconvolution'**). An example of a non-linear image processing operation is the Super-Resolution Radial Fluctuations (SRRF) method[26] (**Figure 4**, **'SRRF'**). This is an example of a method which can increase both contrast and resolution of an image dataset, but should not be used for quantitative intensity measurements. An emerging class of processing methods for fluorescence microscopy images comprises deep learning based methods. Such methods typically require training a neural network with pairs of high-quality and low-quality images of the same field of view; the network attempts to 'learn' what series of image processing operations should be applied to reliably convert low-signal images into images closely matching the high-signal equivalent.
New low-quality images (without a high-quality equivalent) can then be provided to the trained neural network, and the network will output a high-quality prediction. Example applications of these algorithms include increasing the signal-to-noise ratio of low-signal images[27] and increasing the resolution of images[28], among others[13]. Because these methods impact image intensity in a non-linear manner (**Figure 4**, **'CARE'**), it is strongly recommended that intensity-dependent quantification is not performed on images processed with deep learning methods.

### Morphology

Loosely characterised as the visual appearance in terms of form or structure, morphology is critical in many biological processes because it reflects and influences the physiological state of living systems[29, 30]. The vast majority of microscopy data, regardless of the modality, hold visual information that pertains to morphology. Being able to extract this type of information from image data is therefore of utmost interest, as exemplified by the popularity of software dedicated to this task such as CellProfiler[31]. Although intuitively understood by everyone, morphology is challenging to define precisely. The first challenge towards identifying the most appropriate strategy to quantify it is therefore to determine what kind of visual information is relevant. The shape of objects of interest, as for example labelled with a membrane marker, is a common readout in fluorescence microscopy[32]. In other modalities such as brightfield and electron microscopy, both shape and ultrastructure (texture) information is available[33, 34]. Although the type of image feature informing on morphology may vary (whether edges, textures, or a mix of both), most morphology measurements are extracted for individual biologically-relevant objects. They therefore share the need for segmentation upstream of the actual quantification step.

### Segmenting and capturing morphology

Segmentation is the process of partitioning an image into different regions, whether background and foreground (semantic, see Glossary in **Table 1**) or individual objects (instance, see Glossary in **Table 1**). Segmentation, whether instance or semantic, is a challenging problem to solve because of the diversity of visual appearances in microscopy data. The fundamental nature of this challenge has, however, led to the development of numerous solutions which are available to reuse and adapt, both relying on classical image processing methods and leveraging recent machine learning tools[11]. Segmentation algorithms generally output a so-called "mask" (see Glossary in **Table 1**), which consists of labels for each pixel in the original image (**Figure 5**). Such masks can either be binary in the case of semantic segmentation, meaning that pixels (or voxels in the case of 3D volumes) are either labelled 0 (background) or 1 (foreground) (**Figure 5b**), or composed of integer numbers for instance segmentation, whereby all pixels (or voxels) labelled with the same integer value belong to the same object instance (**Figure 5c**). Alternatively to masks, instance segmentation algorithms can also output object outlines or surfaces. In 2D images, each individual object is then identified by the list of 2D coordinates of the pixels composing its contour (**Figure 5d**). In 3D volumes, surfaces can either be represented as a list of 3D voxel coordinates, or as a more structured set of vertices and faces called a mesh (see Glossary in **Table 1**).
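The following sketch, using scikit-image on a toy binary mask, illustrates the relationship between semantic masks, instance masks, and contour representations described above (the array shapes and object positions are purely illustrative):

```python
import numpy as np
from skimage import measure

# Toy semantic segmentation: a binary mask containing two foreground objects.
binary_mask = np.zeros((64, 64), dtype=np.uint8)
binary_mask[10:25, 10:25] = 1   # object A
binary_mask[40:55, 35:60] = 1   # object B

# Instance segmentation via connected components: each object receives
# a unique integer label, while 0 remains the background.
instance_mask = measure.label(binary_mask)
print("Number of object instances:", instance_mask.max())  # -> 2

# Contour representation: one list of (row, col) coordinates per object,
# tracing the boundary of each connected region.
contours = measure.find_contours(binary_mask, level=0.5)
print("Number of contours:", len(contours))  # -> 2
```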
Contour (outline) or surface representations and mask representations of individual objects can easily be converted into one another, by filling the former and finding the boundaries of the latter using classical image processing methods such as connected components or boundary tracing (e.g., the marching cubes algorithm). The nature of the morphology to be quantified for each biological question (e.g., colony vs individual, purely shape vs a mix of shape and texture) determines the kind of readout one needs, which will in turn inform the choice of algorithm. If individual objects are not needed, then a binary mask is sufficient. If there is no texture information, then individual contours or surfaces are sufficient.

While segmentation is a necessary step towards the quantification of morphology, it is a means and not the end. The output of segmentation will be used as a basis to quantify morphology. This is worth keeping in mind to assess the level of accuracy needed from segmentation: the more subtle the morphology to be quantified is, the more accurate the segmentation must be (**Figure 3**, **mitochondria**). Conversely, large morphological properties such as object size may not require segmentation accuracy down to the single pixel or voxel (**Figure 3**, **nucleus**). The scale of the morphological readout of interest thus also informs how precise the segmentation must be for it to be quantitatively captured.

### Extracting and interpreting morphology measurements

The morphology of an object can be quantified by a series of features that are collectively referred to as a feature vector (see Glossary in **Table 1**). There are two broad strategies used to construct these feature vectors. One is based on handcrafted approaches, where the feature collection is composed of measurements that are predefined by the experimenter, whilst the second strategy uses data-driven approaches, where the feature list is generated through a machine learning process.

A subset of commonly used handcrafted features in 2D is listed in **Table 2**. Some of these metrics are adapted from general concepts of geometry, and others have been carefully engineered relying on image processing tools. All have been designed to quantify the geometrical (for shape) or visual (for texture) nature of an object in an intuitive and interpretable manner, and several can be directly extended to 3D. Different features capture different aspects of morphology, sometimes with very subtle differences (e.g., roundness and circularity)[35, 36, 37, 38]. Collections of handcrafted features are assembled into large feature vectors to empirically capture as many aspects of morphology as possible. However, when feature vectors are built in that way, they often end up having strongly correlated elements. This is due to the fact that different handcrafted measurements may be directly related to one another (for instance roundness and compactness, **Table 2**) or may be derived from the same geometrical properties (for instance area, perimeter, and circularity). Feature selection methods such as the Fisher score can be used to limit redundancy, prune the collection, and retain only its most informative elements[39]. An alternate route is to let machines learn morphology descriptors directly from the data.
This is relevant in many cases, from situations where morphology is too ambiguous to make it possible to craft a relevant set of features, to cases where the biological phenomenon of interest is too poorly understood to allow predicting which features are discriminative. When used well, machine learning can produce descriptors of morphology that are less biased and capture information better than manually designed ones, at the expense of interpretability and a more significant computational cost.

Features can, for instance, be learned directly from flow cytometry images, addressing disagreements among experts as to which morphological readouts are able to assess red blood cell quality[40], following a weakly-supervised strategy: a deep neural network is trained to learn "good" features to describe morphology by exploiting available metadata that relate to experimental conditions, in that case the storage date of the samples. Since organic tissues degrade, morphology is affected by sample storage time. The features learned from the image data aggregate all available morphological information so as to be able to describe the quality of the samples. More recent methods reduce the need for external information further by relying on concepts of self-supervised learning, in which deep neural networks only exploit information available in the input images provided for training. Learned descriptors of the morphology of single cells in fluorescence microscopy images have been observed to be richer than handcrafted ones, as they exploit visual information present in different fluorescence channels[41]. Given the appearance of structural markers (nucleus, cytoskeleton), a deep neural network is trained to predict the visual appearance of a protein of interest and, through this, learns a representation of overall cell morphology. The idea of learning unbiased morphology descriptors can be extended further. For example, Zinchenko et al.[42] use a deep neural network to learn descriptors of individual cell type morphology based on 3D shape and texture alone from EM volumes, without specifying a biological question. The training is here again self-supervised: a specific type of deep neural network (an autoencoder[43]) learns to produce a representation that is rich enough to reconstruct the input image volume, while bringing together samples that are morphologically similar and pushing apart samples that are not. Many more machine learning strategies can be adopted to learn a representation of morphology from the data. Despite the effectiveness of deep neural networks, it is usually not possible to reverse-engineer the exact nature of the morphological features they rely on, making learned representations potentially difficult to interpret. Efforts to investigate and compare published approaches on benchmark or reference datasets are invaluable to navigate the available options[44].

### Factors impacting morphological quantification

It should be noted that when making quantitative measurements of the sizes of objects within images, it is critical to consider both the pixel size and resolution of the image, as these provide information on the lower limits of distances that can be extracted from the data (**Figure 3**). This is especially important if any measurement approaches the resolution limit of the acquired image. For example, any measurements of size should not have a dimension smaller than the theoretical resolution limit.
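As an illustration, object sizes measured in pixels can be converted to physical units and compared against the theoretical resolution limit, here estimated as emission wavelength / (2 × NA), as in **Figure 3**. A minimal sketch; the wavelength, NA, and pixel size values are illustrative:

```python
import numpy as np

def theoretical_resolution_nm(emission_wavelength_nm: float, na: float) -> float:
    """Theoretical lateral resolution estimate: wavelength / (2 * NA)."""
    return emission_wavelength_nm / (2.0 * na)

# Illustrative values: GFP-like emission (~510 nm), 1.4 NA oil objective.
limit_nm = theoretical_resolution_nm(510.0, 1.4)  # ~182 nm

# Object diameters measured in pixels, converted using the known pixel size.
pixel_size_nm = 65.0
diameters_nm = np.array([1.5, 3.0, 8.0, 20.0]) * pixel_size_nm

# Sizes at or below the resolution limit cannot be trusted as true sizes:
# such objects may in reality be arbitrarily smaller.
reliable = diameters_nm > limit_nm
print("Resolution limit (nm):", round(limit_nm))
print("Reliable size measurements (nm):", diameters_nm[reliable])
```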
If many objects in the image are measured to have sizes comparable to the resolution limit of the system, then these may in fact be a population of objects of varying sizes, all smaller than what can be resolved. One should also remember that many morphological measurements are computed on 2D projections of structures that are actually three-dimensional, as for instance in widefield fluorescence (**Figure 1a**) and projected confocal stacks (**Figure 1c**). These images do not take into account the third dimension and may therefore be misleading when quantifying morphology.

It is important to keep in mind that the morphology we observe in a microscopy image is a product of both the sample preparation and the imaging process. Any measurements extracted to describe it are therefore strongly influenced by factors that may not be immediately relevant or obvious to the biological phenomenon of interest. For instance, some proteins commonly used as organelle markers can demonstrate apparently normal organelle morphology whilst other proteins can reveal aberrant morphology[45]. Image processing also impacts morphology measurements, as non-linear operations can significantly alter the results of automatic thresholding, for example (**Figure 4b**). Keeping in mind which source of visual information served to construct morphology measurements is therefore crucial, in order to identify situations where one can meaningfully compare morphology readouts across different datasets. Whenever absolute image intensity is involved, for instance when relying on texture descriptors that are not only based on relative variations of intensity, one must question whether intensity can reasonably be compared across different images, as discussed in the **Image Intensity** section.

Similarly, the observed shape depends on the image resolution in x, y and z, and can be strongly affected by how biological samples have been prepared for imaging. In EM for instance, each TEM will be technically specified to provide resolution in the angstrom range, but the ultimate resolution of the acquired images (what can actually be visually resolved) is impacted by the sample, the heavy metals introduced, the thickness and density of the sample, and the image acquisition parameters. Introduction of significant amounts of heavy metals may coat ultrastructural features thickly and make it difficult to resolve finer ultrastructural details, thereby limiting the possibility of quantifying morphology. Besides resolution, sample preparation is a notoriously strong factor influencing morphology. Having a good understanding of how different types of preparation distort the morphology of samples therefore provides crucial information on whether measurements can be considered biologically relevant or not. Electron microscopy has a long-standing history of investigating the effect of sample preparation[46], with examples specifically focusing on morphology preservation[47]. The need for strategies that minimally alter the structure of the imaged sample has inspired several modern fixation techniques[48, 49]. As always, optimization is required to find a sensible balance of all aspects of the experimental design, from sample preparation to quantification, with the ultimate aim of addressing the research question in mind.

### Counts and labels

Intensity and morphology can be considered "first order measurements" as they focus on quantifying purely visual information.
In contrast, "second order measurements" such as counts and categorical labels (see Glossary in **Table 1**) focus on aggregating and combining morphology and intensity metrics to quantify structures that are externally defined. Counts refer to readouts enumerating the occurrences of a given structure or object in the image, while labels refer to those assigning them a category. #### Extracting and interpreting labels and object counts Individual object count is freely obtained whenever objects can be segmented. Categorical labels can be retrieved from morphology and intensity measurements extracted from individual objects and combined in a feature vector through a classification or clustering process, depending on whether annotated examples of categories are available. However, if no other readouts are required, counting and labelling can just involve a detection process and do not necessarily require the definition of precise object boundaries (and therefore segmentation). Because of its "second order" nature, counting exploits known information of the structure or object to be detected. This can take the form of strong geometrical prior, as exploited by the Hough transform[50], or a curated example of the object of interest, as is the case in template matching[51, 52]. The Hough transform, a classical image processing algorithm designed to detect occurrences of perfect circles in an image, has been successfully adapted to identify nuclei and study division in live-cell fluorescence microscopy of fission yeast[53]. Template matching can be tuned to detect an object of choice (the "template"), and is the preferred method to identify molecular complexes in EM tomograms[54, 55]. Both the Hough transform and template matching are examples of algorithms that have the ability to provide object counts without going through a segmentation step. When visual appearance varies so significantly that a single good object representative is hard to identify, deep learning methods can learn to detect occurrences of complex structures from large collections of visual examples[56, 57]. Although initially designed for the detection of highly structured objects from natural images such as cars and human faces, the same algorithms have shown to generalise enough to provide good enough results in fluorescence microscopy data to allow counting[58]. At the extreme, labelling may neither require segmentation nor even object detection. Classification can be successfully carried out from tiles, obtained by splitting an image into a square grid[59]. Labels are then assigned to each tile, thus providing a readout of the categories present in the image without relying on the individually-defined objects. This approach is successfully exploited in digital pathology, where object segmentation is particularly challenging[60, 61]. #### Aspects impacting labels and count information The number of elements present in an image or their category are seemingly absolute measurements, and it is thus reasonable to expect these readouts to be comparable across microscopy data. It is however important to keep in mind that, due to their "second order" nature, count and label measurements ultimately rely on morphology and intensity features. As such, when comparing across images, one should carefully consider how the nature of the data may reflect on morphology and intensity measurements (see previous two sections) and, in turn, influence the results of counting or labelling quantification pipeline. 
### Quality control and confidence

While the above sections discuss different types of measurements that can be made from microscopy data, and how to interpret such measurements, the limiting factor will always be the quality of the original image being analysed. A useful phrase to bear in mind, used frequently in computer science, is 'garbage in, garbage out': no matter how well-designed the analysis component of a microscopy experiment, if the input images have poor quality, or the sample preparation and labelling have been poorly designed or executed, then the results obtained from analysis will have little meaning.

In fluorescence microscopy, the most commonly-used metrics for assessing image quality are the signal-to-noise ratio (SNR) and spatial resolution. SNR values alone are usually insufficient to tell whether an image will be good enough for quantitative analysis. Spatial resolution measurements are not necessarily a direct indicator of image quality, but can be useful for contextualising morphological measurements. For example, any measurements that have a dimension smaller than the measured resolution of the image can be attributed to noise and discarded. Similarly, if a large number of measurements cluster at the resolution limit, then this is an indicator that it may be necessary to use a higher-resolution method to study the structure of interest. Measuring image properties such as SNR and resolution, and recording them alongside other metadata from image acquisition, helps to add context to the results of quantitative analysis.

It is also important to recognise and reduce bias in quantitative image analysis. One major avenue for this is investing time in creating automated analysis pipelines, whereby batches of images acquired under different biological conditions can be analysed in the same manner, free from any user input. For such automated pipelines, however, it is crucial that images are acquired in similar ways and have broadly similar properties, such that they all remain within acceptable tolerances of the analytical methods. This may not always be possible, as different biological perturbations may inherently affect image quality (for example, in fluorescence microscopy, via an increase in autofluorescent species). Where automated analysis is not practical, or manual parameter selection is required, blinding can help reduce user bias[62].

Batch effects, defined as non-biological experimental variations that confound measurements, are a common source of bias, with possibly dramatic consequences on end results[63, 64]. The influence of batch effects is further demonstrated by Shamir et al., who show that intensity and morphology measurements computed on microscopy images composed only of background signal can suffice to identify different organelles[65]. As stated throughout this article, sample preparation, acquisition parameters (such as illumination intensity, magnification, exposure time, and detector gains), and experimental parameters (such as timestamps and sample id) must be recorded for each image whenever measurements are meant to be compared. Similarly, all parameters that can be kept constant should ideally remain as identical as possible across images. Batch effects can be further mitigated at the level of the image data by correcting for intensity variations[19, 20] (a minimal sketch of such a correction is given below), or with feature normalisation[66]. A good summary of strategies to identify and correct batch effects is provided in Caicedo et al.[5].
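One common image-level correction, also illustrated in **Figure 2d**, is flat-field correction of inhomogeneous illumination: the acquired image is divided by a normalised illumination reference, such as an image of a homogeneously fluorescent slide. A minimal sketch, assuming the reference was acquired with the same optics and settings as the data:

```python
import numpy as np

def flatfield_correct(raw: np.ndarray, illumination: np.ndarray) -> np.ndarray:
    """Divide a raw image by a normalised illumination reference
    (e.g. an image of a homogeneously fluorescent slide)."""
    flat = illumination.astype(np.float64)
    flat /= flat.mean()  # normalise so the overall intensity scale is preserved
    return raw.astype(np.float64) / flat

# Usage sketch (variable names are illustrative):
# corrected = flatfield_correct(raw_image, fluorescent_slide_image)
```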
The laboratory standard for assessing the legitimacy of a scientific analysis is quality control and performance metrics, and image quantification is no exception. Although plenty of established metrics are available to assess the success of algorithms that carry out segmentation, detection, counting, and classification, among many others, identifying metrics that faithfully reflect performance across datasets and use-cases remains an open challenge[67]. Although not quantitative, visual inspection remains a robust quality control strategy. This endeavour may, however, be highly non-trivial when dealing with high-dimensional, dynamic datasets or with rare events, and can be greatly facilitated by dedicated software tools[68]. Ultimately, the most powerful measure of quality control is reproducibility: the experimental procedures, microscope hardware specifications, image acquisition parameters, and image quantification algorithms provided in a published study should allow other researchers to recover its quantitative conclusions[69].

## Conclusions

An important message of this paper is that all quantitative readouts extracted from microscopy image data are to some extent the product of the image formation process and of the sample preparation protocol. Here, we focused on three broad families of measurements: image intensity, morphology, and object counts or categorical labels.

Unsurprisingly, image intensity is most critically affected by the image formation process and by data pre-processing or enhancement. For this reason, it is difficult to accurately compare intensity measurements regardless of the imaging modality, whether electron or light microscopy. Normalisation to a reference provides a way around this, but requires significant care to be done meaningfully.

Morphology, unlike intensity, is challenging to define generally, as it relates to shape, texture, and complex combinations thereof. Unless carried out on the entire image at once, morphology measurements require a first step of instance segmentation. Many different measurements related to morphology can be extracted from individual objects, making it challenging to know a priori which ones will be most informative. The best approach is therefore to combine a large number of them into a feature vector in an attempt to comprehensively describe each object's morphology. It is, however, important to keep in mind that morphology as observed in microscopy data is a direct product of sample preparation and of the imaging process, and that it may therefore be worth investigating how the protocol affects the aspects of morphology one wants to measure.

Object counts and categorical labels obviously require objects to be identified and assigned, but may not necessarily need precise outlines. These types of readout can therefore often be obtained without explicitly segmenting individual objects.

In addition to the processes that lead to the image data, operations on the images themselves, prior to quantification, can dramatically affect measurements. One must therefore remain mindful of whether it is appropriate to perform a given type of image processing for the readout of interest. Thresholding, for instance, removes "background", but background may well hold true image intensity information, noise, or non-specific labelling. Other key aspects of image quantification are quality control and confidence.
While it is common practice to account for known distortions and aberrations introduced by sectioning and imaging in specific imaging modalities such as EM, assessing the accuracy or "success" of these corrections and their impact on downstream measurements is sometimes challenging. Identifying good metrics to assess whether a quantitative readout makes sense can be difficult, and plenty of confounders may adversely affect the extracted measurements. In addition to informing on the type of measurements that can meaningfully be extracted from image data, the essential information about image acquisition, sample preparation, and processing provided by metadata is therefore also crucial to allow randomization and mitigate batch effects at the analysis stage. When quantifying image data, taking into account the whole process leading from a biological sample to data analysis is ultimately the safest way to ensure that the correct measurements are extracted and that they are handled in a scientifically rigorous manner.

## Acknowledgements

VU is supported by EMBL internal funding. JJB was supported by MRC core funding to the MRC Laboratory for Molecular Cell Biology at University College London, award code MC_U12266B. SC is supported by a Royal Society University Research Fellowship (URFR1211329). ACC is funded by the BiPAS CDT at King's College London. The authors thank James Levitt of the Nikon Imaging Centre at King's College London for his support and assistance in this work. The authors thank Gautam Dey and his lab at EMBL, Heidelberg for providing _S. pombe_ strains.

## Figure Legends

### Figure 1

**Image formation in fluorescence and electron microscopy.** **a** Widefield microscopy of live _S. pombe_ cells expressing sfGFP-tubulin. The whole sample volume is illuminated simultaneously and fluorescence is captured on a camera. **b** Confocal microscopy of the same field-of-view as **a**. Diffraction-limited laser spots are scanned (laser-scanning confocal) or swept in an array (spinning disk confocal, shown here) across the field of view; a pinhole (or array of pinholes) in the detection path prevents out-of-focus fluorescence from reaching the detector (point detector for laser-scanning confocal, camera for spinning disk). **c** Confocal slices can be acquired at a range of focal heights to form a 'z-stack'. This 3D volume can be projected to a single image by adding the images together ('Sum slices') or picking the most intense value for each pixel ('Max intensity'). **d** TIRF microscopy involves the generation of an evanescent field that only illuminates the volume of the sample within a few hundred nanometres of the coverslip. Emitted fluorescence is captured on a camera. Note, this is a different field of view than in **a-c**. All fluorescence scale bars = 10\(\upmu\)m. **e** Transmission Electron Microscopy (TEM) of a thin section of an embedded sample. Electrons that are able to pass through the thin section of a sample are detected on a charge-coupled device detector. Different sample preparation protocols can result in significantly different visualisations of a sample. A thin section of mitochondria within the cell is shown, prepared either by conventional aldehyde and osmium fixation, contrasting and resin embedding, or by mild fixation, cryoprotection, gelatin embedding and cryosectioning (Tokuyasu protocol). Images courtesy of J. J. Burden and I. J. White. **f** Scanning Electron Microscopy (SEM) with secondary electron (SE) detection of a whole sample.
A focused beam of electrons is rastered across the surface of a whole sample (gold coated), and the SE generated from interactions of the primary electron beam with the top surface of the sample are detected by the SE detector, revealing surface topology. An exocytosis event on the surface of an endothelial cell is shown, where the structure of Von Willebrand factor strings released from the cell can be visualised. **g** Scanning Electron Microscopy with back-scattered electron (BSE) detection of a whole sample. A focused beam of electrons is rastered across the surface of a whole sample (gold coated), and the BSE generated from interactions of the primary electron beam with the sample are detected by the BSE detector, revealing differences in atomic number. Gold-labelled antibodies highlight Von Willebrand factor strings released from the cell as described in **f**. Image courtesy of Krupa Patel and Dan Cutler. The depth from which BSE can be generated is proportional to the voltage of the primary electron beam that reaches the sample; higher kV results in larger interaction volumes and impacts resolution. **h** Scanning Electron Microscopy with back-scattered electron (BSE) detection of a thin section of a resin-embedded sample. All EM scale bars = 500nm.

### Figure 2

**Impact of illumination on fluorescence intensity measurements.** **a** Measured fluorescence intensities of Alexa Fluor 488 Phalloidin (measured from box 'A'), Mitotracker Red (box 'M') and DAPI (box 'D') in response to increasing LED illumination intensity in fixed BPAE cells (widefield microscopy, Plan Apo VC 60x Oil objective NA=1.4, 100ms exposure). Dashed lines indicate what a linear relationship between illumination intensity and fluorescence intensity would be. **b** Measured fluorescence intensities of NLS-GFP and Nup60-mCherry in live _Schizosaccharomyces pombe_ cells (strain GD250, as described in Dey et al. [53]) in response to increasing laser illumination intensity (spinning disk confocal microscopy, SR HP Apo TIRF 100x AC Oil objective NA=1.49, 100ms exposure). Intensities were measured from regions of each channel above the Otsu threshold (portions of masks shown in corners of image). **c** Continuous spinning disk confocal imaging for 30 seconds of _S. pombe_ cells expressing sfGFP-tubulin (strain AV2434, as described in Vjestica et al. [70]) at either 10% or 70% 488nm laser intensity results in different photobleaching characteristics. Intensity was measured as the mean intensity above the Otsu threshold for each image. **d** Nuclear fluorescence intensities measured from live _S. pombe_ cells expressing NLS-GFP ('Uncorrected') are a product of the true concentration of protein per nucleus and the flatness of the excitation illumination ('Illumination'). Inhomogeneous illumination can be corrected by dividing the acquired image by the illumination image (here, a homogeneously fluorescent slide). Image acquisition as in **b**. Scale bars in all panels = 10\(\upmu\)m.

### Table 1

**Glossary of key technical terms in image quantification.**

### Figure 3

**Impact of acquisition parameters on quantitative measurements.** The same field of view of fixed BPAE cells stained with Mitotracker Red CMXRos (green) and DAPI (magenta), imaged with a widefield microscope with different objectives and additional optical magnifications ('Full field-of-view'). The number of cells in the FOV, the theoretical resolution (\(\Delta d\) = emission wavelength / (2 \(\times\) NA)), and the resolution as measured using Image Decorrelation Analysis [71] are listed.
Scale bars: 20x - 100\(\upmu\)m; 40x, 60x - 50\(\upmu\)m; 100x, 150x - 20\(\upmu\)m. The 'Nucleus' column shows a crop of the same nucleus at each magnification (larger white box in full FOV). The nucleus was segmented using Otsu thresholding after applying a 100nm Gaussian blur to the crop, with the threshold border indicated in yellow. Area, circularity ('Circ.') and roundness ('Round.') values are indicated below. Nucleus scale bars = 5\(\upmu\)m. The 'Mitochondria' columns show a crop of mitochondria staining at each magnification (smaller white box in full FOV). A line profile was drawn across the same region (between the arrowheads) and intensity profiles are plotted in the right-hand column (line averaging width of 5 pixels). Distances between prominent adjacent peaks were measured between the dashed lines and are indicated below the images. Mitochondria scale bars = 1\(\upmu\)m.

### Figure 4

**Effect of image processing on downstream quantification.** **a** Images of a fixed BPAE cell with mitochondria stained. The 'Raw' image has not been processed following acquisition; the other images are processed versions of this image via the methods stated. The inset corresponds to the white rectangle in the left image. Scale bars: 10\(\upmu\)m (large image), 5\(\upmu\)m (magnified inset). **b** Mitochondria segmented from the inset images using Otsu thresholding. Note that morphological analysis of these segmentations would yield different results for the different image processing methods. **c** Histograms of pixel values within the large images in **a**, where count refers to the number of pixels. **d** For each pixel in the processed images, the pixel value is plotted against the value of the corresponding pixel in the raw unprocessed image. The grey line indicates a 1:1 relationship between processed and unprocessed pixel values (i.e. no change following processing).

### Table 2

**Common handcrafted morphology features in 2D.** Descriptors based on an object's **a** shape (object geometry), **b** texture (image intensity), **c** either shape or texture.

### Figure 5

**Outputs of segmentation algorithms.** **a** A spinning disk confocal image of _S. pombe_ cells expressing the nuclear marker NLS-sfGFP (strain AV1200, Vjestica et al. [70]). Scale bar = 20\(\upmu\)m. **b** Semantic segmentation divides an image into two classes: foreground (i.e. objects of interest, black) and background (white). Such an image is also referred to as a binary mask. **c** Instance segmentation divides an image into background (black) and 'instances' of the object of interest. Each instance is randomly assigned a different colour. **d** Segmented objects can alternatively be represented by their boundaries rather than as solid objects.
2306.02871
Text-To-KG Alignment: Comparing Current Methods on Classification Tasks
In contrast to large text corpora, knowledge graphs (KG) provide dense and structured representations of factual information. This makes them attractive for systems that supplement or ground the knowledge found in pre-trained language models with an external knowledge source. This has especially been the case for classification tasks, where recent work has focused on creating pipeline models that retrieve information from KGs like ConceptNet as additional context. Many of these models consist of multiple components, and although they differ in the number and nature of these parts, they all have in common that for some given text query, they attempt to identify and retrieve a relevant subgraph from the KG. Due to the noise and idiosyncrasies often found in KGs, it is not known how current methods compare to a scenario where the aligned subgraph is completely relevant to the query. In this work, we try to bridge this knowledge gap by reviewing current approaches to text-to-KG alignment and evaluating them on two datasets where manually created graphs are available, providing insights into the effectiveness of current methods.
Sondre Wold, Lilja Øvrelid, Erik Velldal
2023-06-05T13:45:45Z
http://arxiv.org/abs/2306.02871v1
# Text-To-KG Alignment: Comparing Current Methods on Classification Tasks

###### Abstract

In contrast to large text corpora, knowledge graphs (KG) provide dense and structured representations of factual information. This makes them attractive for systems that supplement or ground the knowledge found in pre-trained language models with an external knowledge source. This has especially been the case for classification tasks, where recent work has focused on creating pipeline models that retrieve information from KGs like ConceptNet as additional context. Many of these models consist of multiple components, and although they differ in the number and nature of these parts, they all have in common that for some given text query, they attempt to identify and retrieve a relevant subgraph from the KG. Due to the noise and idiosyncrasies often found in KGs, it is not known how current methods compare to a scenario where the aligned subgraph is completely relevant to the query. In this work, we try to bridge this knowledge gap by reviewing current approaches to text-to-KG alignment and evaluating them on two datasets where manually created graphs are available, providing insights into the effectiveness of current methods. We release our code for reproducibility.1

Footnote 1: [https://github.com/SondreWold/graph_impact](https://github.com/SondreWold/graph_impact)

## 1 Introduction

There is a growing interest in systems that combine the implicit knowledge found in large pre-trained language models (PLMs) with external knowledge. The majority of these systems use knowledge graphs (KG) like ConceptNet (Speer et al., 2017) or Freebase (Bollacker et al., 2008) and either inject information from the graph directly into the PLM (Peters et al., 2019; Chang et al., 2020; Wang et al., 2020; Lauscher et al., 2020; Kaur et al., 2022) or perform some type of joint reasoning between the PLM and the graph, for example by using a graph neural network on the graph and later intertwining the produced representations (Sun et al., 2022; Yasunaga et al., 2021; Zhang et al., 2022; Yasunaga et al., 2022). Beyond their competitive performance, these knowledge-enhanced systems are often upheld as more interpretable, as their reliance on structured information can be reverse-engineered in order to explain predictions or used to create reasoning paths.

One of the central components in these systems is the identification of the most relevant part of a KG for each natural language query. Given that most KGs are noisy and contain idiosyncratic phrasings, which leads to graph sparsity (Sun et al., 2022; Jung et al., 2022), it is non-trivial to align entities from text with nodes in the graph. Despite this, existing work often uses relatively simple methods and does not isolate and evaluate the effect of this component on the overall classification pipeline. Furthermore, due to the lack of datasets that contain manually selected relevant graphs, it is not known how well current methods perform relative to a potential upper bound where the graph provides a structured explanation as to why the sample under classification belongs to a class. Given that this problem applies to a range of typical NLP tasks, and can consequently be found under a range of different names, such as grounding and linking, there is much to be gained from reviewing current approaches and assessing their effect in isolation.

In this paper, we address these issues by providing an overview of text-to-KG alignment methods.
We also evaluate a sample of the current main approaches to text-to-KG alignment on two downstream NLP tasks, comparing them to manually created graphs that we use for estimating a potential upper bound. For evaluation, we use the tasks of binary stance prediction (Saha et al., 2021), transformed from a graph generation problem in order to get gold reference alignments, and a subset of the Choice of Plausible Alternatives (COPA) (Roemmele et al., 2011) that contains additional explanation graphs (Brassard et al., 2022). As the focus of this work is not how to best combine structured data with PLMs, but rather to report on how current text-to-KG alignment methods compare to manually created graphs, we use a rather simple integration technique to combine the graphs with a pre-trained language model. Through this work, we hope to motivate more research into methods that align unstructured and structured data sources for a range of tasks within NLP, not only for QA.

## 2 Background

Combining text with structured knowledge is a long-standing challenge in NLP. While earlier work focused more on the text-to-KG alignment itself, using rule-based systems and templates, recent work often approaches the problem as part of a system intended for NLP tasks other than the alignment itself, such as question answering (Yasunaga et al., 2021), language modelling (Kaur et al., 2022) and text summarization (Feng et al., 2021). As a consequence, approaches to what is essentially the same problem, namely to align some relevant subspace of a large KG with a piece of text, can be found under a range of terms, such as: _retrieval_ (Feng et al., 2021; Kaur et al., 2022; Sun et al., 2022; Wang et al., 2020), _extraction_ (Huang et al., 2021; Feng et al., 2020), _KG-to-text alignment_ (Agarwal et al., 2021), _linking_ (Gao et al., 2022; Becker et al., 2021), _grounding_ (Shu et al., 2022; Lin et al., 2019), and _mapping_ (Yu et al., 2022). Although it is natural to use several of these terms to describe a specific technique, we argue that it would be beneficial to refer to the task itself under a common name and propose the term _text-to-KG alignment_. The following sections formalise the task and discuss current approaches found in the literature.

### Task definition

The task of text-to-KG alignment involves two input elements: a piece of natural text and a KG. The KG is often a multi-relational graph, \(G=(V,E)\), where \(V\) is a set of entity nodes and \(E\) is the set of edges connecting the nodes in \(V\). The task is to align the text with a subset of the KG that is relevant to the text. What defines relevance is dependent on the specific use case. For example, given the question _Where is the most famous church in France located?_ and the KG found in Figure 1, a well-executed text-to-KG alignment could, for example, link the spans _church_ and _France_ from the text to their corresponding entity nodes in the KG and return a subgraph that contains the minimal number of nodes and edges required in order to guide any downstream system towards the correct behaviour.

### Current approaches

Although the possibilities are many, most current approaches to text-to-KG alignment are based on some form of lexical overlap. As noted by Aglionby and Teufel (2022), Becker et al. (2021), and Sun et al. (2022), the idiosyncratic phrasings often found in KGs make this problematic. One specific implementation based on lexical overlap is the one found in Lin et al.
(2019), which has later been reused in a series of other works on QA without any major modifications (Feng et al., 2020; Yasunaga et al., 2021; Zhang et al., 2022; Yasunaga et al., 2022; Sun et al., 2022). In the approach of Lin et al. (2019), a schema graph is constructed from each question-answer pair. The first step involves recognising concepts mentioned in the text that exist in the KG. Although they note that exact n-gram matches are not ideal, due to idiosyncratic phrasings and sparsity, they do little to improve on this naive approach besides lemmatisation and filtering of stop words, leaving it for future work.

Figure 1: An example of a multi-relational knowledge graph.

The enhanced n-gram matching produces two sets of entities, one from the question and one from the answer, \(V_{q}\) and \(V_{a}\). The graph itself is then constructed by adding the \(k\)-hop paths between the nodes in these two sets, with \(k\) often being \(2\) or \(3\). This returns a subgraph \(G_{sub}\) that contains a lot of noise in the form of irrelevant nodes found in the \(k\)-hop neighbourhoods of \(V_{q}\) and \(V_{a}\), and motivates some form of pruning applied to \(G_{sub}\) before it is used together with the PLM, such as node relevance scoring (Yasunaga et al., 2021), dynamic pruning via LM-to-KG attention (Kaur et al., 2022), and ranking using sentence representations of the question-answer pair and a linearized version of \(G_{sub}\) (Kaur et al., 2022).

Another approach based on lexical matching is that of Becker et al. (2021), which is specifically developed for ConceptNet. Candidate phrases are first extracted from the text using a constituency parser, limited to noun, verb and adjective phrases. These are then lemmatized and filtered for articles, pronouns, conjunctions, interjections and punctuation. The same process is also applied to all the nodes in ConceptNet. This makes it possible to match the two modalities better, as both are normalised using the same pre-processing pipeline. Results on two QA datasets show that the proposed method is able to align more meaningful concepts and that the ratio between informative and uninformative concepts is superior to simple string matching.

For the language modelling task, Kaur et al. (2022) use a much simpler technique, where a Named Entity Recognition model identifies named entity mentions in text and selects the entities with the maximum overlap in the KG. For the tasks of text summarisation and story ending generation, Feng et al. (2021) and Guan et al. (2019) use RNN-based architectures that read a text sequence word by word, and at each time step the current word is aligned to a triple from ConceptNet (we assume by lexical overlap). Each triple, and also its neighbours in the KG, is encoded using word embeddings and then combined with the context vector from the RNN using different attention-style mechanisms.

As an alternative to these types of approaches based on some form of lexical matching for the alignment, Aglionby and Teufel (2022) experimented with embedding each entity in the KG using a PLM, and then, for each question-answer pair, finding the most similar concepts using Euclidean distance. They conclude that this leads to graphs that are more specific to the question-answer pair, and that this helps performance in some cases. Wang et al. (2020) also experimented with using a PLM to generate the graphs instead of aligning them, relying on KGs such as ConceptNet as a fine-tuning dataset for the PLM instead of as a direct source during alignment.
In a QA setting, the model is trained to connect entities from question-answer pairs with a multi-hop path. The generated paths can then later be used by knowledge-enhanced systems. This has the benefit of being able to use all the knowledge acquired during the PLM's pre-training, which might surface concepts that are not present in KGs.

## 3 KG and Datasets

This section describes the data used in our own experiments.

**ConceptNet.** As our knowledge graph, we use _ConceptNet_ (Speer et al., 2017) -- a general-domain KG that contains \(799{,}273\) nodes and \(2{,}487{,}810\) edges. The graph is structured as a collection of triples, each containing a head and tail entity connected via a relation from a pre-defined set of types.

Figure 2: An example of the different graph construction approaches for COPA-SSE (Brassard et al., 2022). Here, the premise and answer options are: _P: The bodybuilder lifted weights_; _A1: The gym closed_; _A2: Her muscles became fatigued_. From left to right: Purple: Gold annotation, Brown: Approach 3, Green: Approach 2, and Blue: Approach 1.

**ExplaGraphs.** ExplaGraphs (Saha et al., 2021) is originally a graph generation task for binary stance prediction. Given a belief and argument pair _(b, a)_, models should both classify whether the argument counters or supports the belief and construct a structured explanation as to why this is the correct label. An example of this can be seen in Figure 3. The original dataset provides a train (\(n=2367\)) and validation (\(n=397\)) split, as well as a test set that is kept private for evaluation on a leaderboard. The node labels have been written by humans using free-form text, but the edge labels are limited to the set of relation types used in ConceptNet. We concatenate the train and validation splits and partition the data into new train, validation and test splits with an 80-10-10 ratio.

**COPA-SSE.** Introduced in Brassard et al. (2022), COPA-SSE adds semi-structured explanations created by human annotators to 1500 samples from Balanced COPA (Kavumba et al., 2019) -- an extension of the original COPA dataset from Roemmele et al. (2011). In this task, given a scenario as a premise, models have to select the alternative that more plausibly stands in a causal relation with the premise. An example with a manually constructed explanation graph can be seen in Figure 4. As with ExplaGraphs, COPA-SSE uses free-form text for the head and tail entities of the triples and limits the relation types to the ones found in ConceptNet. The dataset provides on average over six explanation graphs per sample. Five annotators have also rated the quality of each graph with respect to how well it captures the relationship between the premise and the correct answer choice. As we only need one graph per sample, we select the one with the highest average rating. As the official COPA-SSE set does not contain any training data, we keep the official development split as our training data and split the official test data in half for our in-house development and testing sets.

## 4 Alignment approaches

As mentioned, the general procedure for grounding text to a graph is three-fold: we first have to identify entities mentioned in the text, then link them to entities in the graph, and lastly construct a graph object that is returned to the inference model as additional context to be used together with the original text. For QA, the text aligned with the graph is typically a combination of the question and answer choices.
As our two downstream tasks are not QA, and also differ from each other, we have to rely on different pre-processing techniques than previous work. The following section presents the implementation of three different text-to-KG alignment approaches that we compare against manually created graphs. An illustration of the different approaches applied to the same text sample can be seen in Figure 2.

### Approach 1: Basic String Matching

Our first approach establishes a simple baseline based on naive string matching. For ExplaGraphs, we first word-tokenize the belief and argument on whitespace, and then for each word we check whether or not it is a concept in ConceptNet by exact lexical overlap. This gives us two sets of entities: \(C_{q}\) and \(C_{a}\). The graph is constructed by finding paths in ConceptNet between the concepts in \(C_{q}\) and \(C_{a}\). For COPA-SSE, we do the same but create \(C_{q}\) from a concatenation of the premise and the first answer choice, and \(C_{a}\) from a concatenation of the premise and the second answer choice. We use Dijkstra's algorithm to find the paths (Dijkstra, 1959).2 The reason to use this rather simple approach, also pointed out by Lin et al. (2019) and Aglionby and Teufel (2022), is that finding a minimal spanning graph that covers all the concepts from \(C_{q}\) and \(C_{a}\), which seems like a more obvious choice, would require solving the NP-complete "Steiner tree problem" (Garey and Johnson, 1977), and this would be too resource-demanding given the size of ConceptNet.

Footnote 2: Using the implementation from [https://networkx.org](https://networkx.org)

Figure 3: An example graph from ExplaGraphs (Saha et al., 2021) generated by a PLM for the belief-argument pair: _Organ transplant is important_; _A patient with failed kidneys might not die if he gets organ donation_.

Figure 4: An example of a manually created graph from COPA-SSE (Brassard et al., 2022) for the premise and options: _P: The man felt ashamed of a scar on his face_; _A1: He hid the scar with makeup_; _A2: He explained the scar to strangers._

As many of the retrieved paths are irrelevant to the original text, it is common to implement some sort of pruning. We follow Kaur et al. (2022) and linearize the subject-relation-object triples to normal text and then embed them into the same vector space as the original context using the SentenceTransformer (Reimers and Gurevych, 2019). We then calculate the cosine similarity between the linearized graphs and the original text context and select the one with the highest score.

### Approach 2: Enhanced String Matching

Our second approach is based on the widely used method from Lin et al. (2019), found in the works of Feng et al. (2020), Yasunaga et al. (2021), Zhang et al. (2022), Yasunaga et al. (2022), and Sun et al. (2022), but modified to our use case. We construct the sets of entities \(C_{q}\) and \(C_{a}\) using n-gram matching enhanced with lemmatisation and filtering of stop words.3 As in Approach 1, for ExplaGraphs, \(C_{q}\) is constructed from the belief, and \(C_{a}\) from the argument; for COPA-SSE, \(C_{q}\) is based on a concatenation of the premise and the first answer choice, while \(C_{a}\) is based on a concatenation of the premise and the second answer choice.

Footnote 3: We use the implementation from Yasunaga et al. (2021) to construct \(C_{q}\) and \(C_{a}\).

The graph is constructed by finding paths in ConceptNet between concepts in \(C_{q}\) and \(C_{a}\) using the same method as in Approach 1.
However, we limit the length of the paths to a variable \(k\). In the aforementioned works, \(k\) is set so as to retrieve either two- or three-hop paths, essentially finding the 2-hop or 3-hop neighbourhoods of the identified concepts. For our experiments, we set \(k=3\). As with Approach 1, many of the retrieved paths are irrelevant to the original text, which warrants some sort of pruning strategy. In the aforementioned works, this is done by node relevance scoring. We follow Approach 1 and use sentence representations via linearization and cosine similarity in order to prune irrelevant paths from the graph.

### Approach 3: Path Generator

Our third approach is based on a method where a generative LM is fine-tuned on the task of generating paths between concepts found in two sets. We use the implementation and the already-trained path generator (PG) from Wang et al. (2020) for this purpose. This model is a GPT-2 model (Radford et al., 2019) fine-tuned on generating paths between two nodes in ConceptNet.4 One advantage of this method is that since GPT-2 already has unstructured knowledge encoded in its parameters from its original pre-training, it is able to generate paths between entities that might not exist in the original graph.

Footnote 4: See Wang et al. (2020) for details on the fine-tuning procedure.

For both ExplaGraphs and COPA-SSE, we take the first and last entity identified by the entity linker from Approach 2 as the start and end points for the PG. As the model only returns one generated path, we do not perform any pruning. For the following example from COPA-SSE, _P: The man felt ashamed of a scar on his face_; _A1: He hid the scar with makeup_; _A2: He explained the scar to strangers._, the PG constructs the following path: _masking tape used for hide scar, masking tape is a makeup_.

#### 4.3.1 Start and end entities

We also experiment with the same setup, but with the first and last entity from the gold annotations as the start and end points for the PG. We do this to assess the importance of having nodes that are at least somewhat relevant to the original context as input to the PG. In our experiments, we refer to this sub-method as Approach 3-G.

### Integration technique

As the focus of this work is not how to best combine structured data with PLMs, but rather to report on how current text-to-KG alignment methods compare to manually created graphs, we use a rather simple integration technique to combine the graphs with a pre-trained language model, and we use it uniformly for the different alignment approaches. We conjecture that the ranking of the different linking approaches under this technique would be similar to that obtained with a more complex method for reasoning over the graph structures, for example using GNNs. By not relying on another deep learning model for the integration, we can better control for the effect of the graph quality itself.

For each text and graph pair, we linearize the graph to text as in Kaur et al. (2022). For example, the graph in Figure 4 is transformed to the string _masking tape used for hide scar, masking tape is a makeup_. As linearization does not provide any natural way to capture the information provided by having directed edges, we transform all the graphs to undirected graphs before integrating them with the PLM.5 For a different integration technique, such as GNNs, it would probably be reasonable to maintain information about the direction of edges.

Footnote 5: In practice, this is done by simply removing the underscore prepended to all reversed directions.
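The following sketch illustrates the overall pipeline of Approaches 1 and 2 on a toy ConceptNet-like graph: lexical entity matching, Dijkstra path finding with networkx, linearization, and similarity-based pruning. The toy graph, its relation labels, and the sentence-encoder checkpoint name are illustrative assumptions, not the exact configuration used in our experiments:

```python
import networkx as nx
from sentence_transformers import SentenceTransformer, util

# Toy ConceptNet-like graph; the 'rel' edge attribute holds the relation type.
kg = nx.Graph()
kg.add_edge("gym", "exercise", rel="UsedFor")
kg.add_edge("exercise", "muscle", rel="HasSubevent")
kg.add_edge("muscle", "fatigue", rel="Causes")
kg.add_edge("gym", "building", rel="IsA")

def string_match(text: str, graph: nx.Graph) -> set:
    """Approach 1-style entity linking: exact lexical overlap on tokens."""
    return {token for token in text.lower().split() if token in graph}

def align(query: str, answer: str, graph: nx.Graph) -> list:
    """Find paths between matched concepts and linearize them to text."""
    c_q, c_a = string_match(query, graph), string_match(answer, graph)
    linearized = []
    for source in c_q:
        for target in c_a:
            try:
                nodes = nx.dijkstra_path(graph, source, target)
            except nx.NetworkXNoPath:
                continue
            triples = [f"{u} {graph[u][v]['rel']} {v}"
                       for u, v in zip(nodes, nodes[1:])]
            linearized.append(", ".join(triples))
    return linearized

query = "the bodybuilder lifted weights at the gym"
answer = "her muscle became fatigue"
candidates = align(query, answer, kg)

# Pruning: keep the linearized path most similar to the original context.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative checkpoint
scores = util.cos_sim(encoder.encode(query + " " + answer),
                      encoder.encode(candidates))
best_path = candidates[int(scores.argmax())]
```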
For ExplaGraphs, which consists of belief and argument pairs, we feed the model the following sequence: belief [SEP] argument [SEP] graph [SEP], where [SEP] is a model-dependent separation token, and the model classifies the sequence as either _support_ or _counter_. For COPA-SSE, which has two options for each premise, we use the following formats: premise + graph [SEP] a1 [SEP] and premise + graph [SEP] a2 [SEP], where + simply appends the linearized graph to the premise as a string, and the model has to select the more likely sequence of the two.

## 5 Graph quality

The following section provides an analysis of the quality of the different approaches when used to align graphs for both ExplaGraphs and COPA-SSE. Table 1 and Table 2 show the average number of triples per sample identified or created by the different approaches for the two datasets, as well as how many triples we count as containing some form of error ('Broken triples' in the tables). The criteria for marking a triple as broken include missing head or tail entities inside the triple, having more than one edge between the head and tail, and returning nothing from ConceptNet. It is, of course, natural that not all samples contain an entity that can be found in ConceptNet; consequently, we decided not to discard the broken triples but rather to include them to showcase the expected performance in a realistic setting.

As can be seen from the tables, the approach based on the Path Generator (PG) from Wang et al. (2020) (Approach 3) returns fewer triples than the other approaches for both ExplaGraphs and COPA-SSE. When using the entities from Approach 2 as the start and end points, denoted Approach 3, the share of triples containing some form of alignment error is over twenty percent. When using the gold annotations as the start and end points of the PG, abbreviated Approach 3-G, this goes down somewhat but is still considerably higher than for the approaches based on lexical overlap. Approach 2 is able to identify some well-formatted triple in all cases for both tasks, while Approach 1 fails to retrieve anything for five percent of the samples in COPA-SSE and two percent for ExplaGraphs.

In order to get some notion of the semantic similarity between the different approaches and the original context they are meant to be a structural representation of, we calculate the cosine similarity between the context and a linearized version of the graphs (see Section 4.4 for details on this procedure). The scores can be found in Table 3. Unsurprisingly, the similarity increases with the complexity of the approach. The basic string matching technique of Approach 1 creates the least similar graphs, followed by the slightly more sophisticated Approach 2, while the generative approaches are able to create somewhat more similar graphs despite having a low average number of triples per graph. All of the approaches are still far from the manually created graphs -- which are linearized using the same procedure as the others.

| **Approach** | **Avg. number of triples** | **Broken triples** |
| --- | --- | --- |
| Approach 1 | 2.99 | 0.05 |
| Approach 2 | 2.90 | 0.00 |
| Approach 3 | 1.39 | 0.20 |
| Approach 3-G | 1.64 | 0.12 |
| Gold | 2.12 | 0.00 |

Table 1: Statistics for the different approaches on the training set of COPA-SSE. The number of broken triples is reported as a proportion of samples.
## 5 Graph quality

The following section provides an analysis of the quality of the different approaches when used to align graphs for both ExplaGraphs and COPA-SSE. Table 1 and Table 2 show the average number of triples per sample identified or created by the different approaches for the two datasets, as well as how many triples we count as containing some form of error ('Broken triples' in the tables). The criteria for marking a triple as broken include missing head or tail entities inside the triple, having more than one edge between the head and tail, and returning nothing from ConceptNet. It is, of course, natural that not all samples contain an entity that can be found in ConceptNet; consequently, we decided not to discard the broken triples but rather to include them to showcase the expected performance in a realistic setting. As can be seen from the tables, the approach based on the Path Generator (PG) from Wang et al. (2020) (Approach 3) returns fewer triples than the other approaches for both ExplaGraphs and COPA-SSE. When using the entities from Approach 2 as the start and end points, denoted by the abbreviation Approach 3, the share of triples containing some form of alignment error is over twenty percent. When using the gold annotations as the start and end points of the PG, abbreviated Approach 3-G, this share decreases somewhat but is still considerably higher than for the approaches based on lexical overlap. Approach 2 is able to identify at least one well-formatted triple in all of the cases for both tasks, while Approach 1 fails to retrieve anything for five percent of the samples in COPA-SSE and two percent for ExplaGraphs. In order to get some notion of the semantic similarity between the different approaches and the original context they are meant to be a structural representation of, we calculate the cosine similarity between the context and a linearized (see Section 4.4 for details on this procedure) version of the graphs. The scores can be found in Table 3. Unsurprisingly, the similarity increases with the complexity of the approach. The basic string matching technique of Approach 1 creates the least similar graphs, followed by the slightly more sophisticated Approach 2, while the generative approaches are able to create somewhat more similar graphs despite having a low number of average triples per graph. All of the approaches are still far from the manually created graphs -- which are also linearized using the same procedure as the others.

\begin{table} \begin{tabular}{l c c} \hline \hline **Approach** & **Avg. number of triples** & **Broken triples** \\ \hline Approach 1 & \(2.99\) & \(0.05\) \\ Approach 2 & \(2.90\) & \(0.00\) \\ Approach 3 & \(1.39\) & \(0.20\) \\ Approach 3-G & \(1.64\) & \(0.12\) \\ \hline Gold & \(2.12\) & \(0.00\) \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics for the different approaches on the training set of COPA-SSE. The number of broken triples is reported as a fraction of all samples.

\begin{table} \begin{tabular}{l c c} \hline \hline **Approach** & **Avg. number of triples** & **Broken triples** \\ \hline Approach 1 & \(2.99\) & \(0.02\) \\ Approach 2 & \(3.03\) & \(0.00\) \\ Approach 3 & \(1.34\) & \(0.21\) \\ Approach 3-G & \(1.58\) & \(0.15\) \\ \hline Gold & \(4.23\) & \(0.00\) \\ \hline \hline \end{tabular} \end{table} Table 2: Statistics for the different approaches on the training set of ExplaGraphs. The number of broken triples is reported as a fraction of all samples.

## 6 Experiments

We now present experiments where we compare the discussed approaches to text-to-KG alignment for ExplaGraphs and COPA-SSE. As our PLM, we use BERT (Devlin et al., 2019) for all experiments. We use the base version and conduct a hyperparameter grid search for both tasks. We do the same search both with and without any appended graphs, as the former naturally makes it easier to overfit the data, especially since both ExplaGraphs and COPA-SSE are relatively small in size. The grid search settings can be found in Appendix A.2 and the final hyperparameters in Appendix A.3. We run all experiments over ten epochs with early stopping on validation loss with a patience value of five. As few-sample fine-tuning with BERT is known to show instability (Zhang et al., 2021), we run all experiments with ten random seeds and report the mean accuracy scores together with standard deviations. We use the same random seeds for both tasks; they can be found in Appendix A.4. We find that the experiments are highly susceptible to seed variation. Although we are able to match the performance of some previous work for the same PLM on some runs, this does not hold across seeds. Consequently, we also perform outlier detection and removal. Details on this procedure can be found in Appendix A.5.

## 7 Results

Table 4 shows the results on ExplaGraphs and COPA-SSE. For both datasets, we observe the following: methods primarily based on lexical overlap provide no definitive improvement. The performance of Approach 1 (string matching) and Approach 2 (string matching with added lemmatisation and stop word filtering) is within the standard deviation of the experiments without any appended graph data, and might even impede performance by introducing noise from the KG that is not relevant for the classification at hand, making it harder to fit the data. For Approach 3, based on a generative model, we see that it too provides little benefit for ExplaGraphs, but that when it has access to the gold annotation entities as the start and end points of the paths, it performs significantly better than having access to no graphs at all for COPA-SSE. For both tasks, having access to manually created graphs improves performance significantly.

## 8 Discussion

The most striking result is perhaps the performance of Approach 3-G on COPA-SSE. We hypothesise that this can be explained by the fact that annotators probably used exact spans from both the premise and the correct alternative in their graphs; consequently, the graphs provide a strong signal as to why there is a relation between the premise and the correct answer choice and not the wrong one. This is easily picked up by the model. For ExplaGraphs, which is a text classification problem, this is not the case: the appended graph might provide some inductive bias, but it does not provide a direct link to the correct choice, as the task is to assign a label to the whole sequence, not to choose the most probable sequence out of two options.
\begin{table} \begin{tabular}{l c c} \hline \hline **Approach** & **ExplaGraphs** & **COPA-SSE** \\ \hline No graph & \(69.67^{\pm 3.36}\) & \(67.05^{\pm 2.07}\) \\ Approach 1 & \(66.46^{\pm 8.48}\) & \(51.20^{\pm 2.08}\) \\ Approach 2 & \(70.03^{\pm 2.71}\) & \(53.33^{\pm 1.80}\) \\ Approach 3 & \(73.55^{\pm 1.66}\) & \(56.20^{\pm 8.39}\) \\ Approach 3-G & \(70.57^{\pm 3.27}\) & \(85.86^{\pm 0.75}\) \\ \hline Gold & \(80.28^{\pm 2.31}\) & \(96.60^{\pm 0.28}\) \\ \hline \hline \end{tabular} \end{table} Table 4: Results of the different approaches on ExplaGraphs and COPA-SSE. Results are reported as average accuracy over ten runs together with standard deviations after outlier removal, if any.

Figure 5: The graph aligned with ConceptNet for both the approaches based on lexical overlap. The original COPA-SSE context is _Premise: The women met for coffee; Alt 1: The cafe reopened in a new location; Alt 2: They wanted to catch up with each other_

\begin{table} \begin{tabular}{l c c} \hline \hline **Approach** & **ExplaGraphs** & **COPA-SSE** \\ \hline Approach 1 & \(0.39\) & \(0.32\) \\ Approach 2 & \(0.45\) & \(0.42\) \\ Approach 3 & \(0.48\) & \(0.45\) \\ Approach 3-G & \(0.55\) & \(0.46\) \\ \hline Gold & \(0.75\) & \(0.57\) \\ \hline \hline \end{tabular} \end{table} Table 3: The different graphs and their average cosine similarity with the original text.

This conclusion is further supported by the observation that appending the manually constructed graphs in their entirety has a much larger effect on COPA-SSE than on ExplaGraphs. Furthermore, for COPA-SSE, as pointed out in Table 1, the average number of triples per graph for the generative approaches is rather low, so the majority of the content of the aligned graphs from Approach 3-G actually comes from the manually written text, not from the model itself. The key finding of our experiments is that having access to structured knowledge relevant to the sample at hand, here represented by the gold annotations, provides a significant increase in performance, even with a simple injection technique and, judging by today's standards, a small pre-trained language model. They also show that for datasets with small sample sizes, such as ExplaGraphs and COPA-SSE, the results are susceptible to noise. As the approaches based on lexical overlap are within the standard deviations of the experiments without any appended graphs, it is not possible to conclude that they add any useful information to the model. Based on Figure 6, we think it is fair to conclude that these methods based on lexical overlap only provide a signal that has no relation to the correct label. As to why the approaches based on lexical matching do not have any effect here but reportedly have an effect in previous work on QA, there is one major reason that has not been discussed so far: namely that both datasets require knowledge that is not represented in ConceptNet. As shown by Bauer and Bansal (2021), matching the task with the right KG is important. It is reasonable to question whether ConceptNet, which aims to represent commonsense and world knowledge, does indeed contain information useful for deciding whether an argument counters or supports a belief, in the case of ExplaGraphs, or whether it can aid in the selection of the most likely follow-up scenario to a situation, in the case of COPA-SSE.
In Figure 5, both of the approaches based on lexical overlap (1 & 2) align exactly the same graph with the text context, and judging from the results, it is clear that the aligned graph has little to offer in terms of guiding the model towards the most likely follow-up.

## 9 Conclusion

In this work, we observe that the process of identifying and retrieving the most relevant information in a knowledge graph appears under a range of different names in the literature, and we propose the term text-to-KG alignment for it. We systematise current approaches to text-to-KG alignment and evaluate a selection of them on two different tasks where manually created graphs are available, providing insights into how they compare to a scenario where the aligned graph is completely relevant to the text. Our experiments show that having access to such a graph can help performance significantly, and that current approaches based on lexical overlap are unsuccessful under our experimental setup, but that a generative approach using a PLM to generate a graph from manually written start and end entities adds a significant increase in performance for multiple-choice tasks such as COPA-SSE. For the approaches based on lexical overlap, we hypothesise that the lack of a performance increase can be attributed to the choice of knowledge graph, in our case ConceptNet, which might not contain information useful for solving the two tasks.

Figure 6: The train loss curves for the different approaches on COPA-SSE.

### Limitations

While there is a lot of work on creating and making available large pre-trained language models for a range of languages, there are, to our knowledge, not many knowledge graphs for languages other than English -- especially general-knowledge ones like ConceptNet. This is a major limitation, as it restricts research to one single language and to the structured representation of knowledge found in the culture associated with that specific group of language users. Creating commonsense KGs from unstructured text is a costly process that requires financial resources for annotation, as well as available corpora to extract the graph from.

## Ethics Statement

We do not foresee that combining knowledge graphs with pre-trained language models in the way done here adds to any of the existing ethical challenges associated with language models. However, this rests on the assumption that the knowledge graph does not contain any harmful information that might inject or amplify unwanted behaviour in the language model.
2303.16002
Optimizing performance of quantum operations with non-Markovian decoherence: the tortoise or the hare?
The interaction between a quantum system and its environment limits our ability to control it and perform quantum operations on it. We present an efficient method to find optimal controls for quantum systems coupled to non-Markovian environments, by using the process tensor to compute the gradient of an objective function. We consider state transfer for a driven two-level system coupled to a bosonic environment, and characterize performance in terms of speed and fidelity. We thus determine the best achievable fidelity as a function of process duration. We show there is a trade-off between speed and fidelity, and that slower processes can have higher fidelity by exploiting non-Markovian effects.
Eoin P. Butler, Gerald E. Fux, Carlos Ortega-Taberner, Brendon W. Lovett, Jonathan Keeling, Paul R. Eastham
2023-03-28T14:20:12Z
http://arxiv.org/abs/2303.16002v2
# Optimizing performance of quantum operations with non-Markovian decoherence: the tortoise or the hare?

###### Abstract

The interaction between a quantum system and its environment limits our ability to control it and perform quantum operations on it. We present an efficient method to find optimal controls for quantum systems coupled to non-Markovian environments, by using the process tensor to compute the gradient of an objective function. We consider state transfer for a driven two-level system coupled to a bosonic environment, and characterize performance in terms of speed and fidelity. We thus determine the best achievable fidelity as a function of process duration. We show there is a trade-off between speed and fidelity, and that slower processes can have higher fidelity by exploiting non-Markovian effects.

The control of quantum systems using time-dependent Hamiltonians is crucial for quantum technologies [1], enabling the implementation of state transfer and gate operations. An important task is to establish how optimal performance can be achieved for such processes. In an ideal closed quantum system perfect operations are possible given sufficient time [2]. A speed limit arises because physical Hamiltonians are bounded, so that energy-time uncertainty gives a maximum rate of time-evolution and hence a minimum operation time. Beyond this ideal case, however, other considerations arise. One is a desire for reliable operation when precise control cannot be guaranteed; this can be achieved at the expense of speed by using adiabatic processes [3]. Another is the impact of decoherence and dissipation. In the standard Markovian description of such processes, this results in effects which accumulate in time. In most cases this means the highest achievable fidelity is then obtained by operating at the speed limit. In the present Letter, we show that this is not the case in non-Markovian systems, and that the highest fidelity can be improved by slower operation. To provide a concrete demonstration of the trade-off between speed and fidelity we use numerical optimal control to explore the achievable performance for a non-Markovian system, consisting of a driven two-level system interacting with a bosonic environment. Optimal control involves determining a set of time-dependent control fields that maximize an objective function such as the fidelity. Here, we show that this can be done effectively in a non-Markovian system, using an extension of our previously introduced process tensor approach [4] to efficiently compute the gradient of the objective function. This allows us to repeatedly optimize over many hundreds of control parameters for different process durations, and so find the achievable fidelity as a function of the process duration. Our results reveal a marked improvement in fidelity for process durations \(T\) longer than a value \(T_{0}\), set by the speed limit in the closed system [2]. We further compute a widely-used measure of non-Markovianity, based on trace distance [5], and show that the improved fidelity coincides with an increase in non-Markovianity. Since many types of device exhibit regimes of non-Markovian decoherence, our results could enable performance improvements across a range of quantum technologies [6], including superconducting circuits [7; 8], spins [9], quantum dots [4; 10], color centers in diamond [11], and cold atoms [12; 13; 14].
Our approach applies to a generic open quantum system, comprising a Hamiltonian \(H_{S}\), interacting with an environment (bath) with Hamiltonian \(H_{B}\), through a coupling \(H_{SB}\). Models of this form are typically treated under the assumptions of weak system-bath coupling (Born) and short bath correlation time (Markov). This implies that information about the system only flows away into the environment, and does not return, leading to simple time-local evolution equations for the system reduced density matrix \(\rho_{S}\)[15]. Such equations have been used to establish performance limits [16; 17; 18] using optimal control [19] in various problems. Studying optimal control beyond the Born-Markov approximation has so far been difficult. One approach is to expand the system to incorporate a few modes of the environment, which are then treated exactly, with the remaining environment modes providing Markovian damping. This approach has been used to study controllability [20], optimal control [21; 22] and speed limits [18]. However, it becomes intractable when one considers more than a few modes of the environment. Another method involves computing the time-local dissipator describing non-Markovian dynamics using lowest-order perturbation theory [23] or the hierarchical equations-of-motion [24]. These techniques have been used within optimal control to demonstrate fidelity increases, and are effective when the environment spectral density can be approximated by a small number of Lorentzians. Some works [25; 26; 27] have obtained optimal protocols under the assumption that the dissipation is described by a fixed time-local dissipator, such as that for a pure dephasing channel [28; 29]. An issue, however, is that for optimal control one must consider the effect of the time-dependent control fields on the dissipative processes [30]. To overcome these challenges, in this Letter we extend the process tensor method described in our previous work [4] for solving the dynamics of non-Markovian open systems. The process tensor (PT) [31; 32] is a mathematical object that encodes the full influence of an environment on a system, without assuming any Markovian approximation. It is a multilinear map from the set of all possible system control operation sequences to the resulting output states. It is constructed by discretizing the time evolution into a series of time steps. Generically, the PT is a high-rank tensor that grows exponentially with the number of time steps. In many cases, however, it is possible to systematically discard negligible correlations and express the PT in matrix product operator (MPO) form [33; 34], allowing a numerically efficient representation. For the case of a Gaussian bosonic environment one can efficiently compute a PT in MPO form (PT-MPO) employing the method introduced in [35; 36; 37; 38]. However, other methods for bosonic, fermionic, and spin environments are available too [4; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49]. Once computed, the PT can be contracted to time-evolve \(\rho_{S}\) for any \(H_{S}\), as shown in Fig. 1(a). In this tensor network diagram [34] each node represents a tensor, each leg represents an index, and connections between legs correspond to contractions. The diagram is in Liouville space so that operators such as \(\rho_{S}\) are rank-1 objects, and maps between operators, such as the propagators across each time step \(U_{t}=e^{\Delta t\mathcal{L}_{S}(t)}\) with \(\mathcal{L}_{S}(t)\bullet=-i[H_{S}(t),\bullet]\), are rank-2.
The PT is the region in the dashed box, which can be contracted with an initial density matrix and the Trotterized propagators [50] to obtain \(\rho_{S}\) at later times. For optimal control we must define an objective function. In the following we consider the example of a state transfer process, for which we use the fidelity \(\mathcal{F}\)[51] between the desired target state and the obtained final state \(\rho_{f}=\rho_{S}(t_{f})\). We restrict ourselves to pure target states, with density matrix \(\sigma=|\sigma\rangle\langle\sigma|\), so that \(\mathcal{F}=\langle\sigma|\rho_{f}|\sigma\rangle\). In Liouville space this is a scalar formed from the vectors representing the final density matrix and the transposed target density matrix. The fidelity then corresponds to the network shown in Fig. 1(a) with the two legs marked with arrows joined. An optimal protocol is found by numerically maximizing \(\mathcal{F}\) over \(N\) control parameters which determine the system propagators at each discrete time step. Such numerical optimization becomes significantly faster if one can efficiently calculate the gradient of \(\mathcal{F}\) with respect to the \(N\)-dimensional vector of control parameters, \(\nabla\mathcal{F}\). A naive calculation of \(\nabla\mathcal{F}\) requires of order \(N\) evaluations of \(\mathcal{F}\), strongly limiting the size of problems that can be treated. Such gradients can however be efficiently obtained using the adjoint method [52], which has been applied to unitary or Markovian dynamics [53]. Crucially, one may see that the tensor network representation of \(\mathcal{F}\) in Fig. 1(a) leads immediately to the form of adjoint method required for non-Markovian dynamics. This diagram represents the fidelity as a multilinear functional of the propagators; thus the derivative with respect to the propagator at a given time step is the same diagram with the node for that propagator removed, as shown in Fig. 1(b). Furthermore, all such derivatives can be constructed from two sets of tensors found as follows. The first set comes from evaluating the network in the typical way, proceeding from the initial state forwards in time, and then storing the two-legged tensors obtained after each time step. The second set of tensors is found by starting from the target state and propagating backwards in time to a given time step. One can then contract one index in each pair of stored tensors to obtain the derivative of \(\mathcal{F}\) with respect to all the propagators--and hence the gradient with respect to any and all choices of controls in \(H_{S}\)[32]. The key difference compared to the use of the adjoint method in unitary or Markovian dynamics is that we propagate a rank-2 tensor rather than the state \(\rho_{S}\). The additional index in this tensor encodes information about the past and future dynamics and enables optimization of non-Markovian evolutions. To illustrate the general principle, we consider an example of optimal state transfer in a driven two-level system. Our system Hamiltonian is \(H_{S}=h_{x}(t)s_{x}+h_{z}(t)s_{z}\), where the fields \(h_{x,z}(t)\) are the controls. We treat them as piecewise constant with values \(h_{x,z}(t_{n})\) at the time steps \(t_{n}=n\Delta t\). We seek to determine the control fields which steer the state \(\rho_{S}(t)\) from a prescribed initial state (the state with \(s_{x}=-1\)) to a target state (the state \(s_{x}=+1\)).
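As a minimal, self-contained sketch of this control problem, the snippet below optimizes the piecewise-constant fields for the closed (unitary) two-level system under bounded controls. Plain unitary propagation stands in for the process-tensor contraction, and the finite-difference gradients that L-BFGS-B falls back on stand in for the adjoint-method gradient; the time step, number of steps, and the deliberately detuned starting guess are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)    # spin-1/2 operators
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

dt, n = 0.1, 40                                         # T = 4 ps > T_0 = pi ps
psi0 = np.array([1, -1], dtype=complex) / np.sqrt(2)    # s_x = -1 eigenstate
target = np.array([1, 1], dtype=complex) / np.sqrt(2)   # s_x = +1 eigenstate

def infidelity(c):
    """1 - F for piecewise-constant controls c = (h_x(t_1..t_n), h_z(t_1..t_n))."""
    hx, hz = c[:n], c[n:]
    psi = psi0
    for k in range(n):
        psi = expm(-1j * dt * (hx[k] * sx + hz[k] * sz)) @ psi
    return 1.0 - abs(np.vdot(target, psi)) ** 2

bounds = [(-5.0, 5.0)] * n + [(-1.0, 1.0)] * n          # |h_x| <= 5, |h_z| <= 1 ps^-1
c0 = np.concatenate([np.zeros(n), np.full(n, 0.5 * np.pi / (n * dt))])  # half the pi area
res = minimize(infidelity, c0, method="L-BFGS-B", bounds=bounds)
print(f"infidelity: {infidelity(c0):.3f} -> {res.fun:.2e}")
```

In the open-system setting the same outer loop applies, but the objective and its gradient are evaluated by contracting the PT-MPO forwards and backwards as described above.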
Figure 1: (a) Tensor network representation of the propagation of an initial state over four time steps and calculation of the fidelity. Time increases going up the diagram. The evolved density matrix is given by this diagram without the target state node. The fidelity is obtained by joining the legs indicated by arrows. (b) Derivative of the fidelity with respect to the penultimate propagator and computation of the gradient by backpropagation (see text).

The bath is a set of harmonic oscillators, and the coupling is \(H_{SB}=s_{z}\sum_{q}(g_{q}b_{q}+g_{q}^{*}b_{q}^{\dagger})\). This model is known as the spin-boson model, and it describes many different physical systems [54, 55]. For definiteness, we suppose it represents an optical transition on a quantum dot driven by a laser pulse [56], in which case \(h_{z}\) and \(h_{x}\) are the time-dependent detuning and Rabi frequency respectively. In the quantum dot the bosonic modes are acoustic phonon modes, for which we use the super-Ohmic spectral density \(J(\omega)=2\alpha\omega^{3}\omega_{c}^{-2}\exp\left[-\omega^{2}/\omega_{c}^{2}\right]\) with \(\alpha=0.126\) and \(\omega_{c}=3.04\,\mathrm{ps}^{-1}\)[57, 58], and use a bath temperature of \(5\,\mathrm{K}\). We have determined optimal controls \(h_{x}(t_{n}),h_{z}(t_{n})\) by numerically maximizing \(\mathcal{F}\) using the L-BFGS-B algorithm [59], with the gradient and fidelity computed with the PT. We explore the impact of speed on fidelity by performing this optimization for different process durations \(T\), with bounds on the fields, \(|h_{x}|\leq h_{x}^{\mathrm{max}}=5\,\mathrm{ps}^{-1}\) and \(|h_{z}|\leq h_{z}^{\mathrm{max}}=1\,\mathrm{ps}^{-1}\), to consider the realistic case in which the Hamiltonian, and therefore the speed of the unitary evolution, is restricted. We note that in the unitary case, the speed limit for state transfer has been identified from the rate of convergence of the Krotov algorithm [16]. Here we consider a fully non-Markovian dissipative system, and use numerically converged optimization to identify the speed limit from the behavior of the fidelity against process duration. Figure 2(a) shows the resulting infidelity, \(1-\mathcal{F}\), for this optimized protocol, denoted "Control A". It is interesting to compare the optimized results with those for the protocols which are optimal for unitary dynamics in the closed system [60], denoted "Control B", which we used as the starting point for the optimization. These protocols can be understood by noting that for unitary dynamics one can achieve the state transfer with infidelity zero by setting \(h_{x}=0\) and requiring an area \(\pi\) for the time integral of the field \(h_{z}(t)\). Such a protocol is optimal if it satisfies the constraint \(|h_{z}(t)|<h_{z}^{\mathrm{max}}\) for all times, which is possible when \(T\) is greater than the speed limit time \(T_{0}=\pi/h_{z}^{\mathrm{max}}\) (which saturates both the Mandelstam-Tamm and Margolus-Levitin bounds [2]). Among protocols satisfying the above condition, we choose \(h_{z}(t)=\pi/T\). For \(T<T_{0}\) the state transfer is not fully achievable and the optimal protocol is that with \(h_{z}(t)=\pm h_{z}^{\mathrm{max}}\). Thus, in both regimes, we have an optimal protocol with constant fields. As can be seen in Fig. 2(a), the infidelity of this protocol (Control B) for the unitary dynamics smoothly decreases as the duration \(T\) increases from zero, before saturating at zero for \(T>T_{0}\).
Applying this same protocol in the open system gives a similar overall behavior, with the key difference being that the saturated value of the infidelity at large \(T\) is now non-zero. This behavior differs from that with Markovian decoherence, where slower processes would be expected to produce higher infidelity, i.e. growing with \(T\). In the "Control B" protocol, we have \(h_{z}(t)=h_{z}\) and \(h_{x}=0\), so that the model is the exactly solvable independent-boson model [61, 62], which has non-Markovian decoherence. The constant infidelity for \(T>T_{0}\) comes from the decoherence function of the independent-boson model, which approaches a non-zero constant for times \(T\gg\tau_{b}\sim 1/\omega_{c}\). Figure 2 shows that the optimization increases the performance for all process durations, as one would expect, but the increase becomes marked when \(T>T_{0}\). This is because the fidelity in this regime is limited not by the distance to the target state and the speed of unitary evolution under \(H_{S}\), but by the decoherence and dissipation produced by the bath. The improved fidelity thus corresponds to suppressing decoherence. An approach to suppressing decoherence one might consider is Dynamical Decoupling [62] (DD). Such a protocol uses control fields to produce dynamics for the system that averages away the effects of the bath, which requires that the timescale of the system is much shorter than that of the bath \(\tau_{b}\). This regime is not accessible in our optimization due to the bounds on the control fields. The standard DD sequence for the process we consider would be a train of \(\pi\) pulses separated by a time \(\tau\ll\tau_{b}\) and each with duration \(\tau_{p}\ll\tau\); this would produce rapid spin-flips and so average away \(H_{SB}\propto s_{z}\). Thus standard DD requires a field \(|h_{x}|\gg\pi/\tau_{b}\), which with \(\tau_{b}\sim 1/\omega_{c}\sim 0.3\,\mathrm{ps}\) is greater than \(h_{x}^{\max}\).

Figure 2: (a) Infidelity, \(1-\mathcal{F}\), for state transfer as a function of process duration with bounded control fields. Results are shown for the optimal protocol of the non-Markovian spin-boson model (control A) and for driving with a constant field (control B). The latter is an optimal protocol in the closed system. Blue/solid: infidelity of control A in the spin-boson model. Orange/dashed: infidelity of control B in the spin-boson model. Green/dot-dashed: infidelity of control B in the closed system. (b) Non-Markovianity measure as a function of process duration in the spin-boson model for controls A (blue/solid) and B (orange/dashed). The vertical dotted lines are the speed limit time \(T_{0}\) of the closed system.

To explore the mechanism that is giving the improvement in fidelity we compute a measure of non-Markovianity. There are many such interrelated measures [63; 64; 55] including, among others, those based on divisibility and complete positivity, and those based on information backflow. We choose that introduced by Breuer, Laine and Piilo [5], formed from the trace distance between pairs of states \(D_{12}=\mathrm{Tr}\,|\rho_{1}(t)-\rho_{2}(t)|/2\), where \(\rho_{1}(t)\) and \(\rho_{2}(t)\) are obtained by time-evolving initial states \(\rho_{1}\) and \(\rho_{2}\) respectively.
The non-Markovianity is \[\mathcal{N}=\max_{\rho_{1},\rho_{2}}\int\frac{dD_{12}}{dt}dt,\] where the integral extends over regions where the integrand is positive, and the maximum is over all pairs of orthogonal states on the surface of the Bloch sphere (which is the same as the maximum over all pairs of initial states, see [65]). The two curves in Fig. 2(b) show the non-Markovianity as a function of process duration for the optimal controls (solid) and the constant field (dashed). For the constant field case, control B, the non-Markovianity increases from zero as the process duration increases from zero, becoming constant for \(T\gtrapprox 2\,\mathrm{ps}\). This corresponds to the build-up of correlations due to the system-bath coupling in the independent boson model, and is unaffected by the system dynamics since \([H_{S},H_{SB}]=0\) when \(h_{x}=0\). The non-Markovianity with the optimal control A is higher, but initially shows a similar increase and saturation as the duration \(T\) increases from zero. However, there is then a marked feature at \(T\approx T_{0}\), beyond which the non-Markovianity increases rapidly with process duration before reaching a new and much higher level. This suggests that the improvement in fidelity for long process durations occurs because the optimization increases the degree of non-Markovianity of the map, restoring some of the information lost to the environment during the earlier parts of the dynamics by the end of the process. Figure 3 shows the optimized control fields (top panel), and the dynamics of the Bloch vector \(\langle\sigma_{x,y,z}\rangle\), for the optimal protocol, control A, at \(T=4.0\,\mathrm{ps}\). For this duration, the optimal \(h_{z}\) remains constant, while the optimized \(h_{x}\) has an overall slope and oscillations of varying magnitude, which saturate the bounds at the beginning and end of the process. The resulting dynamics of the Bloch vector moves down below the equator during the control and develops weak oscillations. The clearest effect of the optimization is seen in the length of the Bloch vector, which is shown for both protocols A and B. In the latter case (constant field) the length of the Bloch vector is given by the decoherence function of the independent boson model, which goes from one, for zero time, to a constant non-zero value at late times. The optimal control A clearly avoids this decay and furthermore shows an increase in the Bloch-vector length at the very end of the pulse, consistent with a decoherence suppression mechanism. The control in \(h_{x}\) is very different from that for a standard DD sequence as described above, although it does act to produce some oscillations in the Bloch vector which may have a similar averaging effect. The oscillations in the Bloch vector and control fields become more pronounced (and saturate the bound) as we move to longer process durations. The behavior shown is otherwise representative of what we find for other process durations [32]. The consistency in the form of these solutions suggests that there is a unique global optimum that is found by our optimization process for durations near the speed limit. This is expected to break down for slower processes [66]. In conclusion, we have presented a method for computing optimal controls in general non-Markovian systems. It extends the PT method [4] to efficiently compute gradients, thereby permitting optimization over large numbers of control parameters.
The method takes full account of the changes in the dissipator produced by the control fields, allowing it to discover protocols which exploit non-Markovianity for improved performance. We have used it to investigate the performance of a state transfer process in a time-dependent spin-boson model and find that, contrary to the case of a Markovian system, slow processes can perform better than fast ones. The identified controls increase the degree of non-Markovianity during the state transfer, as measured by the growth of trace distance, and suppress the decoherence, as measured by the decay of the Bloch vector length. These results show that performance improvements are possible across the many qubit implementations subject to non-Markovian decoherence, and could be identified using the methods described here.

Figure 3: (a) Driving fields \(h_{x}(t)\) (green) and \(h_{z}(t)\) (orange) for the optimal state transfer process, control A, of duration \(4.0\) ps. (b) Dynamics of the components of the Bloch vector (left axis, green/solid: \(\langle\sigma_{x}\rangle\), blue/dot-dashed: \(\langle\sigma_{y}\rangle\), orange/dashed: \(\langle\sigma_{z}\rangle\)), and length of the Bloch vector (right axis, black/solid). The length of the Bloch vector for control B is also shown (right axis, black/dashed).

We acknowledge discussions with F. Binder. E. P. B. acknowledges support from the Irish Research Council (GOIPG/2019/1871). G. E. F. acknowledges support from EPSRC (EP/L015110/1). B. W. L. and J. K. acknowledge support from EPSRC (EP/T014032/1). P. R. E. acknowledges support from Science Foundation Ireland (21/FFP-P/10142).
2301.07776
Parametric insurance for extreme risks: the challenge of properly covering severe claims
Parametric insurance has emerged as a practical way to cover risks that may be difficult to assess. By introducing a parameter that triggers compensation and allows the insurer to determine a payment without estimating the actual loss, these products simplify the compensation process, and provide easily traceable indicators to perform risk management. On the other hand, this parameter may sometimes deviate from its intended purpose, and may not always accurately represent the basic risk. In this paper, we provide theoretical results that investigate the behavior of parametric insurance products when faced with large claims. In particular, these results measure the difference between the actual loss and the parameter in a generic situation, with a particular focus on heavy-tailed losses. These results may help to anticipate, in presence of heavy-tail phenomena, how parametric products should be supplemented by additional compensation mechanisms in case of large claims. Simulation studies, that complement the analysis, show the importance of nonlinear dependence measures in providing a good protection over the whole distribution.
Olivier Lopez, Maud Thomas
2023-01-18T20:33:06Z
http://arxiv.org/abs/2301.07776v1
# Parametric insurance for extreme risks: the challenge of properly covering severe claims

###### Abstract

Parametric insurance has emerged as a practical way to cover risks that may be difficult to assess. By introducing a parameter that triggers compensation and allows the insurer to determine a payment without estimating the actual loss, these products simplify the compensation process, and provide easily traceable indicators to perform risk management. On the other hand, this parameter may sometimes deviate from its intended purpose, and may not always accurately represent the basic risk. In this paper, we provide theoretical results that investigate the behavior of parametric insurance products when faced with large claims. In particular, these results measure the difference between the actual loss and the parameter in a generic situation, with a particular focus on heavy-tailed losses. These results may help to anticipate, in presence of heavy-tail phenomena, how parametric products should be supplemented by additional compensation mechanisms in case of large claims. Simulation studies, that complement the analysis, show the importance of nonlinear dependence measures in providing a good protection over the whole distribution.

**Key words:** Parametric insurance; extreme value analysis; risk theory; copula theory.

**Short title:** Parametric insurance and extreme risks.

\({}^{1}\) Sorbonne Universite, CNRS, Laboratoire de Probabilites, Statistique et Modelisation, LPSM, 4 place Jussieu, F-75005 Paris, France

## 1 Introduction

Parametric insurance is a very elegant and efficient way to simplify risk management in situations where the assessment of losses might be difficult (see e.g. Lin and Kwon, 2020). Parametric solutions have in particular been developed in the case of natural disasters (see e.g. Van Nostrand and Nevius, 2011; Horton, 2018). A typical example is the case of a hurricane striking some area. The damages of such an episode can be very difficult to assess, leading to expertise costs and delays in the compensation process. The solution proposed by parametric insurance is not to cover the actual losses directly, but to work with some "parameter", that is, a quantity related to the loss and easily measurable. In the case of natural disasters, wind speed, precipitation level, or any index based on relevant physical quantities can be used. Figueiredo et al. (2018) described a detailed methodology in the example of parametric flood insurance in Jamaica. Since the parameter can be measured instantly (or in a short period of time), payment can be made more quickly. Furthermore, when it comes to assessing the risk, the situation is greatly simplified if one is working with a readily available quantity that can be tracked and modeled through standard actuarial methods. This explains the growing popularity of these solutions, which are now widely used (see e.g. Prokopchuk et al., 2020; Broberg, 2020; Bodily and Coleman, 2021). Moreover, parametric products open the way to insurance-linked securities, with the example of Catbonds (see e.g. Jarrow, 2010; Albrecher et al., 2004; Zimbidis et al., 2007). Nevertheless, parametric insurance is not a miracle solution. Reducing the volatility of the outcome has a cost. One of the difficulties is to convince the policyholder of the relevance of a guarantee based on a given parameter.
The attractiveness of such contracts may be reduced by the fear of the customers that the policy does not adequately cover the very risk they want to be protected against. This is particularly true if the parameter is complex and may not be fully understandable. In this case, simplifying the compensation process is not necessarily worth the loss in terms of protection. In addition, there are miscalculations, such as those reported by Johnson (2021), that can discourage policyholders. Johnson (2021) details some "Ex gratia" payments that are sometimes activated to limit the impact of these inconveniences. The aim of this article is to study, in a general simplified framework, under which conditions parametric insurance may (or may not) still provide good protection against risk in case of large claims, taking the point of view of the policyholder. By large claims, we mean that the actual loss of the policyholder is large. These situations, which deviate from the central scenario that is supposed to guide the calibration of the parameter-based payoff, require special attention because they correspond to situations that may not be the most likely, but that correspond to important policyholder concerns: if the potential customers feel that they are not adequately covered in case of severe events--which is at the heart of the decision to use insurance--they may be reluctant to purchase such solutions. In this work, we focus on two special cases. In the first case, the loss variable is supposed to have a Gaussian tail. In this situation, significant deviations from the central scenario are of low probability. Therefore, simply working on the correlation between the parameter and the loss is sufficient to improve the risk coverage. However, in the second case, losses with heavy tails are more challenging. Our results show that extreme losses may be difficult to capture, unless the parameter is able to reflect this heaviness. These results are consistent with practical observations from Johnson (2021), and can be seen as a first step toward improving parametric products by anticipating the disappointment of some policyholders in case of very large claims. We illustrate these properties with a simulation study inspired by the case of cyber risk, specifically data breach insurance. In this case, the volume of data that are breached can be related to an estimated cost, and using this indicator to design parametric insurance products would make sense. The results are also illustrated on a real data set for extreme flood events. The rest of the paper is organized as follows. In Section 2, we discuss the general framework we consider for evaluating the performance of a parameter used in parametric insurance. In particular, we focus on the question of the dependence between the parameter and the actual loss, which is essential to obtain satisfactory properties. Section 3 gathers some theoretical results to measure how the parametric solution is able to approximate the true loss when the amount of the latter is high. The simulation study illustrating these properties in the case of cyber insurance is described in Section 4. Finally, we provide a real data example on extreme flood events in Section 5. The proofs of the technical results are gathered in the appendix, in Section 7.

## 2 Parametric insurance model

In this section, we explain the general framework used to model the difference between the actual loss and the payment made via the parametric insurance product.
The general framework is described in Section 2.1. The key issue of the dependence between the two variables (actual loss and payment) is addressed in Section 2.2, which introduces tools from copula theory.

### Description of the framework

In the following, we consider a random variable \(X\) representing the actual loss incurred by a policyholder. Parametric insurance relies on the fact that \(X\) may be difficult to measure. In case of a natural disaster, \(X\) may be, for example, the total cost of the event on a given area. It may take time to accurately estimate this cost (if even possible), and the idea is rather to pay a cost \(Y\) that is not exactly \(X,\) but is supposed to reflect it. \(Y\) is supposed to be a variable that is easier to measure. For example, the precipitation level \(\theta,\) or other weather variables, can be obtained instantaneously, and a payoff can be derived from it, that is, in this case, \(Y=\phi(\theta)\) for a given non-decreasing function \(\phi.\) We will use the term "parameter" to refer to the random variable \(\theta.\) Ideally, we would like \(Y\) to be close to \(X.\) Another advantage of this approach is the potential reduction of volatility: paying \(Y\) instead of \(X\) is interesting in terms of risk management if the variance \(\sigma_{Y}^{2}\) of \(Y\) is small compared to the variance \(\sigma_{X}^{2}\) of \(X.\) Of course, if the variance of \(Y\) is too small compared to that of \(X,\) the quality of the approximation of \(X\) by \(Y\) may decrease, since the distribution of \(Y\) does not match that of \(X.\)

### Dependence

For the parameter \(\theta,\) or, more precisely, the payoff \(Y=\phi(\theta),\) to accurately describe the risk, \(X\) and \(Y\) must be dependent. The simplest way to describe this dependence is by correlation. More precisely, let \(\rho\) be the correlation coefficient of \(X\) and \(Y\) defined by \(\rho=\mathrm{Cor}(X,Y)=\mathrm{Cov}(X,Y)\sigma_{X}^{-1}\sigma_{Y}^{-1},\) where \(\sigma_{X}^{2}=\mathrm{Var}[X]\) and \(\sigma_{Y}^{2}=\mathrm{Var}[Y]\). Considering the quadratic loss, we have

\[\mathbb{E}[(X-Y)^{2}]=\sigma_{X}^{2}+\sigma_{Y}^{2}-2\rho\sigma_{X}\sigma_{Y}+(\mathbb{E}[X]-\mathbb{E}[Y])^{2}.\]

Hence, increasing this correlation reduces the loss. However, correlation is known to be a measure of dependence that primarily considers the center of the distribution, but not the tail. When faced with a large claim, that is, when \(X\) is far from its expectation, correlation is not enough to ensure that \(Y\) remains close to its target. To illustrate this issue, consider the case of covering claims where \(X\) exceeds a deductible \(x_{0}.\) If \(X\) is not observed and the insurance product is based on the parameter \(\theta,\) the insurance company will make some mistakes: sometimes a payout will be initiated when \(X<x_{0},\) and sometimes no payout will occur even if \(X\geq x_{0}.\) This is because \(\theta\) is just a proxy for \(X\): the payment is actually triggered when \(\theta\geq t_{0},\) and the event \(\{\theta\geq t_{0}\}\) does not exactly match the event \(\{X\geq x_{0}\}.\) A good parameter must be such that \(\pi_{+}(t_{0},x_{0})=\mathbb{P}(\theta\geq t_{0}|X\geq x_{0})\) and \(\pi_{-}(t_{0},x_{0})=\mathbb{P}(\theta<t_{0}|X<x_{0})\) are both close to \(1\). Maximizing \(\pi_{+}\) is supposed to improve the satisfaction of the policyholder: coverage is obtained for almost all claims that are significant.
On the other hand, a high value of \(\pi_{-}\) guarantees that the insurer will not pay for claims that were initially beyond the scope of the product. Let us introduce the cumulative distribution function (c.d.f.) \(F_{\theta}(t)=\mathbb{P}(\theta\leq t)\) (resp. \(F_{X}(x)=\mathbb{P}(X\leq x)\)) defining the distribution of \(\theta\) (resp. of \(X\)), and the joint c.d.f. \(F_{\theta,X}(t,x)=\mathbb{P}(\theta\leq t,X\leq x).\) A common and general way to describe the dependence structure between \(\theta\) and \(X\) is to use copulas. The copula theory is based on the fundamental result of Sklar (1959) which states that

\[F_{\theta,X}(t,x)=\mathfrak{C}(F_{\theta}(t),F_{X}(x)), \tag{2.1}\]

where \(\mathfrak{C}\) is a copula function, that is, the joint distribution function of a two-dimensional variable on \([0,1]^{2}\) with uniformly distributed margins. The decomposition is unique if \(\theta\) and \(X\) are continuous, which is the assumption we make in the following. Hence, (2.1) shows that there is a separation between the marginal behavior of \((\theta,X),\) and the dependence structure contained in \(\mathfrak{C}.\) Many parametric families of copulas have been proposed to describe various forms of dependence (see e.g. Nelsen, 2007). Let us write \(\pi_{+}\) and \(\pi_{-}\) in terms of copulas. Introducing the survival functions \(S_{\theta}(t)=1-F_{\theta}(t),\) \(S_{X}(x)=1-F_{X}(x),\) and \(S(t,x)=\mathbb{P}(\theta\geq t,X\geq x),\) we have

\[\pi_{+}(t_{0},x_{0}) = \frac{\mathfrak{C}^{*}(S_{\theta}(t_{0}),S_{X}(x_{0}))}{S_{X}(x_{0})},\]
\[\pi_{-}(t_{0},x_{0}) = \frac{F_{X}(x_{0})-S_{\theta}(t_{0})+S(t_{0},x_{0})}{F_{X}(x_{0})},\]

where \(\mathfrak{C}^{*}\) is the survival copula associated with \(\mathfrak{C},\) that is,

\[\mathfrak{C}^{*}(v,w)=v+w-1+\mathfrak{C}(1-v,1-w).\]

To relate this to classical dependence measures for \(\pi_{+},\) consider the case where \(S_{\theta}(t_{0})=S_{X}(x_{0})=u.\) In this situation, a large value of \(\pi_{+}\) means that the deductible on \(\theta\) that we use (based on a quantile of the distribution of \(\theta\)) has approximately the same effect as a deductible directly on \(X\) (based on the same quantile). \(\pi_{+}\) close to \(1\) yields \(\mathfrak{C}^{*}(u,u)/u\approx 1.\) If \(u\) is small (which means that we are focusing on higher quantiles, that is, large claims), \(\pi_{+}\) becomes close to

\[\lambda=\lim_{u\to 0}\frac{\mathfrak{C}^{*}(u,u)}{u}=\lim_{u\to 0}\mathbb{P}(\theta\geq S_{\theta}^{-1}(u)|X\geq S_{X}^{-1}(u)),\]

which is the upper tail dependence coefficient (see Nelsen, 2007). This shows that if we focus on large claims, correlation is not sufficient to correctly represent the risk; tail dependence seems more relevant. In this discussion, we have only focused on what triggers the payment, that is, when \(\theta\geq t_{0}.\) However, the difference between the payment \(Y=\phi(\theta)\) and the actual loss \(X\) must also be studied, which is the purpose of the next section.

**Remark 2.1**: _\(\pi_{-}\) can also be expressed in terms of a copula, but is more difficult to link to classical dependence measures. Our scope being essentially to focus on the tail of the distribution and on the potential difference between what the customer expects and what the customer gets, we do not develop this point._
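As a numerical illustration of this point, the sketch below estimates \(\mathfrak{C}^{*}(u,u)/u\) for a Gaussian copula, which is known to have zero upper tail dependence whenever \(\rho<1\); the correlation level and sample size are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
rho, n = 0.8, 10**6

# Gaussian copula sample: transform correlated normals to uniform margins
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u1, u2 = norm.cdf(z[:, 0]), norm.cdf(z[:, 1])

# Empirical C*(u,u)/u: despite the strong correlation, it decays toward 0
# as u -> 0, i.e. the copula has no upper tail dependence
for u in [0.1, 0.01, 0.001]:
    joint = np.mean((u1 >= 1 - u) & (u2 >= 1 - u))
    print(f"u = {u:5.3f}:  C*(u,u)/u = {joint / u:.3f}")
```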
## 3 Difference between the actual loss and the payoff based on the parameter

In this section, we provide theoretical results to quantify the difference between \(X\) and \(Y\) when a claim is large, that is, when \(X\) is large. The quantities measured are defined in Section 3.1. We then consider two types of distribution: Gaussian variables (Section 3.2) are used as a benchmark, while heavy-tailed variables are considered in Section 3.3.

### Measure the difference

In the following, we consider two different quantities to measure the distance between \(Y=\phi(\theta)\) and \(X\) for large claims, that is, when \(X\) exceeds some high value \(s.\) The first measure is \(\mathbb{E}[X-Y|X\geq s].\) The advantage of this measure is that it shows whether \(Y\) tends to be smaller or larger than \(X.\) On the other hand, the conditional bias \(\mathbb{E}[X-Y|X\geq s]\) can be zero (if \(\mathbb{E}[Y|X]=X\)) while the conditional variance can be large, leading to potentially huge gaps between \(X\) and \(Y\) in practice. For this reason, we also consider a classical quadratic loss, that is, \(\mathbb{E}[(X-Y)^{2}|X\geq s].\) Note that this quadratic loss may not be defined for distributions that have too heavy a tail (this is also the case for \(\mathbb{E}[X-Y|X\geq s],\) which may not be defined if the expectation is infinite, but the assumption of a finite variance further restricts the set of distributions). To understand how the approximation made by the parametric approach deteriorates when \(X\) is large, we will next derive asymptotic approximations of these quantities when \(s\) tends to infinity.

### Gaussian losses

In this section, we assume that \((X,Y)\) are Gaussian variables with

\[\left(\begin{array}{c}X\\ Y\end{array}\right)\sim\mathcal{N}\left(\left(\begin{array}{c}\mu_{X}\\ \mu_{Y}\end{array}\right),\left(\begin{array}{cc}\sigma_{X}^{2}&\rho\sigma_{X}\sigma_{Y}\\ \rho\sigma_{X}\sigma_{Y}&\sigma_{Y}^{2}\end{array}\right)\right). \tag{3.1}\]

Considering such a framework is not entirely realistic, in the sense that \(X\) and \(Y\) could take negative values with a non-zero probability. Nevertheless, if \(\mu_{X}\) and \(\mu_{Y}\) are large enough, the probability of such an event is quite small. The Gaussian case is considered here mainly because it gives us an example of variables strongly concentrated around their expectation, in order to measure the difference with the heavy-tailed variables of Section 3.3. In addition, another motivation for considering Gaussian variables is the Central Limit Theorem. If \(X\) consists of the aggregation of individual claims, that is, \(X=\sum_{i=1}^{n}Z_{i},\) where \((Z_{i})_{1\leq i\leq n}\) are independent identically distributed losses, the Central Limit Theorem states that \(X\) is approximately distributed as a Gaussian random variable with mean \(n\mathbb{E}[Z_{1}]\) and variance \(n\mathrm{Var}(Z_{1}),\) provided that \(n\) is sufficiently large. A Gaussian limit can also be obtained under certain weak forms of dependence for these aggregated losses. This requires, of course, that the variance of \(Z_{1}\) is finite. A specificity of Gaussian random vectors is that their dependence structure is solely determined by the correlation matrix. Here, the dependence is driven by the correlation coefficient \(\rho.\) As already mentioned, this is somehow a way to define dependence in the central part of the distribution. Because of the particular structure of Gaussian variables, this quantity also has an effect on the tail, that is, even in situations where \(X\geq s\) with \(s\) large.
Propositions 3.1 and 3.2 provide explicit formulas for \(\mathbb{E}[X-Y|X\geq s]\) and \(\mathbb{E}[(X-Y)^{2}|X\geq s].\)

**Proposition 3.1**: _Consider a random vector distributed as (3.1). Then, as \(s\rightarrow+\infty\),_

\[\mathbb{E}[X-Y|X\geq s]\sim(\mu_{X}-\mu_{Y})+\left(1-\frac{\rho\sigma_{Y}}{\sigma_{X}}\right)(s-\mu_{X}), \tag{3.2}\]

_where \(\sim\) is the symbol for asymptotic equivalence._

Note that if \(\mathbb{E}[Y|X]=X,\) that is, if \(\rho\sigma_{Y}\sigma_{X}^{-1}=1\) and \(\mu_{X}=\mu_{Y},\) we find that \(\mathbb{E}[X-Y|X\geq s]=0.\) Apart from this trivial case, we can decompose (3.2) into two parts. First, the difference between the expectations \(\mu_{X}\) and \(\mu_{Y},\) which reflects the ability of \(Y\) to capture the central part of the distribution of \(X.\) The second term increases with \(s\) when \(\rho\sigma_{Y}\sigma_{X}^{-1}<1.\) However, a large value of the correlation coefficient \(\rho\) tends to reduce this effect. Therefore, we can rely on a strong correlation between \(X\) and \(Y\) to improve the ability of the parametric insurance contract to provide good results even for large claims. On the other hand, the case \(\rho\sigma_{Y}\sigma_{X}^{-1}\geq 1\) is less interesting to study: it would correspond to a situation where \(\sigma_{Y}\geq\sigma_{X},\) that is, a payoff based on the parameter which is more volatile than the one we would have obtained by directly using \(X.\) While this situation may occur, it is not the ideal case in which parametric insurance is used both to facilitate the collection of the information required to trigger the claim payment, and to reduce the uncertainty. Similar observations follow from the result of Proposition 3.2 for the quadratic loss.

**Proposition 3.2**: _Consider a random vector distributed as (3.1). Then, as \(s\rightarrow+\infty\),_

\[\mathbb{E}[(X-Y)^{2}|X\geq s]\sim\left(1-\rho\frac{\sigma_{Y}}{\sigma_{X}}\right)^{2}s^{2}. \tag{3.3}\]

The exact value of \(\mathbb{E}[(X-Y)^{2}|X\geq s]\) can also be computed for a vector distributed as (3.1). The formula can be obtained from the proof of Proposition 3.2, which is given in Section 7.1.2.
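As a quick Monte Carlo check of Proposition 3.1, consider the following sketch with illustrative parameter values; the exact conditional expectations involve Mills-ratio terms, so the agreement with the asymptotic expression improves as \(s\) grows.

```python
import numpy as np

rng = np.random.default_rng(1)
mu_x, mu_y, s_x, s_y, rho, n = 10.0, 9.0, 2.0, 1.5, 0.7, 10**7

# Draw (X, Y) from the bivariate Gaussian model (3.1)
x = mu_x + s_x * rng.standard_normal(n)
y = mu_y + s_y * (rho * (x - mu_x) / s_x
                  + np.sqrt(1 - rho**2) * rng.standard_normal(n))

for s in [12.0, 14.0, 16.0]:
    tail = x >= s
    mc = np.mean(x[tail] - y[tail])
    asym = (mu_x - mu_y) + (1 - rho * s_y / s_x) * (s - mu_x)  # right side of (3.2)
    print(f"s = {s}:  Monte Carlo {mc:.3f}  vs  asymptotic {asym:.3f}")
```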
### Heavy-tailed losses

Heavy-tailed random variables play an important role in Extreme Value Theory (see e.g. Beirlant et al., 2004; Coles, 2001). Suppose that we are dealing with an i.i.d. sample \(Y_{1},\ldots,Y_{n}\) whose c.d.f. \(F\) satisfies the following property

\[1-F(t)=t^{-1/\gamma}\ell(t) \tag{3.4}\]

where \(\ell\) is a slowly-varying function, namely

\[\forall x>0,\ \lim_{t\rightarrow\infty}\frac{\ell(tx)}{\ell(t)}=1.\]

The fundamental result of Extreme Value Theory states that the excesses of the sample above a certain threshold \(u,\) defined as \(Y_{i}-u\) given that \(Y_{i}>u,\) converge in distribution toward a non-degenerate distribution, and that this distribution necessarily belongs to a parametric family of distributions, called the generalized Pareto distributions. More precisely, Balkema and De Haan (1974) have shown that, under the hypothesis (3.4) with \(\gamma>0,\) there exist normalization constants \(a_{u}\) and \(b_{u}>0\) such that

\[\mathbb{P}\left(Y_{i}-u\leq a_{u}+b_{u}x\mid Y_{i}>u\right)\underset{u\to\infty}{\longrightarrow}H_{\gamma}(x),\]

where \(H_{\gamma}(x)\) is necessarily of the form, for \(x>0,\)

\[H_{\gamma}(x)=1-(1+\gamma x)^{-1/\gamma}.\]

Furthermore, Gnedenko (1943) showed that the survival function \(1-F(t)\) is then necessarily of the form (3.4). The parameter \(\gamma,\) which reflects the heaviness of the tail of \(F,\) is called the tail index. The higher \(\gamma,\) the heavier the tail of the distribution: \(X\) tends to take large values with a significant probability. Here, we consider the context where \(\gamma>0\) and \(X\) belongs to the heavy-tail domain. The right tails of heavy-tailed distributions decrease polynomially toward \(0,\) and their moments of order larger than \(1/\gamma\) do not exist. For example, the Pareto, the Student, the log-gamma and the Cauchy distributions are heavy-tailed. Hence, Assumption (3.4) allows us to cover a large set of distributions. In the following, we thus assume that

\[S_{X}(t)=\ell_{X}(t)t^{-1/\gamma_{X}},\quad\mbox{ and }\quad S_{Y}(t)=\ell_{Y}(t)t^{-1/\gamma_{Y}}, \tag{3.5}\]

with \(\gamma_{X},\gamma_{Y}>0\) and \(\ell_{X}\) and \(\ell_{Y}\) two slowly-varying functions. Note that heavy-tailed variables can also be used to approximate sums of losses. Considering the example of \(X=\sum_{i=1}^{n}Z_{i},\) if the \(Z_{i}\) are heavy-tailed i.i.d. random variables, Mikosch and Nagaev (1998) show that, when dealing with high quantiles, \(X\) can be approximated by a heavy-tailed variable. We do not provide here explicit values for \(\mathbb{E}[X-Y|X\geq s]\) and \(\mathbb{E}[(X-Y)^{2}|X\geq s],\) since these quantities depend on \(\ell_{X}\) and \(\ell_{Y}.\) Nevertheless, our results should provide general bounds for large values of \(s\). We first consider the case of \(\mathbb{E}[X-Y|X\geq s]\) in Proposition 3.3. We recall that \(\mathbb{E}[X-Y|X\geq s]\) (resp. \(\mathbb{E}[(X-Y)^{2}|X\geq s]\)) is defined only if \(\gamma_{X}\) and \(\gamma_{Y}\) are less than \(1\) (resp. less than \(0.5\)).

**Proposition 3.3**: _Consider \(X\geq 0\) and \(Y\geq 0\) with survival functions as in (3.5), with \(\gamma_{X}>\gamma_{Y}.\) There exists a constant \(c>0\) depending on \(\ell_{X}\) and \(\ell_{Y}\) and not on their dependence structure, such that, for \(s\) large enough,_

\[\mathbb{E}[X-Y|X\geq s]\geq cs.\]

Proposition 3.3 shows that there is a linear increase in this difference for large values of \(s.\) However, the situation is quite different from the Gaussian case. In the Gaussian case, we might expect to reduce the slope by relying on a strong correlation between \(X\) and \(Y.\) Here, improving the correlation would certainly have an effect, but not on the leading linear term. In fact, the distribution of \(X\) being heavier than that of \(Y,\) when \(s\) becomes large, \(X\) belongs to some areas which are inaccessible to \(Y\) except with a very low probability. In practice, the difference between \(\gamma_{Y}\) and \(\gamma_{X}\) also plays a role, but again only on terms of smaller order. We can obtain a more precise result under some additional assumptions, as we see in Proposition 3.4 below.

**Proposition 3.4**: _Consider \(X\geq 0\) and \(Y\geq 0\) with survival functions as in (3.5) with \(\gamma_{X}>\gamma_{Y}.\) Moreover, let_

\[\psi(x)=\mathbb{E}[Y|X=x].\]

_Assume that \(\psi\) is strictly non-decreasing and such that the random variable \(\psi(X)\) is heavy-tailed.
Then_ \[\mathbb{E}[X-Y|X\geq s]=s-\psi(s)+o(s).\] Under this additional assumption, we again obtain a linear increase of \(\mathbb{E}[X-Y|X\geq s],\) since \(\psi(s)\) is smaller than \(s\) for \(s\) large enough, because \(\gamma_{Y}<\gamma_{X}.\) A similar result is obtained for the quadratic loss in Proposition 3.5, where we see that this quantity increases at rate \(s^{2},\) regardless of the dependence structure between \(X\) and \(Y.\) **Proposition 3.5**: _Consider \(X\geq 0\) and \(Y\geq 0\) with survival functions as in (3.5), with \(\gamma_{X}>\gamma_{Y}.\)_ \[\mathbb{E}[(X-Y)^{2}|X\geq s]=s^{2}+o(s^{2}).\] From these theoretical results, we see that there is generally a large gap between the payoff \(Y\) and the actual loss \(X\) when the variables are heavy-tailed. We have assumed here that \(\gamma_{X}>\gamma_{Y},\) which seems to be the most interesting case since, in this situation, the risk taken by parametric insurance is less volatile than the initial risk. In the opposite situation, the parametric product would tend to overcompensate the actual loss. The \(\gamma_{X}=\gamma_{Y}\) situation seems purely theoretical, as it would require a very special configuration in which the two tail indices are exactly the same. Finally, let us mention that all the results of this section extend to the case where \(X\) is heavy-tailed and \(Y\) is light-tailed. In this situation, the remaining terms in the asymptotic expansions are even smaller. ## 4 Simulation study from the cyber risk context The purpose of this section is to illustrate the theoretical results and to go beyond the asymptotic approximations we have derived. The simulation settings are inspired by issues in the cyber insurance domain. Cyber insurance is an area where parametric insurance is increasingly mentioned as a promising tool to design contracts adapted to the complexity of the risk (see e.g. Dal Moro, 2020; OCDE, 2017, Chapter 5). Various indicators have been proposed to track the risk, whether in terms of the frequency or the severity of an event. Here we build a simulation setting inspired (in terms of the distributions used) by calibrations performed in the cyber domain, more precisely in the case of data breaches. This context offers a natural physical parameter which describes the severity. The settings we propose are based on previous work on the volume of leaked data and its link with the cost of an event, and can be understood as a sensitivity analysis of the models under consideration. In Section 4.1, we give a brief presentation of this context. The simulation settings we consider, the copula models used, and the corresponding results and analyses are presented in Section 4.2. ### Description of data breaches through number of records Of the various types of situations that underlie the concept of cyber risk, data breaches are probably the ones for which the cost associated with such an event is relatively easy to assess. Indeed, the volume of data that has been breached (i.e. the "number of records") is a good indication of the severity. This quantity, which can be easily measured soon after a claim occurs, can be used as a parameter that should be able to give some indication of the actual loss. Jacobs (2014) proposed a relationship between this number of records, say \(\theta,\) and the cost of the event, which can be seen as the formula defining the payoff \(Y.\) This relationship is of the following type: \[\log Y=\alpha+\beta\log\theta, \tag{4.1}\] where \(Y\) is expressed in dollars. 
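A quick numerical illustration of (4.1) is straightforward; the short Python sketch below (ours) evaluates the payoff for two breach sizes, using as assumptions the two calibrations quoted in the next paragraph.

```python
import numpy as np

# Payoff formula (4.1); the two (alpha, beta) calibrations below are the
# ones discussed in the text (Jacobs 2014 and Farkas et al. 2021).
for name, alpha, beta in [("Jacobs 2014", 7.68, 0.76), ("Farkas et al. 2021", 9.59, 0.57)]:
    for records in (1e6, 5e7):
        y = np.exp(alpha + beta * np.log(records))
        print(f"{name}: {records:.0e} records -> Y = ${y:,.0f}")
```

Running this reproduces the orders of magnitude discussed next: the 2014 calibration predicts roughly $79 million and $1.5 billion for 1 million and 50 million records, while the recalibrated coefficients give values much closer to the observed costs.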
The formula was calibrated using data from the Ponemon institute (Ponemon, 2018). Jacobs (2014) estimated \(\alpha=7.68\) and \(\beta=0.76\). Nonetheless, Farkas et al. (2021) pointed out that the formula, computed from data collected in 2014, was inconsistent with some of the so-called "mega-breaches" observed afterwards. For example, two mega-breaches were reported in the 2018 CODB report (Ponemon, 2018). The first, with 1 million records breached, would result in an estimated cost of nearly $79 million, while the actual cost was approximately $39 million. The largest one, with 50 million records breached, would lead to \(Y=1.5\) billion dollars, far from the $350 million paid out. This is why Farkas et al. (2021) proposed a (very rough) recalibration of the parameters, taking \(\alpha=9.59\) and \(\beta=0.57\). Behind this discussion, we see that, although we are dealing with an indicator (the number of records) that seems physically relevant to describe the magnitude of the event under consideration, the payoff function \(Y\) may be a very rough approximation of the actual loss, especially in the tail. In addition to the variance inherent in the calibration error of (4.1), and the potential lack of fit of the model, we see that there are many uncertainties regarding the estimation of the parameters. The settings we use in the following are inspired by this example and by the corresponding parameter values. ### Difference between the basis risk and the parametric coverage In this section, we illustrate the theoretical results of Propositions 3.3 and 3.5 on simulations inspired by the relationship (4.1) between the expected cost of a data breach and the number of records. Different dependence structures are considered. **Simulation setting.** In this paragraph, we describe our main simulation setting. First, in order to consider reasonable values for our example, we need an appropriate distribution for the parameter \(\theta.\) We choose to simulate \(\theta\) according to a simple Pareto distribution with parameters \(b=0.7\) and \(u=7\times 10^{4},\) as considered by Maillart and Sornette (2010), namely \[\mathbb{P}(\theta\geq t)=\left(\frac{u}{t}\right)^{b},\text{ for }t\geq u.\] This choice of heavy-tailed distributions is consistent with the work of Maillart and Sornette (2010), and with the analysis of Farkas et al. (2021) based on more recent data (a significant class of the data breaches has been identified to follow a distribution close to that considered by Maillart and Sornette (2010)). Then, if the payoff \(Y\) is defined according to (4.1), \(Y\) inherits the heavy-tail property of \(\theta\). More precisely, if \(\theta\) is distributed as a Pareto distribution with parameters \(u\) and \(b\), then \(Y\) is distributed according to a Pareto distribution with parameters \(u^{\prime}=e^{\alpha}u^{\beta}\) and \(b^{\prime}=b/\beta\). The parameter \(\beta\) is of course the most important when focusing on the tail of the distribution, since it is directly linked to the tail index \(\gamma_{Y}=\beta/b.\) In this work, the coefficients are chosen as \(\alpha=9.59\) and \(\beta=0.5\), that is, a \(\beta\) slightly lower than the value calibrated in Farkas et al. (2021). Now, to simulate the actual loss \(X\), we consider that \(X\) is linked to the volume of breached data, yet we do not have access to the actual number of records. 
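Before specifying \(X\), note that the marginal setting for \(Y\) just described is straightforward to simulate. Here is a minimal Python sketch (ours, not from the original study): \(\theta\) is sampled by inverting the Pareto survival function above, the payoff is computed through (4.1), and a Hill estimate of the tail index of \(Y\) is compared with the induced value \(\gamma_{Y}=\beta/b\).

```python
import numpy as np

rng = np.random.default_rng(1)
u, b = 7e4, 0.7          # theta ~ Pareto(u, b), Maillart & Sornette (2010)
alpha, beta = 9.59, 0.5  # payoff coefficients of the main setting

# Pareto sampling by inversion: P(theta >= t) = (u/t)^b  =>  theta = u * U^(-1/b).
n = 100_000
theta = u * rng.uniform(size=n) ** (-1 / b)

# Payoff defined by (4.1): log Y = alpha + beta * log(theta).
y = np.exp(alpha) * theta**beta

# Empirical check of the induced Pareto tail of Y (Hill estimator, k = 1000).
ys = np.sort(y)
print("theoretical gamma_Y:", beta / b)
print("Hill estimate      :", np.log(ys[-1000:] / ys[-1001]).mean())
```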
We thus suppose that the actual loss \(X\) also satisfies the relation (4.1) but with a different parameter \(\theta^{\prime}\) and different coefficients \(\alpha^{\prime}\) and \(\beta^{\prime}\). The parameter \(\theta^{\prime}\) is also simulated according to a Pareto distribution with parameters \(u\) and \(b-0.1\). The choice of \(b-0.1\) is motivated by the fact that it corresponds to the margin of error given by Maillart and Sornette (2010). Then, the dependence structure between \(X\) and \(Y\) is created through the dependence between \(\theta\) and \(\theta^{\prime},\) described by a copula function. The different copula families and how the corresponding parameters were chosen are explained in the next paragraph. **Copula families.** We consider three families of copulas, corresponding to different types of dependence structure. The copula functions are summarized in Table 1. The Frank copula family is classical and flexible, but does not allow for tail dependence. In contrast, the other two considered families, that is the Gumbel and the Clayton survival copulas, have non-zero tail dependence. To make things comparable, we consider values of the parameters that provide a similar dependence structure according to an appropriate indicator. The dependence measure we use is Kendall's tau coefficient \(\tau\), which is defined, for two random variables \((\theta,\theta^{\prime})\), as follows: \[\tau=\mathbb{P}((\theta_{1}-\theta_{2})(\theta_{1}^{\prime}-\theta_{2}^{\prime})>0 )-\mathbb{P}((\theta_{1}-\theta_{2})(\theta_{1}^{\prime}-\theta_{2}^{\prime})<0),\] where \((\theta_{1},\theta_{1}^{\prime})\) and \((\theta_{2},\theta_{2}^{\prime})\) are independent copies of \((\theta,\theta^{\prime}).\) This gives a simple relation between the parameter \(\tau\) and the classical parametrization of the copula families. We consider three values of the parameters for each copula family, corresponding to \(\tau=0.3,\)\(\tau=0.5\) and \(\tau=0.7,\) respectively. \begin{table} \begin{tabular}{c|c|c|c|c} Copula family & Copula function \(C(u,v)\) & \(\delta\in\) & \(\tau\) & \(\lambda_{U}\) \\ \hline Clayton survival & \(u+v-1+\left[\max\left((1-u)^{-\delta}+(1-v)^{-\delta}-1,0\right)\right]^{-1/\delta}\) & \(\delta\geq-1\) and \(\delta\neq 0\) & \(\frac{\delta}{\delta+2}\) & \(2^{-1/\delta}\) \\ \hline Gumbel & \(\exp\left(-\left[\left(-\log u\right)^{\delta}+\left(-\log v\right)^{\delta}\right]^{1/\delta}\right)\) & \(\delta\geq 1\) & \(\frac{\delta-1}{\delta}\) & \(2-2^{1/\delta}\) \\ \hline Frank & \(-\frac{1}{\delta}\log\left(1+\frac{(e^{-\delta u}-1)(e^{-\delta v}-1)}{e^{- \delta}-1}\right)\) & \(\delta\neq 0\) & Implicit & 0 \\ \end{tabular} \end{table} Table 1: Different copula families for the dependence structure of \((\theta,\theta^{\prime})\). **Simulation results.** Figure 1 shows the evolution of \(\mathbb{E}[(X-Y)|X\geq s]\) and \(\mathbb{E}[(X-Y)^{2}|X\geq s]\) for the different copula families with different dependence structures. Let us recall that these models only differ in the dependence structure between \(Y\) and \(X.\) In each case, \(X\) and \(Y\) are heavy-tailed. We can observe that the evolution seems approximately linear for \(s\) large in the case of \(\mathbb{E}[(X-Y)|X\geq s],\) and approximately quadratic for \(\mathbb{E}[(X-Y)^{2}|X\geq s]\). It is clear that the dependence structure matters, allowing one to reduce the slope (an expected property, but one not covered by Propositions 3.3 and 3.5). 
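This computation is easy to reproduce. The sketch below is our own simplified version: the Clayton copula is sampled with the standard Marshall-Olkin algorithm, the uniforms are mapped to the Pareto margins, and, purely for brevity, the coefficients \(\alpha\) and \(\beta\) are reused for the actual loss \(X\) instead of distinct \(\alpha^{\prime},\beta^{\prime}\). Since small uniforms correspond to large losses under the survival inversion used here, the pair \((\theta,\theta^{\prime})\) carries the upper tail dependence of the survival Clayton copula of Table 1.

```python
import numpy as np

rng = np.random.default_rng(2)
u, b = 7e4, 0.7
alpha, beta = 9.59, 0.5
tau = 0.5
delta = 2 * tau / (1 - tau)            # invert tau = delta / (delta + 2)

# Marshall-Olkin sampling of a Clayton copula (delta > 0).
n = 2_000_000
v = rng.gamma(shape=1 / delta, scale=1.0, size=n)
e = rng.exponential(size=(2, n))
u1, u2 = (1 + e / v) ** (-1 / delta)   # uniforms with Clayton dependence

theta = u * u1 ** (-1 / b)             # drives the payoff Y
theta2 = u * u2 ** (-1 / (b - 0.1))    # heavier-tailed parameter driving X

y = np.exp(alpha) * theta**beta
x = np.exp(alpha) * theta2**beta       # same coefficients, as an assumption

for s in np.quantile(x, [0.9, 0.99, 0.999]):
    tail = x >= s
    print(f"s={s:.3e}  E[X-Y|X>=s] = {np.mean(x[tail] - y[tail]):.3e}")
```

Varying \(\tau\) (and hence \(\delta\)) in this sketch reproduces the qualitative behaviour of Figure 1: the conditional gap grows roughly linearly in \(s\), with a slope that decreases as the dependence strengthens.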
We also see in Figure 1 a) that the slope is close to \(1\) for Frank's copula, and is smaller for the families that allow tail dependence. Note that a slope close to \(1\) is bad news: it means that the difference between \(Y\) and \(X\) is of the same magnitude as \(X\) itself (since \(X\geq s\)). Figure 1: Evolution of a) \(\mathbb{E}[X-Y|X\geq s]\) and of b) \(\mathbb{E}[(X-Y)^{2}|X\geq s]\) with respect to the threshold \(s\) for different Kendall's tau coefficients: \(\tau=0.3\) (red), \(\tau=0.5\) (green) and \(\tau=0.7\) (blue). These results highlight the need to use a parameter that is not only related to the basis risk via a form of "central dependence", but can also incorporate tail dependence. Without this property, heavy-tailed variables may not be approximated properly, due to the large proportion of events in the tail of the distribution. ### Impact of the heaviness of the tail To better understand the impact of the heaviness of the tail of the distributions, and of the difference between the values of the tail indices of \(X\) and \(Y\), we consider three different settings, denoted \(B_{1}\), \(B_{2}\) and \(B_{3}\), described below. * Setting \(B_{1}\): \(Y\) is simulated as in the main setting, but \(X=Y+\varepsilon\), with \(\varepsilon\sim\mathcal{N}(0,\sigma_{1}^{2})\). The variance \(\sigma_{1}^{2}\) is taken so that \(X\) has the same variance as in the main setting. * Setting \(B_{2}\): \(Y\) is simulated as in the main setting, but \(\log X=\log Y+\varepsilon\), where \(\varepsilon\sim\mathcal{N}(0,\sigma_{2}^{2}).\) Again, \(\sigma_{2}^{2}\) is taken so that \(X\) has the same variance as in the main setting. * Setting \(B_{3}\): \((X,Y)\) is a Gaussian vector as in (3.1), with the same mean and variance as in the main setting. We consider different values for the correlation coefficient, \(\rho=0.3\), \(\rho=0.5\) and \(\rho=0.7\). All of these benchmark cases can be thought of as "favorable" cases: in \(B_{1}\) and \(B_{2}\), the tail indices of \(X\) and \(Y\) are the same. In the first situation, we consider an additive Gaussian error, so that \(X\) is relatively concentrated around \(Y.\) In \(B_{2}\), the errors are multiplicative, since the Gaussian error is applied to the logarithms. This typically corresponds to the optimistic case where \(\mathbb{E}[\log X|\theta]=\alpha+\beta\log\theta\): that is, a standard linear regression model on the logarithm of \(X\) with no misspecification error. Finally, the benchmark \(B_{3}\) is the most optimistic case, where \(X\) and \(Y\) have Gaussian tails. Next, Figure 2 shows comparisons of \(\mathbb{E}[(X-Y)\mid X\geq s]\) and \(\mathbb{E}[(X-Y)^{2}\mid X\geq s]\) between the Clayton case and the benchmark settings. The conclusions for the other settings being similar, we postpone the corresponding figures to the appendix (Section 7.3). From these figures, we see that the cases \(B_{1}\) and \(B_{2}\) are much more favorable. In \(B_{3}\), we are in the Gaussian case, that is, \(X\) and \(Y\) have light tails and \(Y\) has the same tail as \(X\); but we see that, although these cases seem very optimistic, the absence of tail dependence makes the results poor. ## 5 Real data example: extreme floods In this section, we illustrate our theoretical results on a real dataset, focusing in particular on the problem created by extreme losses. The dataset comes from the global analysis of Ritchie and Roser (2014) on natural disasters, and in particular on the share of deaths attributable to natural disasters. 
In this work, we chose to focus only on floods: the dataset contains 1,133 flooding events that occurred between 1950 and 2020 in 142 different countries. The dataset provides the total economic damage of each event, the country, the year and the number of affected people. The economic damages range from $3 to $40,317,000, with an average economic damage of $788,750 and a median economic damage of $52,403. This significant difference between the mean and the median already suggests that the economic damage variable should be heavy-tailed. To account for inflation, we performed a linear regression of the logarithm of the yearly mean of the economic damage against the year. The economic damages were then multiplied by \(\exp(-a-b\text{Year}),\) where \(a\) and \(b\) are the coefficients obtained in the linear regression. From now on, when we refer to economic damages, we mean deflated economic damages. As we wish to study parametric insurance in the context of extreme claims, we first checked that the economic damage variable is indeed heavy-tailed. Figure 2: Evolution of the ratio of a) \(\mathbb{E}[(X-Y)|X\geq s]\) computed from the Clayton survival copula model to the value of \(\mathbb{E}[(X-Y)|X\geq s]\) obtained in the benchmark settings, and of b) the same ratio for \(\mathbb{E}[(X-Y)^{2}|X\geq s]\). For that purpose, we used the so-called "Peaks-over-Threshold" method for the estimation of the tail index \(\gamma\). The main idea is to choose a high threshold \(u\) and fit a generalized Pareto distribution on the excesses above that threshold \(u\). The estimation of the tail index \(\gamma\) may be done by maximum likelihood. The choice of the threshold \(u\) can be understood as a compromise between bias and variance: the smaller the threshold, the less valid the asymptotic approximation, leading to bias; on the other hand, too high a threshold will generate few excesses to fit the model, leading to high variance. The existing methods are mostly graphical; to our knowledge, no straightforward data-driven selection procedure is available (see Coles, 2001, for more details). Here, the threshold \(u\) was chosen at the 0.8-quantile of the damages, equal to $917. The tail index was estimated to be equal to 0.71. Figure 3 represents the quantile-quantile plot showing the empirical quantiles of the excesses of the economic damages above the chosen threshold against the quantiles of an exponential distribution. The fact that, in the upper tail, the points are above the line \(y=x\) confirms the heaviness of the tail of the distribution. The actual loss \(X\) corresponds to the economic damages. The total number of affected people in a given country plays the role of the parameter \(\theta\). We need to propose a function \(\phi\) that links \(X\) and \(\theta\): we choose to use a Classification and Regression Tree (CART), which performs a regression of \(X\) against the number of affected people and the country in order to predict an average economic damage given the number of affected people for a given country (see Breiman et al., 1984, for details on the CART algorithm); this prediction plays the role of the payoff \(Y\). Figure 3: Quantile-quantile plot of the excesses of the economic damages against the quantiles of an exponential distribution. To illustrate the theoretical results of this paper, we performed 10-fold cross-validation, that is, the data were partitioned into 10 subsamples. 
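Two of the ingredients just described are easy to prototype in Python. The first sketch below (ours; the dataset itself is not reproduced, so a synthetic heavy-tailed stand-in replaces the deflated damages) reproduces the Peaks-over-Threshold fit of the tail index. The second anticipates the 10-fold cross-validation detailed in the next paragraph, with scikit-learn's `DecisionTreeRegressor` standing in for the CART implementation and placeholder features.

```python
import numpy as np
from scipy import stats

# Peaks-over-Threshold: `damages` should hold the deflated economic damages;
# a synthetic stand-in with tail index 0.71 is used here.
rng = np.random.default_rng(3)
damages = stats.genpareto.rvs(0.71, scale=1e3, size=1133, random_state=rng)

u = np.quantile(damages, 0.8)              # threshold at the 0.8-quantile
excesses = damages[damages > u] - u
gamma, _, sigma = stats.genpareto.fit(excesses, floc=0)  # MLE on the excesses
print(f"threshold u = {u:.0f}, estimated tail index gamma = {gamma:.2f}")
```

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeRegressor  # CART stand-in

# Placeholders: `features` (affected people, country encoded numerically)
# and `damages` (deflated damages), one row per flood event.
rng = np.random.default_rng(4)
features = rng.random((1133, 2))
damages = rng.pareto(1 / 0.71, 1133) * 1e3

sq_err = np.empty_like(damages)
for train, test in KFold(n_splits=10, shuffle=True, random_state=0).split(features):
    tree = DecisionTreeRegressor(min_samples_leaf=20).fit(features[train], damages[train])
    sq_err[test] = (damages[test] - tree.predict(features[test])) ** 2

# RMSE within each decile class of the observed damages.
classes = np.digitize(damages, np.quantile(damages, np.linspace(0.1, 0.9, 9)))
for c in range(10):
    print(f"class {c}: RMSE = {np.sqrt(sq_err[classes == c].mean()):.3e}")
```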
Of the 10 subsamples, one is kept aside as a test set and the model is trained on the 9 remaining subsamples. The cross-validation process is then repeated 10 times, with each of the 10 subsamples used exactly once as the validation data. The validation is done using the quadratic error for each prediction, that is, the squared difference between the real economic damage and the predicted economic damage. At the end of the whole cross-validation procedure, classes of the real economic damages are formed (the classes are formed so that there is 10% of the data in each class). For each class, the square root of the mean of the quadratic errors (RMSE) is computed. Figure 4 shows the boxplots of the RMSE for each cost class, illustrating the fact that predicting the cost using parametric insurance is challenging for severe costs. Figure 4: Boxplots of the root of the mean squared error within each cost class for the floods data. ## 6 Conclusion In this paper, we have investigated which factors can prevent parametric insurance from properly covering the basis risk in the case of large claims. This question can be seen as that of the ability of one random variable to approximate another. In particular, we have focused on the issue created by heavy-tailed losses. In these cases, the difference between what is paid to the policyholder and the actual loss may be quite large, especially if the tail index is even slightly mis-estimated. The ability of parametric insurance to reduce the variance will be diminished in this case--unless one agrees to provide poorer coverage for large claims--since, if the tail index is not reduced compared to that of the basis risk, more or less the same kind of variability remains. Next, the dependence structure between the parameter and the true loss seems to be an important issue to address. Tail dependence appears to be essential in order to obtain a correct approximation of the losses for large claims. It should be mentioned that designing a tail-dependent parameter for the basis risk is a challenging task: the analysis of "extreme" events requires lots of data, which pleads for a careful statistical analysis to properly define the appropriate metric. Alternatively, a specific treatment for large claims could be considered, in order not to create too large a gap between the expectations of the policyholders and the amount compensated. Such a mixed approach--parametric coverage complemented by special treatment of large claims--could be essential to successfully deploy such strategies when facing heavy-tailed risk. ## 7 Appendix The Appendix section is organized in the following way. Section 7.1 is devoted to the results on Gaussian variables, while the results regarding variables with Pareto-type tails are gathered in Section 7.2. The additional comparisons with the benchmarks of the simulation study are shown in Section 7.3. 
### Results on Gaussian variables #### 7.1.1 Proof of Proposition 3.1 First recall that \(X\) is distributed according to the distribution \(\mathcal{N}(\mu_{X},\sigma_{X}^{2}),\) so that \[\mathbb{E}[X|X\geq s]=\mu_{X}+\sigma_{X}^{2}h(s|\mu_{X},\sigma_{X}^{2}), \tag{7.1}\] where \(h(s|\mu_{X},\sigma_{X}^{2})\) is the hazard rate of a Gaussian random variable with mean \(\mu_{X}\) and variance \(\sigma_{X}^{2}\), that is \[h(s|\mu_{X},\sigma_{X}^{2})=\frac{\exp\left(-\frac{(s-\mu_{X})^{2}}{2\sigma_{X}^ {2}}\right)}{\sqrt{2\pi\sigma_{X}^{2}}\bar{\Phi}\left(\frac{s-\mu_{X}}{\sigma_ {X}}\right)}\underset{s\rightarrow+\infty}{\sim}\frac{s-\mu_{X}}{\sigma_{X}^{ 2}},\] with \(\bar{\Phi}\) the survival function of the standard Gaussian distribution \(\mathcal{N}(0,1)\). First of all, let \(m(s)=\mathbb{E}[(X-Y)\mathbf{1}_{X\geq s}].\) Since \[\mathbb{E}[Y|X]=\mu_{Y}+\frac{\rho\sigma_{Y}}{\sigma_{X}}(X-\mu_{X}),\] we have \[m(s)=\left(1-\frac{\rho\sigma_{Y}}{\sigma_{X}}\right)\mathbb{E}\left[X\mathbf{1}_{X \geq s}\right]-\left(\mu_{Y}-\frac{\rho\sigma_{Y}}{\sigma_{X}}\mu_{X}\right) \mathbb{P}(X\geq s).\] From (7.1), we get \[\mathbb{E}[X-Y|X\geq s] = (\mu_{X}-\mu_{Y})+\left(1-\frac{\rho\sigma_{Y}}{\sigma_{X}} \right)\sigma_{X}^{2}h(s|\mu_{X},\sigma_{X}^{2})\] \[\sim (\mu_{X}-\mu_{Y})+\left(1-\frac{\rho\sigma_{Y}}{\sigma_{X}} \right)(s-\mu_{X}),\] as \(s\rightarrow\infty\). #### 7.1.2 Proof of Proposition 3.2 We have \[\mathbb{E}[(X-Y)^{2}|X\geq s]=\mathbb{E}[X^{2}|X\geq s]+\mathbb{E}[Y^{2}|X \geq s]-2\mathbb{E}[XY|X\geq s].\] Moreover, \[m_{2}(s)=\mathbb{E}[X^{2}|X\geq s]=\left(\sigma_{X}^{2}+\mu_{X}^{2}\right)+ \sigma_{X}^{2}h(s|\mu_{X},\sigma_{X}^{2})(s+\mu_{X}),\] and \[\mathbb{E}[XY|X\geq s]=\mathbb{E}[X\mathbb{E}[Y|X]|X\geq s].\] From (7.1), \[\mathbb{E}[XY|X\geq s]=\left(\mu_{Y}-\rho\mu_{X}\frac{\sigma_{Y}}{\sigma_{X}}\right) (\mu_{X}+\sigma_{X}^{2}h(s|\mu_{X},\sigma_{X}^{2}))+\rho\frac{\sigma_{Y}}{ \sigma_{X}}m_{2}(s).\] Finally, \[\mathbb{E}[Y^{2}|X\geq s]=\mathbb{E}[\mathrm{Var}(Y|X)|X\geq s]+\mathbb{E}[ \mathbb{E}[Y|X]^{2}|X\geq s],\] which leads to \[\mathbb{E}[Y^{2}|X\geq s] = \sigma_{Y}^{2}(1-\rho^{2})+\left(\mu_{Y}-\rho\mu_{X}\frac{\sigma_{Y }}{\sigma_{X}}\right)^{2}\] \[+\rho^{2}\frac{\sigma_{Y}^{2}}{\sigma_{X}^{2}}m_{2}(s)+2\rho\frac{ \sigma_{Y}}{\sigma_{X}}(\mu_{Y}-\rho\mu_{X}\frac{\sigma_{Y}}{\sigma_{X}})(\mu_ {X}+\sigma_{X}^{2}h(s|\mu_{X},\sigma_{X}^{2})).\] Hence, we see that \[\mathbb{E}[(X-Y)^{2}|X\geq s] = \left(1-\rho\frac{\sigma_{Y}}{\sigma_{X}}\right)^{2}m_{2}(s)+o(m_ {2}(s))\] \[= \left(1-\rho\frac{\sigma_{Y}}{\sigma_{X}}\right)^{2}\sigma_{X}^{2}\,s\,h(s|\mu_{X},\sigma_{X}^{2})+o(s\,h(s|\mu_{X},\sigma_{X}^{2}))\] \[\sim \left(1-\rho\frac{\sigma_{Y}}{\sigma_{X}}\right)^{2}s^{2},\] as \(s\rightarrow\infty.\) ### Results on heavy-tailed variables #### 7.2.1 A preliminary result We start with a preliminary result showing that the variable \(X-Y\) has the same tail index as \(X,\) under the assumptions of Propositions 3.3 to 3.5. Lemma 7.1 consists in providing upper and lower bounds for the survival function of \(X-Y\). **Lemma 7.1**: _Let \(X,\)\(Y\) be as defined in Proposition 3.3, and let \(Z=X-Y\). 
Then, introducing \(S_{Z}(t)=\mathbb{P}(Z\geq t),\) for \(t\geq 0,\)_ \[h^{-1/\gamma_{X}}t^{-1/\gamma_{X}}\ell_{X}(th)-(h-1)^{-1/\gamma_{Y}}t^{-1/ \gamma_{Y}}\ell_{Y}(t(h-1))\leq S_{Z}(t)\leq t^{-1/\gamma_{X}}\ell_{X}(t),\] _with \(h>1\) fixed._ **Proof.** Since \(Y\geq 0\) almost surely, \(X-Y\geq t\) implies that \(X\geq t.\) Hence, we get \(S_{Z}(t)\leq S_{X}(t),\) and the upper bound of Lemma 7.1 is obtained. To obtain the lower bound, introduce the event, for \(h>1\) fixed, \[E_{h}(t)=\{X\geq th\}\cap\{Y\leq t(h-1)\}.\] We have \(S_{Z}(t)\geq\mathbb{P}(E_{h}(t)).\) Next, \[\mathbb{P}(E_{h}(t)) \geq \mathbb{P}(X\geq th)-\mathbb{P}(Y\geq t(h-1))=S_{X}(th)-S_{Y}(t(h-1 )).\] This shows the lower bound in Lemma 7.1. We can now apply this lemma to obtain the following proposition. **Proposition 7.2**: _Let \(X\) be a heavy-tailed variable with tail index \(\gamma_{X}>0,\) and let \(Y\) be a non-negative variable with tail index \(\gamma_{Y}<\gamma_{X}.\) Then \(Z=X-Y\) is heavy-tailed with tail index \(\gamma_{X}.\)_ **Proof.** Let \(\ell_{Z}(t)=t^{1/\gamma_{X}}S_{Z}(t).\) Let us prove that \(\ell_{Z}\) is slowly-varying. From Lemma 7.1, for \(x>1\) and for all \(h>1,\) \[\frac{\ell_{Z}(tx)}{\ell_{Z}(t)} \leq \frac{\ell_{X}(tx)}{h^{-1/\gamma_{X}}\ell_{X}(th)-(h-1)^{-1/\gamma _{X}}(t(h-1))^{1/\gamma_{X}-1/\gamma_{Y}}\ell_{Y}(t(h-1))}\,, \tag{7.2}\] and \[\frac{\ell_{X}(thx)}{h^{1/\gamma_{X}}\ell_{X}(t)}-\frac{x^{1/\gamma_{X}}(tx(h- 1))^{-1/\gamma_{Y}}\ell_{Y}(tx(h-1))}{t^{-1/\gamma_{X}}\ell_{X}(t)}\leq\frac{ \ell_{Z}(tx)}{\ell_{Z}(t)}. \tag{7.3}\] Hence, for all \(x>0\) and \(h>1,\) taking the limit of the right-hand side of (7.2), \[\lim\sup_{t\to\infty}\frac{\ell_{Z}(tx)}{\ell_{Z}(t)}\leq h^{1/\gamma_{X}}.\] To see that, let \(\beta=\gamma_{Y}^{-1}-\gamma_{X}^{-1}>0.\) We have \(t^{-\beta/2}\ell_{Y}(t(h-1))\to_{t\to\infty}0,\) and \(t^{\beta/2}\ell_{X}(th)\to_{t\to\infty}\infty.\) Similarly, from (7.3), we get \[\lim\inf_{t\to\infty}\frac{\ell_{Z}(tx)}{\ell_{Z}(t)}\geq\frac{1}{h^{1/\gamma_ {X}}}.\] This is valid for all \(h>1.\) Next, let \(h\) tend to \(1\) in order to obtain that, for all \(x,\)\(\lim_{t\to\infty}\ell_{Z}(tx)/\ell_{Z}(t)=1,\) leading to the result. #### 7.2.2 Proof of Proposition 3.3 From Proposition 7.2, \(Z=X-Y\) is heavy-tailed with tail index \(\gamma_{X},\) that is \(S_{Z}(t)=\ell_{Z}(t)t^{-1/\gamma_{X}}\) with \(\ell_{Z}\) slowly-varying. Hence, \[\mathbb{E}[Z|Z\geq s]=s+o(s). \tag{7.4}\] Next, \[\mathbb{E}[Z|X\geq s] = \mathbb{E}[Z|X\geq s,Z\geq s]\mathbb{P}(Z\geq s|X\geq s)\] \[+\mathbb{E}[Z|X\geq s,Z<s]\mathbb{P}(Z<s|X\geq s).\] Since \(Z\geq s\) implies \(X\geq s\), we have \[\mathbb{E}[Z|X\geq s,Z\geq s]\mathbb{P}(Z\geq s|X\geq s)=\mathbb{E}[Z|Z\geq s] \mathbb{P}(Z\geq s|X\geq s).\] Moreover, \[\mathbb{P}(Z\geq s|X\geq s)=\frac{\mathbb{P}\left(\{Z\geq s\}\cap\{X\geq s\} \right)}{\mathbb{P}(X\geq s)}=\frac{\mathbb{P}(Z\geq s)}{\mathbb{P}(X\geq s)}.\] From this, we get \[\mathbb{E}[Z|X\geq s]\geq\mathbb{E}[Z|Z\geq s]\times\frac{\ell_{Z}(s)}{\ell_{ X}(s)}=(s+o(s))\times\frac{\ell_{Z}(s)}{\ell_{X}(s)},\] from (7.4). 
From Lemma 7.1, we see that, taking for example \(h=2,\) \[\frac{\ell_{Z}(s)}{\ell_{X}(s)}\geq\frac{1}{2^{1/\gamma_{X}}}\frac{\ell_{X}(2 s)}{\ell_{X}(s)}-\frac{1}{s^{\beta}}\frac{\ell_{Y}(s)}{\ell_{X}(s)},\] introducing \(\beta=\gamma_{Y}^{-1}-\gamma_{X}^{-1}>0.\) Since \(\ell_{X}\) and \(\ell_{Y}\) are slowly varying, \(s^{\beta/2}\ell_{X}(s)\to\infty\) and \(s^{-\beta/2}\ell_{Y}(s)\to 0\) as \(s\) tends to infinity, leading to \[\frac{1}{s^{\beta}}\frac{\ell_{Y}(s)}{\ell_{X}(s)}=o(1).\] Moreover, \[\lim_{s\to\infty}\frac{\ell_{X}(2s)}{\ell_{X}(s)}=1,\] showing that there exists a constant \(c>0\) such that, for \(s\) large enough, \[\mathbb{E}[Z|X\geq s]\geq cs.\] #### 7.2.3 Proof of Proposition 3.4 We have \[\mathbb{E}[X-Y|X\geq s]=\mathbb{E}[X-\psi(X)|X\geq s],\] where \(\psi(X)=\mathbb{E}[Y|X].\) Since \(X\) is heavy-tailed with tail index \(\gamma_{X}>0,\)\(\mathbb{E}[X|X\geq s]=s+o(s).\) On the other hand, \[\mathbb{E}[\psi(X)|X\geq s]=\mathbb{E}[\psi(X)|\psi(X)\geq\psi(s)],\] since \(\psi\) is strictly increasing. Then, since \(\psi(X)\) is assumed to be heavy-tailed, \[\mathbb{E}[\psi(X)|\psi(X)\geq\psi(s)]=\psi(s)+o(\psi(s)),\] which shows that \[\mathbb{E}[X-Y|X\geq s]=s-\psi(s)+o(s).\] #### 7.2.4 Proof of Proposition 3.5 Let \(Z=X-Y.\) We have \[\mathbb{E}[(X-Y)^{2}|X\geq s]=\frac{\mathbb{E}[Z^{2}\mathbf{1}_{X\geq s}]}{ \mathbb{P}(X\geq s)}\geq\frac{\mathbb{E}[Z^{2}\mathbf{1}_{X\geq s}\mathbf{1}_{ Z\geq s}]}{\mathbb{P}(X\geq s)}.\] If \(Z\geq s,\) then, necessarily, \(X\geq s\) since \(Y\geq 0.\) Hence \[\mathbb{E}[(X-Y)^{2}|X\geq s]\geq\frac{\mathbb{E}[Z^{2}\mathbf{1}_{Z\geq s}]}{ \mathbb{P}(X\geq s)}=\mathbb{E}[Z^{2}|Z\geq s]\frac{\mathbb{P}(Z\geq s)}{ \mathbb{P}(X\geq s)}.\] Next, \[\mathbb{E}[Z^{2}|Z\geq s]=\frac{\mathbb{E}[Z^{2}\mathbf{1}_{Z^{2}\geq s^{2}}]} {\mathbb{P}(Z\geq s)}-\frac{\mathbb{E}[Z^{2}\mathbf{1}_{Z<-s}]}{\mathbb{P}(Z \geq s)}.\] Moreover, \[\mathbb{E}[Z^{2}\mathbf{1}_{Z<-s}]\leq\mathbb{E}[Y^{2}\mathbf{1}_{Y\geq s}]= \mathbb{E}[Y^{2}|Y\geq s]\mathbb{P}(Y\geq s).\] Combining this last equation with Proposition 7.2 leads to \[\mathbb{E}[Z^{2}|Z\geq s] \geq \mathbb{E}[Z^{2}|Z^{2}\geq s^{2}]\frac{\mathbb{P}(Z^{2}\geq s^{2} )}{\mathbb{P}(Z\geq s)}-\frac{\ell_{Y}(s)}{\ell_{Z}(s)}\mathbb{E}[Y^{2}|Y\geq s ]s^{1/\gamma_{X}-1/\gamma_{Y}}\] \[\geq \mathbb{E}[Z^{2}|Z^{2}\geq s^{2}]-\frac{\ell_{Y}(s)}{\ell_{Z}(s)} \mathbb{E}[Y^{2}|Y^{2}\geq s^{2}]s^{1/\gamma_{X}-1/\gamma_{Y}},\] where the last line comes from the fact that \(\mathbb{P}(Z^{2}\geq s^{2})\geq\mathbb{P}(Z\geq s),\) and that, since \(Y\geq 0\) almost surely, \(\mathbb{E}[Y^{2}|Y\geq s]=\mathbb{E}[Y^{2}|Y^{2}\geq s^{2}].\) We have, since \(Y\geq 0\) almost surely, \[\mathbb{P}(Y^{2}\geq t)=\mathbb{P}(Y\geq t^{1/2})=\frac{\ell_{Y}(t^{1/2})}{t^ {1/(2\gamma_{Y})}},\] where \(t\rightarrow\ell_{Y}(t^{1/2})\) inherits the slow-varying property of \(\ell_{Y}.\) Hence \[\mathbb{E}[Y^{2}|Y^{2}\geq s^{2}]=s^{2}+o(s^{2}). \tag{7.5}\] On the other hand, let \(\beta=\gamma_{Y}^{-1}-\gamma_{X}^{-1}>0.\) Since \(\ell_{Y}(s)s^{-\beta/2}\to 0\) and \(\ell_{Z}(s)s^{\beta/2}\rightarrow\infty\) when \(s\) tends to infinity, using (7.5) we get \[\frac{\ell_{Y}(s)}{\ell_{Z}(s)}\mathbb{E}[Y^{2}|Y^{2}\geq s^{2}]s^{1/\gamma_{ X}-1/\gamma_{Y}}=o(s^{2}).\] Since \(\mathbb{E}[Z^{2}|Z^{2}\geq s^{2}]=s^{2}+o(s^{2}),\) the result follows. 
### Additional comparisons with benchmarks **Acknowledgment:** The authors acknowledge funding from the project _Cyber Risk Insurance: actuarial modeling_, a Joint Research Initiative under the aegis of the Risk Foundation, in partnership with AXA, AXA GRM, ENSAE and Sorbonne Université.
2302.06138
Study of Low-latitude Ionospheric Scintillation using NavIC
Equatorial ionospheric irregularities have been studied in the past and have produced interesting insights into ionospheric physics and processes. Here, we present the initial results of a long-term study of the ionosphere near the Equatorial Ionization Anomaly (EIA) using Navigation with the Indian Constellation (NavIC). We have characterized the ionospheric irregularities in terms of the power spectral density at different dynamical frequencies. The formalism is similar to that suggested by earlier works using the phase screen modeling of the ionosphere. The observations of the C/N0 (dB-Hz) variation have been taken by utilizing the L5 (1176.45 MHz) signal of NavIC over Indore, located near the northern crest of the EIA. We show some initial results as a proof-of-concept study from a single day (December 4, 2017) of scintillation observations. This is a first-of-its-kind study in this region with NavIC. From the power spectral density analysis, we have demonstrated that NavIC is capable of detecting such irregularities over long periods over this region and has implications for forecasting such events in the future.
Sumanjit Chakraborty, Abhirup Datta
2023-02-13T06:59:35Z
http://arxiv.org/abs/2302.06138v1
# Study of Low-latitude Ionospheric Scintillation using NavIC ###### Abstract Equatorial ionospheric irregularities have been studied in the past and have produced interesting insights into ionospheric physics and processes. Here, we present the initial results of a long-term study of the ionosphere near the Equatorial Ionization Anomaly (EIA) using the Navigation with the Indian Constellation (NavIC). We have characterized the ionospheric irregularities in terms of the power spectral density at different dynamical frequencies. The formalism is similar to that suggested by [1-3] using the phase screen modelling of the ionosphere. The observations of the C/N\({}_{o}\) (dB-Hz) variation have been taken by utilizing the L5 (1176.45 MHz) signal of NavIC over Indore, located near the northern crest of the EIA. We show some initial results as a proof-of-concept study from a single day (December 4, 2017) of scintillation observations. This is a first-of-its-kind study in this region with NavIC. From the power spectral density analysis, we have demonstrated that NavIC is capable of detecting such irregularities over long periods over this region, which has implications for forecasting such events in the future. ## 1 Introduction Equatorial Plasma Bubbles (EPBs) are post-sunset plasma instabilities in the equatorial ionosphere, generating irregularities and large-scale depletions in the electron density. When radio waves propagate through these irregularities, they experience scattering and diffraction, causing random fluctuations, or scintillations, of the Very High Frequency (VHF) signal amplitude, phase, polarization and propagation direction [4-7]. These scintillation-producing irregularities are generally present in and around the F-layer of the ionosphere, where the plasma density is maximum. The primary mechanism responsible for generating the EPBs, with scale sizes from a few hundred meters to several kilometers, is the Rayleigh-Taylor (R-T) instability [4-7]. Scintillations are observed during the period when solar activity is at its peak, occurring near the magnetic equator in the post-sunset to midnight sector [8-9]. As these scintillations are a manifestation of space weather effects that affect the performance of space-based navigation systems [10], it becomes crucial to understand the occurrence of scintillation and its after-effects. Since the early 1950s, and following the works of [11-12], characterization of ionospheric scintillation has been performed by several researchers, who developed numerous phase screen theories. Their fundamental approach was to use the theory of wave propagation in random media to obtain the parameters of the wave exiting the ionosphere, and then to solve the problem using the theory of Fresnel diffraction, which describes radio wave propagation between the ionosphere and the Earth and ultimately replaces the ionosphere by an equivalent random phase screen [13]. They considered the ionospheric irregularities as phase objects, with the ground pattern produced by the waves propagating through this phase screen derived from Fresnel diffraction theory [14]. 
The northern crest of the Equatorial Ionization Anomaly (EIA) and the geomagnetic equator passing over the southern tip of the Indian peninsular region, in addition to sharp latitudinal gradients of ionization, make the Indian longitude sector a highly geosensitive region of investigation for ionospheric research during geomagnetically disturbed periods, when the low-latitude ionization is significantly affected as a result of solar eruptions such as Coronal Mass Ejections (CMEs), Co-rotating Interaction Regions (CIRs) and solar flares [15-16]. Recently, the latest versions of the existing global empirical ionospheric models (the International Reference Ionosphere 2016, the NeQuick2 and the IRI-extended to Plasmasphere) have been shown in detail to be incapable of predicting geomagnetically disturbed conditions over the Indian longitude sector [17], thus suggesting the need to revisit the existing phase screen models for a proper characterization of the low-latitude ionosphere over the Indian sector. In this paper, for the first time, we show initial results of a power spectral density analysis and demonstrate that NavIC is capable of detecting such irregularities over long periods over this region, which has implications for forecasting such events in the future. ## 2 Data The data have been analysed using a NavIC receiver, provided by SAC-ISRO and capable of receiving the L5 (1176.45 MHz) signal, operational since 2017 in the Department of Astronomy, Astrophysics and Space Engineering of IIT Indore. ## 3 Results and Discussions Due to the passage of a High Speed Solar Wind (HSSW) stream around the Earth, arising from a positive-polarity coronal hole (CH) on the Sun, a G1 (K\({}_{p}\)= 5, minor) level geomagnetic storm was observed on December 4, 2017, according to NOAA. The Dst index showed a dip during December 4, 2017, with a value of -45 nT at 22:00 UT. Figure 1 shows the C/N\({}_{o}\) variation (dB-Hz) for the entire day of December 4, 2017, as observed by the L5 signal of NavIC. Drops in the C/N\({}_{o}\) have been observed at multiple time stamps throughout the day by all the PRNs (consisting of a set of geostationary and geosynchronous satellites). It is to be noted that the C/N\({}_{o}\) values had dropped to zero at these time stamps, and that they are designated in the figure as drop-downs to 30 dB-Hz. However, for the identification of significant C/N\({}_{o}\) drops, Figure 2 shows the hourly binned variance plots of all the PRNs of NavIC during the entire day of December 4, 2017. The bin of 16-17 UT for PRN 2 and that of 12-13 UT for PRN 5 show the most significant rise, and hence the maximum variation, among all the bins of the day. The PSD is given by [1-3]: \[S=\frac{\alpha}{(f_{0}^{2}+f^{2})^{\frac{\beta}{2}}} \tag{1}\] where \(\alpha\) is the spectral strength, \(f\) is the dynamical frequency and \(\beta\) is the spectral slope. For \(f\gg f_{0}\), equation 1 simplifies to: \[S=\alpha f^{-\beta} \tag{2}\] Figure 3 depicts the PSD variation for one of the PRNs (PRN 5), which shows a power-law behaviour of the PSD and is therefore used for the calculation of \(\beta\); the slope is found to be 0.459\(\pm\)0.039, indicating the occurrence of weak scintillation detected by NavIC. Figure 1: The C/N\({}_{o}\) (dB-Hz) variation during the entire day of December 04, 2017, as observed by the L5 signal of NavIC satellite PRNs 2-7. The C/N\({}_{o}\) values that had dropped to zero are designated here as drops to 30 dB-Hz. 
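The spectral slope estimation of equation (2) amounts to a straight-line fit in log-log coordinates of the periodogram of the C/N\({}_{o}\) series. A minimal Python sketch of this procedure is given below; it is our own illustration, with a synthetic stand-in for the 1 Hz C/N\({}_{o}\) samples and an assumed low-frequency cutoff, not the exact pipeline used for Figure 3.

```python
import numpy as np
from scipy import signal

# Placeholder: `cn0` should hold 1 Hz samples of the NavIC L5 C/N0 (dB-Hz)
# series for one PRN over the interval of interest; a synthetic random-walk
# series stands in here.
rng = np.random.default_rng(5)
cn0 = 45 + 0.01 * np.cumsum(rng.normal(size=3600))

# Periodogram of the detrended series, then a least-squares fit of
# log S = log(alpha) - beta * log(f), i.e. equation (2), for f >> f0.
f, psd = signal.periodogram(cn0 - cn0.mean(), fs=1.0, detrend="linear")
mask = (f > 1e-2) & (psd > 0)          # assumed cutoff, well above f0
slope, _ = np.polyfit(np.log(f[mask]), np.log(psd[mask]), 1)
print(f"estimated spectral slope beta = {-slope:.3f}")
```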
Figure 3: The PSD variation with the least-squares fit (black solid line) and the smoothed data (red solid line), corresponding to the C/N\({}_{o}\) variation designated by horizontal bars in Figure 2, as observed from PRN 5 during December 4, 2017. Figure 2: The hourly binned variance plots of C/N\({}_{o}\) for all PRNs of NavIC on December 4, 2017. The most significant rises in variance values are observed for PRN 2 (16-17 UT bin) and PRN 5 (12-13 UT bin). ## 4 Conclusions The ionosphere over the Indian longitude sector is highly dynamic and geosensitive as a result of the presence of the northern crest of the EIA. Previous studies show the development of power-law phase screen theories and models for scintillation under weak and strong conditions. However, the nature of the phase screen associated with scattering near the EIA in the geosensitive Indian region, with observations utilizing the NavIC satellites' signal (L5: 1176.45 MHz), has not been studied in the literature. To address this problem, the present study was performed for December 4, 2017, a day on which a minor geomagnetic storm caused fluctuations in the NavIC C/N\({}_{o}\) variation. The spectral slope obtained from the PSD was 0.459\(\pm\)0.039. This paper, for the first time, demonstrated the capability of NavIC for detecting such irregularities from the PSD analysis. This single-day proof-of-concept study will be followed up with a detailed scintillation analysis. ## 5 Acknowledgements The authors acknowledge the Space Applications Centre, Indian Space Research Organization, for providing the NavIC receivers to the Department of Astronomy, Astrophysics and Space Engineering, IIT Indore, under the project NGP-17. Further acknowledgements go to the World Data Center for Geomagnetism, Kyoto, for the Dst index freely available at [http://wdc.kugi.kyoto-u.ac.jp/kp/index.html](http://wdc.kugi.kyoto-u.ac.jp/kp/index.html).
2303.06211
Radiation and Reaction at One Loop
We study classical radiation fields at next-to-leading order using the methods of scattering amplitudes. The fields of interest to us are sourced when two massive, point-like objects scatter inelastically, and can be computed from one-loop amplitudes. We show that the real and imaginary parts of the amplitudes both play important but physically distinct roles in the radiation field. The amplitude's imaginary part directly computes the portion of radiation emitted under the influence of a body's self-field, i.e., through radiation reaction. This aspect of radiation reaction is directly linked to one-loop Compton amplitudes in electrodynamics, Yang-Mills theory and gravity. We also discuss the fascinating interplay between renormalisation, radiation reaction and classical field theory from this perspective.
Asaad Elkhidir, Donal O'Connell, Matteo Sergola, Ingrid A. Vazquez-Holm
2023-03-10T21:18:45Z
http://arxiv.org/abs/2303.06211v2
# Radiation and Reaction at One Loop ###### Abstract We study classical radiation fields at next-to-leading order using the methods of scattering amplitudes. The fields of interest to us are sourced when two massive, point-like objects scatter inelastically, and can be computed from one-loop amplitudes. We show that the real and imaginary parts of the amplitudes both play important but physically distinct roles in the radiation field. The amplitude's imaginary part directly computes the portion of radiation emitted under the influence of a body's self-field, i.e., through radiation reaction. This aspect of radiation reaction is directly linked to one-loop Compton amplitudes in electrodynamics, Yang-Mills theory and gravity. We also discuss the fascinating interplay between renormalisation, radiation reaction and classical field theory from this perspective. * 1 Introduction * 2 Field strengths from amplitudes * 2.1 States and observables * 2.2 Real and imaginary parts * 3 Technical simplifications * 3.1 Hierarchy of momenta * 3.2 Classical LO waveshape and heavy-particle crossing * 3.3 Vanishing cuts * 3.4 Vanishing integrals * 3.5 Real parts from single cuts and principal values * 4 Radiation * 4.1 QED * 4.2 QCD * 5 Reaction * 5.1 QED radiation reaction... * 5.2...QCD radiation reaction... * 5.3...GR radiation reaction * 6 Renormalisation * 6.1 Infrared divergences * 6.2 Real divergence * 6.3 Imaginary part * 6.4 QCD and gravity * 6.5 Renormalisation * 7 Classical confirmation * 8 Conclusions * A A derivation of the ALD force with cuts ## 1 Introduction Gravitational waveforms sourced by compact binary coalescence events are now the basic physical observable in precision studies of General Relativity (GR). These waveforms are closely related to the Riemann curvature: in the transverse traceless gauge, the waveform's second derivative is a curvature component. Future gravitational wave observatories will work at higher signal-to-noise ratios, and therefore will be sensitive to more subtle aspects of the waveform. This presents an exciting challenge for the theoretical physics community: to develop tools allowing for efficient determination of gravitational waveforms at new levels of precision [1; 2; 3; 4]. Precision computations in GR are a challenge because of the non-linearity of its perturbative structure. One hope for simplifying this non-linearity comes from the study of scattering amplitudes in quantum field theory. Scattering amplitudes have a remarkable property known as the "double copy", which allows us to obtain scattering amplitudes in gravitational theories given amplitudes in much simpler Yang-Mills theories [5; 6; 7; 8]. Furthermore, elegant and powerful tools have been developed to compute scattering amplitudes with remarkable ease. In this article, we will make heavy use of generalised unitarity [9; 10]. This method allows us to construct loop-level scattering amplitudes from tree amplitudes. We can further combine the double copy and generalised unitarity, effectively building the dynamical information necessary for gravitational waveforms from tree amplitudes in Yang-Mills theory. The union of generalised unitarity and the double copy has already proven very fruitful in the study of General Relativity, and provides a fresh perspective on the relativistic two-body problem [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55]. 
To date, much of the work on amplitudes and classical gravity has focused on understanding the interaction potential between gravitating masses. In contrast, our interest is directly in the radiation emitted during a dynamical process. We make use of a method for constructing radiation fields from amplitudes which has been developed in recent years [37; 39; 56; 57; 58; 59; 60; 61]. The basic idea is to use a quantum-mechanical language to describe the event; the physical observable to be computed becomes the expectation value of the Riemann curvature. In the classical limit, this expectation will equal the classical curvature, up to quantum corrections which can be systematically dropped. The formalism is general and can be applied to field strengths in a variety of theories: electromagnetism, Yang-Mills (YM) theory, and gravity. As their name suggests, the methods of scattering amplitudes are most directly applicable to events where two objects scatter, generating radiation, rather than to the more physically relevant bound binaries. Nevertheless, methods exist to connect scattering and bound-state physics. In some cases, we can simply analytically continue observables from the scattering to the bound cases [62; 63; 64; 65]. More generally, it is possible to build effective field theories (EFTs) describing general binary dynamics [47; 48; 56; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101], see [102; 103] for reviews. These EFTs can be matched to scattering data and then applied to the bound case, effectively forming a bridge between the two. Effective field theory is a powerful tool, and has been used very successfully in classical gravitational wave physics for many years now [92]. Most closely connected to this article are the "post-Minkowskian" effective theories [104; 105; 106; 107; 108; 109; 100; 101] and worldline quantum field theories [111; 112; 113; 114; 115; 116; 117; 118; 119]. These methods were applied to describe conservative binary dynamics at \(\mathcal{O}(G)\) and \(\mathcal{O}(G^{2})\)[66], at \(\mathcal{O}(G^{3})\)[120; 121] and at \(\mathcal{O}(G^{4})\) in references [122; 38]. Recently, the "HEFT" approach of [47; 48] has been introduced to the array of EFT-based methods. Inspired by the success of heavy-quark effective theory [123; 124; 125; 126; 127], the HEFT approach implements the classical limit as a large mass limit. The resulting decoupling between the heavy and light degrees of freedom exposes certain simplifications associated with the classical limit and combines nicely with the double copy [20; 47; 48; 49; 74; 128]. In this article, we build on previous work which studied scattering encounters between classical, point-like objects at leading order (LO) [56; 58; 59; 129; 37]. We describe the structure of field strength observables at next-to-leading order (NLO) in terms of scattering amplitudes. As we will see, the structure is remarkably simple. The waveform, as determined by amplitudes, naturally has two pieces. These are associated with the real and imaginary parts of a one-loop five-point amplitude. We will see that the imaginary part has the classical interpretation of the portion of radiation emitted by an object accelerating under its own self-field. That is, the imaginary part arises as a consequence of radiation reaction at one-loop order. 
This radiation from radiation-reaction is determined by Compton amplitudes in electrodynamics, YM theory, and in gravity. Our treatment makes it clear that this aspect of radiation reaction double-copies in a straightforward manner at NLO. We begin in section 2 with a discussion of the general structure of field-strength observables at NLO, before describing some technical simplifications we can take advantage of at this order in section 3. After those preliminaries, we dive into the main computations of the paper. First, in section 4, we determine the radiation at one loop which is associated with the real part of the scattering amplitude. This part of the radiation field is classically associated with essentially conservative forces (e.g. the Lorentz force in electromagnetism). We focus on examples in electrodynamics and Yang-Mills theory. In section 5, we turn to the imaginary part of the amplitude which determines a well-defined, physically distinct portion of the waveform. This part is associated with the radiation emitted from the particle under the influence of its self-field. As radiation reaction is intimately associated with renormalisation, we present a thorough discussion of the renormalisation of the one-loop five-point amplitude in QED which determines the radiation field in section 6. In this section, we justify the omission of certain cuts which one might naively think could contribute to the classical radiation field. We show that these cuts are fully quantum after renormalisation in the on-shell scheme. Along the way we discuss infrared divergences. Finally, we turn to a detailed classical verification of our results in the context of electrodynamics before concluding. In appendix A, we summarise a classical perspective on the Abraham-Lorentz-Dirac self-field inspired by an old article of Coleman. This perspective is intended to clarify how the quantum-mechanical approach coincides with the classical approach to radiation reaction. #### Note added While finalising this paper we learned about parallel research presented in references [130] and [131] which contain some overlap with our work. Further related work by one of the present authors and collaborators will appear in the forthcoming article [132]. We thank the authors for cooperating with us in the submission of our work, and for sharing advance copies of their drafts with us. ## 2 Field strengths from amplitudes Our goal is to compute the radiation field generated by a scattering event involving two point-like classical objects using the methods of scattering amplitudes. The basic observable of interest is the field strength (in electrodynamics and YM theory) or the Riemann curvature (in gravity), both of which are very similar in structure; we will refer to both generically as "field strengths". In this section, we explain how to determine these field strengths from scattering amplitudes at next-to-leading order (NLO) accuracy. We begin with a short review of the connection between amplitudes and observables, focusing on the case of the field strength, which will also be an opportunity to introduce some notation. ### States and observables Field strengths, as observables in themselves, were first discussed from the perspective of amplitudes in references [37; 133] and were recently reviewed in [134; 135]. Amplitudes are quantum-mechanical objects, so we must start by specifying an initial quantum state which happens to be in the domain of validity of the classical approximation. 
If we also arrange initial conditions so that we may rely on the classical approximation throughout the scattering event, the equivalence principle guarantees that the quantum treatment will agree with a classical treatment up to small quantum corrections which we systematically drop. This is the basic philosophy of the KMOC approach [59] to extracting classical physics from scattering amplitudes. We will follow the notation of KMOC closely below. We choose our initial state to be \[\ket{\psi}=\int\mathrm{d}\Phi(p_{1},p_{2})\,\phi_{b}(p_{1},p_{2})\ket{p_{1},p_ {2}}\,, \tag{1}\] where, following [59], we write \[\mathrm{d}\Phi(p)\equiv\frac{\mathrm{d}^{D}p}{(2\pi)^{D}}(2\pi)\delta(p^{2}-m^{2} )\,\Theta(p^{0})\equiv\hat{\mathrm{d}}^{D}p\,\hat{\delta}(p^{2}-m^{2})\,\Theta(p ^{0}) \tag{2}\] for the on-shell phase-space measure of a single particle with mass \(m\). Hatted derivatives and delta-functions are defined to absorb factors of \(2\pi\). Meanwhile the ket \(|p_{1},p_{2}\rangle\) involves two different particles -- quanta of entirely different quantum fields -- with masses \(m_{1}\) and \(m_{2}\) and momenta \(p_{1}\) and \(p_{2}\). Since a plane-wave state of a massive particle has no classical interpretation, we have placed our particles in a wavepacket \(\phi_{b}(p_{1},p_{2})\). This wavepacket should individually localise each particle with an uncertainty which is very small compared to any relevant classical scale in our process (for example, the impact parameter). As an example, we could choose \[\phi_{b}(p_{1},p_{2})\equiv e^{ib_{1}\cdot p_{1}}e^{ib_{2}\cdot p_{2}}\phi_{1} (p_{1})\phi_{2}(p_{2})\,, \tag{3}\] where \(\phi_{i}(p_{i})\) are sharply-peaked functions of the momenta. The two-particle wave-function in equation (3) displaces particle \(i\) by a distance \(b_{i}\) relative to an origin; then the impact parameter is \(b_{12}=b_{1}-b_{2}\). We will soon find it very convenient to extend our notation for phase-space measures by writing the measure for several particles as \[\mathrm{d}\Phi(p_{1},p_{2},\ldots)=\mathrm{d}\Phi(p_{1})\mathrm{d}\Phi(p_{2}) \cdots\,. \tag{4}\] We also define the appropriate delta-function \(\delta_{\Phi}(p)\) with respect to this norm such that \[\int\mathrm{d}\Phi(p)\,\delta_{\Phi}(p-p^{\prime})\,f(p)=f(p^{\prime})\,, \tag{5}\] for any smooth function \(f(p)\). Our basic task is to compute the future expectation value of a field strength operator. In Yang-Mills theory, the relevant operator is the field strength tensor \[\mathbb{F}^{a}_{\mu\nu}=\partial_{\mu}\mathbb{A}^{a}_{\nu}-\partial_{\nu} \mathbb{A}^{a}_{\mu}+gf^{abc}\mathbb{A}^{b}_{\mu}\mathbb{A}^{c}_{\nu}\,. \tag{6}\] There is one immediate simplification from working in the far-field limit. In the far field, the expectation value of the Yang-Mills potential \(\mathbb{A}(x)\) is inversely proportional to the large radius \(r\) between the observer and the scattering event. We will only be interested in this leading \(1/r\) behaviour. As a result we may replace the full non-Abelian field strength with its abelianised version: \[\mathbb{F}^{a}_{\mu\nu}\simeq\partial_{\mu}\mathbb{A}^{a}_{\nu}-\partial_{\nu} \mathbb{A}^{a}_{\mu}\,. \tag{7}\] In gravity, we are only interested in the expectation value of the linearised Riemann tensor for the same reason. It may be worth emphasising that there is still non-linear (non-Abelian) dynamics in the core of spacetime. The state in the far future is \(S\left|\psi\right\rangle\) since the \(S\) matrix is the all-time evolution operator. 
Placing our detector at a position \(x\) near lightlike future infinity, the observable of interest to us in Yang-Mills theory is \[F^{a}_{\mu\nu}(x)\equiv\langle\psi|S^{\dagger}\mathbb{F}^{a}_{\mu\nu}(x)S|\psi \rangle. \tag{8}\] To connect with scattering amplitudes, we use the mode expansion for the quantum field \(\mathbb{A}^{a}_{\mu}\): \[\mathbb{A}^{a}_{\mu}(x)=\sum_{\eta}\int\mathrm{d}\Phi(k)\left[\varepsilon^{ \eta}_{\mu}(k)a^{a}_{\eta}(k)e^{-ik\cdot x}+\mathrm{h.c.}\right]\,, \tag{9}\] so that \[F^{a}_{\mu\nu}(x)=2\,\mathrm{Re}\sum_{\eta}\int\mathrm{d}\Phi(k)\left[-ik_{[ \mu}\varepsilon^{\eta}_{\nu]}(k)\,\langle\psi|S^{\dagger}a^{a}_{\eta}(k)S|\psi \rangle\,e^{-ik\cdot x}\right]\,. \tag{10}\] Most of our focus in this article will be on the computation of \(\langle\psi|S^{\dagger}a^{a}_{\eta}(k)S|\psi\rangle\). Once this quantity is known, an explicit expression for the field strength can be found by integration. In gravity, defining the curvature expectation \[R_{\mu\nu\rho\sigma}(x)\equiv\langle\psi|S^{\dagger}\mathbb{R}_{\mu\nu\rho \sigma}(x)S|\psi\rangle\, \tag{11}\] it similarly follows that \[R_{\mu\nu\rho\sigma}(x)=\kappa\,\mathrm{Re}\sum_{\eta}\int\mathrm{d}\Phi(k) \left[k_{[\mu}\varepsilon^{\eta}_{\nu]}(k)k_{[\rho}\varepsilon^{\eta}_{\sigma] }(k)\,\langle\psi|S^{\dagger}a_{\eta}(k)S|\psi\rangle\,e^{-ik\cdot x}\right]\,. \tag{12}\] We have introduced the constant \(\kappa\), defined in terms of Newton's constant by \(\kappa=\sqrt{32\pi G}\), and the annihilation operator \(a_{\eta}(k)\) of a graviton state with helicity \(\eta\) and momentum \(k\). As in gauge theory, the key dynamical quantity to be determined is \(\langle\psi|S^{\dagger}a^{a}_{\eta}(k)S|\psi\rangle\). The field strength of equation (10) and the curvature (12) both involve an integration over the phase space of a massless particle. At large distances, this integral can be reduced to a one-dimensional Fourier transform using standard methods (see [134, 135] for a recent review). Writing the observation coordinate as \(x=(x^{0},\mathbf{x})\) and introducing the retarded time \(u=x^{0}-|\mathbf{x}|\), the results are \[F^{a}_{\mu\nu}(x)=\frac{-1}{4\pi|\mathbf{x}|}2\,\mathrm{Re}\int_{0}^{\infty} \hat{\mathrm{d}}\omega\,e^{-i\omega u}\sum_{\eta}k_{[\mu}\varepsilon^{\eta}_{ \nu]}(k)\ \langle\psi|S^{\dagger}a^{a}_{\eta}(k)S|\psi\rangle\, \tag{13}\] in Yang-Mills theory, and \[R_{\mu\nu\rho\sigma}(x)=\frac{-\kappa}{4\pi|\mathbf{x}|}\,\mathrm{Re}\int_{0}^ {\infty}\hat{\mathrm{d}}\omega\,e^{-i\omega u}\sum_{\eta}\,ik_{[\mu} \varepsilon^{\eta}_{\nu]}(k)k_{[\rho}\varepsilon^{\eta}_{\sigma]}(k)\ \langle\psi|S^{\dagger}a_{\eta}(k)S|\psi\rangle\, \tag{14}\] in gravity. Note that in both integrals (13) and (14) the momentum of the messenger reads \(k^{\mu}=\omega(1,\hat{\mathbf{x}})\). In terms of amplitudes and quantum field theory, then, the object we need to compute is \[\alpha_{\eta}(k)\equiv\langle\psi|S^{\dagger}a_{\eta}(k)S|\psi\rangle\, \tag{15}\] where \(a_{\eta}(k)\) is an annihilation operator for the relevant field. We will refer to this quantity as the "waveshape", since it is the parameter describing the coherent state of radiation which has the same field strength as given in equation (10). The connection between amplitudes, coherent states and radiation was the topic of reference [42]. Having discussed the general connection between amplitudes and field strengths, let us now understand how to construct the waveshape from perturbative scattering amplitudes. 
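Before doing so, it may help to see how simple this last step is once the waveshape is known: the integrals (13) and (14) reduce, up to polarization structure, to a one-dimensional Fourier transform over positive frequencies. The Python sketch below (ours, purely illustrative) evaluates this scalar frequency integral for a toy waveshape; the Gaussian profile chosen for \(\alpha(\omega)\) is an arbitrary assumption, not a physical amplitude.

```python
import numpy as np

# Scalar version of the frequency integral in (13)-(14):
#   W(u) = Re \int_0^infty (domega / 2 pi) e^{-i omega u} alpha(omega),
# evaluated on a grid of retarded times u by a simple Riemann sum.
omega = np.linspace(1e-3, 20.0, 4000)
alpha = omega * np.exp(-(omega - 3.0) ** 2)   # assumed toy waveshape

u = np.linspace(-5.0, 5.0, 200)
domega = omega[1] - omega[0]
phase = np.exp(-1j * np.outer(u, omega))
waveform = (phase * alpha).sum(axis=1).real * domega / (2 * np.pi)

print(waveform[:5])   # the u-dependent profile of the far field
```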
One obvious way to proceed is simply to extend the KMOC framework of [59] to the computation of matrix elements by expanding \(S=1+iT\). This approach immediately leads to the leading order expression \[\alpha_{\eta}(k)=\int\mathrm{d}\Phi(p_{1}^{\prime},p_{2}^{\prime},p_{1},p_{2})\,\phi_{b}^{*}(p_{1}^{\prime},p_{2}^{\prime})\phi_{b}(p_{1},p_{2})\hat{\delta}^{D}(p_{\mathrm{tot}})\,i\mathcal{A}_{5,0}(p_{1},p_{2}\to p_{1}^{\prime},p_{2}^{\prime},k_{\eta})\,, \tag{16}\] where we are adopting the notation that \(\mathcal{A}_{n,L}\) is an \(n\) point, \(L\) loop amplitude. The waveshape is slightly more involved at one loop (order \(g^{5}\)), where we encounter two terms \[\begin{split}\alpha_{\eta}(k)=\int\mathrm{d}\Phi(p_{1}^{\prime},p_{2}^{\prime},p_{1},p_{2})\,\phi_{b}^{*}(p_{1}^{\prime},p_{2}^{\prime})\phi_{b}(p_{1},p_{2})\hat{\delta}^{D}(p_{\mathrm{tot}})\,\bigg{(}\,i\mathcal{A}_{5,1}(p_{1},p_{2}\to p_{1}^{\prime},p_{2}^{\prime},k_{\eta})\\ +\int\mathrm{d}\Phi(\tilde{p}_{1},\tilde{p}_{2})\hat{\delta}^{D}(\tilde{p}_{\mathrm{tot}})\mathcal{A}_{5,0}(p_{1},p_{2}\to\tilde{p}_{1},\tilde{p}_{2},k_{\eta})\mathcal{A}_{4,0}^{*}(\tilde{p}_{1},\tilde{p}_{2}\to p_{1}^{\prime},p_{2}^{\prime})\bigg{)}\,,\end{split} \tag{17}\] with the delta functions imposing the usual conservation of energy and momentum \[p_{\mathrm{tot}}=p_{1}+p_{2}-p_{1}^{\prime}-p_{2}^{\prime}-k=0,\ \ \tilde{p}_{\mathrm{tot}}=\tilde{p}_{1}+\tilde{p}_{2}-p_{1}^{\prime}-p_{2}^{\prime}=0, \tag{18}\] for external states and across the cut. It is easy to see that the structure of the one-loop waveshape (17) is indeed very similar to the impulse described in [59]: one sums (\(i\) times) the one-loop amplitude and the specific cut shown in equation (17). However, in this article, we find it to be very useful to rearrange the observable in a form which clarifies the physics while also simplifying aspects of the computation.

### Real and imaginary parts

One clue that there is another way of constructing the observable is the fact that the two terms in (17) instruct us to sum \(i\) times the amplitude and the cut of the amplitude. Since cuts arise from the imaginary parts of the amplitude, it is clear that the combination \(i{\cal A}_{1}+{\rm Im}\,{\cal A}_{1}\) is _removing_ an imaginary part of the amplitude. However it is important to realise that the cut in equation (17) is not the complete imaginary part of the amplitude: the whole imaginary part is the sum of several distinct cuts. In fact we will see that one particular cut survives in the classical limit, and is directly relevant to radiation reaction. The usefulness of real and imaginary parts of amplitudes in the construction of KMOC-style classical observables was first emphasised in reference [36], which studied the impulse in classical scattering. The authors found that classically-singular terms1 are absent in the real part of the amplitude, while singular terms did appear in the imaginary part. These classically-singular terms cancelled among the different contributions to the imaginary part. We will soon find an analogous phenomenon in the waveshape. Real and imaginary parts also play a crucial role in eikonal methods, see for instance references [16; 42; 136; 137; 138]. Footnote 1: These terms involve inverse powers of \(\hbar\) and must cancel in observables. They are sometimes known as “superclassical” or “hyperclassical” terms. Real and imaginary parts of amplitudes are intimately connected to unitarity of the \(S\) matrix.
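Schematically, if we write the one-loop amplitude as \(\mathcal{A}_{1}=\operatorname{Re}\mathcal{A}_{1}+i\operatorname{Im}\mathcal{A}_{1}\), then \[i\mathcal{A}_{1}+\operatorname{Im}\mathcal{A}_{1}=i\operatorname{Re}\mathcal{A}_{1}\,,\] so if the cut in equation (17) were the complete imaginary part, only the real part of the amplitude would enter the observable. Since that cut is only one of several making up \(\operatorname{Im}\mathcal{A}_{1}\), a remainder survives; as we will see, it is this remainder which encodes radiation reaction in the classical limit.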
To separate these parts of the amplitude, we first use \(S=1+iT\) and the unitarity relation \[-i(T-T^{\dagger})=T^{\dagger}T \tag{19}\] to write the waveshape (15) as \[\begin{split}\alpha_{\eta}(k)&=\frac{1}{2}\,\langle\psi|ia_{\eta}(k)\,(T+T^{\dagger})-ia_{\eta}(k)\,T^{\dagger}T+2T^{\dagger}\,a_{\eta}(k)T|\psi\rangle\\ &=\frac{1}{2}\,\langle\psi|ia_{\eta}(k)\,(T+T^{\dagger})-[a_{\eta}(k),T^{\dagger}]\,T+T^{\dagger}\,[a_{\eta}(k),T]|\psi\rangle\,.\end{split} \tag{20}\] The first of these terms involves the combination \((T+T^{\dagger})/2\); up to a momentum-conserving delta function, this is the "real" part of the amplitude. It can be evaluated by cutting one internal propagator, and replacing all others by principal value prescriptions as we discuss below in section 3.5. The other terms involve cuts of the amplitude and are therefore linked to its imaginary part. Because the cuts involve two \(T\) matrices (one conjugated), and we work at order \(g^{5}\), it follows that we need one insertion of a \(g^{2}\) tree amplitude and a \(g^{3}\) tree amplitude. There is a short list of possible amplitudes: at order \(g^{2}\), we encounter four-point amplitudes involving four scalars, Compton-type amplitudes with two scalars and two messengers, or four messenger amplitudes. At order \(g^{3}\) we simply dress the order \(g^{2}\) amplitudes with one additional messenger. Furthermore, the commutator \([a_{\eta}(k),T]\) vanishes unless the corresponding amplitude contains at least one outgoing messenger. Let us first consider the term \[\begin{split}\alpha_{\eta}(k)&\supset\frac{1}{2}\,\langle\psi|T^{\dagger}\left[a_{\eta}(k),T\right]|\psi\rangle\\ &=\frac{1}{2}\int\mathrm{d}\Phi(p_{1}^{\prime},p_{2}^{\prime},p_{1},p_{2})\phi_{b}^{*}(p_{1}^{\prime},p_{2}^{\prime})\phi_{b}(p_{1},p_{2})\ \langle p_{1}^{\prime},p_{2}^{\prime}|T^{\dagger}\left[a_{\eta}(k),T\right]|p_{1},p_{2}\rangle\,.\end{split} \tag{21}\] The \(T\) matrix here acts on the incoming two-scalar state, and (because of the commutator) must involve at least one outgoing messenger. The only possibility in our list of order \(g^{2}\) and \(g^{3}\) amplitudes is the \(2\to 3\) amplitude involving radiation of one messenger as the two scalars scatter. Similarly \(\langle p_{1}^{\prime},p_{2}^{\prime}|\,T^{\dagger}\) must evaluate to the four-point four-scalar amplitude as the only order \(g^{2}\) amplitude with two final-state scalars. Thus2, Footnote 2: Throughout this paper, we adopt the convention of drawing massive particle lines as solid lines. We will always indicate particle 1 with a red line and particle 2 with a blue one.
\[\langle p_{1}^{\prime},p_{2}^{\prime}|T^{\dagger}[a_{\eta}(k),T]|p_{1},p_{2}\rangle=\big[\text{diagram: the five-point tree }\mathcal{A}_{5,0}\text{ glued to the conjugated four-point four-scalar tree across a two-scalar cut}\big]\,. \tag{22}\] Among the remaining cuts of equation (20) at this order are contributions with a scalar and a messenger crossing the cut; the four-point amplitude appearing here is the Compton amplitude. That is, \[\big[\text{diagram: a cut gluing a five-point tree to a Compton amplitude, with one scalar and one messenger crossing the cut}\big]\,. \tag{24}\] The presence of Compton amplitudes in this cut is significant. In fact, we will see that in the classical limit it is _only_ these cuts which survive. To substantiate our arguments, in appendix A we will present a purely classical derivation which relates cuts and dissipative forces.

## 3 Technical simplifications

The full quantum structure of the waveshape \(\alpha_{\eta}(k)\) can be constructed from amplitudes using this formalism. While there are many interesting aspects of the quantum-mechanical case, the focus of this article is on the classical waveshape. Obviously the classical case is simpler than the full quantum case, and we now wish to discuss the simplifications we can take advantage of in our work. More specifically, we are interested in the classical limit of small angle scattering, often known as the "post-Minkowski" expansion in the gravitational context. This is a relativistically covariant perturbative expansion of classical quantities -- in our case, of the classical radiation field. There are a number of aspects of this expansion which have been discussed in detail elsewhere, for example in references [39, 59, 66, 122, 139]. We therefore only highlight key aspects for our work here.

### Hierarchy of momenta

One simple-minded way to separate classical and quantum effects is to restore factors of \(\hbar\): clearly all \(\hbar\)'s must disappear in classical expressions, and quantum corrections will be suppressed by (dimensionless ratios involving) \(\hbar\). In this paper, the most important factors of \(\hbar\) appear in the momenta of messengers: the photons, gluons or gravitons which mediate the interaction.
Writing a generic messenger momentum as \(q\) compared to a point-particle momentum \(p\), we note that \(q\) scales as \(\hbar\), \[q=\hbar\bar{q}\,, \tag{3.1}\] while the particle momentum \(p\) scales as its mass: \[p=mu\,. \tag{3.2}\] Here we introduced a classical wavenumber \(\bar{q}\) with dimensions of inverse length, and the classical proper velocity \(u\). The ratio of these, or rather of specific components, is of order the (reduced) Compton wavelength of the particle \(\hbar/m\). The classical approximation is valid only when the wavelengths of messengers, described by \(\bar{q}\), are much larger than the Compton wavelengths of the particles. Thus we treat messenger momenta as being very small compared to particle momenta, schematically \[p\gg q\,. \tag{3.3}\] Note that this can be implemented by treating the point particles as being very heavy, a point of view which is emphasised for example in references [20; 47; 48; 49; 74; 128; 129]. Here we simply proceed by Laurent expanding integrands in terms of variables suppressed by messenger momenta relative to particle momenta. Throughout this paper the expansion will be essentially trivial. The most dangerous superclassical terms are present only in the imaginary part, and cancel directly at the level of cuts without requiring detailed computation. For the remainder, \(\hbar\) power counting in a convenient gauge shows that all diagrams are classical at leading order in the \(\hbar\) expansion.

### Classical LO waveshape and heavy-particle crossing

The waveshape (16) can be written at leading order as \[\begin{split}\alpha_{\eta}(k)=\int&\mathrm{d}\Phi(p_{1}^{\prime},p_{2}^{\prime},p_{1},p_{2})\,\phi^{*}(p_{1}^{\prime},p_{2}^{\prime})\phi(p_{1},p_{2})\,e^{ib_{1}\cdot(p_{1}-p_{1}^{\prime})}e^{ib_{2}\cdot(p_{2}-p_{2}^{\prime})}\\ &\times i\mathcal{A}_{5,0}(p_{1},p_{2}\to p_{1}^{\prime},p_{2}^{\prime},k_{\eta})\,\hat{\delta}^{D}(p_{1}^{\prime}+p_{2}^{\prime}+k-p_{1}-p_{2}).\end{split} \tag{3.4}\] Now let's simplify this expression in the classical limit. As we do so we will learn something about the tree amplitude. We will take the classical limit in two slightly different manners. First, write the "outgoing" momenta3 as \[p_{i}^{\prime}=p_{i}+q_{i}\,. \tag{3.5}\] Footnote 3: As our observable is an expectation value, the apparent in and out states are both in states. Nevertheless it can be convenient at times to think of the primed momenta as outgoing. The momenta \(q_{i}\) are messenger momenta, satisfying \(k=-q_{1}-q_{2}\). Now, the on-shell phase space measure of the outgoing particle \(i\) is \[\mathrm{d}\Phi(p_{i}+q_{i})=\hat{\mathrm{d}}^{D}q_{i}\,\Theta(p_{i}^{0}+q_{i}^{0})\,\hat{\delta}(p_{i}^{2}+2p_{i}\cdot q_{i}+q_{i}^{2}-m_{i}^{2})\,. \tag{3.6}\] We simplify this as follows. First, the energy \(p_{i}^{0}\) is always much greater than \(q_{i}^{0}\) in the classical region (since pair-production must be kinematically suppressed). Therefore we replace the theta functions by unity. Next, we note that \(p_{i}\) is an on-shell initial momentum so that \(p_{i}^{2}=m_{i}^{2}\). We further simplify the delta function noting that \(q_{i}^{2}\) is suppressed by a Compton wavelength relative to \(p_{i}\cdot q_{i}\).
Thus, the waveshape becomes \[\begin{split}\alpha_{\eta}(k)=\int&\mathrm{d}\Phi(p_{1},p_{2})\hat{\mathrm{d}}^{D}q_{1}\hat{\mathrm{d}}^{D}q_{2}\,\hat{\delta}(2p_{1}\cdot q_{1})\hat{\delta}(2p_{2}\cdot q_{2})\,\phi^{*}(p_{1}+q_{1},p_{2}+q_{2})\phi(p_{1},p_{2})\\ &\times e^{-ib_{1}\cdot q_{1}}e^{-ib_{2}\cdot q_{2}}\,i\mathcal{A}_{5,0}(p_{1},p_{2}\to p_{1}+q_{1},p_{2}+q_{2},k_{\eta})\,\hat{\delta}^{D}(k+q_{1}+q_{2})\,.\end{split} \tag{3.7}\] Next, we simplify the wavefunctions by noting \[\phi(p_{1}+q_{1},p_{2}+q_{2})\simeq\phi(p_{1},p_{2})\,. \tag{3.8}\] The origin of this fact is that the messenger momenta are suppressed by the Compton wavelength relative to the particle momenta, and on this scale the wavefunctions are rather flat. Indeed if the momentum-space wavefunctions were to localise the momenta to within a few Compton wavelengths, then the position-space uncertainty would be very large [59]. As a result, the leading-order waveshape is \[\begin{split}\alpha_{\eta}(k)=\int&\mathrm{d}\Phi(p_{1},p_{2})\hat{\mathrm{d}}^{D}q_{1}\hat{\mathrm{d}}^{D}q_{2}\,\hat{\delta}(2p_{1}\cdot q_{1})\hat{\delta}(2p_{2}\cdot q_{2})\,|\phi(p_{1},p_{2})|^{2}\,e^{-ib_{1}\cdot q_{1}}e^{-ib_{2}\cdot q_{2}}\\ &\times i\mathcal{A}_{5,0}(p_{1},p_{2}\to p_{1}+q_{1},p_{2}+q_{2},k_{\eta})\,\hat{\delta}^{D}(k+q_{1}+q_{2})\,.\end{split} \tag{3.9}\] On the other hand, returning to equation (3.4) and instead setting \[p_{i}=p_{i}^{\prime}+q_{i}\,, \tag{3.10}\] we find, using the same logic, \[\begin{split}\alpha_{\eta}(k)=\int&\mathrm{d}\Phi(p_{1}^{\prime},p_{2}^{\prime})\hat{\mathrm{d}}^{D}q_{1}\hat{\mathrm{d}}^{D}q_{2}\,\hat{\delta}(2p_{1}^{\prime}\cdot q_{1})\hat{\delta}(2p_{2}^{\prime}\cdot q_{2})\,|\phi(p_{1}^{\prime},p_{2}^{\prime})|^{2}\,e^{-ib_{1}\cdot q_{1}}e^{-ib_{2}\cdot q_{2}}\\ &\times i\mathcal{A}_{5,0}(p_{1}^{\prime}-q_{1},p_{2}^{\prime}-q_{2}\to p_{1}^{\prime},p_{2}^{\prime},k_{\eta})\,\hat{\delta}^{D}(k+q_{1}+q_{2})\,.\end{split} \tag{3.11}\] There is nothing stopping us from dropping the primes in this equation, since \(p_{i}^{\prime}\) are simply variables of integration. Comparing equations (3.9) and (3.11), the only difference is in the details of the momentum dependence in the tree amplitude. The wavefunction is unspecified; we have only used properties it must have in the classical limit. We conclude that \[\mathcal{A}_{5,0}(p_{1},p_{2}\to p_{1}+q_{1},p_{2}+q_{2},k_{\eta})=\mathcal{A}_{5,0}(p_{1}-q_{1},p_{2}-q_{2}\to p_{1},p_{2},k_{\eta})\,. \tag{3.12}\] This expression can only hold for the classical "fragment" of the amplitude, in the sense of reference [42]: at tree level, the classical fragment is simply the dominant term in the classical Laurent expansion. An alternative perspective is that this crossing relation follows from the scale separation between the heavy-mass scale \(m_{1}\) and \(m_{2}\) in the momenta of the scalar particles, and the light scale of order \(q\) in the messengers. This decoupling is made manifest in heavy particle effective theories, which could also be used to compute these amplitudes. The result, then, is a kind of crossing relation valid for heavy particle effective theories. It essentially allows us to cross the messenger momentum leaving the large particle momentum untouched. We will find this result is very useful below. It is straightforward to check this heavy-particle crossing relation in explicit examples: the QED amplitude is visible in equation 5.46 of reference [59] while the gravitational five point case is written in equation 4.21 of reference [129].
In both cases, heavy-particle crossing is achieved by eliminating the momentum \(k\) in favour of \(q_{1}+q_{2}\), and then replacing \(q_{i}\to-q_{i}\). This has the effect of replacing \(p_{i}+q_{i}\) with the desired \(p_{i}-q_{i}\) without clashing with the relation between \(k\) and the \(q_{i}\) (this relation does not pick up a sign in the crossing). Returning to the waveshape, we shall write \[\alpha_{\eta}(k)=\left\langle\!\!\left\langle\int\hat{\mathrm{d}}^{D}q_{1}\hat{\mathrm{d}}^{D}q_{2}\,\hat{\delta}(2p_{1}\cdot q_{1})\hat{\delta}(2p_{2}\cdot q_{2})\,e^{-ib_{1}\cdot q_{1}}e^{-ib_{2}\cdot q_{2}}\,\left(\cdots\right)\right\rangle\!\!\right\rangle \tag{3.13}\] at LO and NLO. Here the dots signify a general integrand, made of amplitudes and cuts. The large angle brackets remind us that the result must be integrated against the wavefunctions. However, once the integrand has been fully simplified in the classical limit, in particular to cancel terms involving singular powers of \(\hbar\), the integrand is smooth on the scale of the wavepacket. We can therefore formally take the wavepacket size to zero, so that the wavepacket integral simply localises the incoming momenta \(p_{i}\) on their classical values.

### Vanishing cuts

Earlier, we advertised that certain cuts which contribute to the full quantum waveshape cancel in the classical waveshape. We are now in a position to show this in detail. The result of this subsection is that \[\int\mathrm{d}\Phi(p_{1}^{\prime},p_{2}^{\prime},p_{1},p_{2})\,\phi^{*}(p_{1}^{\prime},p_{2}^{\prime})\phi(p_{1},p_{2})\,e^{ib_{1}\cdot(p_{1}-p_{1}^{\prime})}e^{ib_{2}\cdot(p_{2}-p_{2}^{\prime})}\,\hat{\delta}^{D}(p_{1}^{\prime}+p_{2}^{\prime}+k-p_{1}-p_{2})\times\big[\text{the two cut diagrams, written out in equation (3.16)}\big]=0\,. \tag{3.14}\] In other words, these two cuts make no contribution to the classical waveshape. This cancellation can be interpreted as the cancellation of classically-singular ("superclassical") terms which occur in the five point one-loop amplitude. The result can be seen as a generalisation of the removal of iterated trees in an exponentiated form of the amplitude along the lines of the eikonal or radial action at four points. There is more discussion of this kind of exponentiation in reference [42]. To see how the cancellation works, we adjust the initial and final states under the integral signs to reach \[\int\mathrm{d}\Phi(p_{1},p_{2},p_{1}+q_{1},p_{2}+q_{2})\,|\phi(p_{1},p_{2})|^{2}\,e^{-ib_{1}\cdot q_{1}}e^{-ib_{2}\cdot q_{2}}\,\hat{\delta}^{D}(k+q_{1}+q_{2})\times\big[\text{cut diagrams}\big]\,. \tag{3.15}\] Writing out the cut, this becomes \[\begin{split}\left\langle\!\!\left\langle\int\hat{\mathrm{d}}^{D}q_{1}\hat{\mathrm{d}}^{D}q_{2}\,\hat{\delta}(2p_{1}\cdot q_{1})\hat{\delta}(2p_{2}\cdot q_{2})\hat{\delta}^{D}(k+q_{1}+q_{2})\,e^{-ib_{1}\cdot q_{1}}e^{-ib_{2}\cdot q_{2}}\right.\right.\\ \qquad\times\int\hat{\mathrm{d}}^{D}\ell_{1}\hat{\mathrm{d}}^{D}\ell_{2}\ \hat{\delta}(2p_{1}\cdot\ell_{1})\hat{\delta}(2p_{2}\cdot\ell_{2})\ \hat{\delta}^{D}(\ell_{1}+\ell_{2}+k)\\ \qquad\Big[\mathcal{A}_{5,0}(p_{1},p_{2}\to p_{1}+\ell_{1},p_{2}+\ell_{2},k)\mathcal{A}_{4,0}(p_{1}+\ell_{1},p_{2}+\ell_{2}\to p_{1}+q_{1},p_{2}+q_{2})\\ \qquad\left.\left.-\mathcal{A}_{4,0}(p_{1}-q_{1},p_{2}-q_{2}\to p_{1}-\ell_{1},p_{2}-\ell_{2})\mathcal{A}_{5,0}(p_{1}-\ell_{1},p_{2}-\ell_{2}\to p_{1},p_{2},k)\Big]\right\rangle\!\!\right\rangle. \end{split} \tag{3.16}\] Using heavy-particle crossing, the two five point trees are shown to be equal. As for the four point trees, one could use a result analogous to this crossing to show that they match.
Alternatively, it is a simple point that these trees depend only on the \(t\) channel Mandelstam variable, which is the same in both terms, multiplied by the point-particle data. Thus, we conclude that the result vanishes. Our argument for this cancellation is somewhat indirect, and our confidence was improved after performing some detailed tests. First, we computed the gauge invariant and well-defined parts of the waveshape in QED and QCD with an automated code based on references [29] and [140], making no use of this heavy-particle crossing. As we discuss below we also computed \(\alpha\) analytically taking advantage of the cancellation due to heavy particle crossing, and find agreement with both results. Furthermore, another check is achieved through the eikonal formalism. There, the removal of superclassical terms relies on a factorisation property first outlined in [42] and [141] which is essentially equivalent to classical crossing. This cancellation is also made manifest in the HEFT approach [47; 48].

### Vanishing integrals

In our one-loop computations, we will encounter topologies including pentagons, boxes, triangles etc. Here we largely work at the level of the integrand. Nevertheless it is very useful to simplify our integrand by dropping terms which integrate to zero. The situation with loop integrals in the classical limit at four-points at one and two loops is very well understood and is thoroughly discussed for example in references [19; 39]. There are some similarities between four and five points. For example, we note that \[\int\hat{\mathrm{d}}^{D}\ell\frac{(\ell-q_{1})^{2}}{\ell^{2}(\ell-q_{1})^{2}(p_{1}\cdot\ell)(p_{2}\cdot\ell)}=0\,. \tag{3.17}\] One viewpoint is that this occurs because the integral is scaleless in dimensional regularisation. An alternative viewpoint is that the integral is irrelevant classically with any choice of regulator because it leads to a contact term connecting the two point-like particles. These contact terms are only non-vanishing outside the domain of validity of the classical theory, when the two particles are spatially separated by less than their Compton wavelength. As another example, consider the integral \[\begin{split}\int\hat{\mathrm{d}}^{D}\ell\frac{\ell^{2}}{\ell^{2}(\ell-q_{1})^{2}(p_{1}\cdot\ell)(p_{2}\cdot\ell)}&=\int\hat{\mathrm{d}}^{D}\ell\frac{1}{(\ell-q_{1})^{2}(p_{1}\cdot\ell)(p_{2}\cdot\ell)}\\ &=\int\hat{\mathrm{d}}^{D}\ell\frac{1}{\ell^{2}(p_{1}\cdot\ell)(p_{2}\cdot(\ell+q_{1}))}\,.\end{split} \tag{3.18}\] In the second step, we simply set \(\ell^{\prime}=\ell-q_{1}\), and then dropped the prime. We also set \(p_{1}\cdot q_{1}=0\), assuming that the \(\hbar\)-suppressed correction term of order \(q_{1}^{2}\) could be neglected. This integral is _not_ scaleless: indeed, there is a scale \(p_{2}\cdot q_{1}=-p_{2}\cdot k\) in the integral. Nevertheless we may still drop this integral: \[\int\hat{\mathrm{d}}^{D}\ell\frac{\ell^{2}}{\ell^{2}(\ell-q_{1})^{2}(p_{1}\cdot\ell)(p_{2}\cdot\ell)}\to 0\,. \tag{3.19}\] Again, the reason is that it leads to a contact term in position space. Note that care must be taken in the context of e.g. pentagon diagrams with three massless internal propagators; pinching one of these need not necessarily lead to a vanishing contact term.
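More generally, a quick way to see the contact-term statement is to suppose, for illustration, that the numerator has cancelled all massless propagators, and to carry out the Fourier transform over the messenger momentum (the delta functions are those of the measure in equation (3.13), up to mass factors): \[\int\hat{\mathrm{d}}^{4}q\,\hat{\delta}(2p_{1}\cdot q)\,\hat{\delta}(2p_{2}\cdot q)\,e^{-ib\cdot q}\times 1\;\propto\;\delta^{2}(b_{\perp})\,.\] The two delta functions restrict \(q\) to the two-plane orthogonal to both proper velocities, and the remaining integral is the Fourier transform of a constant: it has support only at vanishing impact parameter, outside the domain of validity of the classical description.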
### Real parts from single cuts and principal values

Let us now return to the waveform, and look in more detail at the term containing the real part of the amplitude: \[\begin{split}\alpha_{\eta}(k)|_{1}&\equiv\frac{1}{2}\left\langle\psi\right|ia_{\eta}(k)\left(T+T^{\dagger}\right)\left|\psi\right\rangle\\ &=\frac{1}{2}\int\mathrm{d}\Phi(p_{1}^{\prime},p_{2}^{\prime},p_{1},p_{2})\,\phi_{b}^{*}(p_{1}^{\prime},p_{2}^{\prime})\phi_{b}(p_{1},p_{2})\,i\left\langle p_{1}^{\prime},p_{2}^{\prime},k_{\eta}|T+T^{\dagger}|p_{1},p_{2}\right\rangle\,.\end{split} \tag{3.20}\] The matrix element appearing here can be expressed in terms of amplitudes as \[\langle p_{1}^{\prime},p_{2}^{\prime},k_{\eta}|T+T^{\dagger}|p_{1},p_{2}\rangle=\left(\mathcal{A}_{5,0}+\mathcal{A}_{5,0}^{\prime*}\right)\,\hat{\delta}^{D}(p_{1}+p_{2}-p_{1}^{\prime}-p_{2}^{\prime}-k)\,, \tag{3.21}\] where the five-point tree amplitudes are more explicitly \[\begin{split}\mathcal{A}_{5,0}&\equiv\mathcal{A}(p_{1},p_{2}\to p_{1}^{\prime},p_{2}^{\prime},k_{\eta})\,,\\ \mathcal{A}_{5,0}^{\prime}&\equiv\mathcal{A}(p_{1}^{\prime},p_{2}^{\prime},k_{-\eta}\to p_{1},p_{2})\,.\end{split} \tag{3.22}\] Notice that the initial and final states are swapped in \(\mathcal{A}_{5,0}^{\prime}\) relative to \(\mathcal{A}_{5,0}\). The close relationship between the conjugated amplitude \(\mathcal{A}_{5,0}^{\prime*}\) and the amplitude \(\mathcal{A}_{5,0}\) is discussed in many quantum field theory textbooks, though the focus is typically on the imaginary part \(i(\mathcal{A}_{5,0}-\mathcal{A}_{5,0}^{\prime*})\) because of its relevance to unitarity (see, for example, [142; 143; 144; 145] for helpful discussions in this particular context). Because the real part \(\mathcal{A}_{5,0}+\mathcal{A}_{5,0}^{\prime*}\) is relevant to us, it is worth giving an example to see how the combination works. We consider a one-loop diagram contributing to the amplitude \(\mathcal{A}_{5,1}\) in Yang-Mills theory: \[\big[\text{diagram: a pentagon in which two gluons are exchanged between the massive lines, with the external gluon emitted from line 2}\big] \tag{3.23}\] This diagram depends on a color factor \(\mathcal{C}\), a kinematic numerator \(N\) and a propagator structure \(P\). The Feynman rules lead to \[\begin{split} P^{-1}&=(\ell^{2}+i\epsilon)[(q_{1}-\ell)^{2}+i\epsilon][(p_{1}+\ell)^{2}-m_{1}^{2}+i\epsilon][(p_{2}-\ell)^{2}-m_{2}^{2}+i\epsilon]\\ &\qquad\qquad\times[(p_{2}-\ell-k)^{2}-m_{2}^{2}+i\epsilon]\,,\\ N&=\varepsilon_{\eta}^{*}\cdot(2p_{2}-2\ell)(2p_{1}+\ell)\cdot(2p_{2}-\ell)(2p_{1}+\ell+q_{1})\cdot(2p_{2}-\ell+q_{1}+2q_{2})\,.\end{split} \tag{3.24}\] Note the appearance of the (possibly complex) polarisation vector \(\varepsilon_{\eta}^{*}\). To describe the color factor, we suppose the initial color of particle \(i\) is specified by a color vector \(\chi_{i}\), while another vector \(\chi_{i}^{\prime}\) defines the final color. Let us further suppose that the outgoing gluon has adjoint color \(a\). Then we have \[\mathcal{C}=\bar{\chi}_{1}^{\prime}\cdot T_{1}^{b}\cdot T_{1}^{c}\cdot\chi_{1}\,\bar{\chi}_{2}^{\prime}\cdot T_{2}^{c}\cdot T_{2}^{a}\cdot T_{2}^{b}\cdot\chi_{2}\,. \tag{3.25}\] The contribution of this diagram to the amplitude is \[\mathcal{A}_{5,1}\supset-ig^{5}\mathcal{C}\int\hat{\mathrm{d}}^{D}\ell\,NP\,, \tag{3.26}\] since (in our conventions) the Feynman rules evaluate to \(i\) times the amplitude4.
Footnote 4: This is consistent with \(S=1+iT\), and the convention that, for example, the tree four-point amplitude in \(\lambda\phi^{4}/4!\) theory is \(\lambda\). As the initial and final states are interchanged in \(\mathcal{A}^{\prime}_{5,0}\), we instead encounter the diagram \[\big[\text{diagram: the conjugate of the pentagon in equation (3.23), with initial and final states interchanged}\big]\,. \tag{3.27}\] The color factor and (up to conjugation of the polarisation vector) the kinematic numerator of this diagram agree with those in equation (3.24), while in the conjugated amplitude each propagator appears with the opposite \(i\epsilon\) prescription. The combination \(\mathcal{A}_{5,0}+\mathcal{A}^{\prime*}_{5,0}\) therefore involves the real part of the propagator structure \(P\). It is very natural to obtain the imaginary part of the propagator structure using \[\frac{1}{p^{2}-m^{2}+i\epsilon}=\text{PV}\left(\frac{1}{p^{2}-m^{2}}\right)-\frac{i}{2}\hat{\delta}(p^{2}-m^{2})\,, \tag{3.32}\] where PV is the principal value5. The delta function here is equivalent to cutting a single particle. By counting powers of \(i\), it is clear that the imaginary part of our propagator structure is obtained by cutting an odd number of propagators. (This contrasts with the usual unitarity cuts at one loop which involve cutting two propagators.)
Footnote 5: In the closed time-path (Schwinger-Keldysh) approach to computing expectation values in field theory, the \(T^{\dagger}\) matrix arises from the part of the contour which goes “backwards” in time. Our diagrams contain five propagators, so in principle there are imaginary parts when we cut one, three or five propagators. However, three point amplitudes have no support in Minkowski space -- so there is no need to consider cutting five propagators. It is also easy to see that cutting three propagators necessarily leads to one three-point amplitude. Thus our imaginary parts necessarily arise from single cuts; all other propagators are then to be evaluated with the principal-value pole prescription. It is worth emphasising that this pole prescription appears naturally from general considerations.

## 4 Radiation

In this chapter we discuss the complete real part of the QED and QCD waveshape in detail at the level of the integrand. As we will see, taking the real part of the one-loop amplitude corresponds to isolating "conservative" contributions in radiative fields. By that we mean all radiation which is caused by one particle accelerating in the Lorentz/geodesic fields of the second one. That is, the forces acting on the particle are conservative, and (classically) omit the self-field of the particle. We return to these intrinsically dissipative self-force corrections to the radiation field in the next chapter of this manuscript. It may be worth commenting that the factor of \(i\) between the real and imaginary parts of the waveshape is itself a signal of time-reversal violation. We begin with electrodynamics. The waveshape in QED is remarkably simple yet it is physically interesting and closely connected to more complicated radiation fields. This is why we find it useful to discuss it in detail. Indeed the QED waveshape computes well-defined parts of the QCD waveshape, associated with specific ordered amplitudes. We will discuss how QED is embedded in QCD in more detail later. In this section, we will omit certain cuts that in principle could contribute to the real part of the amplitude, but which are intuitively quantum-mechanical. An example is the one-loop correction to the QED vertex. In the later section 6 we will return to all of these cuts and demonstrate that they are removed by renormalisation in the real part of the waveform. They nevertheless contribute to the imaginary part: counterterms must of course be real.

### QED

We are now at a good place to compute the QED waveshape. At NLO, this is fifth order in the coupling. But in electrodynamics we are free to give our two particles different charges \(Q_{1}\) and \(Q_{2}\), and correspondingly the five coupling powers in the NLO waveshape can be decomposed into four different charge sectors: \(Q_{1}Q_{2}^{4}\), \(Q_{1}^{2}Q_{2}^{3}\), \(Q_{1}^{3}Q_{2}^{2}\), and \(Q_{1}^{4}Q_{2}\). (There can be no terms of order \(Q_{1}^{5}\) or \(Q_{2}^{5}\) since at least one photon must connect the two particles for radiation to occur.) In the language of scattering amplitudes, the one-loop five-point amplitude in QED can be decomposed into four different partial amplitudes corresponding to these four charge sectors. There are really only two independent partial amplitudes to compute, which we can take to be the \(Q_{1}^{2}Q_{2}^{3}\) and \(Q_{1}Q_{2}^{4}\) amplitudes. The \(Q_{1}^{3}Q_{2}^{2}\) and \(Q_{1}^{4}Q_{2}\) partial amplitudes can be recovered by interchanging particles 1 and 2.
In this section, we start with the \(Q_{1}^{2}Q_{2}^{3}\) partial amplitude. In order to show how this waveshape can be extracted most simply, it is convenient to digress briefly on a related computation: the impulse at next-to-leading order.

#### NLO impulse

The impulse at NLO involves a one-loop four-point amplitude. This amplitude has real and imaginary parts, which play rather different roles in the observable. In particular, at one loop, the real part controls the scattering angle, while the imaginary part ensures that the on-shell condition is satisfied. Here it is most relevant to focus on the contribution of the real part of the amplitude to the observable, so we define \[\Delta p_{1}^{\mu}|_{\rm real}\equiv\int\hat{\mathrm{d}}^{D}q\,\hat{\delta}(2p_{1}\cdot q)\hat{\delta}(2p_{2}\cdot q)\,iq^{\mu}\,e^{-iq\cdot b}\,{\rm Re}\,{\cal A}_{4,1}(p_{1},p_{2}\to p_{1}+q,p_{2}-q)\,. \tag{4.1}\] One-loop four-point amplitudes involve at most four propagators. In this case, two of those propagators involve photons while the other two are associated with the massive particles with momenta \(p_{1}\) or \(p_{2}\). The real part of the amplitude arises by replacing a propagator with the corresponding delta function: this places one line on shell, effectively performing a single cut of the amplitude. First, we consider the result of placing the propagator for line 2 on shell. Diagrammatically, we must then consider \[\big[\text{diagram: the one-loop box with two photon exchanges, with the internal line 2 propagator placed on shell}\big] \tag{4.2}\] The contribution of this diagram to the amplitude is \[\operatorname{Re}\mathcal{A}_{4,1}|_{p_{2}}=\int\hat{\mathrm{d}}^{D}\ell\frac{1}{\ell^{2}(\ell-q)^{2}}\frac{1}{2}\hat{\delta}(2p_{2}\cdot\ell)N(\ell)\,, \tag{4.3}\] where \(N(\ell)\) is a numerator function we must fix. We fix the numerator by cutting all the propagators in the diagram: we have explicitly cut the massive propagator, and any terms in \(N(\ell)\) which are proportional to \(\ell^{2}\) or \((\ell-q)^{2}\) integrate to zero. In other words we can take each of the blobs in the diagram to be on-shell amplitudes, so that \[N(\ell)=\sum_{\text{helicities}}\mathcal{A}_{3,0}(p_{2},\ell)\mathcal{A}_{4,0}(p_{1},\ell,\ell-q)\mathcal{A}_{3,0}(p_{2}-\ell,\ell-q)\,. \tag{4.4}\] In \(D\) dimensions, the helicity sum is straightforward using formal polarisation vectors. Let us write the polarisation vector for a photon of momentum \(k\) and gauge (reference) vector \(q\) as \(\varepsilon(k,q)\). If we choose the gauge vector to be \(p_{1}\), then the Compton amplitude appearing in the cut is \[\mathcal{A}_{4,0}(p_{1},\ell,\ell-q)=2Q_{1}^{2}\,\varepsilon(\ell,p_{1})\cdot\varepsilon^{*}(\ell-q,p_{1})\,. \tag{4.5}\] The three-point amplitudes are trivially obtained from \[\mathcal{A}_{3,0}(p_{2},\ell)=2Q_{2}\,\varepsilon(\ell,p_{1})\cdot p_{2}\,. \tag{4.6}\] To perform the helicity sum, we only need the completeness relation which, in the case of a massive reference vector, is \[\sum_{\text{helicities}}\varepsilon^{\mu}(k,q)\varepsilon^{\nu*}(k,q)=-\left(\eta^{\mu\nu}-\frac{k^{\mu}q^{\nu}+k^{\nu}q^{\mu}}{k\cdot q}+q^{2}\frac{k^{\mu}k^{\nu}}{(k\cdot q)^{2}}\right)\,. \tag{4.7}\] This summation involves products of a polarisation vector and its conjugate. As usual in generalised unitarity, this structure naturally arises in the product of amplitudes appearing in the diagram (4.2) because a photon connecting two amplitudes must be outgoing with respect to one amplitude and incoming with respect to the other.
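To carry out the contraction explicitly (a short check, using only the cut conditions \(p_{2}\cdot\ell=0\) from the cut delta function, together with \(p_{1}\cdot q=p_{2}\cdot q=0\) from equation (4.1)), contract \(p_{2}\) into the completeness relation for the photon of momentum \(\ell\): \[\sum_{\text{helicities}}p_{2}\cdot\varepsilon(\ell,p_{1})\,\varepsilon^{*\alpha}(\ell,p_{1})=-p_{2}^{\alpha}+\frac{p_{1}\cdot p_{2}}{p_{1}\cdot\ell}\,\ell^{\alpha}\,,\] and similarly for the photon of momentum \(\ell-q\), using \(p_{2}\cdot(\ell-q)=0\) and \(p_{1}\cdot(\ell-q)=p_{1}\cdot\ell\). The two vectors are then contracted through the Compton amplitude: \[\left(-p_{2}+\frac{p_{1}\cdot p_{2}}{p_{1}\cdot\ell}\,\ell\right)\cdot\left(-p_{2}+\frac{p_{1}\cdot p_{2}}{p_{1}\cdot\ell}\,(\ell-q)\right)=m_{2}^{2}+\frac{(p_{1}\cdot p_{2})^{2}}{(p_{1}\cdot\ell)^{2}}\,\ell\cdot(\ell-q)\,,\] with the coupling factors \(2Q_{2}\cdot 2Q_{1}^{2}\cdot 2Q_{2}=8Q_{1}^{2}Q_{2}^{2}\) supplied by the three amplitudes.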
It then follows that the numerator is \[N(\ell)=8Q_{1}^{2}Q_{2}^{2}\left(m_{2}^{2}+\frac{(p_{1}\cdot p_{2})^{2}}{(p_{1}\cdot\ell)^{2}}\ell\cdot(\ell-q)\right)\,. \tag{4.8}\] As a result, the contribution to the observable is \[\begin{split}\Delta p_{1}^{\mu}|_{\text{real},p_{2}}=\frac{iQ_{1}^{2}Q_{2}^{2}}{2}\int&\hat{\mathrm{d}}^{D}q\,\hat{\delta}(u_{1}\cdot q)\hat{\delta}(u_{2}\cdot q)\,q^{\mu}\,e^{-iq\cdot b}\\ &\times\int\frac{\hat{\mathrm{d}}^{D}\ell}{\ell^{2}(\ell-q)^{2}}\frac{\hat{\delta}(u_{2}\cdot\ell)}{m_{1}}\left(1+\frac{(u_{1}\cdot u_{2})^{2}}{(u_{1}\cdot\ell)^{2}}\ell\cdot(\ell-q)\right)\,,\end{split} \tag{4.9}\] where we used the proper velocities \(u_{i}=p_{i}/m_{i}\). The contribution from cutting the massive propagator with incoming momentum \(p_{1}\) can be obtained from equation (4.9) by symmetrising on particles 1 and 2. By summing these two contributions we find agreement with the impulse given in equation 5.38 of reference [59]6. Footnote 6: Terms in that equation 5.38 which involve derivatives of delta functions arise from the imaginary part of the amplitude when included correctly in the observable. In this way, we reproduce the one-loop impulse in a very straightforward manner. However, we did so by cutting only the massive propagators, omitting possible cuts of the massless photon propagators. These cuts do not contribute to the classical impulse. Indeed, from a purely classical perspective the messengers are Fourier transforms of the Coulomb field, and therefore can transport no energy in the rest frame of the source. Thus, they cannot go on shell. From the perspective of amplitudes and cuts, one can show that when one of the messenger lines is cut the resulting amplitude is suppressed by a power of \(\hbar\). (This involves choosing a specific gauge for the polarisation objects of the messengers and a remaining cancellation among Feynman diagrams.) We shall omit this class of cuts in the following discussion for the same reason.

#### Lorentz impulse: heavy mass

We now turn to the radiation at order \(Q_{1}^{2}Q_{2}^{3}\). Classically, this radiation results from the acceleration of particle 2 due to the Lorentz force in the field of particle 1. To determine the radiation field we recycle much of the computation of the impulse. As in the case of the impulse above, we do not cut internal photon lines (the internal photons are both potential modes). We focus first on the contribution arising from cutting line 1, leading to the diagram \[\big[\text{diagram: the pentagon topology with the internal line 1 propagator placed on shell}\big] \tag{4.10}\] We will soon see that this diagram gives the dominant contribution to the waveshape when the mass \(m_{1}\) of particle 1 is large. Its contribution to the amplitude can be computed in a manner which is almost identical to the impulse.
The main novelty relative to our discussion of the impulse is the appearance of a five-point tree amplitude: \[i\mathcal{A}_{5,0}=\big[\text{diagrams: the five-point tree amplitude entering the cut (4.10)}\big]\,. \tag{4.11}\] \[\big[\text{equations (4.12)--(4.13): the evaluation of this tree and the first class of polarisation sums arising from the cut}\big]\] The second class of polarisation sum to be performed is \[\begin{split}&\sum_{\text{helicities}}p_{1}\cdot\varepsilon(\ell)\,p_{1}\cdot\varepsilon^{*}(\ell-q_{1})\frac{\varepsilon(\ell-q_{1})\cdot\varepsilon^{*}(k)\,\varepsilon^{*}(\ell)\cdot q_{2}}{2p_{2}\cdot\ell}\\ &=\left(p_{1}\cdot q_{2}+\frac{p_{1}\cdot p_{2}\,\ell\cdot q_{2}}{p_{2}\cdot\ell}\right)\left(p_{1}\cdot\varepsilon^{*}(k)-\frac{p_{1}\cdot p_{2}}{p_{2}\cdot(\ell-q_{1})}(\ell-q_{1})\cdot\varepsilon^{*}(k)\right)\frac{1}{2p_{2}\cdot\ell}\,.\end{split} \tag{4.14}\] Putting these together, diagram (4.10) leads to terms in the one-loop five-point amplitude given by \[\begin{split}&\mathcal{A}_{5,1}=-4Q_{1}^{2}Q_{2}^{3}\int\frac{\hat{\mathrm{d}}^{D}\ell}{\ell^{2}(\ell-q_{1})^{2}}\frac{1}{p_{1}\cdot\ell+i\epsilon}\left[\left(m_{1}^{2}+\frac{\ell\cdot(\ell-q_{1})(p_{1}\cdot p_{2})^{2}}{p_{2}\cdot\ell\,p_{2}\cdot(\ell-q_{1})}\right)\frac{\varepsilon^{*}(k)\cdot q_{1}}{p_{2}\cdot q_{1}}\right.\\ &\left.+\left(p_{1}\cdot q_{2}+\frac{p_{1}\cdot p_{2}\,\ell\cdot q_{2}}{p_{2}\cdot\ell}\right)\left(p_{1}\cdot\varepsilon^{*}(k)-\frac{p_{1}\cdot p_{2}}{p_{2}\cdot(\ell-q_{1})}(\ell-q_{1})\cdot\varepsilon^{*}(k)\right)\frac{2}{p_{2}\cdot\ell}\right]+\cdots\,,\end{split} \tag{4.15}\] where the ellipsis indicates terms not captured by the cut. The contribution of this part of the amplitude to the waveshape involves taking the imaginary part of the explicit massive propagator.
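Concretely, the same distributional identity as in equation (3.32), applied to the linearised propagator, gives \[\frac{1}{p_{1}\cdot\ell+i\epsilon}=\text{PV}\left(\frac{1}{p_{1}\cdot\ell}\right)-\frac{i}{2}\,\hat{\delta}(p_{1}\cdot\ell)\,,\] so taking the imaginary part simply trades the \(i\epsilon\)-prescribed denominator in equation (4.15) for the on-shell delta function \(\hat{\delta}(p_{1}\cdot\ell)\) appearing below.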
Including the rest of the structure of the waveshape, we find \[\begin{split}\alpha_{\eta}(k)|_{1}=\frac{iQ_{1}^{2}Q_{2}^{3}}{2m_{1}m_{2}}&\int\hat{\mathrm{d}}^{D}q_{1}\hat{\mathrm{d}}^{D}q_{2}\,\hat{\delta}(q_{1}\cdot u_{1})\hat{\delta}(q_{2}\cdot u_{2})\hat{\delta}^{D}(q_{1}+q_{2}+k)e^{-iq_{1}\cdot b_{1}}e^{-iq_{2}\cdot b_{2}}\\ &\times\int\frac{\hat{\mathrm{d}}^{D}\ell}{\ell^{2}(\ell-q_{1})^{2}}\hat{\delta}(p_{1}\cdot\ell)\left[\left(m_{1}^{2}+\frac{\ell\cdot(\ell-q_{1})(p_{1}\cdot p_{2})^{2}}{p_{2}\cdot\ell\,p_{2}\cdot(\ell-q_{1})}\right)\frac{\varepsilon^{*}(k)\cdot q_{1}}{p_{2}\cdot q_{1}}\right.\\ &\qquad\qquad+\left(p_{1}\cdot\varepsilon^{*}(k)-\frac{p_{1}\cdot p_{2}}{p_{2}\cdot(\ell-q_{1})}(\ell-q_{1})\cdot\varepsilon^{*}(k)\right)\frac{2p_{1}\cdot q_{2}}{p_{2}\cdot\ell}\\ &\qquad\qquad\left.+\left(p_{1}\cdot\varepsilon^{*}(k)-\frac{p_{1}\cdot p_{2}}{p_{2}\cdot(\ell-q_{1})}(\ell-q_{1})\cdot\varepsilon^{*}(k)\right)\frac{2p_{1}\cdot p_{2}\,\ell\cdot q_{2}}{(p_{2}\cdot\ell)^{2}}\right].\end{split} \tag{4.16}\] To see how this term scales with the masses of the particles, scale the masses out from the momenta via \(p_{i}=m_{i}u_{i}\). It is then clear that this part of the waveshape is proportional to \(m_{1}^{0}m_{2}^{-2}\) so that, as we advertised above, this cut corresponds to the radiation emitted in the large \(m_{1}\) limit.

#### Lorentz impulse: symmetric mass

The remaining single-particle cuts at order \(Q_{1}^{2}Q_{2}^{3}\) are proportional to \(1/(m_{1}m_{2})\). The cut diagrams are: \[\big[\text{diagrams: the two remaining single-cut topologies at this order}\big] \tag{4.17}\] This topology can easily be determined using the methods discussed above; the only (slight) novelty is that the two Compton amplitudes which appear are most simply evaluated in terms of polarisation vectors in different gauges. Both diagrams make an equal contribution to the waveform so we may study only the first diagram. The contribution of this diagram to the one-loop five-point amplitude can be written as \[i\mathcal{A}_{5,1}^{B}=-i\int\frac{\hat{\mathrm{d}}^{D}\ell}{\ell^{2}(\ell-q_{1})^{2}}\frac{1}{-2p_{2}\cdot\ell+i\epsilon}K^{B}\,, \tag{4.18}\] where \(K^{B}\) is the evaluation of the cut, namely \[iK^{B}=-8iQ_{1}^{2}Q_{2}^{3}\sum_{\mathrm{helicities}}p_{2}\cdot\varepsilon^{*}(\ell)\,\varepsilon(\ell,p_{1})\cdot\varepsilon^{*}(\ell-q_{1},p_{1})\,\varepsilon(\ell-q_{1},p_{2})\cdot\varepsilon^{*}(k,p_{2})\,. \tag{4.19}\] We have introduced the notation \(\varepsilon(k,p)\) for the polarisation vectors corresponding to a photon with momentum \(k\) in gauge \(p\). Note that we used different gauges for the polarisation vectors in different tree Compton amplitudes in the cut. However, it is an easy matter to change the gauge, and in particular we find it convenient to write \[\varepsilon^{\mu}(\ell-q_{1},p_{1})=\varepsilon^{\mu}(\ell-q_{1},p_{2})-(\ell-q_{1})^{\mu}\frac{p_{1}\cdot\varepsilon(\ell-q_{1},p_{2})}{p_{1}\cdot\ell}\,. \tag{4.20}\] The helicity sum can then be performed in \(D\) dimensions straightforwardly.
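As a quick consistency check on this gauge change: contracting the right-hand side of equation (4.20) with \(p_{1}\), and using \(p_{1}\cdot(\ell-q_{1})=p_{1}\cdot\ell\) on the support of \(\hat{\delta}(q_{1}\cdot u_{1})\), gives \[p_{1}\cdot\varepsilon(\ell-q_{1},p_{2})-p_{1}\cdot(\ell-q_{1})\,\frac{p_{1}\cdot\varepsilon(\ell-q_{1},p_{2})}{p_{1}\cdot\ell}=0\,,\] as required of a polarisation vector in the \(p_{1}\) gauge.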
The contribution of the cut to the waveform is \[\begin{split}\frac{iQ_{1}^{2}Q_{2}^{3}}{m_{1}m_{2}}\int\hat{\mathrm{d}}^{D}q_{1}\hat{\mathrm{d}}^{D}q_{2}\,\hat{\delta}(q_{1}\cdot u_{1})\hat{\delta}(q_{2}\cdot u_{2})\hat{\delta}^{D}(q_{1}+q_{2}+k)\,e^{-iq_{1}\cdot b_{1}}e^{-iq_{2}\cdot b_{2}}\\ \times\int\hat{\mathrm{d}}^{D}\ell\frac{\hat{\delta}(u_{2}\cdot\ell)}{\ell^{2}(\ell-q_{1})^{2}}K^{B}\,,\end{split} \tag{4.21}\] where the quantity \(K^{B}\), which is directly proportional to the polarisation sum, is \[\begin{split} K^{B}=&\frac{1}{(u_{1}\cdot\ell)^{2}}\left[u_{1}\cdot\varepsilon^{*}(k,p_{2})\left(u_{1}\cdot\ell\,u_{2}\cdot q_{1}-u_{1}\cdot u_{2}\,\ell\cdot q_{1}\right)\right.\\ &\qquad+\left(\ell-q_{1}\right)\cdot\varepsilon^{*}(k,p_{2})\left(u_{1}\cdot u_{2}\,u_{1}\cdot\ell-\frac{(u_{1}\cdot u_{2})^{2}\ell\cdot q_{1}}{u_{2}\cdot q_{1}}+\frac{(u_{1}\cdot\ell)^{2}}{u_{2}\cdot q_{1}}\right)\\ &\qquad\qquad\left.-\ell\cdot\varepsilon^{*}(k,p_{2})\,u_{1}\cdot u_{2}\,u_{1}\cdot\ell\,\right]\,.\end{split} \tag{4.22}\] It is straightforward to recover \(Q_{1}^{3}Q_{2}^{2}\) terms in the waveshape by swapping the particle labels \(1\) and \(2\). The \(Q_{1}^{4}Q_{2}\) and \(Q_{1}Q_{2}^{4}\) partial waveshapes are described below. We have tested these results in a number of ways. Firstly, we have compared our expressions to the one-loop five-point Yang-Mills amplitudes presented in [29]. We also compared with the work of Shen [58], who iterated the classical equations to this order. Some care has to be taken to remove divergent terms in the results of reference [58] which result from Shen's merging procedure. Nevertheless we found agreement in this sector. Finally, as described below, we have performed our own computation in the classical theory and find full agreement.

### QCD

Let us move to Yang-Mills at this point, and analyse some of the main features of the waveshape in QCD. For the purposes of our paper, the main difference between QED and Yang-Mills amplitudes is the handling of color degrees of freedom. The waveshape now also includes various color-dependent factors which enter at the diagram vertices. Here, we choose to represent the massive scalars and gluons in our problem in the fundamental \(T_{ij}^{a}\) and adjoint \(f^{abc}\) representations of the color group, respectively. Our strategy is to proceed in the usual way, by exploiting the following gauge theory structure \[\begin{split}[T^{a},T^{b}]_{ij}&=f^{abc}T_{ij}^{c}\,,\\ f^{dac}f^{cbe}-f^{dbc}f^{cae}&=f^{abc}f^{dec}\,,\end{split} \tag{4.23}\] to organise and expand our amplitudes (or cuts thereof) in a color basis, and focus on each gauge invariant sector independently. Schematically, the one-loop amplitude can be expanded in a basis of color coefficients \[\mathcal{A}_{5,1}(p_{1},\ldots,k)=\mathcal{C}_{1}\,A_{1}+\mathcal{C}_{2}\,A_{2}+\mathcal{C}_{3}\,A_{3}+\mathcal{C}_{4}\,A_{4}+\mathcal{C}_{5}\,A_{5}+\cdots\,, \tag{4.24}\] where \(A_{i}\) is the partial amplitude corresponding to the color factor \(\mathcal{C}_{i}\) (associated with the diagram topologies listed in Table 1), and the expression in the ellipsis includes quantum corrections.
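As a small illustration of how the commutation relations organise this decomposition, consider a generic string of generators on a matter line of the type appearing in equation (3.25), with a repeated exchange index \(b\) and the emission index \(a\) (a generic rearrangement under the conventions of equation (4.23), not tied to any specific diagram): \[T^{b}T^{a}T^{b}=T^{a}\,T^{b}T^{b}-f^{abc}\,T^{c}T^{b}=T^{a}\,T^{b}T^{b}-\frac{1}{2}f^{abc}f^{cbd}\,T^{d}\,,\] where in the last step the antisymmetry of \(f^{abc}\) in \(b\) and \(c\) lets us replace \(T^{c}T^{b}\) by half the commutator. The first term carries a QED-like (Casimir-dressed) color structure, while the \(f\)-dressed remainder feeds the non-Abelian partial amplitudes; iterating such rearrangements produces the basis in equation (4.24).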
Once the full amplitude is organised in terms of independent partial amplitudes, we can consider the classical limit of each one separately. However, now we have to restore factors of \(\hbar\) in both momenta and color coefficients, according to the prescription of [22]. We now take a moment to consider the partial amplitudes in (4.24). As we show in Table 1, where we list the topologies appearing in the partial amplitudes, \(A_{1}\) and \(A_{2}\) involve only diagrams with no non-Abelian (pure-gluon) vertices. We recognise these as the QED amplitude sectors computed in the previous section. The contributions from these sectors can therefore be plugged into the QCD expression simply by dressing them with their given color factor. In this section we therefore focus on the terms which appear for the first time in the case of QCD, namely \(A_{3}\), \(A_{4}\) and \(A_{5}\). As shown in Table 1 these partial amplitudes do involve non-Abelian topologies and must be calculated to find the full QCD result. We will refer to \(A_{3}\) and \(A_{4}\), \(A_{5}\) as the _pentagon_ and _maximally non-Abelian_ partial amplitudes, respectively.

#### Pentagon

We begin by looking at the partial amplitude \(A_{3}\). The color factor of this amplitude is simply the color structure of the pentagon topology, given by \[\mathcal{C}_{3}=f^{Abc}\,C_{1}^{b}C_{1}^{d}\,C_{2}^{c}C_{2}^{d}\,, \tag{4.25}\] where \(A\) is the adjoint index of the emitted gluon and \(C_{i}^{a}\) is the classical color charge of the massive body \(i\) [22]. In this section we will only discuss cutting the \(p_{2}\) propagator, as remaining cuts can be obtained by relabelling particles. There are

\begin{table}
\begin{tabular}{c|c|c} Color type & \(A_{i}\) & Topologies \\ \hline \(\mathcal{C}_{1},\,\mathcal{C}_{2}\) & \(A_{1},A_{2}\) & [QED-type topologies: no pure-gluon vertices] \\ \(\mathcal{C}_{3}\) & \(A_{3}\) & [the pentagon topology] \\ \(\mathcal{C}_{4},\,\mathcal{C}_{5}\) & \(A_{4},A_{5}\) & [maximally non-Abelian topologies] \\ \end{tabular}
\end{table}
Table 1: Topologies contributing to the partial amplitudes of the color factors \(\mathcal{C}_{i}\).
There are two cuts to consider:

[The two unitarity-cut diagrams of the pentagon topology.]
We extract from this contribution terms which are missed by the previous cut. Here, it will be convenient to work in the gauge \(p_{1}\cdot\varepsilon=0\) for all external gluons, so we get to use the completeness relation (4.7) with \(q=p_{1}\). At the end, we find the following expression for the real part of the pentagon partial amplitude \[\begin{split}\operatorname{Re}A_{3}=g^{5}\int\!\hat{\mathrm{d}}^{D}\ell\,&\frac{\hat{\delta}(\ell\cdot p_{2})}{\ell^{2}(\ell+q_{2})^{2}(\ell-q_{1})^{2}}\left[\left(\varepsilon^{*}\cdot p_{2}\frac{p_{1}\cdot p_{2}}{p_{1}\cdot\ell}+\varepsilon^{*}\cdot q_{2}\frac{(p_{1}\cdot p_{2})^{2}}{p_{1}\cdot\ell\,p_{1}\cdot k}\right)\frac{(\ell-q_{1})^{2}}{2}\right.\\ &-\varepsilon^{*}\cdot p_{2}\left(\frac{(k\cdot p_{2})^{2}}{p_{1}\cdot\ell}+\frac{p_{1}\cdot\ell\,\ell\cdot k+p_{1}\cdot k\,\ell\cdot q_{1}}{(p_{1}\cdot\ell)^{2}}\right)+\varepsilon^{*}\cdot\ell\,\frac{p_{1}\cdot p_{2}\,p_{2}\cdot k}{p_{1}\cdot\ell}\\ &\left.+\varepsilon^{*}\cdot(\ell+q_{2})\left(p_{2}^{2}+\frac{p_{1}\cdot p_{2}}{p_{1}\cdot\ell}\,p_{2}\cdot q_{1}-\left(\frac{p_{1}\cdot p_{2}}{p_{1}\cdot\ell}\right)^{2}\ell\cdot q_{1}\right)\right].\end{split} \tag{4.30}\]

#### Maximally non-Abelian partial amplitude

Two of the most physically interesting gauge invariant sectors are \(A_{4}\) and \(A_{5}\). We will refer to these as "maximally non-Abelian", as their color factors involve two structure constants. Noting that these two sectors are related by particle relabelling, we will focus on \(A_{4}\) only. The color structure corresponding to this partial amplitude is now \[\mathcal{C}\left(\text{[maximally non-Abelian topology]}\right), \tag{4.31}\] which, up to relabelling, is the factor \(f^{abd}f^{bAe}\,C_{2}^{d}\cdot C_{2}^{e}\) appearing in (5.16) below.

[Equations (4.32)–(4.36): the classical result for \(\operatorname{Re}A_{4}\) and the intermediate steps of its derivation from the cuts.]

This gauge invariant sector is actually simple enough that it can also be described through Feynman diagrams. Indeed, classically we find that only five diagrams contribute to \(A_{4}\); these are shown in figure 1.

Figure 1: Feynman diagrams contributing classically to \(A_{4}\).

Thus, we find it instructive to see how (4.32) can also be derived this way. To this end, and to further ease our narrative, we employ the gauge \(\varepsilon\cdot u_{2}=0\) and only look at the propagator structure \[P^{-1}=\ell^{2}(\ell+q_{2})^{2}(\ell-q_{1})^{2}. \tag{4.37}\] Then, it turns out that such pieces arise from a single diagram only: the topology of figure 1 containing two three-gluon vertices. Using the usual scalar QCD Feynman rules, this diagram is seen to correspond to \[\operatorname{Re}A_{4}\to\frac{4g^{5}m_{1}m_{2}}{q_{1}^{2}}\int\hat{\mathrm{d}}^{D}\ell\,\hat{\delta}(u_{2}\cdot\ell)\,\frac{N}{\ell^{2}(\ell+q_{2})^{2}(\ell-q_{1})^{2}}. \tag{4.38}\] Above, the numerator is found to have the following expression \[N=u_{1}^{\alpha}\,\Pi_{\alpha\beta\gamma}(-q_{1},q_{1}-\ell,\ell)\,u_{2}^{\gamma}\,\Pi^{\beta\rho\sigma}(\ell-q_{1},-k,-\ell-q_{2})\,\varepsilon^{*}_{\eta\,\rho}(k)\,u_{2\,\sigma}, \tag{4.39}\] where we conveniently defined the three-gluon vertex with all incoming momenta as \[\Pi_{\alpha\beta\gamma}(k,q,r)\equiv(k-q)_{\gamma}\eta_{\alpha\beta}+(q-r)_{\alpha}\eta_{\beta\gamma}+(r-k)_{\beta}\eta_{\alpha\gamma}. \tag{4.40}\] Let us focus on this numerator then.
A little algebra shows that this reduces to \[N=-4\,\varepsilon_{\eta}^{*}\cdot\ell\,\ell\cdot u_{1}+4\,\varepsilon_{\eta}^{*}\cdot q_{1}\left(\ell\cdot u_{1}+\gamma\,q_{1}\cdot u_{2}\right)-4\,\varepsilon_{\eta}^{*}\cdot u_{1}\,(q_{1}\cdot u_{2})^{2}, \tag{4.41}\] which is valid on the support of the delta functions and in the classical limit. The remaining propagator structures can be confirmed in the same fashion. We omit their derivations since they are straightforward and do not present any new features. Furthermore, we have checked (4.32) both against the results of [58] and with our automated code. We will soon see how this kinematic sector is also responsible for non-Abelian radiation reaction.

## 5 Radiation reaction

As we have seen in section 4.1, our amplitude expressions are well able to reproduce conservative classical data. In this part of the paper we show that the same happens for non-conservative effects. For example, such contributions to observables can be explained in terms of the ALD force (after Abraham, Lorentz and Dirac) in classical electrodynamics; see for instance [59]. However, the treatment of dissipative forces in both QCD and gravity is, at best, much more challenging. This is why our goal here is to show how these subtle classical effects can be treated in a concise and universal manner through amplitudes. As we will see, it is the imaginary part of amplitudes that has the effect of sourcing dissipation in this context. Not only that: in appendix A we show how cuts and imaginary parts are inherently related to classical electromagnetic forces. Incidentally, as perhaps overlooked until now, we also demonstrate how radiation reaction enters the waveshape already at one loop. In fact, one usually has to deal with it at two loops or higher when computing momentum deflections [59; 110].

### QED radiation reaction...

Let us begin with electrodynamics. As elucidated in section 2.2, to discuss non-conservative dynamics we will only need the imaginary part of the amplitude. To be more precise, we will consider cuts in the channel involving Compton amplitudes. Indeed, we learnt in section 3.3 that these are the only ones that are not subtracted classically. Furthermore, as to the real part of such diagrams, we expect this to be rendered quantum by an appropriate choice of renormalisation scheme. In fact, we will explicitly show how this happens in QED in the next chapter.8 Conveniently for us, \(\operatorname{Im}\mathcal{A}\) is simply obtained by appropriately cutting all relevant diagrams at this order. For simplicity we will be taking particle 1 to be static: \(m_{2}/m_{1}\ll 1\).

Footnote 8: Furthermore, see for instance [59].

We start by considering cuts of diagrams of the type shown in figure 2. To ease our calculation we will employ another useful trick. This consists of placing the \(q_{1}\) photon line on-shell too. Strictly speaking, we shouldn't be allowed to do so: this cut isolates a three-point amplitude, which vanishes on-shell in Minkowski space. Nevertheless, it turns out that we can effectively cut this line as well. In fact, multiplying \(1/q_{1}^{2}\) by \(q_{1}^{2}\) does not strictly yield zero, but only a contact term which integrates to zero. Thus, for all our purposes \(1/q_{1}^{2}\propto\delta(q_{1}^{2})\). With these considerations in place, we can realise that we only need cuts of the type shown in figure 3: two-particle cuts separating two tree-level Compton amplitudes.
We write this four-point amplitude as \[\mathcal{A}_{4,0}(p_{1},k_{1}\to p_{2},k_{2})=2iQ^{2}\varepsilon_{\eta}^{\mu}(k_{1})\varepsilon_{\eta^{\prime}}^{*\nu}(k_{2})\,\mathcal{J}_{\mu\nu}(p_{1},k_{1}\to p_{2},k_{2}), \tag{5.1}\] where we are taking \(p_{1},\,k_{1}\) incoming and \(p_{2},\,k_{2}\) outgoing. Above, we have also defined \[\mathcal{J}_{\mu\nu}(p_{1},k_{1}\to p_{2},k_{2})=\frac{p_{1\mu}p_{2\nu}}{p_{1}\cdot k_{1}}-\frac{p_{2\mu}p_{1\nu}}{p_{1}\cdot k_{2}}-\eta_{\mu\nu},\qquad p_{1}+k_{1}=p_{2}+k_{2}, \tag{5.2}\] satisfying \[\mathcal{J}_{\mu\nu}k_{1}^{\mu}\varepsilon^{\nu}(k_{2})=\mathcal{J}_{\mu\nu}\varepsilon^{\mu}(k_{1})k_{2}^{\nu}=0. \tag{5.3}\]

Figure 2: One of the Feynman diagram cuts needed for the radiated waveform calculation. The arrows indicate momentum flow.

Figure 3: Unitarity cut isolating two tree-level Compton amplitudes (5.1).

Making use of these definitions, the cut is given explicitly by \[\begin{split}\text{Cut}_{2}&=-4Q^{4}\sum_{\eta^{\prime\prime}}\int\hat{\mathrm{d}}^{D}\ell\,\hat{\delta}(2p_{2}\cdot(\ell-q_{1}))\hat{\delta}(\ell^{2})\,\varepsilon_{\eta}^{*\mu}(q_{1})\varepsilon_{\eta^{\prime\prime}}^{\nu}(\ell)\varepsilon_{\eta^{\prime\prime}}^{*\rho}(\ell)\varepsilon_{\eta^{\prime}}^{*\sigma}(k)\\ &\qquad\qquad\times\mathcal{J}_{\mu\nu}(p_{2},-q_{1}\to p_{2}-q_{1}+\ell,-\ell)\,\mathcal{J}_{\rho\sigma}(p_{2}-q_{1}+\ell,-\ell\to p_{2}+q_{2},k).\end{split} \tag{5.4}\] We now proceed by using, just as in the previous sections, the gauge where \(\varepsilon\cdot p_{2}=0\). This means performing the helicity sum through the completeness relation (4.7) with \(q=p_{2}\). One soon obtains \[\text{Cut}_{2}=\frac{2Q^{4}}{m_{2}}\int\hat{\mathrm{d}}^{D}\ell\,\hat{\delta}(u_{2}\cdot(\ell-q_{1}))\hat{\delta}(\ell^{2})\left(\frac{\varepsilon_{\eta}^{*}(q_{1})\cdot\ell\;\varepsilon_{\eta^{\prime}}^{*}(k)\cdot\ell}{(u_{2}\cdot q_{1})^{2}}+\varepsilon_{\eta}^{*}(q_{1})\cdot\varepsilon_{\eta^{\prime}}^{*}(k)\right). \tag{5.5}\] The loop integrals are easy to do here. The scalar one was first evaluated in [59], \[\int\hat{\mathrm{d}}^{D}\ell\,\hat{\delta}(u_{2}\cdot(\ell-q_{1}))\hat{\delta}(\ell^{2})=\frac{u_{2}\cdot k}{2\pi}\,\Theta(u_{2}\cdot k), \tag{5.6}\] and the tensor one follows by reduction. We obtain \[\int\hat{\mathrm{d}}^{D}\ell\,\hat{\delta}(u_{2}\cdot(\ell-q_{1}))\hat{\delta}(\ell^{2})\,\ell^{\mu}\ell^{\nu}=-\frac{(u_{2}\cdot k)^{3}}{2\pi(D-1)}\left(\eta^{\mu\nu}-D\,u_{2}^{\mu}u_{2}^{\nu}\right)\Theta(u_{2}\cdot k), \tag{5.7}\] finally leading to, taking \(D=4\), \[\begin{split}\text{Cut}_{2}&=\frac{2Q^{4}}{m_{2}}\int\hat{\mathrm{d}}^{4}\ell\,\hat{\delta}(u_{2}\cdot(\ell-q_{1}))\hat{\delta}(\ell^{2})\left(\frac{\varepsilon_{\eta}^{*}(q_{1})\cdot\ell\;\varepsilon_{\eta^{\prime}}^{*}(k)\cdot\ell}{(u_{2}\cdot q_{1})^{2}}+\varepsilon_{\eta}^{*}(q_{1})\cdot\varepsilon_{\eta^{\prime}}^{*}(k)\right)\\ &=\frac{2Q^{4}}{m_{2}}\,u_{2}\cdot k\,\Theta(u_{2}\cdot k)\,\frac{1}{3\pi}\,\varepsilon_{\eta}^{*}(q_{1})\cdot\varepsilon_{\eta^{\prime}}^{*}(k).\end{split} \tag{5.8}\] This result is quite simple and remarkable: the cut isolating two Compton amplitudes is proportional to a tree-level Compton amplitude itself (in the \(\varepsilon\cdot p_{2}\) gauge) times a geometric factor \(1/6\pi\).
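The phase-space integrals (5.6) and (5.7) are simple enough to verify in the rest frame of \(u_{2}\), where the delta functions localise \(\ell\) onto a sphere. A minimal sympy sketch (ours; we write \(E_{0}\) for the energy fixed by the linear delta function, which the support of the waveform deltas ties to \(u_{2}\cdot k\) up to sign and \(\Theta\)-function bookkeeping):

```python
import sympy as sp

E0, r = sp.symbols('E0 r', positive=True)

# Rest frame of u2: the deltas set l^0 = E0 and |l| = E0, so the measure
# int dhat^4 l deltahat(u2.(l - q1)) deltahat(l^2) reduces to
# (1/(2*pi)^2) * 4*pi * int dr r^2 delta(E0^2 - r^2), with, for r > 0,
# delta(E0^2 - r^2) = delta(r - E0) / (2*E0):
radial = sp.integrate(r**2 * sp.DiracDelta(r - E0) / (2 * E0), (r, 0, sp.oo))
scalar = 4 * sp.pi * radial / (2 * sp.pi)**2
print(scalar)                      # E0/(2*pi): matches the scalar integral (5.6)

# Contracting the tensor integral (5.7) with u2 u2 inserts (u2.l)^2 = E0^2:
D = 4
rhs_uu = -(E0**3 / (2 * sp.pi * (D - 1))) * (1 - D)   # from (5.7), using u2^2 = 1
print(sp.simplify(E0**2 * scalar - rhs_uu))           # 0: the reduction is consistent
```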
Having computed the two-particle cut, we now have to fuse back the three-point amplitude we had isolated at the start. To do so, we reintroduce polarisation vectors in a generic gauge, getting \[\varepsilon_{\eta}^{*}(q_{1})\cdot\varepsilon_{\eta^{\prime}}^{*}(k)=\varepsilon_{\eta}^{*\mu}(q_{1})\varepsilon_{\eta^{\prime}}^{*\nu}(k)\,\mathcal{J}_{\mu\nu}(p_{2},-q_{1}\to p_{2}+q_{2},k). \tag{5.9}\] This allows us to obtain the total five-point cut-amplitude \(\text{Cut}_{3}(q_{1},k)\equiv\text{Cut}\left[\mathcal{A}_{5,1}(q_{1},k)\right]\) from tree-level unitarity. At this point we also dress the electric couplings with particle labels, \(Q\to Q_{i}\): \[\begin{split}\text{Cut}_{3,\eta^{\prime}}&=\frac{iQ_{1}}{q_{1}^{2}}\sum_{\eta}(2p_{1}+q_{1})_{\rho}\,\varepsilon_{\eta}^{\rho}(q_{1})\,\text{Cut}_{2,\eta\eta^{\prime}}(q_{1},k)\\ &=\frac{4iQ_{1}Q_{2}^{4}}{m_{2}^{2}}\frac{p_{2}\cdot k}{3\pi}\frac{1}{q_{1}^{2}}\left(p_{1}\cdot\varepsilon_{\eta^{\prime}}^{*}(k)-\frac{\varepsilon_{\eta^{\prime}}^{*}(k)\cdot p_{2}\,p_{1}\cdot k}{k\cdot p_{2}}\right.\\ &\qquad\qquad\qquad\left.+\frac{p_{1}\cdot p_{2}\,\varepsilon_{\eta^{\prime}}^{*}(k)\cdot q_{1}}{p_{2}\cdot k}-\frac{p_{1}\cdot p_{2}\,\varepsilon_{\eta^{\prime}}^{*}(k)\cdot p_{2}\,k\cdot q_{1}}{(p_{2}\cdot k)^{2}}\right).\end{split} \tag{5.10}\] Above we have also set \(\Theta(u_{2}\cdot k)=1\), which holds on the support of the \(\mathrm{d}\Phi(k)\) integral of (10). Finally, observe that \(2i\operatorname{Im}\mathcal{A}=\operatorname{Disc}\mathcal{A}\), where the cutting rules give the latter; thus the imaginary part of the amplitude is \[\begin{split}\operatorname{Im}\mathcal{A}_{5,1}=\frac{4Q_{1}Q_{2}^{4}}{m_{2}^{2}}\frac{p_{2}\cdot k}{6\pi}\frac{1}{q_{1}^{2}}\left(p_{1}\right.&\cdot\varepsilon_{\eta^{\prime}}^{*}(k)-\frac{\varepsilon_{\eta^{\prime}}^{*}(k)\cdot p_{2}\,p_{1}\cdot k}{k\cdot p_{2}}\\ &\left.+\frac{p_{1}\cdot p_{2}\,\varepsilon_{\eta^{\prime}}^{*}(k)\cdot q_{1}}{p_{2}\cdot k}-\frac{p_{1}\cdot p_{2}\,\varepsilon_{\eta^{\prime}}^{*}(k)\cdot p_{2}\,k\cdot q_{1}}{(p_{2}\cdot k)^{2}}\right).\end{split} \tag{5.11}\] We find the end result of our derivation to be evocative and simple: the classical one-loop five-point amplitude is, up to a factor, the tree-level one times \(p_{2}\cdot k\). To complete the waveform calculation we substitute the amplitude into (10), giving \[\begin{split}F^{\mu\nu}(x)=\,-i\frac{Q_{1}Q_{2}^{4}}{6\pi m_{2}^{2}}\sum_{\eta}\int\mathrm{d}\Phi(k)&\int\hat{\mathrm{d}}^{4}q_{1}\hat{\mathrm{d}}^{4}q_{2}\,\hat{\delta}(q_{1}\cdot u_{1})\hat{\delta}(q_{2}\cdot u_{2})\hat{\delta}^{4}(k+q_{1}+q_{2})\\ &\times\frac{k\cdot u_{2}}{q_{1}^{2}}\,k^{[\mu}\varepsilon_{\eta}^{\nu]}\,\varepsilon_{\eta}^{*}\cdot\mathcal{J}(k,q_{1})\,e^{-i(k\cdot x+q_{1}\cdot b_{1}+q_{2}\cdot b_{2})}+c.c.,\end{split} \tag{5.12}\] where we defined for convenience the classical current \[\mathcal{J}^{\nu}(k,q_{1})=\left(u_{1}^{\nu}+q_{1}^{\nu}\frac{u_{1}\cdot u_{2}}{k\cdot u_{2}}-u_{2}^{\nu}\frac{k\cdot u_{1}}{k\cdot u_{2}}-u_{2}^{\nu}\frac{u_{1}\cdot u_{2}\,k\cdot q_{1}}{(k\cdot u_{2})^{2}}\right)=\mathcal{J}^{\nu}(-k,-q_{1}). \tag{5.13}\] In section 7, we will explain how this contribution can be interpreted purely classically. This will be done through a derivation of the waveform that makes judicious use of the ALD force.
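A quick consistency check: the classical current (5.13) is transverse, \(k\cdot\mathcal{J}=0\), as gauge invariance of the waveform requires. Treating the independent invariants as symbols, a few lines of sympy (ours, for illustration) confirm the cancellation:

```python
import sympy as sp

# Invariant dot products appearing in the current J^nu of (5.13):
ku1, ku2, kq1, u1u2 = sp.symbols('ku1 ku2 kq1 u1u2')

# Contract J^nu with k_nu term by term:
kJ = (ku1                           # u1 term
      + kq1 * u1u2 / ku2            # q1 term
      - ku2 * ku1 / ku2             # first u2 term
      - ku2 * u1u2 * kq1 / ku2**2)  # second u2 term
print(sp.simplify(kJ))              # 0: the current is transverse
```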
### ...QCD radiation reaction...

We now analyse in some detail the non-conservative dynamics of QCD. As it happens, the preparatory work of section 4.2 will be very convenient for us. In non-Abelian theories radiation reaction is present as well, only under a more sophisticated disguise.

In fact, from an amplitude standpoint, self-force effects can be sourced by every diagram which has a cut isolating a Compton amplitude. As we explained, these cuts will yield an imaginary part of the waveshape that is not subtracted classically. What is more, in Yang-Mills we will have QED-like contributions as well as purely non-Abelian ones, which involve three- or four-gluon vertices. We find that the non-Abelian radiation reaction channels are precisely those characterised by \(A_{4}\) (and \(A_{5}\)), which we studied in section 4.2.

Let us then compute the radiation reaction diagrams of QCD. As before, we consider the cut diagram of figure 5, keeping in mind that the gluon \(a\) will have to be joined to particle 1.

Figure 4: Two cuts contributing to non-Abelian radiation reaction. While the second cut appears in both kinematic sectors \(A_{4}\) (through a sub-sub-leading \(\hbar\) expansion of its color) and \(A_{2}\) (here with a leading color factor), the first topology belongs in \(A_{4}\) only. Here, we have also cut the single gluon line, as explained above.

Figure 5: Cut relevant to radiation reaction in QCD. For clarity we have just indicated color indices; the momentum routing is the same as in figure 3.

Indicating the full left/right Yang-Mills Compton amplitudes with \(\mathcal{A}^{ab}_{L,R}\), the diagram reads \[\text{Cut}_{2}=\sum_{\text{helicities}}\int\text{d}\Phi(\ell)\,\hat{\delta}(2p_{2}\cdot(\ell-q_{1}))\,\mathcal{A}^{ab}_{L}\cdot\mathcal{A}^{bA}_{R}. \tag{5.14}\] At this point it is useful to note again that the non-Abelian Compton amplitude \(\mathcal{A}^{ab}\equiv\mathcal{A}^{ab}_{4,0}\) can be written as \[\mathcal{A}^{ab}=C^{a}\cdot C^{b}\,A+f^{abc}C^{c}\,A^{\prime}, \tag{5.15}\] where _both_ \(A\) and \(A^{\prime}\) are Abelian Compton amplitudes, which differ only by a factor. Using this piece of knowledge, we can expand the integrand in the following manner: \[\begin{split}\mathcal{A}^{ab}_{L}\cdot\mathcal{A}^{bA}_{R}&=\left(C^{a}_{2}\cdot C^{b}_{2}\,A_{L}+f^{abd}C^{d}_{2}\,A^{\prime}_{L}\right)\cdot\left(C^{b}_{2}\cdot C^{A}_{2}\,A_{R}+f^{bAe}C^{e}_{2}\,A^{\prime}_{R}\right)\\ &\approx C^{a}_{2}\cdot C^{b}_{2}\cdot C^{b}_{2}\cdot C^{A}_{2}\,A_{L}A_{R}+f^{abd}f^{bAe}\,C^{d}_{2}\cdot C^{e}_{2}\,A^{\prime}_{L}A^{\prime}_{R},\end{split} \tag{5.16}\] having ignored the cross terms, since they are quantum. Now, the simple relation above makes it very easy to interpret the structure of the cut in non-Abelian gauge theories. The first term in (5.16), the one proportional to \(C_{2}^{a}\cdot C_{2}^{b}\cdot C_{2}^{b}\cdot C_{2}^{A}\), is exactly the one already encountered in QED in (5.8)! That is to say that \[\sum_{\text{helicities}}\int\mathrm{d}\Phi(\ell)\,\hat{\delta}(2p_{2}\cdot(\ell-q_{1}))\,A_{L}A_{R}=\frac{2g^{4}}{m_{2}}\,u_{2}\cdot k\,\frac{1}{3\pi}\,\varepsilon_{\eta}^{*}(q_{1})\cdot\varepsilon_{\eta^{\prime}}^{*}(k). \tag{5.17}\] In other words, what we had computed in QED was also part of the QCD story, only now multiplied by a constant color structure. It is immediate to see that the last, non-Abelian, term of (5.16) has (up to relabelling) the structure (4.31). This is precisely the color of the partial amplitude \(A_{4}\). Then, for the computation of this channel we need precisely the \(1/q_{1}^{2}\) pole that we described in (4.32), entailing non-Abelian radiation reaction.
From (4.32), the imaginary part of the five-point amplitude is immediately obtained, and it reads \[\mathrm{Im}\,A_{4}=\frac{8g^{5}m_{1}m_{2}}{q_{1}^{2}}\,\varepsilon_{\mu}^{*}(k)\int\mathrm{d}\Phi(\ell)\,\hat{\delta}(u_{2}\cdot(\ell-q_{1}))\,\frac{(u_{1}\cdot\ell+\gamma\,q_{1}\cdot u_{2})\,q_{1}^{\mu}-(u_{2}\cdot q_{1})^{2}\,u_{1}^{\mu}}{(q_{1}-\ell)^{2}(\ell+k)^{2}}, \tag{5.18}\] in a gauge where \(\varepsilon\cdot p_{2}=0\).

### ...GR radiation reaction

As a final application of this chapter, we tackle self-force radiation in gravity. This will demonstrate how dissipative effects are cleanly taken into account by our analytic framework. Following the examples of QED and QCD, we again focus on the two-particle cut, now involving only graviton lines, having taken the leg with momentum \(q_{1}\) to be on-shell as well (as before, we keep in mind that this graviton will have to be reconnected with the worldline of particle 1). The cut-amplitude is given by \[\begin{split}\text{Cut}_{2}=\sum_{\text{helicities}}&\int\text{d}\Phi(\ell)\,\hat{\delta}(2p_{2}\cdot(\ell-q_{1}))\\ &\times\mathcal{M}_{4,0}(p_{2},-q_{1}\to p_{2}-q_{1}+\ell,-\ell)\,\mathcal{M}_{4,0}(p_{2}-q_{1}+\ell,-\ell\to p_{2}+q_{2},k).\end{split} \tag{5.19}\] This quantity can be greatly simplified by choosing, as we did before, a gauge in which the graviton polarisations are orthogonal to \(p_{2}\): \(\varepsilon_{\mu}\varepsilon_{\nu}p_{2}^{\mu}=\varepsilon_{\mu}\varepsilon_{\nu}p_{2}^{\nu}=0\). In this case the product of amplitudes becomes proportional to a single contraction [148], \[\mathcal{M}_{4,0}(p_{2},q_{1},\ell)\,\mathcal{M}_{4,0}(p_{2},k,\ell)=\frac{\kappa^{4}}{4}\frac{(p_{2}\cdot k)^{4}}{(\ell+k)^{2}(\ell-q_{1})^{2}}\left(\varepsilon^{*}(q_{1})\cdot\varepsilon(\ell)\;\varepsilon^{*}(\ell)\cdot\varepsilon^{*}(k)\right)^{2}. \tag{5.20}\] At this point we have to evaluate the sum over physical states. Note that we haven't been explicit about helicity assignments, since these always come with opposite signs inside the loop. We have \[\sum_{\text{helicities}}\left(\varepsilon^{*}(q_{1})\cdot\varepsilon(\ell)\;\varepsilon^{*}(\ell)\cdot\varepsilon^{*}(k)\right)^{2}=\varepsilon^{*\mu}(q_{1})\varepsilon^{*\nu}(k)\varepsilon^{*\rho}(q_{1})\varepsilon^{*\sigma}(k)\,P_{\mu\nu\rho\sigma}(\ell), \tag{5.21}\] where the projector over physical states reads, for the case of a massive gauge vector \(p_{2}\), \[P_{\mu\nu\rho\sigma}(\ell)=\sum_{\text{helicities}}\varepsilon_{\mu}(\ell)\varepsilon^{*}_{\nu}(\ell)\varepsilon_{\rho}(\ell)\varepsilon^{*}_{\sigma}(\ell)=\frac{1}{2}\left(P_{\mu\nu}P_{\rho\sigma}+P_{\mu\sigma}P_{\rho\nu}-\frac{2}{D-2}P_{\mu\rho}P_{\sigma\nu}\right). \tag{5.22}\] Above we have defined \[P^{\mu\nu}(\ell)=-\left(\eta^{\mu\nu}-\frac{\ell^{(\mu}u_{2}^{\nu)}}{\ell\cdot u_{2}}+\frac{\ell^{\mu}\ell^{\nu}}{(\ell\cdot u_{2})^{2}}\right), \tag{5.23}\] which is the projection over on-shell states that we had used in the electromagnetic case.
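The projector (5.23) is transverse to both \(\ell\) and \(u_{2}\) on the support \(\ell^{2}=0\), which is what guarantees that only physical states cross the cut. A small numpy check (ours; note that the symmetrisation \(\ell^{(\mu}u_{2}^{\nu)}\) here carries no factor of \(1/2\), as transversality requires):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])           # mostly-minus metric
def dot(a, b): return a @ eta @ b

# A null loop momentum l and a unit timelike velocity u2:
n = np.array([0.3, 0.4, np.sqrt(1 - 0.3**2 - 0.4**2)])
l = np.concatenate(([1.0], n))                    # l^2 = 0
u2 = np.array([np.cosh(0.3), np.sinh(0.3), 0.0, 0.0])  # u2^2 = 1

lu = dot(l, u2)
# P^{mu nu}(l), with l^(mu u2^nu) = l^mu u2^nu + u2^mu l^nu:
P = -(eta - (np.outer(l, u2) + np.outer(u2, l)) / lu + np.outer(l, l) / lu**2)

print(np.abs(P @ eta @ l).max())    # ~1e-16: P^{mu nu} l_nu = 0 when l^2 = 0
print(np.abs(P @ eta @ u2).max())   # ~1e-16: P^{mu nu} u2_nu = 0
```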
The contraction is then straightforward and yields9 \[\varepsilon^{*\mu}\varepsilon^{*\nu}\varepsilon^{*\rho}\varepsilon^{*\sigma}P_{\mu\nu\rho\sigma}=\left(\varepsilon^{*}(q_{1})\cdot\varepsilon^{*}(k)+\frac{\varepsilon^{*}(q_{1})\cdot\ell\;\varepsilon^{*}(k)\cdot\ell}{(u_{2}\cdot\ell)^{2}}\right)^{2}-\frac{1}{2}\left(\frac{\varepsilon^{*}(q_{1})\cdot\ell\;\varepsilon^{*}(k)\cdot\ell}{(u_{2}\cdot\ell)^{2}}\right)^{2}, \tag{5.24}\]

Footnote 9: At this point we have taken \(D=4\).

finally giving us a direct expression for the cut, \[\begin{split}\text{Cut}_{2}=\frac{\kappa^{4}}{8}(p_{2}\cdot k)^{4}\,\varepsilon^{*}_{\mu}(q_{1})\varepsilon^{*}_{\nu}(q_{1})&\varepsilon^{*}_{\rho}(k)\varepsilon^{*}_{\sigma}(k)\int\text{d}\Phi(\ell)\,\hat{\delta}(p_{2}\cdot(\ell-q_{1}))\,\frac{1}{(\ell+k)^{2}(\ell-q_{1})^{2}}\\ &\times\left(\eta^{\mu\rho}\eta^{\nu\sigma}+\frac{1}{(u_{2}\cdot\ell)^{2}}\eta^{\mu\rho}\ell^{\nu}\ell^{\sigma}+\frac{1}{2(u_{2}\cdot\ell)^{4}}\ell^{\mu}\ell^{\nu}\ell^{\rho}\ell^{\sigma}\right).\end{split} \tag{5.25}\] The integral is IR divergent; we discuss IR divergences further below. It is then straightforward to reproduce a very compact expression for the five-point amplitude's imaginary part from tree-level unitarity, \[\begin{split}\operatorname{Im}\mathcal{M}_{5,1}=\frac{\kappa^{5}(p_{2}\cdot k)^{4}}{16\,q_{1}^{2}}&P_{\alpha\beta\mu\nu}(q_{1})\,p_{1}^{\alpha}p_{1}^{\beta}\,\varepsilon_{\rho}^{*}(k)\varepsilon_{\sigma}^{*}(k)\int\mathrm{d}\Phi(\ell)\,\frac{\hat{\delta}(p_{2}\cdot(\ell-q_{1}))}{(\ell+k)^{2}(\ell-q_{1})^{2}}\\ &\times\left(\eta^{\mu\rho}\eta^{\nu\sigma}+\frac{1}{(u_{2}\cdot\ell)^{2}}\eta^{\mu\rho}\ell^{\nu}\ell^{\sigma}+\frac{1}{2(u_{2}\cdot\ell)^{4}}\ell^{\mu}\ell^{\nu}\ell^{\rho}\ell^{\sigma}\right).\end{split} \tag{5.26}\] We remind the reader that this result was achieved in a gauge where \(\varepsilon\cdot p_{2}=0\), but one can still retrieve the explicitly gauge invariant expression by substituting in \[\varepsilon^{\mu}(k)\to\varepsilon^{\mu}(k)-\frac{\varepsilon(k)\cdot p_{2}}{k\cdot p_{2}}\,k^{\mu}. \tag{5.27}\] It would be interesting to replicate (5.26) using purely classical methods, as we did for QED. One way to do this would be through the "MiSaTaQuWa" equations of [149; 150], which are known to describe linear self-force in gravity. However, practical computations based on these (non-local) forces happen to be subtle. This is because they involve integrals of the (curved space) Green functions over the complete past history of the particle. We leave these exciting and challenging calculations to future work.

As a final remark for this section, we would like to underline the universality of the two-particle cuts of Compton amplitudes (5.8), (5.17) and (5.25). As we hope is clear by now, this object is crucial for the characterisation of the imaginary part of the full five-point amplitude describing the waveform. However, the same object can be used, as was done for instance in [59], to describe \(\mathcal{O}(g^{6})\) radiation reaction effects that alter the integrated momentum kick of the particle. Classically speaking, the physics of these two situations is different. The radiation reaction \(\mathcal{O}(g^{5})\) contribution is a transient one10 that doesn't affect the final trajectory; it nevertheless shapes the emitted radiation and can be observed in the waveform. Instead, the \(\mathcal{O}(g^{6})\) one is able to change the final net trajectory of the particle, \(\Delta p^{\mu}\). Nonetheless, this second (two-loop) process is still entailed by \(\mathrm{Cut}_{2}\): in this case one has to attach both massless states of the cut to the second massive worldline.

Footnote 10: In QED this is described by a total derivative Schott term.
## 6 Renormalisation

We now turn our attention to some subtleties associated with the divergences arising from the loop diagrams of the type used in the waveshape calculations of the previous section. Specifically, we examine the structure of IR divergences and the renormalisation of UV divergences in QED. The former are dealt with by isolating the IR divergences in the manner indicated by Weinberg in [151], and performing an \(\hbar\) expansion of the IR divergent diagrams to find a complete cancellation of IR divergences in the classical waveshape. We then address UV divergences by carrying out the one-loop renormalisation of the relevant vertices in the on-shell renormalisation scheme. Doing this, we find that the diagrams corresponding to vertex corrections do not contribute to the classical waveshape, thereby justifying the exclusion of such diagrams from the analysis of the previous sections.

### Infrared divergences

We will now discuss the effect of infrared divergences arising from soft virtual photons in loop amplitudes. These virtual IR divergences arise from diagrams where soft photon loops are attached to on-shell external legs of a scattering amplitude, in the manner illustrated in figure 6.

Figure 6: Infrared divergent diagrams in the one-loop amplitude.

IR divergences arise from the region where the virtual loop momentum \(|\boldsymbol{\ell}|\) is much smaller than the momenta of the external particles \(p_{i}\). In this region, it is possible to show that the IR divergent amplitudes take the form [151] \[\mathcal{A}^{IR}=\left(\frac{1}{2}\frac{1}{(2\pi)^{4}}\sum_{nm}e^{2}Q_{n}Q_{m}\eta_{n}\eta_{m}J_{nm}\right)\times\mathcal{A}^{\text{Hard}}, \tag{109}\] where \(\mathcal{A}^{\text{Hard}}\) is what is left of the amplitude after removing the virtual photon lines, and the divergent factor \(J_{nm}\) is given by11 \[J_{nm}\equiv-i\left(p_{n}\cdot p_{m}\right)\int_{\mu\leq|\boldsymbol{\ell}|\leq\Lambda}\frac{\hat{\mathrm{d}}^{4}\ell}{[\ell^{2}+i\epsilon]\left[p_{n}\cdot\ell+i\eta_{n}\epsilon\right]\left[-p_{m}\cdot\ell+i\eta_{m}\epsilon\right]}\,, \tag{110}\] where \(\Lambda\) is a scale that defines the soft photons, \(\mu\) is an IR cutoff, and \(\eta_{n}=\pm 1\) for outgoing and incoming particles respectively.

Footnote 11: \(D=4\) here.

In what follows, we will use primed indices to refer to outgoing particles, so that \(\eta_{1}=\eta_{2}=-1\) and \(\eta_{1^{\prime}}=\eta_{2^{\prime}}=+1\). Specifically, we will be interested in the IR divergences of one-loop five-point amplitudes, so that \(\mathcal{A}^{IR}\equiv\mathcal{A}^{IR}_{5,1}\), \(\mathcal{A}^{\text{Hard}}\equiv\mathcal{A}^{\text{Hard}}_{5,0}\).

Turning to the integral \(J_{nm}\), we perform the integration over \(\ell^{0}\) by residues. In doing so, we will consider two distinct cases: first, the case where one particle is incoming and the other outgoing; then the same integral when both particles are incoming or outgoing. Let us start by taking particle \(n\) to be outgoing and \(m\) to be incoming. In this case the poles are \(\ell^{0}=\pm|\boldsymbol{\ell}|\pm i\epsilon\), \(\ell^{0}=\frac{\mathbf{p}_{n}\cdot\boldsymbol{\ell}}{p_{n}^{0}}-i\epsilon\) and \(\ell^{0}=\frac{\mathbf{p}_{m}\cdot\boldsymbol{\ell}}{p_{m}^{0}}-i\epsilon\).
The important point in this case is that the poles associated with the massive propagators both lie on the same half of the complex \(\ell^{0}\) plane, so their contribution can be avoided entirely by closing the contour on the other side of the complex plane. In this case, we close the contour in the upper half plane, picking up the pole \(\ell^{0}=|\boldsymbol{\ell}|+i\epsilon\). Evaluating the integral in this manner yields the result \[\text{Re}\;J_{nm}=\frac{2\pi^{2}}{\beta_{nm}}\ln\left(\frac{1+\beta_{nm}}{1-\beta_{nm}}\right)\ln\left(\frac{\Lambda}{\mu}\right), \tag{108}\] where \(\beta_{nm}\) is the relative velocity \[\beta_{nm}\equiv\sqrt{1-\frac{m_{n}^{2}m_{m}^{2}}{\left(p_{n}\cdot p_{m}\right)^{2}}}. \tag{109}\]

We now consider the case where both particles are either incoming or outgoing. For definiteness, consider the case where both particles are outgoing, so that \(\eta_{n}=\eta_{m}=1\), in which case the poles are located at \[\begin{split}\ell^{0}&=\pm|\boldsymbol{\ell}|\pm i\epsilon\,,\\ \ell^{0}&=\frac{\mathbf{p}_{n}\cdot\boldsymbol{\ell}}{p_{n}^{0}}-i\epsilon\,,\\ \ell^{0}&=\frac{\mathbf{p}_{m}\cdot\boldsymbol{\ell}}{p_{m}^{0}}+i\epsilon\,.\end{split} \tag{110}\] In this case, the poles of the massive propagators lie on opposite sides of the complex plane, so that we would inevitably pick up one of these poles whichever way the contour is closed. Supposing we close the contour from above, we pick up a pole associated with the photon propagator and one from the massive propagators. The former contribution is identical to the integral above and gives the real part of the integral. The latter pole contributes another term, which is [152] \[\begin{split}\text{Im}\,J_{nm}&=-i\left(p_{n}\cdot p_{m}\right)\int_{\mu\leq|\boldsymbol{\ell}|\leq\Lambda}\frac{|\boldsymbol{\ell}|^{2}\,\hat{\mathrm{d}}|\boldsymbol{\ell}|\;\hat{\mathrm{d}}\Omega(\mathbf{n})}{\left[(\frac{\mathbf{p}_{m}\cdot\boldsymbol{\ell}}{p_{m}^{0}})^{2}-|\boldsymbol{\ell}|^{2}+i\epsilon\right]\left[p_{n}^{0}(\mathbf{p}_{m}\cdot\boldsymbol{\ell})+p_{m}^{0}(\mathbf{p}_{n}\cdot\boldsymbol{\ell})+i\epsilon\right]}\\ &=2\pi\left(p_{n}\cdot p_{m}\right)\ln\left(\frac{\Lambda}{\mu}\right)\int\frac{\hat{\mathrm{d}}\Omega(\mathbf{n})}{\left[(\frac{\mathbf{p}_{m}\cdot\mathbf{n}}{p_{m}^{0}})^{2}-1+i\epsilon\right]\left[p_{n}^{0}(\mathbf{p}_{m}\cdot\mathbf{n})+p_{m}^{0}(\mathbf{p}_{n}\cdot\mathbf{n})+i\epsilon\right]}\,.\end{split} \tag{111}\] The last integral is most easily evaluated by going to the rest frame of particle \(m\), such that \(\mathbf{p}_{m}\cdot\mathbf{n}=0\) and \(p_{m}^{0}(\mathbf{p}_{n}\cdot\mathbf{n})=m_{m}m_{n}\gamma_{nm}\beta_{nm}(\hat{\beta}\cdot\mathbf{n})\), where \(\gamma_{nm}=p_{n}\cdot p_{m}/m_{n}m_{m}\) is the Lorentz factor and \(\beta_{nm}\) the relative velocity. In this frame the integral becomes \[4\pi^{2}\left(p_{n}\cdot p_{m}\right)\ln\left(\frac{\Lambda}{\mu}\right)\int_{-1}^{1}\frac{\hat{\mathrm{d}}x}{m_{n}m_{m}\gamma_{nm}\beta_{nm}\left[x+i\epsilon\right]}\,, \tag{100}\] where we have oriented the axes such that \(\hat{\beta}\cdot\mathbf{n}=\cos\theta\equiv x\). Finally, we make use of the identity \[\frac{1}{x+i\varepsilon}=\mathrm{PV}\left(\frac{1}{x}\right)-\frac{i}{2}\hat{\delta}(x) \tag{101}\] and, noting that only the second term contributes to the integral by parity, we arrive at the result \[\mathrm{Im}\;J_{nm}=-\frac{4i\pi^{3}}{\beta_{nm}}\ln\left(\frac{\Lambda}{\mu}\right). \tag{102}\]
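The distributional identity (101) can be checked directly on the symmetric interval, where the principal value integrates to zero by parity and only the delta-function term survives (in the hatted conventions used throughout, \(\hat{\delta}(x)=2\pi\delta(x)\)). A short numeric sympy check (ours):

```python
import sympy as sp

x = sp.symbols('x', real=True)
eps = sp.symbols('epsilon', positive=True)

# int_{-1}^{1} dx / (x + i*eps): the PV piece is odd and drops out, leaving
# -(i/2) * deltahat(x) integrated over the interval, i.e. -i*pi.
res = sp.integrate(1 / (x + sp.I * eps), (x, -1, 1))
print(res.subs(eps, sp.Rational(1, 10**8)).evalf())   # ~ -3.14159*I
```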
Note that it is also possible to recover this imaginary divergence in the spirit of section 2.2: it is straightforward to check that cutting the massive propagators attaching to the virtual photon lines recovers the result above.

To summarise, we find that the infrared divergences contain both real and imaginary parts. The diagrams where the virtual photon attaches to one incoming and one outgoing leg yield purely real IR divergences, while the diagrams in which the virtual photon attaches to two incoming or two outgoing legs give both real and imaginary divergences.

We are now ready to address the issue of whether these IR divergences contribute to the classical waveshape. We do so by examining the \(\hbar\) expansion of the sum \[\frac{1}{2(2\pi)^{4}}\sum_{nm}Q_{n}Q_{m}\eta_{n}\eta_{m}J_{nm}. \tag{103}\] Noting that the one-loop amplitude is suppressed by a factor \(Q^{2}/\hbar\) relative to the tree amplitude, we identify the quantum parts of \(J_{nm}\) as those of order \(\hbar^{2}\). We denote the incoming and outgoing massive momenta by \(p_{1},p_{2}\) and \(p_{1}^{\prime},p_{2}^{\prime}\) respectively, such that \(\eta_{1}=\eta_{2}=-1\), \(\eta_{1^{\prime}}=\eta_{2^{\prime}}=+1\), and \(Q_{1^{\prime}}=Q_{1}\), \(Q_{2^{\prime}}=Q_{2}\). Using momentum conservation, we write \[\begin{split}p_{1}^{\prime}&=p_{1}+q\,,\\ p_{2}^{\prime}&=p_{2}-q-k.\end{split} \tag{104}\] Furthermore, from the on-shell conditions we have \[\begin{split}p_{1}\cdot q&=\mathcal{O}(\hbar^{2})\,,\\ p_{2}\cdot q&=-p_{2}\cdot k+\mathcal{O}(\hbar^{2})\,.\end{split} \tag{105}\] Using this, it is straightforward to expand \(J_{nm}\) in powers of \(\hbar\), noting that the \(\hbar\) dependence follows from expanding the dot products \(p_{n}\cdot p_{m}\) using the on-shell conditions. We will find it convenient to express our results in terms of the Lorentz factor and relative velocity \[\gamma\equiv\frac{p_{1}\cdot p_{2}}{m_{1}m_{2}}\,,\qquad v\equiv\sqrt{1-\frac{1}{\gamma^{2}}}. \tag{106}\]

### Real divergence

We now examine the \(\hbar\) expansion of the real divergences in (108). To ensure that these divergences are quantum, we must show that the \(\mathcal{O}(\hbar^{0})\) and \(\mathcal{O}(\hbar)\) terms vanish in the sum over \((n,m)\). Since we are considering real divergences, all values of \((n,m)\) contribute to the sum. Considering the \(\mathcal{O}(\hbar^{0})\) terms first, it is easy to check that the sum of terms cancels exactly. Consider for example the part of the sum proportional to \(Q_{1}^{2}\): we have \[\frac{1}{2(2\pi)^{4}}e^{2}Q_{1}^{2}\operatorname{Re}\left(\eta_{1}\eta_{1}J_{11}+2\eta_{1}\eta_{1^{\prime}}J_{11^{\prime}}+\eta_{1^{\prime}}\eta_{1^{\prime}}J_{1^{\prime}1^{\prime}}\right). \tag{115}\] To this order, we have that \(p_{1}\cdot p_{1}=p_{1}\cdot p_{1}^{\prime}=p_{1}^{\prime}\cdot p_{1}^{\prime}=m_{1}^{2}\), and since \(J_{nm}\) is a function of \(p_{n}\cdot p_{m}\) we conclude that \(J_{11}=J_{11^{\prime}}=J_{1^{\prime}1^{\prime}}\); but due to the sign differences arising from the \(\eta\) factors, we find that this sum vanishes. It is easy to verify that a similar cancellation occurs for the terms proportional to \(Q_{2}^{2}\) and \(Q_{1}Q_{2}\). We conclude that the \(\mathcal{O}(\hbar^{0})\) terms do not contribute to the real IR divergences. Turning to the \(\mathcal{O}(\hbar)\) terms, we start by noting that the terms in the sum proportional to \(Q_{i}^{2}\) for \(i=1,2\) still cancel in the same manner as before. This is because the equality \(p_{i}\cdot p_{i}=p_{i}\cdot p_{i}^{\prime}=p_{i}^{\prime}\cdot p_{i}^{\prime}\) still holds to this order (they only differ by terms of \(\mathcal{O}(\hbar^{2})\)).
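The mechanism behind these cancellations is a discrete second difference. Writing \(J_{nm}=J(p_{n}\cdot p_{m})\), the kinematics (104)–(105) shift the invariants by \(a=p_{1}\cdot k\) and \(b=p_{2}\cdot k\), both \(\mathcal{O}(\hbar)\), so the signed sum over \((n,m)\) takes the schematic form \(J(w)-J(w-a)-J(w-b)+J(w-a-b)\), which starts at \(\mathcal{O}(ab)=\mathcal{O}(\hbar^{2})\). A sympy sketch of this counting (ours, with \(J\) an arbitrary smooth function):

```python
import sympy as sp

w, a, b, t = sp.symbols('w a b t')
J = sp.Function('J')

# Signed sum over (n, m) = (1,2), (1,2'), (1',2), (1',2');
# a = p1.k and b = p2.k are the O(hbar) shifts of the invariants.
S = J(w) - J(w - t*a) - J(w - t*b) + J(w - t*a - t*b)

# Expand in the soft parameter t: the O(1) and O(t) pieces cancel,
# leaving ~ a*b*J''(w)*t**2, i.e. O(hbar^2) and hence quantum.
print(sp.series(S, t, 0, 3))
```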
We therefore only need to look at the terms proportional to \(Q_{1}Q_{2}\). Noting that the terms \(J_{12}=J_{21}\) do not contribute powers of \(\hbar\), we are left with \[\frac{1}{(2\pi)^{4}}e^{2}Q_{1}Q_{2}\operatorname{Re}\left(\eta_{1}\eta_{2^{\prime}}J_{12^{\prime}}+2\eta_{1^{\prime}}\eta_{2}J_{1^{\prime}2}+\eta_{1^{\prime}}\eta_{2^{\prime}}J_{1^{\prime}2^{\prime}}\right). \tag{116}\] Expanding each term to linear order in \(\hbar\) using (108) and the kinematics (105), we find that \[\text{Re}\ \eta_{1}\eta_{2^{\prime}}J_{12^{\prime}}=\frac{2\pi^{2}m_{1}m_{2}\,k\cdot p_{1}\left(2\gamma^{2}m_{1}^{2}m_{2}^{2}+m_{1}^{2}m_{2}^{2}\left(v\log\left(\frac{1-v}{1+v}\right)-2\right)\right)}{\gamma^{3}v^{4}}\,, \tag{117}\] \[\text{Re}\ \eta_{1^{\prime}}\eta_{2}J_{1^{\prime}2}=-\frac{2\pi^{2}m_{1}m_{2}\,(k\cdot p_{1}-k\cdot p_{2})\left(2\gamma^{2}m_{1}^{2}m_{2}^{2}+m_{1}^{2}m_{2}^{2}\left(v\log\left(\frac{1-v}{1+v}\right)-2\right)\right)}{\gamma^{3}v^{4}}\,. \tag{118}\] These terms once again cancel in the sum. Having established that the terms of order \(\hbar\) cancel, we conclude that the real infrared divergences do not contribute classically.

### Imaginary part

The \(\hbar\) expansion of the imaginary IR divergences proceeds in the same manner as in the previous section, where we now expand \[\text{Im}\;J_{nm}=-\frac{4i\pi^{3}}{\beta_{nm}}\ln\left(\frac{\Lambda}{\mu}\right). \tag{111}\] This time, however, the sum does not run over all pairs \((n,m)\), but only those for which \(\eta_{n}=\eta_{m}\). It is precisely this restriction of the sum which prevents the classical contributions from cancelling. We find that the term \((n,m)=(1^{\prime},2^{\prime})\) yields a classical contribution which survives the sum, due to the absence of the terms \((1,2^{\prime})\) and \((1^{\prime},2)\) in the imaginary part, so that the imaginary part of the amplitude contains a classical IR divergence. Nevertheless, this divergence drops out of the waveform due to the simplification discussed in section 3.3.

### QCD and gravity

The analysis of IR divergences in QCD and gravity proceeds broadly in the same manner. Soft divergences arising from graphs where a soft messenger connects two massive lines have the same fate as in QED, and do not contribute to the waveshape. There are two major differences in QCD and gravity, however. First, in both theories, soft divergences also arise in diagrams where a soft messenger connects a massive to a _massless_ line. Imaginary IR divergences in such diagrams do indeed have classical implications, and are discussed in references [130; 131; 94; 133]. Second, in QCD, collinear divergences arise at the level of the amplitude. It would be interesting to explore the classical implications of these collinear divergences in the future.

### Renormalisation

We now turn to the UV divergences arising from vertex corrections, with the aim of justifying the exclusion of such diagrams from the calculation of the QED waveshape in the preceding sections. It suffices for this purpose to consider the renormalisation of the one-loop Compton amplitude. We start by subtracting the one-loop divergences in the self-energy, three-point vertex, and four-point vertex diagrams using the on-shell renormalisation scheme. The remaining finite results are then expanded in powers of \(\hbar\), revealing a cancellation of the classical terms at one loop.
To start, we rewrite the bare Lagrangian for scalar QED in terms of the renormalised fields by defining12 \[\begin{split}\phi_{0}&=\sqrt{Z_{2}}\,\phi\,,\\ A_{0}^{\mu}&=\sqrt{Z_{3}}\,A^{\mu}\,,\\ e_{0}&=\sqrt{Z_{e}}\,e\,,\end{split} \tag{112}\] so that \[\begin{split}\mathcal{L}_{\text{bare}}=&-\frac{1}{4}Z_{3}F_{\mu\nu}F^{\mu\nu}+Z_{2}\,\partial^{\mu}\phi^{*}\partial_{\mu}\phi-Z_{2}m_{0}^{2}\,\phi^{*}\phi\\ &+ieZ_{1}A^{\mu}\left(\phi^{*}\partial_{\mu}\phi-\phi\,\partial_{\mu}\phi^{*}\right)+e^{2}Z_{4}A^{\mu}A_{\mu}\,\phi^{*}\phi,\end{split} \tag{108}\] where we have further defined \(Z_{1}=\sqrt{Z_{3}Z_{e}}\,Z_{2}\) and \(Z_{4}=Z_{3}Z_{2}Z_{e}\).

Footnote 12: In this section, we use the symbol \(e\) to represent a generic charge.

We then expand the bare Lagrangian using \(Z_{i}=1+\delta_{i}\) and \(Z_{2}(m_{0}^{2}-m^{2})=\delta_{m}\) to obtain the counterterm Lagrangian \[\begin{split}\mathcal{L}_{\text{ct}}=&-\frac{1}{4}\delta_{3}F_{\mu\nu}F^{\mu\nu}+\delta_{2}\left(\partial^{\mu}\phi^{*}\partial_{\mu}\phi-m^{2}\phi^{*}\phi\right)-\delta_{m}\,\phi^{*}\phi\\ &+ie\delta_{1}A^{\mu}\left(\phi^{*}\partial_{\mu}\phi-\phi\,\partial_{\mu}\phi^{*}\right)+e^{2}\delta_{4}A^{\mu}A_{\mu}\,\phi^{*}\phi,\end{split} \tag{109}\] from which we read off the Feynman rules for the counterterm vertices: \[\begin{split}\Sigma_{\text{ct}}(p^{2})&=i(\delta_{2}(p^{2}-m^{2})-\delta_{m})\,,\\ \Gamma^{\mu}_{\text{ct}}(p,q)&=-ie\delta_{1}\left(2p^{\mu}-q^{\mu}\right)\,,\\ \Gamma^{\mu\nu}_{\text{ct}}(p,q,k)&=2ie^{2}\delta_{4}\,g^{\mu\nu}\,.\end{split} \tag{110}\]

In what follows, we will consider three types of vertex corrections. First, we have the self-energy correction \[\Sigma(p^{2})=\Sigma_{\text{loop}}(p^{2})+\Sigma_{\text{ct}}(p^{2}), \tag{111}\] where the first and second terms refer to the one-loop diagram and the counterterm diagram respectively. The 1PI cubic and quartic vertices are defined in a similar manner, \[\begin{split}\Gamma^{\mu}(p,q)&=\Gamma^{\mu}_{\text{loop}}(p,q)+\Gamma^{\mu}_{\text{ct}}(p,q)\,,\\ \Gamma^{\mu\nu}(p,q,k)&=\Gamma^{\mu\nu}_{\text{loop}}(p,q,k)+\Gamma^{\mu\nu}_{\text{ct}}(p,q,k)\,,\end{split} \tag{112}\] where \(\Gamma^{\mu}(p,q)\) refers to a cubic vertex with incoming momentum \(p\) and outgoing photon momentum \(q\); likewise, \(\Gamma^{\mu\nu}(p,q,k)\) refers to a four-point vertex with photon momenta \(q\) and \(k\) emitted. Note that here we do not include the tree-level vertices in our definitions of \(\Gamma^{\mu}\) and \(\Gamma^{\mu\nu}\).

We renormalise our diagrams in the on-shell scheme. In this scheme, we impose the following renormalisation conditions on the self-energy: \[\begin{split}\Sigma(p^{2}\to m^{2})&=0\,,\\ \frac{d}{dp^{2}}\Sigma(p^{2}\to m^{2})&=0\,,\end{split} \tag{113}\] where it is understood that the on-shell limit is applied after taking the required derivative. These conditions ensure that the renormalised propagator coincides with the free propagator near the pole \(p^{2}=m^{2}\). This allows us to neglect self-energy corrections on external lines, since the on-shell renormalised propagator is still truncated by inverse (free) propagator factors in the LSZ formula.
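The two conditions above fix \(\delta_{m}\) and \(\delta_{2}\) uniquely. A minimal sympy sketch of this (ours; we model \(\Sigma_{\text{loop}}\) by its Taylor data at the mass shell and suppress factors of \(i\)):

```python
import sympy as sp

s, m2 = sp.symbols('s m2')            # s = p^2
S0, S1, S2 = sp.symbols('Sigma0 Sigma1 Sigma2')

# Model Sigma_loop near the mass shell by its Taylor data:
Sigma_loop = S0 + S1 * (s - m2) + sp.Rational(1, 2) * S2 * (s - m2)**2

# The on-shell conditions fix the counterterms:
delta_m = Sigma_loop.subs(s, m2)                   # = Sigma_loop(m^2)
delta_2 = -sp.diff(Sigma_loop, s).subs(s, m2)      # = -Sigma_loop'(m^2)

# Renormalised self energy, Sigma_loop plus the counterterm contribution:
Sigma = Sigma_loop + delta_2 * (s - m2) - delta_m
print(sp.simplify(Sigma.subs(s, m2)))              # 0
print(sp.simplify(sp.diff(Sigma, s).subs(s, m2)))  # 0
```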
The renormalisation conditions for the three-point and four-point vertices are defined in the limit of zero photon momenta; in this limit the vertices are generically of the form \[\begin{split}-ie\,\Gamma^{\mu}(p,q\to 0)&=-ie\,F(p^{2})\,p^{\mu}\,,\\ ie^{2}\,\Gamma^{\mu\nu}(p,q\to 0,k\to 0)&=ie^{2}F_{1}(p^{2})\,g^{\mu\nu}+ie^{2}F_{2}(p^{2})\,p^{\mu}p^{\nu}.\end{split} \tag{108}\] The renormalisation conditions are then \[F(p^{2})=0\,,\qquad F_{1}(p^{2})=0, \tag{109}\] for the cubic and quartic vertices respectively. The above conditions are sufficient to fix the counterterms to be \[\begin{split}\delta_{m}&=\Sigma_{\text{loop}}(p^{2}\to m^{2})=0\,,\\ \delta_{2}&=-\frac{d}{dp^{2}}\Sigma_{\text{loop}}(p^{2}\to m^{2})\,,\\ \delta_{1}&=-F_{\text{loop}}(p^{2})\,,\\ \delta_{4}&=-\frac{1}{2}F_{1,\text{loop}}(p^{2}).\end{split} \tag{110}\] It is straightforward to check through explicit calculation that the counterterms above are sufficient to subtract all divergences arising from loop corrections. In the limit of zero photon momenta, the form factors of the three-point and four-point vertices are fixed by the Ward identities: \[\begin{split}F_{\text{loop}}(p^{2})&=\frac{d\Sigma_{\text{loop}}(p^{2})}{dp^{2}}\,,\\ F_{1,\text{loop}}(p^{2})&=2\,\frac{d\Sigma_{\text{loop}}\left(p^{2}\right)}{dp^{2}}\,,\\ F_{2,\text{loop}}(p^{2})&=-4\,\frac{d^{2}\Sigma_{\text{loop}}\left(p^{2}\right)}{\left(dp^{2}\right)^{2}}\,.\end{split} \tag{111}\] Comparing this with (110), we see that the Ward identity implies the following equality between the counterterms: \[\delta_{1}=\delta_{2}=\delta_{4}. \tag{112}\] The Ward identity makes the task of determining the counterterms significantly easier. Conversely, it serves as a nontrivial check if the counterterms for each vertex are calculated independently.

We can now proceed to consider the overall contribution of these vertices. We will consider three classes of diagrams separately, illustrated in figure 7. First, we have the diagrams resulting from the self-energy corrections, \[\begin{split}\mathcal{D}^{\mu\nu}_{\text{self}}(p,k,q)&=(-ie)^{2}(2p-q)^{\mu}\,G(p-q)\,(2p-q-k)^{\nu}\\ &\quad+(-ie)^{2}(2p-k)^{\mu}\,G(p-k)\,(2p-q-k)^{\nu}\,,\end{split} \tag{113}\] where \(G(p-q)\) is the renormalised propagator correction, given by \[G(p-q)=\frac{i}{(p-q)^{2}-m^{2}+i\epsilon}\left[i\Sigma((p-q)^{2})\right]\frac{i}{(p-q)^{2}-m^{2}+i\epsilon}. \tag{110}\] We also have the diagrams containing corrections to the three-point vertex, \[\begin{split}\mathcal{D}^{\mu\nu}_{\text{Cubic}}(p,k,q)=&(-ie)^{2}\frac{i\Gamma^{\mu}(p,q)\,(2p-q-k)^{\nu}}{(p-q)^{2}-m^{2}+i\epsilon}+(-ie)^{2}\frac{(2p-q)^{\mu}\,i\Gamma^{\nu}(p-q,k)}{(p-q)^{2}-m^{2}+i\epsilon}\\ &+(\mu\leftrightarrow\nu),\end{split} \tag{111}\] and finally, we have the four-point vertex correction \[\mathcal{D}^{\mu\nu}_{\text{Quartic}}(p,k,q)=ie^{2}\,\Gamma^{\mu\nu}(p,q,k). \tag{112}\]

Figure 7: Diagrams arising from corrections to the propagator, cubic, and quartic vertices, where the blobs represent 1PI vertex corrections. We have suppressed the crossed diagrams.

We can now examine the \(\hbar\) expansion of each of the terms above. To ensure that the vertex corrections are quantum, we want the diagrams containing them to be suppressed by one power of \(\hbar\) relative to the tree-level diagrams. Since the loop corrections are suppressed by a factor of \(e^{2}/\hbar\), this amounts to ensuring that the rest of the diagram is \(\mathcal{O}(\hbar^{2})\). Starting with the self-energy diagrams, let us focus on the first term in (113).
Expanding \(\Sigma((p-q)^{2})\) in small \(q\) yields the series \[\Sigma((p-q)^{2})=\Sigma(p^{2})+(-2p\cdot q+q^{2})\,\Sigma^{\prime}(p^{2})+\frac{1}{2}(-2p\cdot q+q^{2})^{2}\,\Sigma^{\prime\prime}(p^{2})+\cdots, \tag{6.36}\] where the primes denote derivatives with respect to the argument \(p^{2}\). Inserting this into the propagator \(G(p-q)\) and putting \(p\) on-shell gives \[\begin{split}G(p-q)=&\left[\frac{i}{-2p\cdot q+q^{2}-i\epsilon}\right]^{2}i\Sigma(p^{2}\to m^{2})+\left[\frac{i^{2}}{-2p\cdot q+q^{2}-i\epsilon}\right]i\Sigma^{\prime}(p^{2}\to m^{2})\\ &+\frac{i^{3}}{2}\Sigma^{\prime\prime}(p^{2}\to m^{2})\,.\end{split} \tag{6.37}\] The first two terms vanish identically by virtue of the renormalisation conditions (6.26). We thus find that the only parts of \(\mathcal{D}_{\rm self}\) that could yield a classical contribution are \[\frac{ie^{2}}{2}(2p-q)^{\mu}\,\Sigma^{\prime\prime}(p^{2}\to m^{2})\,(2p-q-k)^{\nu}+\frac{ie^{2}}{2}(2p-k)^{\mu}\,\Sigma^{\prime\prime}(p^{2}\to m^{2})\,(2p-q-k)^{\nu}. \tag{6.38}\] As we will see below, these terms will ultimately cancel against a similar contribution originating from \(\mathcal{D}_{\rm Quartic}\). To this end, consider the form of \(\mathcal{D}_{\rm Quartic}\) in an \(\hbar\) expansion. The leading order piece follows from taking \(q=k=0\), in which case it takes the form given in (6.27): \[\mathcal{D}_{\rm Quartic}^{\mu\nu}(p,0,0)=ie^{2}F_{1}(p^{2})\,g^{\mu\nu}+ie^{2}F_{2}(p^{2})\,p^{\mu}p^{\nu}. \tag{6.39}\] The first term vanishes by the renormalisation conditions, while the second is rewritten using the Ward identity (6.30), so that \[\mathcal{D}_{\rm Quartic}^{\mu\nu}(p,0,0)=-4ie^{2}\,\Sigma^{\prime\prime}(p^{2})\,p^{\mu}p^{\nu}. \tag{6.40}\] It is easy to see that this exactly compensates the terms proportional to \(p^{\mu}p^{\nu}\) in (6.38). We conclude that the terms of order \(\hbar^{0}\) vanish in the sum \(\mathcal{D}_{\rm self}+\mathcal{D}_{\rm Quartic}\).

Turning to the terms linear in \(\hbar\), we see that powers of \(\hbar\) can arise in two ways in \(\mathcal{D}_{\rm Quartic}\): from introducing photon momenta in the scalar coefficients \(F_{1}(p^{2})\) and \(F_{2}(p^{2})\), and from introducing photon momenta in the tensor structure \(p^{\mu}p^{\nu}\). The latter terms are still cancelled by the contributions in (6.38), for the simple reason that they end up having the same tensor structure. As for the variation of the scalar coefficients, the fact that the form factors are functions of \(p^{2}\) implies that switching on the photon momenta \(q\) and \(k\) leads to the following order-\(\hbar\) corrections: \[\begin{split}F_{1}((p-q-k)^{2})&=F_{1}(p^{2})-2p\cdot(q+k)\,F_{1}^{\prime}(p^{2})+\mathcal{O}(\hbar^{2})\,,\\ F_{2}((p-q-k)^{2})&=F_{2}(p^{2})-2p\cdot(q+k)\,F_{2}^{\prime}(p^{2})+\mathcal{O}(\hbar^{2}).\end{split} \tag{6.41}\] We see that the leading correction is proportional to \(2p\cdot(q+k)\) which, due to the on-shell condition for the outgoing momentum \((p-k-q)^{2}=m^{2}\), is in fact \(\mathcal{O}(\hbar^{2})\). We conclude that the sum of \(\mathcal{D}_{\rm Quartic}\) and \(\mathcal{D}_{\rm self}\) contains no classical terms.
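The \(\hbar\) counting underlying this conclusion can be made explicit in a few lines. Writing \(x=-2p\cdot q+q^{2}=\mathcal{O}(\hbar)\) for the off-shellness, the self-energy insertion behaves as \(\Sigma(m^{2}+x)/x^{2}\), and the on-shell conditions remove precisely the \(1/x^{2}\) and \(1/x\) pieces that would otherwise contribute at classical order. A sympy sketch (ours; \(i\epsilon\)'s and overall factors suppressed):

```python
import sympy as sp

x = sp.symbols('x')                    # x = -2 p.q + q^2 = O(hbar)
S0, S1, S2 = sp.symbols('Sigma0 Sigma1 Sigma2')  # on-shell Taylor data of Sigma

# Self-energy insertion on an internal line: Sigma(m^2 + x) / x^2
G = (S0 + S1 * x + sp.Rational(1, 2) * S2 * x**2) / x**2

# On-shell renormalisation sets Sigma(m^2) = Sigma'(m^2) = 0:
print(sp.simplify(G.subs({S0: 0, S1: 0})))   # Sigma2/2: the 1/x^2 and 1/x
                                             # poles, the classical pieces, drop out
```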
Turning to the cubic vertices, we again find that the terms proportional to \(F(p^{2})\) vanish by the renormalisation conditions, while the leading variations of the scalar form factors are proportional to \(p\cdot(k+q)\), which is \(\mathcal{O}(\hbar^{2})\) on-shell. This ensures that the diagrams containing the corrected three-point vertices are suppressed by a power of \(\hbar\) relative to the tree-level diagrams. With this we arrive at the anticipated result that the vertex corrections do not contribute to the classical waveshape in QED.

## 7 Classical confirmation

This part of the paper is devoted to confirming our waveshape expressions by integrating the classical equations of motion. We begin by focusing on the conservative part. Since the methods are standard, we will be brief.

Our first order of business is to decide what classical object should be compared to the waveshape. One place to start is with the classical field strength in Fourier space, namely \[F_{\mu\nu}(x)=i\int\hat{\mathrm{d}}^{4}k\,\frac{e^{-ik\cdot x}}{k^{2}}\,k_{[\mu}\left(\tilde{A}_{\nu]}(k)\,k^{2}\right)\,. \tag{110}\] We have written \(\tilde{A}(k)\) for the Fourier components of the gauge field \(A_{\mu}(x)\), and we have extracted a factor of \(k^{2}\) from \(\tilde{A}(k)\); this factor is always present on account of the Maxwell equations. The integral over \(k\) is defined with retarded boundary conditions. Standard manipulation of the \(k\) integral at large distances \(|\mathbf{x}|\) from the scattering event leads to the field strength in the form \[F_{\mu\nu}(x)=-\frac{1}{4\pi|\mathbf{x}|}2\,\mathrm{Re}\int_{0}^{\infty}\hat{\mathrm{d}}\omega\,e^{-i\omega u}\sum_{\eta}k_{[\mu}\varepsilon_{\nu]}^{\eta}(k)\left(i\varepsilon^{\eta*}(k)\cdot\tilde{A}(k)\,k^{2}\right)\,, \tag{111}\] evaluated in the on-shell limit \(k^{2}\to 0\) with \(k=(\omega,\omega\mathbf{x}/|\mathbf{x}|)\). The retarded time appearing here is \(u=x^{0}-|\mathbf{x}|\). Comparing with equation (13), we identify the classical counterpart of the waveshape as \[\begin{split}\alpha_{\eta}(k)&=\langle\psi|S^{\dagger}a_{\eta}(k)S|\psi\rangle\\ &=i\,\varepsilon^{\eta*}(k)\cdot\tilde{A}(k)\,k^{2}\,.\end{split} \tag{112}\] The problem then is to compute the gauge field in momentum space at the relevant perturbative order. Here we focus on the waveshape in the large-\(m_{1}\) limit, which simplifies the problem since we can presume that particle 1 is stationary and therefore does not radiate. Leaving aside the self-field of particle 2 for now (we will include it below using the ALD force), we need only consider the motion of particle 2 in the Coulomb field of particle 1. The radiation field is entirely due to this accelerated motion, so the problem is to determine the field of particle 2 in perturbation theory. Working in Fourier space, the Maxwell equation to be solved (in Lorenz gauge) is \[-k^{2}\tilde{A}^{\mu}(k)=Q_{2}\int\mathrm{d}\tau\,v_{2}^{\mu}(\tau)\,e^{ik\cdot x_{2}(\tau)}\,, \tag{113}\] where \(x_{2}(\tau)\) is the position of particle 2 and \(v_{2}(\tau)\) is its proper velocity. The position can be written as a perturbative series \[x_{2}(\tau)=b_{2}+u_{2}\tau+\Delta^{(1)}x_{2}(\tau)+\Delta^{(2)}x_{2}(\tau)+\cdots\,, \tag{114}\] around the straight-line trajectory \(b_{2}+u_{2}\tau\). The objects \(\Delta^{(n)}x_{2}(\tau)\) are corrections to the trajectory at a given perturbative order; inspection of the Lorentz force law shows that \(\Delta^{(n)}x_{2}(\tau)\) is of order \(Q_{1}^{n}Q_{2}^{n}\).
Similarly, the velocity can be written as \[v_{2}(\tau)=u_{2}+\Delta^{(1)}v_{2}(\tau)+\Delta^{(2)}v_{2}(\tau)+\cdots\,. \tag{111}\] In this notation, the order \(Q_{1}^{2}Q_{2}^{3}\) part of the acceleration field is \[\begin{split}-k^{2}\Delta^{(2)}\tilde{A}^{\mu}(k)=Q_{2}\int{ \rm d}\tau&\,e^{ik\cdot(b_{2}+u_{2}\tau)}\left[\Delta^{(2)}v_{2}^ {\mu}(\tau)+ik\cdot\Delta^{(1)}x_{2}(\tau)\Delta^{(1)}v_{2}^{\mu}(\tau)\right. \\ &\left.+u_{2}^{\mu}\left(ik\cdot\Delta^{(2)}x_{2}(\tau)-\frac{1}{ 2}(k\cdot\Delta^{(1)}x_{2}(\tau))^{2}\right)\right]\,.\end{split} \tag{112}\] However, in view of equation (110) we are only interested in projection of this radiation field onto a polarisation vector in the on-shell limit \(k^{2}\to 0\). We are then free to choose the gauge of the polarisation vector to suit us. It is clear that the choice \(\varepsilon^{\eta*}(k)\cdot u_{2}=0\) simplifies the calculation (and indeed this is the choice we made in section 4.1). It remains to compute the quantities \(\Delta^{(2)}v_{2}(\tau)\), \(\Delta^{(1)}v_{2}(\tau)\) and \(\Delta^{(1)}x_{2}(\tau)\). Of these, the first order perturbations \(\Delta^{(1)}v_{2}(\tau)\) and \(\Delta^{(1)}x_{2}(\tau)\) can be found in section 6 of reference [59]. The second-order term \(\Delta^{(2)}v_{2}(\tau)\) can be computed by iterating the Lorentz force at one further order than performed in that reference, using precisely the same method. The result is \[\begin{split}\Delta^{(2)}v_{2}^{\mu}(\tau)=\frac{iQ_{1}^{2}Q_{2} ^{2}}{m_{2}^{2}}\int&\hat{\rm d}^{D}\ell_{1}\hat{\rm d}^{D}\ell_ {2}\frac{\hat{\delta}(\ell_{1}\cdot u_{1})\hat{\delta}(\ell_{2}\cdot u_{2}}{ \ell_{1}^{2}\ell_{2}^{2}}\frac{e^{-i(\ell_{1}+\ell_{2})\cdot(b_{2}+u_{2}\tau) }}{(\ell_{1}\cdot u_{2})^{2}((\ell_{1}+\ell_{2})\cdot u_{2})^{2}}\\ &\times\ell_{2}^{[\mu}u_{1}^{\nu]}\left(\ell_{1}\cdot u_{2}\ell_ {1[\nu}u_{1\rho]}u_{2}^{\rho}+u_{2\nu}\ell_{2\rho}\ell_{1}^{[\rho}u_{1}^{\sigma ]}u_{2\sigma}\right)\,.\end{split} \tag{113}\] We must now combine the results and compare to the waveshape in eq. (108). To do so, we find it convenient to define \[\begin{split} R_{1}&=-iQ_{2}\int{\rm d}\tau\,e^{ik \cdot(b_{2}+u_{2}\tau)}\Delta^{(2)}v_{2}(\tau)\cdot\varepsilon^{*}(k)\,,\\ R_{2}&=-iQ_{2}\int{\rm d}\tau\,e^{ik\cdot(b_{2}+u_{ 2}\tau)}ik\cdot\Delta^{(1)}x_{2}(\tau)\Delta^{(1)}v_{2}(\tau)\cdot\varepsilon^ {*}(k)\,.\end{split} \tag{114}\] in the on-shell limit, so that the waveshape is \(R_{1}+R_{2}\). It is then necessary to relabel the variables of integration before comparing to the waveshape. In \(R_{1}\), for instance, this can be achieved by setting \(\ell_{1}=\ell\), \(\ell_{2}=q_{1}-\ell\). Combining everything, we found \[\begin{split} R_{1}+R_{2}=&\frac{iQ_{1}^{2}Q_{2}^{ 3}}{m_{2}^{2}}\int\hat{\rm d}^{4}q_{1}\hat{\rm d}^{4}q_{2}\,\hat{\delta}(q_{1} \cdot u_{1})\hat{\delta}(q_{2}\cdot u_{2})\hat{\delta}^{D}(q_{1}+q_{2}+k)e^{- iq_{1}\cdot b_{1}}e^{-iq_{2}\cdot b_{2}}\,\mathcal{R}\end{split} \tag{115}\] where the kernel \({\cal R}\) is \[{\cal R}= \int\hat{\mathrm{d}}^{D}\ell\frac{\hat{\delta}(\ell\cdot u_{1})}{ \ell^{2}(\ell-q_{1})^{2}}\left[\varepsilon^{*}\cdot u_{1}\frac{u_{1}\cdot u_{2} \,k\cdot\ell-k\cdot u_{1}\,\ell\cdot u_{2}-\ell\cdot(\ell-q_{1})u_{1}\cdot u_{ 2}}{(\ell\cdot u_{2})^{2}}\right. 
\tag{111}\] \[\left.+\varepsilon^{*}\cdot(\ell-q_{1})\left(\frac{1}{k\cdot u_ {2}}+\frac{1}{k\cdot u_{2}}\frac{(u_{1}\cdot u_{2})^{2}\ell\cdot(\ell-q_{1})}{ (\ell\cdot u_{2})^{2}}\right.\right.\] \[\left.\left.+u_{1}\cdot u_{2}\frac{k\cdot u_{1}\,\ell\cdot u_{2} -k\cdot\ell\,u_{1}\cdot u_{2}}{(\ell\cdot u_{2})^{2}(\ell-q_{1})\cdot u_{2}} \right)\right]\,.\] This quantity should be directly comparable to our previous expression for the wave-shape in equation (116). To see that the expressions do indeed match, it is necessary to take advantage of properties of the integrals involved. For example we may neglect \(\ell^{2}\) in the numerator of the kernel \({\cal R}\): as this cancels one of the photon propagators the result will be a contact term. It is also useful to note, for example, that \[\int\hat{\mathrm{d}}^{D}\ell\frac{\hat{\delta}(\ell\cdot u_{1})}{ \ell^{2}(\ell-q_{1})^{2}}\varepsilon^{*}\cdot(\ell-q_{1})\frac{\ell\cdot(\ell- q_{1})}{\ell\cdot u_{2}\,(\ell-q_{1})\cdot u_{2}} \tag{112}\] \[=-\frac{1}{2}\int\hat{\mathrm{d}}^{D}\ell\frac{\hat{\delta}(\ell \cdot u_{1})}{\ell^{2}(\ell-q_{1})^{2}}\varepsilon^{*}\cdot q_{1}\frac{\ell \cdot(\ell-q_{1})}{\ell\cdot u_{2}\,(\ell-q_{1})\cdot u_{2}}\,.\] When the dust settles, we do indeed find a complete match with the waveshape in equation (116). Even in electrodynamics, it is significantly easier to determine the waveshape using amplitudes once the relevant cuts are known: the QFT-based approach separates structural aspects of the waveshape (for example the delta functions and \(q\) integrals) from the dynamics from the start. The dynamics is elegantly captured in the cuts, avoiding rather lengthy algebra. Let us now move onto dissipative terms. As we stressed before, in electrodynamics, we are lucky enough that radiation reaction can be systematically computed through the ALD force [154; 155; 156]. This non-conservative force prescribes the following momentum kick \[\frac{\mathrm{d}p_{2}^{\mu}}{\mathrm{d}\tau}=\frac{Q_{2}^{2}}{6\pi m_{2}} \left(\frac{\mathrm{d}^{2}p_{2}^{\mu}}{\mathrm{d}\tau^{2}}+\frac{p_{2}^{\mu}}{ m_{2}^{2}}\frac{\mathrm{d}p_{2}}{\mathrm{d}\tau}\cdot\frac{\mathrm{d}p_{2}}{ \mathrm{d}\tau}\right), \tag{113}\] which is supplementing the lower order deflection due to the Lorentz force. To see how (113) feeds in \(F^{\mu\nu}(x)\) at order \(Q_{1}Q_{2}^{4}\) we start from the LO velocity correction updated through the Lorentz force only. If we expand perturbatively the four velocity of particle 2 \[u_{2}^{\mu}(\tau)=u_{2}^{\mu}+u_{2,\mathrm{LO}}^{\mu}(\tau)+\cdots \tag{114}\] then the first correction to the constant term is [59] \[u_{2,\mathrm{LO}}^{\mu}(\tau)=-\frac{Q_{1}Q_{2}}{m_{2}}\int\hat{\mathrm{d}}^{4 }q\,\hat{\delta}(q\cdot u_{1})e^{iq\cdot(b_{12}-u_{2}\tau)}\frac{u_{2\nu}q^{[ \mu}u_{1}^{\nu]}}{q^{2}\,q\cdot u_{2}}, \tag{115}\] with \(b_{12}=b_{1}-b_{2}\). Now we feed this into (116) noting that, at the lowest order, we only need to consider the first term \(\propto\mathrm{d}^{2}p^{\mu}/\mathrm{d}\tau^{2}\). 
In this way we obtain updated velocity and particle trajectory (integrating once more) caused by radiation reaction \[u^{\mu}_{2,\,\mathrm{ADD}}(\tau) =\frac{Q_{2}^{2}}{6\pi m_{2}}\int_{-\infty}^{\tau}\mathrm{d}\tau \frac{\mathrm{d}^{2}u^{\mu}_{2,\,\mathrm{LO}}}{\mathrm{d}\tau^{2}}=i\frac{Q_{1 }Q_{2}^{3}}{6\pi m_{2}^{2}}\int\hat{\mathrm{d}}^{4}q\,\hat{\delta}(q\cdot u_{1} )e^{iq\cdot(b_{12}-u_{2}\tau)}\frac{u_{2\nu}q^{[\mu}u_{1}^{\nu]}}{q^{2}},\] \[x^{\mu}_{2,\,\mathrm{ADD}}(\tau) =\int_{-\infty}^{\tau}\mathrm{d}\tau\,u^{\mu}_{2,\,\mathrm{ADD}} =-\frac{Q_{1}Q_{2}^{3}}{6\pi m_{2}^{2}}\int\hat{\mathrm{d}}^{4}q\,\hat{ \delta}(q\cdot u_{1})e^{iq\cdot(b_{12}-u_{2}\tau)}\frac{u_{2\nu}q^{[\mu}u_{1}^ {\nu]}}{q^{2}\,q\cdot u_{2}}. \tag{117}\] The crucial step is to now use the ALD data (117) to solve the field equations with retarded boundary conditions \[\partial^{2}A^{\mu}(x)=Q_{2}\int\mathrm{d}\tau\,u^{\mu}_{2}(\tau)\delta^{4}(x -x(\tau)). \tag{118}\] After going to momentum space, a short calculation leads to \[-k^{2}\tilde{A}^{\mu}(k) =\cdots+Q_{2}\int\mathrm{d}\tau\,e^{ik\cdot(b+u_{2}\tau)}\left(u^ {\mu}_{2,\mathrm{ADD}}(\tau)+ik\cdot x_{\mathrm{ADD}}(\tau)u^{\mu}_{2}\right)\] \[=\cdots+i\frac{Q_{1}Q_{2}^{4}}{6\pi m_{2}^{2}}\int\hat{\mathrm{d} }^{4}q_{1}\hat{\mathrm{d}}^{4}q_{2}\,e^{iq_{1}\cdot b_{1}}e^{iq_{2}\cdot b_{2} }\hat{\delta}(q_{1}\cdot u_{1})\hat{\delta}(q_{2}\cdot u_{2})\hat{\delta}^{4} (k+q_{1}+q_{2})\] \[\qquad\qquad\times\frac{k\cdot u_{2}}{q_{1}^{2}}\left(u^{\mu}_{1 }+q^{\mu}_{1}\frac{u_{1}\cdot u_{2}}{k\cdot u_{2}}-u^{\mu}_{2}\frac{k\cdot u_ {1}}{k\cdot u_{2}}-u^{\mu}_{2}\frac{u_{1}\cdot u_{2}\,k\cdot q_{1}}{(k\cdot u _{2})^{2}}\right), \tag{119}\] at the relevant \(Q_{1}Q_{2}^{4}\) order. One can already see that things start to look familiar: the quantity inside the brackets is exactly the source we defined earlier in (114). To complete the calculation we simply have to contract with a polarisation vector according to (104) to get \[i\varepsilon^{*}(k)\cdot\tilde{A}(k)\,k^{2}=\frac{Q_{1}Q_{2}^{4}} {6\pi m_{2}^{2}}\int\hat{\mathrm{d}}^{4}q_{1}\hat{\mathrm{d}}^{4}q_{2}\,e^{iq _{1}\cdot b_{1}}e^{iq_{2}\cdot b_{2}}\hat{\delta}(q_{1}\cdot u_{1})\hat{\delta }(q_{2}\cdot u_{2})\hat{\delta}^{4}(k+q_{1}+q_{2})\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\times\frac{k\cdot u_{2}}{q_{1}^{2}}\varepsilon^{*}(k )\cdot\mathcal{J}(k,q_{1}), \tag{120}\] clearly matching \(\alpha_{\eta}(k)\) derived with amplitudes (103). Let us remark that the term in the ALD acceleration \(\propto\mathrm{d}^{2}p^{\mu}/\mathrm{d}\tau^{2}\) which gave rise to the relevant correction (117) is known in the literature as a "Schott term" [157]. This contribution is leading in the coupling with respect to \(\propto p^{\mu}(\mathrm{d}p/\mathrm{d}\tau)^{2}\) but, importantly, is a total time derivative. This means that it does not contribute to the total emitted radiation, albeit being necessary to maintain the system's energy-momentum balance. Remarkably, our derivation shows that this transient correction is the only one which secretly sources dissipated radiation, at one loop. Conclusions In this paper we investigated how next-to-leading-order radiation fields can be elegantly computed using the techniques of modern scattering amplitudes. Building upon the KMOC formalism of [59], we characterised NLO radiation fields in terms of the real and imaginary parts of a waveshape. 
The real part is extracted by cutting one massive line of a five-point one-loop amplitude, whereas the imaginary part is obtained by a double cut of this amplitude. With this arrangement, all remaining propagators are defined through a principal-value prescription. This propagator structure emerges directly from the Feynman \(i\epsilon\) prescription together with the split into real and imaginary parts, with no further intervention by hand. Our organisation of the observable provides two key benefits. First, it improves computational efficiency: the cancellation of apparently singular inverse powers of \(\hbar\) (the "superclassical" terms) can be trivialised. Second, this organisation clarifies the underlying physics. Both real and imaginary parts have separate, gauge invariant, physical meaning. The real part describes the radiation emitted by a body moving under the influence of essentially conservative forces: for example, a charge accelerated by the Lorentz force in the field of a different charge. In contrast, the imaginary part captures intrinsically dissipative effects: radiation generated under the influence of the particle's self field. In electrodynamics, this can be understood as the portion of radiation generated by the action of the Abraham-Lorentz-Dirac force on the charge. Indeed the factor \(i\) between the real and imaginary part of the waveshape points to the breaking of time-reversal symmetry. It is worth noting that the acceleration of the charge at this order originates in a Schott term; the total impulse on the charge vanishes, but the time-dependent acceleration nevertheless leaves an imprint on the radiation field of the particle. In our description, this aspect of radiation reaction at one loop order is directly related to a simple unitarity cut involving a product of two Compton amplitudes in electrodynamics, Yang-Mills theory and gravity. Radiation reaction is a consequence of self-force. For point-like objects, this inevitably entails some kind of regulation and renormalisation of singularities. Our approach is directly rooted in traditional quantum field theory, so we took advantage of the opportunity to explain how the usual procedure of renormalisation in quantum field theory removes the real part of certain diagrams. The terms in question are intuitively quantum mechanical, for example, the renormalisation of the QED vertex which has the (quantum-mechanical) consequence of the running coupling. Using the on-shell renormalisation scheme, we showed that this class of diagram indeed cancels from classical computations. As the counterterms are real, they do not affect the imaginary parts of these graphs -- which capture the effects of radiation reaction, and are perfectly classical. It may be worth remarking that our result is scheme-independent as it is an observable quantity. However in other schemes (for example MS) apparently relevant terms from these diagrams would only cancel at the level of the observable itself. An interesting aspect of renormalisation of the waveform is that various quantum-mechanical diagrams are infrared divergent. A subset of these divergences are an obstacle to the on-shell scheme; we showed that these divergences are all quantum in nature so that the on-shell scheme is valid in the classical limit. In electrodynamics, all infrared divergences cancel in the waveshape. However theories with self-interacting massless messengers (Yang-Mills theory and gravity) retain a residual IR divergence [94; 130; 131; 153]. 
We tested our QFT-based computations in electrodynamics by comparing to a fully classical computation of the complete radiation field at order \(Q^{5}\), finding detailed agreement. The classical computation, although not especially arduous, is nevertheless more involved than the elegant approach based on generalised unitarity, once the relevant cuts are understood. Although electrodynamics is a comparatively simple theory, neverthless it is rich enough to provide a very stimulating laboratory to understand many aspects of the dialogue between amplitudes and classical physics, especially since it is often rather straightforward to pass from electrodynamics to Yang-Mills theory [158; 22; 159]. Turning to future directions, it would be very interesting to understand the physical meaning of the real and imaginary parts of the waveshape beyond one loop. Our initial motivatation to study these real and imaginary parts arose from studying references [36; 39], where the authors simplified other observables (the impulse and radiated momentum) after splitting into real and imaginary parts. The authors of references [36; 39] found this approach useful at both one and two loops. A first topic, then, would be to study the waveshape at two loops. Is it possible to link a well-defined part of the observable to radiation reaction at this order? What pole prescription emerges for the various propagators? A related motivation for studying real and imaginary parts of the waveshape emerges from eikonal and related approaches to amplitudes in the classical limit, especially [38; 40; 136; 42; 137]. The essence of these approaches is that classical physics arises from a stationary-phase approximation at the level of the path integral. Amplitudes in the classical limit are essentially perturbative expansions of this phase. Because the phase has the structure \(\exp(iS/\hbar)\), the expansion introduces inverse powers of \(\hbar\); these cancel in observables. The relevant product in the expansion of the phase has the momentum-space interpretation of a convolution, emerging from a cut (see, for example [19] for more on this link). Thus we should expect an interplay between real and imaginary parts and the cancellation of "superclassical" terms in amplitudes at _all_ orders. Indeed, in our work, we found that superclassical terms cancelled completely at the level of cuts after splitting into real and imaginary parts. This greatly simplifies the computation of the relevant cuts. Consequently we think it is very likely that this kind of organisation will be particularly useful at higher orders. There is also still a great deal of interesting physics to be understood without facing the extremely challenging situation at two loops. Capturing the physics of black hole spin in the tree-level waveform already requires an understanding of the Compton amplitude with spin, including relevant contact terms. These have been recently studied by different groups, for example [160; 161; 162; 163; 164]. At one loop, we also need the five-point analogue of the tree Compton amplitude which appears in the relevant one-loop cuts. Throughout this article, our focus was on the waveshape generated during a scattering experiment. It would obviously be very exciting if our computations could be analytically continued using some appropriate algorithm to the bound state case, perhaps along the lines of references [63; 64; 165]. 
We believe that our work shows once again that the perturbative structure of classical interactions is clarified when generalised unitarity and the double copy are exploited. Methods based on scattering amplitudes successfully factorise observables into a general kinematic structure (eg an integral over a waveshape) and a dynamical object (the waveshape) which must be determined. Generalised unitarity determines the dynamical content (the waveshape) from foreknowledge of its general analytic structure. The double copy allows us to bypass much of the complexity of gravity. Although our methods at first sight seem foreign to classical physics, obviously they successfully capture classical physics. We believe that a deeper understanding of these ideas should be available from a classical perspective. For instance, in section 5.3 we found an expression for the waveshape originating from the acceleration of a body under its own gravitational self-field. Our method makes it obvious that this phenomenon is the double-copy of the radiation emitted a charge accelerated by its ALD force. Can we then understand self-forces in gravity (for example the MiSaTaQuWa equations [149; 150]) more generally as a double copy? We leave all these questions for future work. ###### Acknowledgments. We particularly thank John Joseph Carrasco for collaborating with us in the early stages of this work, when he contributed significant ideas which enhanced our understanding of the physics. We also thank Andreas Brandhuber, Graham Brown, Gang Chen, Stefano De Angelis, Joshua Gowdy, Aidan Herderschee, Radu Roiban, Fei Teng, and Gabriele Travaglini for cooperating with us with the submission of this manuscript. Our work has also benefited from useful discussions with Andrea Cristofoli, Kays Haddad, Franz Herzog, Anton Ilderton, Alasdair Ross, Justin Vines, Pablo Vives Matasan and Mao Zeng. This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. AE is sponsored by a Higgs Fellowship. DOC is supported by the U.K. Science and Technology Facility Council (STFC) grant ST/P000630/1. MS is supported by a Principal's Career Development Scholarship from the University of Edinburgh and the School of Physics and Astronomy. IVH is supported by the Knut and Alice Wallenberg Foundation under grants KAW 2018.0116 (From Scattering Amplitudes to Gravitational Waves) and KAW 2018.0162. For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission. ## Appendix A A derivation of the ALD force with cuts In this appendix we propose a classical derivation of radiation reaction which further supports the theory developed in 2.2. There, we related non conservative effects to the imaginary part of (cuts of) the five-point one-loop amplitude. Specifically, here we will demonstrate how the Schott term in (7.13) arises from a double cut integral which highly resembles the ones in section 5. We are inspired by Coleman's lectures on relativistic radiation [166]. Let us start by considering the electromagnetic field strength tensor \(F^{\mu\nu}\) and compute it in a close neighborhood of the particle. We will work in an arbitrary number of dimensions \(D\) to consistently drop scaleless contributions, and only at the end take \(D=4\). 
The field's Fourier transform with retarded boundary conditions reads \[F_{\mu\nu}(0)=iQ\int\hat{\mathrm{d}}^{D}k\int_{-\infty}^{0}\mathrm{d}\tau\,e^ {ik\cdot r(\tau)}\frac{k_{[\mu}u_{\nu]}(\tau)}{k^{2}},\] (A.1) with \(k^{2}\equiv(k^{0}+i\epsilon)^{2}-\mathbf{k}^{2}\). Note that we set for simplicity \(x=0\) in the field's argument. We now sit on top of the particle and expand all space-time dependent quantities in a series of small proper time such that \(\tau\sim k^{-1}\ll 1\). Guided by the fact that 7.13 involves quantities with three \(\tau\)-derivatives, we expand the position \(r^{\mu}(\tau)\) up to terms which involve the particle's acceleration change \(\dot{a}^{\mu}\). Thus we have \[r^{\mu}(\tau)\approx u^{\mu}\tau+\frac{1}{2}a^{\mu}\tau^{2}+\frac{1}{6}\tau^ {3}\dot{a}^{\mu},\quad u^{\mu}(\tau)\approx u^{\mu}+a^{\mu}\tau+\frac{1}{2} \tau^{2}\dot{a}^{\mu},\] (A.2) which we can substitute inside (A.1) to obtain \[F_{\mu\nu}(0)=iQ\int\!\hat{\mathrm{d}}^{D}k\int_{-\infty}^{0} \mathrm{d}\tau\,e^{ik\cdot u\cdot\tau}\frac{k_{[\mu}}{k^{2}}\left(u_{\nu]}+ \tau a_{\nu]}+\frac{1}{2}i\tau^{2}k\cdot a\,u_{\nu]}\] \[+\frac{1}{2}i\tau^{3}k\cdot a\,a_{\nu]}+\frac{1}{2}\tau^{2}\dot{a }_{\nu]}-\frac{1}{8}\tau^{4}(k\cdot a)^{2}u_{\nu]}+\frac{1}{6}i\tau^{3}k\cdot \dot{a}\,u_{\nu]}\bigg{)}+\cdots\] (A.3) Note again that in this expansion \(\tau k\sim 1\) so above we have kept terms up to order \(\tau^{3}/k\sim\tau^{2}/k^{2}\sim\tau^{4}\) in the second line. Next, we begin to simplify our expression using standard symmetry arguments and dimensional analysis. In fact, we will exploit the result that scaleless integrals vanish in dimreg: \(\int\mathrm{d}^{D}k\,k^{-p}=0\). Let us look at the zero-th order term. Here, we will often find it useful to use the decomposition \(k^{\mu}=k\cdot u\,u^{\mu}+k^{\mu}_{\perp}\). Then, the first piece of 114 is \[\int\hat{\mathrm{d}}^{D}k\int_{-\infty}^{0}\mathrm{d}\tau\,e^{ik\cdot u\,\tau }\frac{k^{[\mu}u^{\nu]}}{k^{2}}=\int\hat{\mathrm{d}}^{D}k\frac{i}{k\cdot u-i \epsilon}\frac{k^{[\mu}_{\perp}u^{\nu]}}{k^{2}}=0, \tag{115}\] because of the anti-symmetric \(k^{\mu}_{\perp}\) integrand13. Regarding to the second term in (114), we find it to be zero since it is scaleless. Indeed, using similar steps, one brings this integral to the form Footnote 13: In a frame where \(u^{\mu}=(1,\mathbf{0})\), the \(k^{\mu}_{\perp}\) directions are simply the spatial ones. \[\int\hat{\mathrm{d}}^{D}k\int_{-\infty}^{0}\mathrm{d}\tau\,e^{ik\cdot u\,\tau }\frac{k_{[\mu}a_{\nu]}}{k^{2}}\tau=iu_{[\mu}a_{\nu]}\int\hat{\mathrm{d}}^{D-1 }|\mathbf{k}|\frac{1}{|\mathbf{k}|^{2}}=0, \tag{116}\] in dimensional regularization. We won't discuss each term singularly, but similar arguments can be applied to the other pieces in (114). Some of them involve a tensor numerator that yields a vanishing result once the integral is reduced, and on the support of \(k^{2}=0\) and \(a\cdot u=0\). In the end, we find that only two terms survive, after integrating out \(\tau\) the field strength looks like \[F_{\mu\nu}(0)=Q\int\hat{\mathrm{d}}^{D}k\frac{k_{[\mu}}{k^{2}}\left(\frac{\dot {a}_{\nu]}}{(k\cdot u-i\epsilon)^{3}}-\frac{k\cdot\dot{a}\,u_{\nu]}}{(k\cdot u -i\epsilon)^{4}}\right), \tag{117}\] For the first term we exploit symmetry along the \(k_{\perp}\) integral to write \(k_{[\mu}\dot{a}_{\nu]}=k\cdot u\,u_{[\mu}\dot{a}_{\nu]}\). 
Then, a simple tensor reduction of the \(k\) integral of the second one yields \[\int\hat{\mathrm{d}}^{D}k\frac{k_{[\mu}u_{\nu]}}{k^{2}}\frac{k\cdot\dot{a}}{(k \cdot u-i\epsilon)^{4}}=\frac{\dot{a}_{[\mu}\,u_{\nu]}}{D-1}\int\hat{\mathrm{d }}^{D}k\frac{1}{k^{2}}\frac{1}{(k\cdot u-i\epsilon)^{2}}, \tag{118}\] in the end we obtain the following \[F_{\mu\nu}(0)=-Q\frac{D}{D-1}\dot{a}_{[\mu}\,u_{\nu]}\int\hat{\mathrm{d}}^{D} k\frac{1}{k^{2}}\frac{1}{(k\cdot u-i\epsilon)^{2}}. \tag{119}\] Our final task is to perform the \(k\) integration. Therefore, we first write \[\frac{1}{(k\cdot u-i\epsilon)^{2}}=-\frac{\mathrm{d}}{\mathrm{d}(k\cdot u)} \frac{1}{k\cdot u-i\epsilon} \tag{120}\] and then use the Sokhotski-Plemelj formula (3.32) for both denominators. At this point, dimreg instructs once more to drop all principal value parts of the propagators. Essentially, we are taking the imaginary part of the integral now. We finally remain with \[F_{\mu\nu}(0)=\frac{Q}{3}\dot{a}_{[\mu}\,u_{\nu]}\int\hat{\mathrm{d}}^{4}k\,\hat{ \delta}(k^{2})\hat{\delta}^{\prime}(k\cdot u)=\frac{Q}{6\pi}\dot{a}_{[\mu}\,u_{ \nu]}, \tag{111}\] having taken \(D=4\). Nicely, this last expression (111) matches the Schott term in (110). What is interesting here is to see how the double cut of 2.2 reverberates in our final steps of the proof. To this end one can interpret the fact the differentiated delta function \(\hat{\delta}^{\prime}(k\cdot u)\) in (111) as a small \(q\)-expansion of \(\hat{\delta}(u\cdot(\ell-q))\) in 5.
2310.08304
CHIP: Contrastive Hierarchical Image Pretraining
Few-shot object classification is the task of classifying objects in an image with limited number of examples as supervision. We propose a one-shot/few-shot classification model that can classify an object of any unseen class into a relatively general category in an hierarchically based classification. Our model uses a three-level hierarchical contrastive loss based ResNet152 classifier for classifying an object based on its features extracted from Image embedding, not used during the training phase. For our experimentation, we have used a subset of the ImageNet (ILSVRC-12) dataset that contains only the animal classes for training our model and created our own dataset of unseen classes for evaluating our trained model. Our model provides satisfactory results in classifying the unknown objects into a generic category which has been later discussed in greater detail.
Arpit Mittal, Harshil Jhaveri, Swapnil Mallick, Abhishek Ajmera
2023-10-12T13:11:38Z
http://arxiv.org/abs/2310.08304v1
# CHIP: Contrastive Hierarchical Image Pretraining ###### Abstract Few-shot object classification is the task of classifying objects in an image with limited num- be of examples as supervision. We propose a one-shot/few-shot classification model that can classify an object of any unseen class into a relatively general category in an hierarchically based classification. Our model uses a three-level hierarchical contrastive loss based ResNet152 classifier for classifying an object based on its features extracted from Image embedding, not used during the training phase. For our experimentation, we have used a subset of the ImageNet (ILSVRC-12) dataset that contains only the animal classes for training our model and created our own dataset of unseen classes for evaluating our trained model. Our model provides satisfactory results in classify- ing the unknown objects into a generic category which has been later discussed in greater detail. ## 1 Introduction Recent research in the field of few shot object classification models [11, 8, 13] presume that one of the target classes is present in the query image. These models are not equipped to handle cases where the query image does not contain any of the target class objects along with incapability to categorize the classes. This limits the utility of these models making them incompetent for generic object classification tasks. We try to solve this problem by introducing a one-shot/few-shot learning model with Contrastive Learning [3, 9, 5] approach that can classify any unseen class into a relatively general category (Figure1). For our project, we aim to classify only animal classes into more general categories. Our model exploits the class hierarchy using contrastive loss using CNNs with respect to the parent embeddings. Contrastive Learning is a method that is known to improve the performance of tasks by contrasting query samples against target labels. In doing so, this method helps to learn both the common attributes between data classes as well as the features that differentiate a data class from another. 1 Footnote 1: View the source code on GitHub: [https://github.com/harshiljhaveri/CHIP/tree/main](https://github.com/harshiljhaveri/CHIP/tree/main) ## 2 Related Work Recently, a considerable amount of research has been conducted to add the aspect of generaliza- tion in few-shot learning models. However, most of these proposed networks, in spite of coming up with different novel approaches, fail to achieve good performance in generic few-shot object classification. In order to improve generalizability, Jiawei Yang et al. [12] have used contrastive learning to annotate unlabelled data followed by latent augmentation methods consisting of k-means selection for training instances. Their key interest is to transfer semantic diversity from the training set to the augmented set. The features in the augmented sets are extrapolated and computed as a dot product and the most useful features are selected based on the magnitude of this dot product. However, the main problem with this approach is that it may be impactful only towards datasets with fewer classes but may not show significant improvements in generalization in datasets with relatively more classes. Alayrac et al. [1] achieve state-of-the-art results with regards to huge corpora of interleaved visual and text data with disparity in the method that context is sent for each example pair. 
A visual encoder is used to produce fixed length tokens which is then fed as additional input to weighted attention nets for embeddings generated from the text vectors. A frozen pretrained ResNet-50 architecture is used to preprocess image and video data. Since this architecture heavily utilizes transfer learning from large language models, it inherits their inductive biases. Moreover since the model aims at performing a series of tasks including classification, it lags in its performance as compared to other contrastive models. adressing the issues of overlooked classes during classification of multi-label images into a single entity. At first, this method decomposes higher level images into smaller patches and annotates each of these using a fine grained vision transformer. Then a weighted tokening method is carried out to assign labels with minimum losses to each patch. Consequently, similarities are computed between query image patches and support images to assign closest classes. Tokens are assigned to each class based on temperature scaling while minimizing the cross entropy loss. Masks are applied to each patch in order to generalize the constraint and help it fit within the limited support image class options. However, the main drawback of this method is, the transformer is unable to de- fine strict decision boundaries where high feature similarity exists between classes and training data is limited in size. We have tried to come up with a novel method to solve the problem of generalization in one-shot/few-shot learning that has been discussed in Section 4 in greater detail. ## 3 Dataset The ImageNet [10] dataset consists of 14,197,122 images annotated according to the WordNet hierarchy and divided into 1000 classes. The ImageNet dataset is a part of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC-12) which is a benchmark in object category classification and detection of images. Since the ImageNet dataset contains classes other than that of animals, we have segregated the images of all the animal classes from the dataset for our experiment. In doing so, we have got 366 animal classes with 1300 images in each class. We have used images from these classes in our training phase. For testing our model, we have created our own dataset which contains 20 unseen animal classes that are not present in the ImageNet dataset. This dataset contains the following classes - _American Bobtail cat, American Paint horse, British Short- hair cat, Burmese cat, Camarillo White horse, Cat- fish, Crow, Cuckoo, Deer, Friesian horse, Giraffe, Ibis, Kingfisher, Kiwi, Parrot, Puffbird, Ragdoll cat, Rhinoceros, Sparrow and Tuna_. ## 4 Proposed Architecture Our proposed architecture can be broadly divided into three phases: ### Create Target Parent Image Embeddings (Phase 1): The first phase of our architecture involves generating mean image embeddings of each ImageNet animal class using the pre-trained ResNet-152 [6] model and using unsupervised K-Means clustering to learn the class hierarchical structure (Figure2). 
Figure 1: Pictorial representation of the aim of our project ``` Input : Training Set \(T\), Resnet-152 Image classifier \(M\), Model pretrained weights \(\Theta\) /* Load \(M\) with \(\Theta\) and remove last FC layer to get image embeddings from \(M\), Embedding Dataframe \(D\) */ FunctionoptimalKforCluster(mean_image_embedding, mink, maxK): for\(k\) in range(mink, maxK)do KMeansMap [k] = Kmeans(mean_image_embedding, k) end for\(optimalK\) = \(silhouetteAnalysis\)(KMeansMap) return\(optimalK\) end\(funtion\) foreachClass \(C\) in \(T\)do foreachImage I in \(C\)do \(imageEmbeddings\) = \(M\)(Theta, I) end for\(mean\_image\_embedding\_for\_class\) = \(mean\)(imageEmbeddings) \(Level0ParentEmbeddingsperClass,D[C,0]\) = mean_image_embedding_for_class end foreach\(Level1,2ParentEmbeddingsperClass,D[C,1]\) = \(KMeans\)(D[C,level], \(optimalKforCluster\)(D[C, level], minK, maxK)) return\(D\) ``` **Algorithm 1**Creating Target Parent Image Embeddings (\(Phase\) 1) Figure 2: Phase 1 Network Architecture ``` Input : Training Data \(Tr\), Validation Data \(Va\), Test Data \(Te\), Target Parent Embedding Dataframe \(D\), Resnet-152 Image classifier \(M\), Model pre-trained weights \(\Theta\) \(Tr\) = One random image from each class \(Va\) = 5 random image from each class \(Te\) = 5 random image from each class \(pretrainedResnet152Model\) = \(M(\Theta)\) Function contrastivLoss (imageEmbed, targetEmbed): \(dist\) = \(CosineDistance\)(imageEmbed, targetEmbed) \(negDist\) = ( margin - dist ).relu() \(posDist\) = dist \(loss\) = \(Concat\)(posDist, negDist).mean return\(lossK\) end\(funtion\) FunctionfineTuneModel (model, trainData, valData, numEpochs): forepoch in range(numEpochs)do forimage, \(targetEmbed\) in \(trainData\)do \(embed\) = \(model\)(image) \(loss\) = \(contrastiveLossFn\)(embed, targetEmbed) backPass updateGrads end forimage, targetEmbed in valDatado \(embed\) = \(model\)(image) \(loss\) = \(contrastiveLossFn\)(embed, targetEmbed) end \(saveModel\)for\(EachEpoch\) end forreturn\(optimalPreTrainedModelForValData\) end\(funtion\) FunctiontestModel (model, testData): forimage, targetEmbed in testDatado \(imageEmbed\) = \(model\)(image) \(loss\) = \(contrastiveLossFn\)(embed, targetEmbed) \(cosineSim\) = \(CosineSimilarity\)(imageEmbed, targetEmbed) end for return\(mean(loss),mean(cosineSim)\) end\(funtion\) \(Level\_HeirachialModel\) = \(fineTuneModel\)(new M(\(\Theta\)), (Tr, level_target_embedding), (Va, level_target_embedding), numEpoch, 0) return\(Level\_HeirachialModel\) ``` **Algorithm 2**One-Shot Hierarchical Model Learning (\(Phase\) 2) The steps in Phase 1 are as follows: 1. At first, the mean image embeddings of each of the 366 ImageNet animal classes is computed by loading the ResNet-152 model with its pre-trained weights. So, we get 366 mean embeddings for 366 classes. They would be the target embeddings for the leaf node in our hierarchy and represent as Level 0 embeddings. 2. Using these mean embeddings from 1, these 366 animal classes are then grouped using K-Means clustering and the centroid embedding of each of the clusters is also calculated. Now we can associate each of the 366 classes to a cluster and its corresponding cluster centroid embedding. They would be the target embeddings for the parent node in our hierarchy and represent as Level 1 embeddings. 3. Next using the centroid embeddings from the previous step, these already formed clusters are further grouped into clusters using K-Means clustering and the centroid embeddings of these clusters are also computed. 
They would be the target embeddings for the super-parent node in our hierarchy and represent as Level 2 embeddings In order to get the optimal number of clusters, Silhouette Analysis has been done both the times. 4. Finally, a mapping dataframe has been created that stores all the 366 classes, their corresponding mean class embeddings (Level 0) and maps each of these classes to their corresponding Level 1 and Level 2 clusters and their cluster centroid embeddings. ### One-Shot Hierarchical Model Learning (Phase 2): In the second phase of our architecture, a three- layered pre-trained ResNet-152 model is fine-tuned using one-shot learning approach (Figure3). The steps of Phase 2 are as follows: 1. In this phase, we have three separate models for three levels. We use ResNet-152 encoder for Level 0 Ground Truth. The Level 1 encoder is finetuned on parent clusters whereas the Level 2 encoder is finetuned on the super-parent clusters. 2. One image is taken from each of the 366 ImageNet animal classes and fed into all the three models (or layers). 3. For each image, an embedding is generated and Contrastive loss is calculated between the generated image embedding and the corresponding target embedding retrieved from the mapping dataframe generated in Phase 1. 4. Then we backpropagate and learn the respective Figure 3: Phase 2 Network Architecture ``` Input : Unseen Data \(UD\), Pre-trained Novel Models for Hierarchical Embeddings - \(Level\_0\_HeirachialModel\), \(Level\_1\_HeirachialModel\), \(Level\_2\_HeirachialModel\) FunctionassignTargetLabelToNewClass (classEmbeddings, level): \(distanceMatrix\) = \(CosineDistances\)(classEmbeddings, D[level]) \(targetLabels\) = embedding with min distance for each class embeddings return\(targetLabels\) end\(funtion\) FunctiontargetLabelsForUnseenClasses (unseenClassData): \(classEmbeddings\) = \(Level\_HeirachialModel\)(unseenClassData) \(targetLables\) = \(assignTargetLabelToNewClass\)(classEmbeddings, level) return\(targetLables\) end\(funtion\) \(D[UD,level]\) = \(targetLabelsForUnseenClasses\)(UD) \(Level\_UnseenModel\) = \(fineTuneModel\)(Level\_HeirachialModel\), D[UD,level], numepochs, level) return\(Level\_UnseenModel\) ``` **Algorithm 3** Few Shot Learning on Unseen Data (\(Phase\) 3) Figure 4: Phase 3 Network Architecture weights. 5. Steps 2-4 are repeated for a set number of epochs and this is done for all the three separate models. Thus, we get pre-trained one-shot hierarchical model for each of the three levels. ### Few Shot Learning on Unseen Data (Phase 3) In this final phase, we use the pre-trained ResNet- 152 model from Phase 2 and the clusters obtained from Phase 1 to classify our unseen data of 20 animal classes into similar clusters (Figure4). This phase consists of the following steps: 1. First we feed the unseen class images to the three pre-trained ResNet-152 models from Phase 2 and obtain the mean class embedding for each level. 2. Then we assign target labels to each of the 20 unseen classes by calculating the cosine distance between the mean class embedding of the unseen classes and the mean embeddings obtained in Phase 1 dataframe. The class with the minimum cosine distance is assigned as the class of the unseen data. This done separately for each level. 3. Then we store the mapping of the unseen class embeddings in a new dataframe. 4. Following this, the three pre-trained one-shot models are finetuned using the mapping dataframe of the unseen classes over a set number of epochs. 
## 5 Results **Phase 1:** With 1300 images per class, it was computationally expensive to perform clustering and figuring out the optimal K for clustering. We performed 3 methods to get our clusters : 1. Mean Embedding of 1300 images per class. 2. Majority Pooling for 1300 data-points per class. 3. Mean Embedding of 10 images per class in succession, and then performing majority pooling of 130 embedding per class. We perform a Silhouette analysis for the appropri- ate number of clusters and overall belongingness of each feature to a cluster. Other popular techniques like top-k accuracy and Cluster Purity were not used since we attempted to cluster features based on similarity rather than comparing with ground truth labels. With comparable Max Silhouette scores for all 3 choices, and method 1 being the most computationally feasible, we proceeded with Mean embedding of all images in a class. Level 1 results: We implemented K-Means on a range of 50 to 200 clusters for Level 1 clustering and then performed silhouette analysis to evaluate the quality of the clustering results and selected the optimal k for level 1 of the hierarchy which came out to be 88 as in Figure 5. Level 2 results: We decided to have minimum 7 clusters for level 2, and implemented K-Means on a range of 7 to 20 and based on the silhouette Analysis and Elbow Method, we decided to select 8 as the number of clusters for level 2. For Phase 2 and Phase 3, we tried few shot, one shot and training on whole ImageNet dataset. One Shot training gave us much better performance, and we believe this is due to Variance in feature embeddings of the images when our ResNet-152 learns to classify images based on Feature Extraction with Contrastive Loss. **Phase 2:** We used Cosine Similarity and Contrastive Accuracy to evaluate Phase 2 using validation and test data. Cosine similarity, in contrast to Euclidean distance, uses dot product rather than the magnitude of distances between the spatial features. As shown in Figure 7, we received an impressive accuracy of around 94% for each level and cosine similarity of over 0.85 for Level 0 and Level 1, and over 0.74 for Level 2. We believe, the drop in Level 2 cosine similarity is due to the fact that Level 2 is classifying more generalized features of the images at Super-Parent level (Table 1) Figure 5: Level 1 results Figure 6: Level 2 results **Phase 3:** The evaluation metrics used for embeddings are Cosine Similarity for Pairwise Similarity between feature embedding of unseen images and the target embedding from Phase 2. While accuracy based on a threshold of pairwise similarity and precision-recall were calculated, they provided highly skewed results and hence were not considered further. Figure 8 shows the Training loss and Cosine Similarity for the 3 levels. There is a significant drop in cosine similarity for levels 1 and 2 when compared to level 0, which may be due to learning more generalized feature mapping. We also noticed a drop in Cosine Similarity for unseen classes, as compared to seen classes. We believe the reason for the drop in both cases is due to the nature of the unseen classes, i.e., the models were not pre trained on unseen classes to learn their features, as compared to pretrained model we used in Phase 2, which already had pretrained weights on ImageNet. We tested our model on 20 unseen classes, and 14 of them were classified to their correct parent and super parent. 
## 6 Conclusion and Future Work In conclusion, the technique of using ground truth embeddings by clustering images based on feature similarity has been useful to the task of comparing feature extractions if image embeddings. Compared to the commonly performed classification tasks, comparing embedding features using contrastive learning required task adaptive ground truth labels. Thorough experimentation further proves that developing separate functions for each level of the hierarchy is better than using base levels to infer higher level labels. Further, not all evaluation metrics that commonly work for image classification tasks work in this domain since the points of comparison are pairwise similarities and cluster appropriateness rather than singular labels. The proposed architecture has a lots of scope of improvement in each each of its phases. For Phase 1, we can try a more apt approach for clustering the Images based on Image Features, either by reducing its dimensions using PCA, using density based clustering like DBSCAN [4] or HDBSCAN [2] or other methods to cluster high dimension data. For Phase 2, we can use motivation from CLIP [9] model, and introduce Multi-Modal architecture, adding Textual, Prompt, and contextual features as well while training Level hierarchical models. For Phase 3, we can Better feature extractions and then modulate it with our Phase 2 models, similar to encoder-decoder architecture in transformers, to have the models learn features of our unseen classes in a broader way to improve similarity between our generalized Feature mapping of Hierarchical embeddings. \begin{table} \begin{tabular}{|c c c|} \hline & Avg Loss & Avg Cosine Similarity \\ \hline Level 0 & 0.035117637 & 0.92117435 \\ \hline Level 1 & 0.03500075 & 0.5109769 \\ \hline Level 2 & 0.03300126 & 0.5102735 \\ \hline \end{tabular} \end{table} Table 1: Phase 2 results for Seen Classes. \begin{table} \begin{tabular}{|c c c|} \hline & Avg Accuracy & Avg Cosine Similarity \\ \hline Level 0 & 93.8 & 81.5 \\ \hline Level 1 & 93.9 & 82 \\ \hline Level 2 & 95 & 74.5 \\ \hline \end{tabular} \end{table} Table 2: Phase 3 results for Unseen Classes. Figure 8: Phase 3 results Figure 7: Phase 2 results Figure 9: Phase 3 results
2302.07226
Negative moments of the Riemann zeta-function
Assuming the Riemann Hypothesis we study negative moments of the Riemann zeta-function and obtain asymptotic formulas in certain ranges of the shift in $\zeta(s)$. For example, integrating $|\zeta(1/2+\alpha+it)|^{-2k}$ with respect to $t$ from $T$ to $2T$, we obtain an asymptotic formula when the shift $\alpha$ is roughly bigger than $\frac{1}{\log T}$ and $k < 1/2$. We also obtain non-trivial upper bounds for much smaller shifts, as long as $\log\frac{1}{\alpha} \ll \log \log T$. This provides partial progress towards a conjecture of Gonek on negative moments of the Riemann zeta-function, and settles the conjecture in certain ranges. As an application, we also obtain an upper bound for the average of the generalized M\"{o}bius function.
Hung M. Bui, Alexandra Florea
2023-02-14T18:07:58Z
http://arxiv.org/abs/2302.07226v1
# Negative moments of the Riemann zeta-function ###### Abstract. Assuming the Riemann Hypothesis we study negative moments of the Riemann zeta-function and obtain asymptotic formulas in certain ranges of the shift in \(\zeta(s)\). For example, integrating \(|\zeta(1/2+\alpha+it)|^{-2k}\) with respect to \(t\) from \(T\) to \(2T\), we obtain an asymptotic formula when the shift \(\alpha\) is roughly bigger than \(\frac{1}{\log T}\) and \(k<1/2\). We also obtain non-trivial upper bounds for much smaller shifts, as long as \(\log\frac{1}{\alpha}\ll\log\log T\). This provides partial progress towards a conjecture of Gonek on negative moments of the Riemann zeta-function, and settles the conjecture in certain ranges. As an application, we also obtain an upper bound for the average of the generalized Mobius function. Key words and phrases:Riemann zeta-function, moments, negative moments, Mobius function 2010 Mathematics Subject Classification: 11M06, 11M50 ## 1. Introduction For \(k>0\), the \(2k^{\text{th}}\) moment of the Riemann zeta-function is given by \[I_{k}(T)=\int_{0}^{T}|\zeta(\tfrac{1}{2}+it)|^{2k}\,dt.\] Hardy and Littlewood [11] computed the second moment, and Ingham [17] computed the fourth moment. It is conjectured that \[I_{k}(T)\sim c_{k}T(\log T)^{k^{2}} \tag{1.1}\] for an explicit constant \(c_{k}\), whose value was predicted by Keating and Snaith [18] using analogies with random matrix theory. Their conjecture was later refined by Conrey, Farmer, Keating, Rubinstein and Snaith [6] to include lower order powers of \(\log T\), for integer \(k\). Under the Riemann Hypothesis (RH), Soundararajan [29] established almost sharp upper bounds for all the positive moments. This result was later improved by Harper [12], who obtained upper bounds of the conjectural magnitude as in (1.1). There is a wealth of literature on obtaining lower and upper bounds for positive moments of \(\zeta(s)\); for (an incomplete) list of results, we refer the reader to [27, 28, 15, 24, 26, 3, 13, 14]. In this paper, we are interested in studying _negative_ moments of the Riemann zeta-function. For \(k>0\) and \(\alpha>0\), let \[I_{-k}(\alpha,T)=\frac{1}{T}\int_{0}^{T}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k} \,dt.\] A conjecture due to Gonek [10] states the following. **Conjecture 1.1** (Gonek).: _Let \(k>0\). Uniformly for \(\frac{1}{\log T}\leq\alpha\leq 1\),_ \[I_{-k}(\alpha,T)\asymp\Big{(}\frac{1}{\alpha}\Big{)}^{k^{2}},\] _and uniformly for \(0<\alpha\leq\frac{1}{\log T}\),_ \[I_{-k}(\alpha,T)\asymp\left\{\begin{array}{ll}(\log T)^{k^{2}}&\mbox{ if }k<1/2,\\ \big{(}\log\frac{e}{\alpha\log T}\big{)}(\log T)^{k^{2}}&\mbox{ if }k=1/2. \end{array}\right.\] Gonek's original conjecture predicted formulas for \(k>1/2\) and \(\alpha\leq\frac{1}{\log T}\) as well, which seem to be contradicted however by more recent evidence (see the next paragraph). Under RH, Gonek [10] also proved lower bounds of the conjectured order of magnitude for all \(k>0\) and \(\frac{1}{\log T}\leq\alpha\leq 1\), and for \(k<1/2\) and \(0<\alpha\leq\frac{1}{\log T}\). No other progress has been made towards Gonek's conjecture so far, but more recent random matrix theory computations due to Berry and Keating [2] and Forrester and Keating [9] suggest that certain corrections to the above conjecture are due in some ranges. Namely, when \(\alpha\leq\frac{1}{\log T}\), random matrix theory computations seem to contradict Gonek's prediction for the negative moments when \(k\geq 3/2\). 
In particular, the work in [2, 9] suggests certain transition regimes when \(k=(2n+1)/2\), for \(n\) a positive integer (Gonek's conjecture already captures the first transition at \(k=1/2\) featuring a logarithmic correction, and in this case the two conjectures do agree.) Reinterpreting the random matrix theory computations in [2], one would expect that for a shift \(0<\alpha\leq\frac{1}{\log T}\) and \(j\) a natural number such that \((2j-1)/2<k<(2j+1)/2\), \[I_{-k}(\alpha,T)\asymp(\log T)^{k^{2}}(\alpha\log T)^{-j(2k-j)}, \tag{1.2}\] while for \(k=(2j-1)/2\) and \(j\) a natural number, one would expect \[I_{-k}(\alpha,T)\asymp\log\Big{(}\frac{e}{\alpha\log T}\Big{)}(\log T)^{k^{2} }(\alpha\log T)^{-j(2k-j)}.\] We note that the above prediction indeed agrees with Conjecture 1.1 for \(k=1/2\) and \(\alpha\leq\frac{1}{\log T}\). We remark that one could also predict (1.2) for integer \(k\) using heuristic ideas similar as in [6]. In this paper, we study the negative moments of the Riemann zeta-function. While obtaining lower bounds for the negative moments is a more tractable problem (see the comment after Conjecture 1.1), no progress has been made so far on obtaining asymptotic formulas or non-trivial upper bounds. When the shift \(\alpha\) is "big enough", we obtain upper bounds which are almost sharp according to Conjecture 1.1, up to some logarithmic factors. We also obtain the first non-trivial upper bounds for the negative moments for a wide range of much smaller shifts \(\alpha\) (roughly \(\alpha\gg(\log T)^{-O(1)}\)); however, the bounds in these cases are far from sharp. More precisely, we prove the following. **Theorem 1.2**.: _Assume RH. Let \(k\geq 1/2,\alpha>0\) and \(\varepsilon,\delta>0\), such that \(u=\frac{\log\frac{1}{\alpha}}{\log\log T}\ll 1\). Then_ \[\frac{1}{T}\int_{T}^{2T}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\ll\] \[\begin{cases}(\log\log T)^{k}(\log T)^{k^{2}}&\text{if }\alpha\gg \frac{(\log\log T)^{\frac{4}{k}+\varepsilon}}{(\log T)^{\frac{1}{2k}}},\end{cases} \tag{1.3}\] \[\exp\Big{(}\frac{(4+\varepsilon)\log T\log\log\log T}{\log\log T }\Big{)}\quad\text{if }\tfrac{1}{(\log T)^{\frac{1}{2k}}}\ll\alpha=o\big{(}\tfrac{(\log\log T)^{ \frac{4}{k}+\varepsilon}}{(\log T)^{\frac{1}{2k}}}\big{)},\] (1.4) \[T^{(1+\delta)(ku-\frac{1}{2}+k\varepsilon)} \text{if }\alpha\ll\tfrac{1}{(\log T)^{\frac{1}{2k}}}. \tag{1.5}\] We also have the following bounds for \(k<1/2\). **Theorem 1.3**.: _Assume RH. Let \(k<1/2,\alpha>0\) and \(\varepsilon,\delta>0\), such that \(u=\frac{\log\frac{1}{\alpha}}{\log\log T}\ll 1\). Then_ \[\frac{1}{T}\int_{T}^{2T}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\ll\] \[\begin{cases}(\log\log T)^{k}\Big{(}\frac{\log(\alpha\log T)}{ \alpha}\Big{)}^{k^{2}}&\text{if }\alpha\gg\frac{\log\log T}{\log T},\\ \exp\Big{(}C_{1}(\log\log T)\Big{(}\log\frac{\log\log T}{\alpha\log T}\Big{)} ^{2}\Big{)}&\text{if }\frac{1}{\log T}\ll\alpha=o\big{(}\frac{\log\log T}{\log T}\big{)},\\ \exp\Big{(}C_{2}\Big{(}\frac{1}{\alpha\log T}\Big{)}^{\frac{2k}{1-2k-k \varepsilon}}\Big{)}&\text{if }\frac{1}{(\log T)^{\frac{1}{2k}-\varepsilon}}\ll \alpha=o\big{(}\frac{1}{\log T}\big{)},\\ T^{(1+\delta)(ku-\frac{1}{2}+k\varepsilon)}\exp\Big{(}O\Big{(}\frac{\log T \log\log\log T}{\log\log T}\Big{)}\Big{)}\quad\text{if }\alpha=o\big{(}\frac{1}{(\log T)^{ \frac{1}{2k}-\varepsilon}}\big{)},\end{cases} \tag{1.6}\] _for some constants \(C_{1},C_{2}>0\)._ **Remarks.** 1. 
We remark that results about negative moments of quadratic Dirichlet \(L\)-functions were recently proven in the function field setting, first in [4] for shifts big enough, and later improved in [8] to include small shifts. 2. We note that the bound (1.3) in Theorem 1.2 and the bound (1.6) in Theorem 1.3 above are "almost sharp". The former is off possibly by some powers of \(\log\), while the latter is off by some \(\log\log\) factor. We also further use the upper bounds to obtain an asymptotic formula for the negative moments in the following ranges. **Theorem 1.4**.: _Assume RH. Let \(k>0\), \(C,\varepsilon>0\) and \(\alpha\geq\max\Big{\{}C\frac{(\log\log T)^{\frac{4}{k}+\varepsilon}}{(\log T) ^{\frac{1}{2k}}},\frac{(1+\varepsilon)\log\log T}{2\log T}\Big{\}}\). Then we have_ \[\frac{1}{T}\int_{T}^{2T}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt=\big{(}1+o(1) \big{)}\zeta(1+2\alpha)^{k^{2}}\prod_{p}\Big{(}1-\frac{1}{p^{1+2\alpha}} \Big{)}^{k^{2}}\Big{(}1+\sum_{j=1}^{\infty}\frac{\mu_{k}(p^{j})^{2}}{p^{(1+2 \alpha)j}}\Big{)},\] _where \(\mu_{k}(n)\) denotes the \(n^{\text{th}}\) Dirichlet coefficient of \(\zeta(s)^{-k}\)._ As an application, we study averages of the generalized Mobius function. Under RH, Littlewood [20] proved that \(\sum_{n\leq x}\mu(n)\ll x^{\frac{1}{2}+\varepsilon}\). This was improved in several works [19, 31, 21]. Soundararajan [30] showed that \[\sum_{n\leq x}\mu(n)\ll\sqrt{x}\exp\left(\sqrt{\log x}(\log\log x)^{14}\right)\] on RH. A note of Balazard and de Roton [1] on Soundararajan's paper [30] improves the power of \(\log\log x\) in the bound above, and it is stated in [1] that the power \((\log\log x)^{\frac{5}{2}+\varepsilon}\) is the limitation of the ideas in [30, 1]. Assuming RH and the conjectured order of magnitude of the negative second moment of \(\zeta^{\prime}(\rho)\) by Gonek [10] and Hejhal [16], Ng [23] also showed that \[\sum_{n\leq x}\mu(n)\ll\sqrt{x}(\log x)^{\frac{3}{2}}.\] Using Theorem 1.2, we improve the result in [30]. As before, let \(\mu_{k}(n)\) denote the \(n^{\text{th}}\) Dirichlet coefficient of \(\zeta(s)^{-k}\). We prove the following. **Theorem 1.5**.: _Assume RH. We have_ \[\sum_{n\leq x}\mu_{k}(n)\ll\left\{\begin{aligned} &\sqrt{x}\exp\left( \varepsilon\sqrt{\log x}\right)&\text{if }k<1,\\ &\sqrt{x}\exp\left((\log x)^{\frac{k}{k+1}}(\log\log x)^{\frac{7}{ k+1}+\varepsilon}\right)&\text{if }k\geq 1.\end{aligned}\right.\] _Also, if we assume Conjecture 1.1, then_ \[\sum_{n\leq x}\mu_{k}(n)\ll\left\{\begin{aligned} &\sqrt{x}(\log x)^{\frac{k^{2}}{4}}& \text{if }k\leq 2,\\ &\sqrt{x}\exp\left((\log x)^{\frac{k-2}{k}}(\log\log x)^{-1+ \varepsilon}\right)&\text{if }k>2.\end{aligned}\right.\] Proving Theorem 1.5 requires good bounds for the negative moments of \(\zeta(s)\) when roughly \(\alpha\gg\frac{1}{(\log T)^{\frac{1}{2k}}}\). We remark that obtaining the bound (1.3) in Theorem 1.2 is the most delicate part of the proof and requires a recursive argument which allows us to obtain improved estimates at each step. The ideas in the proof of Theorems 1.2 and 1.3 are sieve-theoretic inspired ideas, as in the work of Soundararajan [29] and Harper [12]. However, unlike in the work in [29] and [12], the contributions of zeros of \(\zeta(s)\) play an important part, and one needs to be careful about the choice of parameters in the sieve-theoretic argument, in order to account for the big contributions coming from the zeros. The paper is organized as follows. 
In Section 2, we prove some lemmas providing a lower bound for the logarithm of \(\zeta(s)\), as well as a pointwise lower bound for \(|\zeta(s)|\). In Section 3, we introduce the setup of the problem, and prove three main propositions. In Section 4, we consider the case of a "big" shift \(\alpha\), and do the first steps towards proving Theorems 1.2 and 1.3 in this case. In Section 5, we obtain the bound (1.3) in the smaller region \(\alpha\gg\frac{1}{(\log T)^{\frac{1}{2k}-\varepsilon}}\), and prove the bounds (1.6), (1.7) and (1.8) from Theorem 1.3 as well. In Section 6, we obtain the bound (1.3) in the wider region stated in the theorem by using a recursive argument. We consider the cases of "small" shifts in Section 7 and prove the bounds (1.5) and (1.9). We then prove Theorem 1.4 in Section 8, and Theorem 1.5 in Section 9. Throughout the paper \(\varepsilon\) denotes an arbitrarily small positive number whose value may change from one line to the next. Acknowledgments. The authors would like to thank Steve Gonek, Jon Keating and Nathan Ng for helpful conversations and comments. The second author gratefully acknowledges support from NSF grant DMS-2101769 while working on this paper. ## 2. Preliminary lemmas The first two lemmas concern the lower bounds for \(|\zeta(s)|\). **Lemma 2.1**.: _Assume RH. Let \(\alpha>0\). Then_ \[\log|\zeta(\tfrac{1}{2}+\alpha+it)|\geq\frac{\log\frac{t}{2\pi}}{2 \pi\Delta}\log\big{(}1-e^{-2\pi\Delta\alpha}\big{)}+\Re\Big{(}\sum_{p\leq e^{2 \pi\Delta}}\frac{(\log p)a_{\alpha}(p;\Delta)}{p^{1/2+\alpha+it}}\Big{)}\] \[\qquad\qquad-\frac{\log\log\log T}{2}-\Big{(}\log\frac{1}{\Delta \alpha}\Big{)}^{\gamma(\Delta)}+O\Big{(}\frac{\Delta^{2}e^{\pi\Delta}}{1+ \Delta t}+\frac{\Delta\log(1+\Delta\sqrt{t})}{\sqrt{t}}+1\Big{)},\] _where_ \[a_{\alpha}(n;\Delta)=\sum_{j=0}^{\infty}\Big{(}\frac{(j+1)}{\log n+2\pi j \Delta}e^{-2\pi j\Delta\alpha}-\frac{(j+1)n^{2\alpha}}{2\pi(j+2)\Delta-\log n }e^{-2\pi(j+2)\Delta\alpha}\Big{)} \tag{2.1}\] _and_ \[\gamma(\Delta)=\begin{cases}1&\text{if }\Delta\alpha=o(1),\\ 0&\text{if }\Delta\alpha\gg 1.\end{cases}\] Proof.: We use the work in [5]. Let \[f_{\alpha}(x)=\log\Big{(}\frac{4+x^{2}}{\alpha^{2}+x^{2}}\Big{)},\] and let \(m_{\Delta}(x)\) be an extremal majorant for \(f_{\alpha}\), whose Fourier transform \(\widehat{m}_{\Delta}\) is supported in \([-\Delta,\Delta]\), satisfying the properties from Lemma 8 in [5]. Equation 3.1 in [5] gives \[\log|\zeta(\tfrac{1}{2}+\alpha+it)|\geq\Big{(}1-\frac{\alpha}{2} \Big{)}\log\frac{t}{2}-\frac{1}{2}\sum_{\gamma}m_{\Delta}(t-\gamma)+O(1), \tag{2.2}\] where the sum over \(\gamma\) is over the ordinates of the zeros of \(\zeta(s)\) on the critical line. 
Using the explicit formula, Equation 3.2 in [5] leads to

\[\sum_{\gamma}m_{\Delta}(t-\gamma)=m_{\Delta}\Big{(}t-\frac{1}{2i}\Big{)}+m_{\Delta}\Big{(}t+\frac{1}{2i}\Big{)}-\frac{\log\pi}{2\pi}\widehat{m}_{\Delta}(0) \tag{2.3}\]
\[\qquad+\frac{1}{2\pi}\int_{-\infty}^{\infty}m_{\Delta}(x)\Re\bigg{(}\frac{\Gamma^{\prime}}{\Gamma}\Big{(}\frac{1}{4}+\frac{i(t-x)}{2}\Big{)}\bigg{)}\,dx-\frac{1}{\pi}\Re\bigg{(}\sum_{n=2}^{\infty}\frac{\Lambda(n)}{n^{1/2+it}}\widehat{m}_{\Delta}\Big{(}\frac{\log n}{2\pi}\Big{)}\bigg{)},\]

where

\[\widehat{m}_{\Delta}(\xi)=\sum_{j=0}^{\infty}\bigg{(}\frac{j+1}{\xi+j\Delta}\Big{(}e^{-2\pi(\xi+j\Delta)\alpha}-e^{-4\pi(\xi+j\Delta)}\Big{)}-\frac{j+1}{(j+2)\Delta-\xi}\Big{(}e^{-2\pi((j+2)\Delta-\xi)\alpha}-e^{-4\pi((j+2)\Delta-\xi)}\Big{)}\bigg{)}.\]

For \(n\leq e^{2\pi\Delta}\) we note that

\[\widehat{m}_{\Delta}\Big{(}\frac{\log n}{2\pi}\Big{)}=\frac{2\pi a_{\alpha}(n;\Delta)}{n^{\alpha}}+O\Big{(}\frac{1}{n^{2}\log n}\Big{)}.\]

Combining that with (2.2) and (2.3), and evaluating the archimedean terms, we arrive at an intermediate lower bound for \(\log|\zeta(\tfrac{1}{2}+\alpha+it)|\) in terms of the sum over prime powers, together with the expansions and bounds for \(a_{\alpha}(n;\Delta)\) in the two regimes \(\Delta\alpha\gg 1\) and \(\Delta\alpha=o(1)\); these are labeled (2.4)--(2.7). [The displayed equations (2.4)--(2.7) are garbled in the source and are not reproduced here.] To handle the contribution of prime powers, we truncate the sum over primes and use the bound (2.7), and it follows that the contribution from prime squares is

\[\geq-\frac{\log\log\log T}{2}-\Big{(}\log\frac{1}{\Delta\alpha}\Big{)}^{\gamma(\Delta)}-O(1).\]

Dealing similarly with the sum over prime cubes and higher powers and combining the equation above and (2.4) finishes the proof.

**Lemma 2.2**.: _Assume RH.
If \(0<\alpha=o(\frac{1}{\log\log t})\), then_ \[\log|\zeta(\tfrac{1}{2}+\alpha+it)|\geq\frac{\log t}{2\log\log t} \log\big{(}1-(\log t)^{-2\alpha}\big{)}\\ +O\Big{(}\frac{\log t}{\log\log t}\Big{)}+O\Big{(}\frac{\log t}{( \log\log t)^{2}}\log\frac{1}{1-(\log t)^{-2\alpha}}\Big{)}, \tag{2.8}\] _and if \(1/2-\alpha\ll\frac{1}{\log\log t}\), then_ \[\log|\zeta(\tfrac{1}{2}+\alpha+it)|\geq-\log\log\log t+O(1),\] _otherwise_ \[\log|\zeta(\tfrac{1}{2}+\alpha+it)|\geq-\Big{(}\frac{1}{2}+\frac{8\alpha}{1- 4\alpha^{2}}\Big{)}\frac{(\log t)^{1-2\alpha}}{\log\log t}-\log\log\log t+O \Big{(}\frac{(\log t)^{1-2\alpha}}{(1-2\alpha)^{2}(\log\log t)^{2}}\Big{)}.\] Proof.: Carneiro and Chandee established the last two bounds in [5, Theorem 2] and in the case \(0<\alpha=o(\frac{1}{\log\log t})\) they obtained \[\log|\zeta(\tfrac{1}{2}+\alpha+it)|\geq\frac{\log t}{2\log\log t}\log\big{(}1 -(\log t)^{-2\alpha}\big{)}+O\Big{(}\frac{(\log t)^{1-2\alpha}}{(\log\log t)^ {2}(1-(\log t)^{-2\alpha})}\Big{)}.\] We will now prove the improved bound (2.8). Let \(\Delta=\frac{\log\log t}{\pi}\). From (2.4) and (2.6) we have \[\log|\zeta(\tfrac{1}{2}+\alpha+it)|\geq\frac{\log\frac{t}{2\pi}} {2\log\log t}\log\big{(}1-(\log t)^{-2\alpha}\big{)}\\ -\sum_{n\leq(\log t)^{2}}\frac{\Lambda(n)}{n^{1/2+\alpha}}\bigg{(} \frac{1}{\log n}-\frac{2(2\log\log t-\log n)}{(2\log\log t)^{2}}\log\big{(}1- (\log t)^{-2\alpha}\big{)}+O\Big{(}\frac{1}{\log\log t}\Big{)}\bigg{)}\\ +O(1).\] We now consider the sum over \(n\) above, which expands into three terms. The contribution of the first and the last term is \[\ll\frac{\log t}{\log\log t},\] while the second term is \[=-\frac{\log(1-(\log t)^{-2\alpha})}{\log\log t}\Big{(}\frac{2( \log t)^{1-2\alpha}}{1-2\alpha}+O\big{(}(\log\log t)^{3}\big{)}\Big{)}\\ +\frac{\log(1-(\log t)^{-2\alpha})}{2(\log\log t)^{2}}\Big{(} \frac{4(\log t)^{1-2\alpha}\log\log t}{1-2\alpha}+O(\log t)\Big{)}\] \[=O\Big{(}\frac{\log t}{(\log\log t)^{2}}\log\frac{1}{1-(\log t)^{-2\alpha}}\Big{)},\] by the prime number theorem and partial summation. This establishes (2.8). The next lemma is the classical mean value theorem (see, for instance, [22, Theorem 6.1]). **Lemma 2.3**.: _For any complex numbers \(a(n)\) we have_ \[\int_{T}^{2T}\Big{|}\sum_{n\leq N}\frac{a(n)}{n^{1/2+it}}\Big{|}^{2}\,dt=\big{(} T+O(N)\big{)}\sum_{n\leq N}\frac{|a(n)|^{2}}{n}.\] Now let \(\ell\) be an integer, and for \(z\in\mathbb{C}\), let \[E_{\ell}(z)=\sum_{s\leq\ell}\frac{z^{s}}{s!}.\] If \(z\in\mathbb{R}\), then for \(\ell\) even, \(E_{\ell}(z)>0\). We have the following elementary inequality. **Lemma 2.4**.: _Let \(\ell\) be an even integer and let \(z\) be a complex number such that \(|z|\leq\ell/e^{2}\). Then_ \[e^{\Re(z)}\leq\max\Big{\{}1,|E_{\ell}(z)|\Big{(}1+\frac{1}{15e^{\ell}}\Big{)} \Big{\}}.\] Proof.: We have \[e^{\Re(z)}=|e^{z}|\leq|e^{z}-E_{\ell}(z)|+|E_{\ell}(z)|.\] Now we have \[|e^{z}-E_{\ell}(z)|\leq\sum_{j=\ell+1}^{\infty}\frac{|z|^{j}}{j!},\] and we can proceed as in [25] (see Lemma 1) to get that \[|e^{z}-E_{\ell}(z)|\leq\frac{1}{16e^{\ell}}.\] Hence \[e^{\Re(z)}\leq|E_{\ell}(z)|+\frac{1}{16e^{\ell}}.\] If \(|E_{\ell}(z)|\leq\frac{15}{16}\), then we have \(e^{\Re(z)}\leq 1\). Otherwise, if \(|E_{\ell}(z)|>\frac{15}{16}\), we get that \[e^{\Re(z)}\leq|E_{\ell}(z)|\Big{(}1+\frac{1}{15e^{\ell}}\Big{)},\] and this completes the proof. Let \(\nu(n)\) be the multiplicative function given by \[\nu(p^{j})=\frac{1}{j!},\] for \(p\) a prime. We will frequently use the following fact. 
For any interval \(I\), \(s\in\mathbb{N}\) and \(a(n)\) a completely multiplicative function, we have \[\Big{(}\sum_{p\in I}a(p)\Big{)}^{s}=s!\sum_{\begin{subarray}{c}p|n\Rightarrow p \in I\\ \Omega(n)=s\end{subarray}}a(n)\nu(n), \tag{2.9}\] where \(\Omega(n)\) denotes the number of prime factors of \(n\), counting multiplicity. ## 3. Setup of the proof and main propositions We will first introduce some notation and state some key lemmas, before proceeding to the proof of Theorems 1.2 and 1.3. Let \[I_{0}=(1,T^{\beta_{0}}],\ I_{1}=(T^{\beta_{0}},T^{\beta_{1}}],\ \ldots,\ I_{K}=(T^{ \beta_{K-1}},T^{\beta_{K}}]\] for a sequence \(\beta_{0},\ldots,\beta_{K}\) to be chosen later such that \(\beta_{j+1}=r\beta_{j}\), for some \(r>1\). Also let \(\ell_{j}\) be even parameters which we will also choose later on. Let \(s_{j}\) be integers. For now, we can think of \(s_{j}\beta_{j}\asymp 1\), and \(\sum_{h=0}^{K}\ell_{h}\beta_{h}\ll 1\). We let \(T^{\beta_{j}}=e^{2\pi\Delta_{j}}\) for every \(0\leq j\leq K\) and let \[P_{u,v}(t)=\sum_{p\in I_{u}}\frac{b_{\alpha}(p;\Delta_{v})}{p^{1/2+\alpha+it}},\] where \(b_{\alpha}(n;\Delta)\) is a completely multiplicative function in the first variable and \[b_{\alpha}(p;\Delta)=-a_{\alpha}(p;\Delta)\log p.\] For \(p\leq e^{2\pi\Delta}\), if \(\Delta\alpha\gg 1\), we note from equation (2.5) that \[|b_{\alpha}(p;\Delta)| \leq 1+\frac{2\pi\Delta-\log p}{2\pi\Delta}\sum_{j=1}^{\infty} \frac{2\log p}{\log p+2\pi j\Delta}e^{-2\pi j\Delta\alpha}+\frac{\log p}{2 \pi\Delta}\sum_{j=1}^{\infty}e^{-2\pi j\Delta\alpha}\] \[\leq 1+\sum_{j=1}^{\infty}e^{-2\pi j\Delta\alpha}=\frac{1}{1-e^{-2 \pi\Delta\alpha}}.\] On the other hand, if \(\Delta\alpha=o(1)\), then from (2.6) we get that \[|b_{\alpha}(p;\Delta)|\leq\frac{2(2\pi\Delta-\log p)\log p}{(2\pi\Delta)^{2} }\sum_{j=1}^{\infty}\frac{e^{-2\pi j\Delta\alpha}}{j}+O(1)\leq\frac{1}{2}\log \frac{1}{\Delta\alpha}+O(1).\] Thus we can rewrite the bound for \(b_{\alpha}(p;\Delta)\) as a single bound, \[|b_{\alpha}(p;\Delta)|\leq b(\Delta)\Big{(}\log\frac{1}{\Delta\alpha}\Big{)}^ {\gamma(\Delta)}, \tag{3.1}\] where \(\gamma(\Delta)=0\) if \(\Delta\alpha\gg 1\) and \(\gamma(\Delta)=1\) if \(\Delta\alpha=o(1)\), and where \[b(\Delta)=\begin{cases}\frac{1}{1-e^{-2\pi\Delta\alpha}}&\text{ if }\Delta\alpha\gg 1, \\ \frac{1}{2}+\varepsilon&\text{ if }\Delta\alpha=o(1).\end{cases} \tag{3.2}\] Let \[\mathcal{T}_{u}=\Big{\{}T\leq t\leq 2T:\max_{u\leq v\leq K}|P_{u,v}(t)|\leq\frac{ \ell_{u}}{ke^{2}}\Big{\}}.\] Denote the set of \(t\) such that \(t\in\mathcal{T}_{u}\) for all \(u\leq K\) by \(\mathcal{T}^{\prime}\). For \(0\leq j\leq K-1\), let \(\mathcal{S}_{j}\) denote the subset of \(t\in[T,2T]\) such that \(t\in\mathcal{T}_{h}\) for all \(h\leq j\), but \(t\notin\mathcal{T}_{j+1}\). We will prove the following lemma. 
**Lemma 3.1**.: _For \(t\in[T,2T]\), we either have_

\[\max_{0\leq v\leq K}|P_{0,v}(t)|>\frac{\ell_{0}}{ke^{2}},\]

_or_

\[|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\leq S_{1}(t)+S_{2}(t),\]

_where_

\[S_{1}(t)=(\log\log T)^{k}\Big{(}\frac{1}{1-T^{-\beta_{K}\alpha}}\Big{)}^{\frac{2k}{\beta_{K}}}\exp\Big{(}2k\Big{(}\log\frac{1}{\Delta_{K}\alpha}\Big{)}^{\gamma(\Delta_{K})}\Big{)}\]
\[\times\prod_{h=0}^{K}\max\Big{\{}1,|E_{\ell_{h}}(kP_{h,K}(t))|^{2}\Big{(}1+\frac{1}{15e^{\ell_{h}}}\Big{)}^{2}\Big{\}}\exp\Big{(}O\Big{(}\frac{\Delta_{K}^{2}e^{\pi\Delta_{K}}}{1+\Delta_{K}t}+\frac{\Delta_{K}\log(1+\Delta_{K}\sqrt{t})}{\sqrt{t}}+1\Big{)}\Big{)}\]

_and_

\[S_{2}(t)=(\log\log T)^{k}\sum_{j=0}^{K-1}\sum_{v=j+1}^{K}\Big{(}\frac{1}{1-T^{-\beta_{j}\alpha}}\Big{)}^{\frac{2k}{\beta_{j}}}\exp\Big{(}2k\Big{(}\log\frac{1}{\Delta_{j}\alpha}\Big{)}^{\gamma(\Delta_{j})}\Big{)}\]
\[\times\prod_{h=0}^{j}\max\Big{\{}1,|E_{\ell_{h}}(kP_{h,j}(t))|^{2}\Big{(}1+\frac{1}{15e^{\ell_{h}}}\Big{)}^{2}\Big{\}}\Big{(}\frac{ke^{2}}{\ell_{j+1}}|P_{j+1,v}(t)|\Big{)}^{2s_{j+1}}\]
\[\times\exp\Big{(}O\Big{(}\frac{\Delta_{j}^{2}e^{\pi\Delta_{j}}}{1+\Delta_{j}t}+\frac{\Delta_{j}\log(1+\Delta_{j}\sqrt{t})}{\sqrt{t}}+1\Big{)}\Big{)}.\]

Proof.: For \(T\leq t\leq 2T\), we have the following possibilities: 1. \(t\notin\mathcal{T}_{0}\); 2. \(t\in\mathcal{T}^{\prime}\); 3. \(t\in\mathcal{S}_{j}\) for some \(0\leq j\leq K-1\). If \(t\notin\mathcal{T}_{0}\), then the first alternative in the statement holds. Now suppose that \(t\in\mathcal{T}_{0}\). If \(t\in\mathcal{T}^{\prime}\), we use Lemma 2.1 with \(\Delta=\Delta_{K}\) and the inequality in Lemma 2.4 with \(z=kP_{h,K}(t)\). If instead \(t\in\mathcal{S}_{j}\), then we use Lemma 2.1 with \(\Delta=\Delta_{j}\), the inequality in Lemma 2.4 with \(z=kP_{h,j}(t)\), and the fact that there exists some \(v\geq j+1\) such that \(\frac{ke^{2}}{\ell_{j+1}}|P_{j+1,v}(t)|>1\).

We will also need the following propositions.

**Proposition 3.2**.: _For \(0\leq v\leq K\) and \(\beta_{0}s_{0}\leq 1\), we have_

\[\int_{T}^{2T}|P_{0,v}(t)|^{2s_{0}}\,dt\ll Ts_{0}!b(\Delta_{0})^{2s_{0}}\Big{(}\log\frac{1}{\Delta_{0}\alpha}\Big{)}^{2s_{0}\gamma(\Delta_{0})}(\log\log T^{\beta_{0}})^{s_{0}}.\]

Proof.: Using (2.9), we have

\[P_{0,v}(t)^{s_{0}}=s_{0}!\sum_{\begin{subarray}{c}p|n\Rightarrow p\leq T^{\beta_{0}}\\ \Omega(n)=s_{0}\end{subarray}}\frac{b_{\alpha}(n;\Delta_{v})\nu(n)}{n^{1/2+\alpha+it}}.\]

It then follows from Lemma 2.3 that

\[\int_{T}^{2T}|P_{0,v}(t)|^{2s_{0}}\,dt=\big{(}T+O(T^{\beta_{0}s_{0}})\big{)}(s_{0}!)^{2}\sum_{\begin{subarray}{c}p|n\Rightarrow p\leq T^{\beta_{0}}\\ \Omega(n)=s_{0}\end{subarray}}\frac{b_{\alpha}(n;\Delta_{v})^{2}\nu(n)^{2}}{n^{1+2\alpha}}\]
\[\ll Ts_{0}!\bigg{(}\sum_{p\leq T^{\beta_{0}}}\frac{b_{\alpha}(p;\Delta_{v})^{2}}{p^{1+2\alpha}}\bigg{)}^{s_{0}}\leq Ts_{0}!b(\Delta_{v})^{2s_{0}}\Big{(}\log\frac{1}{\Delta_{v}\alpha}\Big{)}^{2s_{0}\gamma(\Delta_{v})}(\log\log T^{\beta_{0}})^{s_{0}},\]

where we have used the fact that \(\nu(n)^{2}\leq\nu(n)\) and the bound (3.1). Now since

\[b(\Delta_{v})\Big{(}\log\frac{1}{\Delta_{v}\alpha}\Big{)}^{\gamma(\Delta_{v})}\leq b(\Delta_{0})\Big{(}\log\frac{1}{\Delta_{0}\alpha}\Big{)}^{\gamma(\Delta_{0})},\]

the conclusion follows upon raising this inequality to the power \(2s_{0}\).

**Proposition 3.3**.: _Let \(0\leq j\leq K-1\)._
For \(\sum_{h=0}^{j}\ell_{h}\beta_{h}+s_{j+1}\beta_{j+1}\leq 1\) and for \(j+1\leq v\leq K\), we have_

\[\int_{T}^{2T}\prod_{h=0}^{j}|E_{\ell_{h}}(kP_{h,j}(t))|^{2}|P_{j+1,v}(t)|^{2s_{j+1}}\,dt\ll Ts_{j+1}!\big{(}\log T^{\beta_{j}}\big{)}^{k^{2}b(\Delta_{j})^{2}(\log\frac{1}{\Delta_{j}\alpha})^{2\gamma(\Delta_{j})}}\]
\[\qquad\times b(\Delta_{j+1})^{2s_{j+1}}\Big{(}\log\frac{1}{\Delta_{j+1}\alpha}\Big{)}^{2s_{j+1}\gamma(\Delta_{j+1})}(\log r)^{s_{j+1}}.\]

Proof.: Using (2.9), we have

\[\int_{T}^{2T}\prod_{h=0}^{j}|E_{\ell_{h}}(kP_{h,j}(t))|^{2}|P_{j+1,v}(t)|^{2s_{j+1}}\,dt=(s_{j+1}!)^{2}\int_{T}^{2T}\Big{|}\sum_{n\leq T^{\sum_{h=0}^{j}\ell_{h}\beta_{h}+s_{j+1}\beta_{j+1}}}\frac{c(n)\nu(n)}{n^{1/2+\alpha+it}}\Big{|}^{2}\,dt, \tag{3.3}\]

where

\[c(n)=\sum_{\begin{subarray}{c}n=n_{1}\dots n_{j+1}\\ p|n_{h}\Rightarrow p\in I_{h}\\ \Omega(n_{h})\leq\ell_{h},h=0,\dots,j\\ \Omega(n_{j+1})=s_{j+1}\end{subarray}}\Big{(}\prod_{h=0}^{j}b_{\alpha}(n_{h};\Delta_{j})k^{\Omega(n_{h})}\Big{)}b_{\alpha}(n_{j+1};\Delta_{v}).\]

Using Lemma 2.3 in (3.3) and the fact that \(\nu(m)^{2}\leq\nu(m)\) for any \(m\), we obtain that

\[(3.3)\ll\Big{(}T+T^{\sum_{h=0}^{j}\ell_{h}\beta_{h}+s_{j+1}\beta_{j+1}}\Big{)}(s_{j+1}!)^{2}\Big{(}\prod_{h=0}^{j}\sum_{\begin{subarray}{c}p|n_{h}\Rightarrow p\in I_{h}\\ \Omega(n_{h})\leq\ell_{h}\end{subarray}}\frac{|b_{\alpha}(n_{h};\Delta_{j})|^{2}k^{2\Omega(n_{h})}\nu(n_{h})}{n_{h}^{1+2\alpha}}\Big{)}\]
\[\times\sum_{\begin{subarray}{c}p|n_{j+1}\Rightarrow p\in I_{j+1}\\ \Omega(n_{j+1})=s_{j+1}\end{subarray}}\frac{|b_{\alpha}(n_{j+1};\Delta_{v})|^{2}\nu(n_{j+1})}{n_{j+1}^{1+2\alpha}}.\]

Now we use the assumption that \(\sum_{h=0}^{j}\ell_{h}\beta_{h}+s_{j+1}\beta_{j+1}\leq 1\). Bounding \(\nu(n_{h})\leq 1\), removing the condition on the number of prime factors of \(n_{h}\), and using (3.1), we get that this is

\[\ll Ts_{j+1}!\big{(}\log T^{\beta_{j}}\big{)}^{k^{2}b(\Delta_{j})^{2}(\log\frac{1}{\Delta_{j}\alpha})^{2\gamma(\Delta_{j})}}b(\Delta_{v})^{2s_{j+1}}\Big{(}\log\frac{1}{\Delta_{v}\alpha}\Big{)}^{2s_{j+1}\gamma(\Delta_{v})}\Big{(}\log\frac{\beta_{j+1}}{\beta_{j}}\Big{)}^{s_{j+1}}\]
\[\ll Ts_{j+1}!\big{(}\log T^{\beta_{j}}\big{)}^{k^{2}b(\Delta_{j})^{2}(\log\frac{1}{\Delta_{j}\alpha})^{2\gamma(\Delta_{j})}}b(\Delta_{j+1})^{2s_{j+1}}\Big{(}\log\frac{1}{\Delta_{j+1}\alpha}\Big{)}^{2s_{j+1}\gamma(\Delta_{j+1})}(\log r)^{s_{j+1}},\]

which finishes the proof of the proposition.

A minor modification of the proposition above (where we do not have the contribution from the \(j+1\) interval) yields the following.

**Proposition 3.4**.: _For \(\sum_{h=0}^{K}\ell_{h}\beta_{h}\leq 1\), we have_

\[\int_{T}^{2T}\prod_{h=0}^{K}|E_{\ell_{h}}(kP_{h,K}(t))|^{2}\,dt\ll T\big{(}\log T^{\beta_{K}}\big{)}^{k^{2}b(\Delta_{K})^{2}(\log\frac{1}{\Delta_{K}\alpha})^{2\gamma(\Delta_{K})}}.\]

## 4. The case \(\alpha\gg\frac{1}{(\log T)^{\frac{1}{2k}-\varepsilon}}\), first steps

Here, we will consider the case \(\alpha\gg\frac{1}{(\log T)^{\frac{1}{2k}-\varepsilon}}\), which will be a starting point in the proof of Theorems 1.2 and 1.3.
We choose

\[\beta_{0}=\frac{a(2d-1)(\log\log T)^{2}}{(1+2\varepsilon)k(\log T)\big{(}\log\frac{1}{1-(\log T)^{-2\alpha}}\big{)}},\quad s_{0}=\Big{[}\frac{a}{\beta_{0}}\Big{]},\quad\ell_{0}=2\Big{[}\frac{s_{0}^{d}}{2}\Big{]} \tag{4.1}\]

and

\[\beta_{j}=r^{j}\beta_{0},\quad s_{j}=\Big{[}\frac{a}{\beta_{j}}\Big{]},\quad\ell_{j}=2\Big{[}\frac{s_{j}^{d}}{2}\Big{]},\quad 1\leq j\leq K, \tag{4.2}\]

where we can pick, for example,

\[a=\frac{4-3k\varepsilon}{2(2-k\varepsilon)},\,r=\frac{2}{2-k\varepsilon},\,d=\frac{8-7k\varepsilon}{2(4-3k\varepsilon)},\]

so that

\[\frac{a(2d-1)}{r}=1-k\varepsilon. \tag{4.3}\]

Here, \(K\) is chosen to be the maximal integer for which

\[\beta_{K}\leq\left\{\begin{aligned} &\frac{\log(\alpha\log T)}{\alpha\log T}&&\text{ if }\alpha\log T\to\infty,\\ & c&&\text{ if }\alpha\ll\frac{1}{\log T},\end{aligned}\right. \tag{4.4}\]

where \(c>0\) is a small constant such that

\[c^{1-d}\Big{(}\frac{a^{d}r^{1-d}}{r^{1-d}-1}+\frac{2r}{r-1}\Big{)}\leq 1-a. \tag{4.5}\]

Note that the above ensures that the conditions in Propositions 3.3 and 3.4 are satisfied. If \(t\notin\mathcal{T}_{0}\), then there exists \(0\leq v\leq K\) such that \(\frac{ke^{2}}{\ell_{0}}|P_{0,v}(t)|>1\). Then we have

\[\int_{[T,2T]\setminus\mathcal{T}_{0}}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\leq\int_{T}^{2T}\Big{(}\frac{ke^{2}}{\ell_{0}}|P_{0,v}(t)|\Big{)}^{2s_{0}}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\]
\[\leq\Big{(}\frac{ke^{2}}{\ell_{0}}\Big{)}^{2s_{0}}\Big{(}\frac{1}{1-(\log T)^{-2\alpha}}\Big{)}^{\frac{(1+c)k\log T}{\log\log T}}\int_{T}^{2T}|P_{0,v}(t)|^{2s_{0}}\,dt,\]

by using the pointwise bound from Lemma 2.2,

\[|\zeta(\tfrac{1}{2}+\alpha+it)|^{-1}\ll\Big{(}\frac{1}{1-(\log T)^{-2\alpha}}\Big{)}^{\frac{(1+c)\log T}{2\log\log T}}. \tag{4.6}\]

By Proposition 3.2 and Stirling's formula, we get that

\[\int_{[T,2T]\setminus\mathcal{T}_{0}}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\ll T\Big{(}\frac{1}{1-(\log T)^{-2\alpha}}\Big{)}^{\frac{(1+c)k\log T}{\log\log T}}\]
\[\qquad\times\sqrt{s_{0}}\exp\bigg{(}-(2d-1)s_{0}\log s_{0}+2s_{0}\log\Big{(}ke^{3/2}b(\Delta_{0})\Big{(}\log\frac{1}{\Delta_{0}\alpha}\Big{)}^{\gamma(\Delta_{0})}\sqrt{\log\log T^{\beta_{0}}}\Big{)}\bigg{)}\]
\[=o(T), \tag{4.7}\]

using the choice of \(s_{0}\) in (4.1). Now assume that \(t\in\mathcal{T}_{0}\).
Using Lemma 3.1, we have that

\[\int_{\mathcal{T}_{0}}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\leq\int_{T}^{2T}S_{1}(t)\,dt+\int_{T}^{2T}S_{2}(t)\,dt.\]

With the choice (4.4) of \(\beta_{K}\), we have that

\[\Big{(}\frac{1}{1-T^{-\beta_{K}\alpha}}\Big{)}^{\frac{2k}{\beta_{K}}}\ll\,\begin{cases}1&\text{ if }\alpha\gg\frac{1}{\log T},\\ (\log T)^{O(1)}&\text{ if }\alpha=o\big{(}\frac{1}{\log T}\big{)}\end{cases}\]

and

\[\exp\Big{(}O\Big{(}\frac{\Delta_{K}^{2}e^{\pi\Delta_{K}}}{1+\Delta_{K}t}+\frac{\Delta_{K}\log(1+\Delta_{K}\sqrt{t})}{\sqrt{t}}+1\Big{)}\Big{)}=O(1),\]

so, for some constant \(c_{1}>0\),

\[\int_{T}^{2T}S_{1}(t)\,dt\ll(\log\log T)^{k}\exp\Big{(}2k\Big{(}\log\frac{1}{\Delta_{K}\alpha}\Big{)}^{\gamma(\Delta_{K})}\Big{)}(\log T)^{c_{1}\gamma(\Delta_{K})}\]
\[\qquad\qquad\qquad\times\int_{T}^{2T}\prod_{h=0}^{K}\max\Big{\{}1,|E_{\ell_{h}}(kP_{h,K}(t))|^{2}\Big{(}1+\frac{1}{15e^{\ell_{h}}}\Big{)}^{2}\Big{\}}\,dt.\]

Note that in the inequality above, we can assume without loss of generality that

\[\max\Big{\{}1,|E_{\ell_{h}}(kP_{h,K}(t))|^{2}\Big{(}1+\frac{1}{15e^{\ell_{h}}}\Big{)}^{2}\Big{\}}=|E_{\ell_{h}}(kP_{h,K}(t))|^{2}\Big{(}1+\frac{1}{15e^{\ell_{h}}}\Big{)}^{2}.\]

Using Proposition 3.4 and the observation that \(\gamma(\Delta_{K})=0\) in the first two cases, and \(\gamma(\Delta_{K})=1\) in the third case, we get that

\[\int_{T}^{2T}S_{1}(t)\,dt\ll\left\{\begin{aligned} & T(\log\log T)^{k}\big{(}\frac{\log(\alpha\log T)}{\alpha}\big{)}^{k^{2}}&\text{ if }\alpha\log T\to\infty,\\ & T(\log\log T)^{k}(\log T)^{k^{2}(\frac{1}{1-T^{-c\alpha}})^{2}}&\text{ if }\alpha\asymp\frac{1}{\log T},\\ & T(\log\log T)^{k}\exp\Big{(}k^{2}\big{(}\frac{1}{4}+\varepsilon\big{)}(\log\log T)\big{(}\log\frac{1}{\alpha\log T}\big{)}^{2}\Big{)}&\text{ if }\alpha=o\big{(}\frac{1}{\log T}\big{)}.\end{aligned}\right. \tag{4.8}\]

Now we will need to bound the contribution from \(S_{2}(t)\). Using Proposition 3.3 and Stirling's formula, we have

\[\int_{T}^{2T}S_{2}(t)\,dt\ll T(\log\log T)^{k}\sum_{j=0}^{K-1}(K-j)\sqrt{s_{j+1}}\exp\bigg{(}\frac{2k}{\beta_{j}}\log\frac{1}{1-T^{-\beta_{j}\alpha}}-(2d-1)s_{j+1}\log s_{j+1}\]
\[\qquad\qquad\qquad+2s_{j+1}\log\Big{(}ke^{3/2}b(\Delta_{j+1})\Big{(}\log\frac{1}{\Delta_{j+1}\alpha}\Big{)}^{\gamma(\Delta_{j+1})}\Big{)}+2k\Big{(}\log\frac{1}{\Delta_{j}\alpha}\Big{)}^{\gamma(\Delta_{j})}\bigg{)}\]
\[\qquad\qquad\qquad\times\big{(}\log T^{\beta_{j}}\big{)}^{k^{2}b(\Delta_{j})^{2}(\log\frac{1}{\Delta_{j}\alpha})^{2\gamma(\Delta_{j})}}.\]

In the equation above, note that if \(\beta_{j}\alpha\log T\geq\varepsilon\), then

\[\frac{2k}{\beta_{j}}\log\frac{1}{1-T^{-\beta_{j}\alpha}}\leq\frac{2k}{\beta_{j}}\log\frac{1}{1-e^{-\varepsilon}}<(2d-1)s_{j+1}\log s_{j+1},\]

so the contribution in this case will be \(o(T(\log\log T)^{k})\).
Now if \(\beta_{j}\alpha\log T<\varepsilon\), then

\[\log\frac{1}{1-T^{-\beta_{j}\alpha}}<\log\frac{1}{\alpha}-\log(\beta_{j}\log T)+\log\frac{1}{1-\varepsilon/2},\]

so we obtain that

\[\int_{T}^{2T}S_{2}(t)\,dt\ll T(\log\log T)^{k}\sum_{j=0}^{K-1}(K-j)\sqrt{\frac{1}{\beta_{j+1}}}\exp\bigg{(}\frac{\log\log T}{\beta_{j}}\Big{(}2k\frac{\log\frac{1}{\alpha}}{\log\log T}-\frac{a(2d-1)}{r}\Big{)}\]
\[+\frac{\log(\beta_{j}\log T)}{\beta_{j}}\Big{(}\frac{a(2d-1)}{r}-2k\Big{)}+\frac{2a}{r\beta_{j}}\log\Big{(}ke^{3/2}b(\Delta_{j+1})\Big{(}\log\frac{1}{\Delta_{j+1}\alpha}\Big{)}^{\gamma(\Delta_{j+1})}\Big{)}\]
\[+\frac{2k}{\beta_{j}}\log\frac{1}{1-\varepsilon/2}+\frac{a(2d-1)}{r\beta_{j}}\log\frac{r}{a}+2k\Big{(}\log\frac{1}{\Delta_{j}\alpha}\Big{)}^{\gamma(\Delta_{j})}\bigg{)}\big{(}\log T^{\beta_{j}}\big{)}^{k^{2}b(\Delta_{j})^{2}(\log\frac{1}{\Delta_{j}\alpha})^{2\gamma(\Delta_{j})}}. \tag{4.9}\]

## 5. Proof of Theorems 1.2 and 1.3 for "big" shifts \(\alpha\)

Here, we will prove the bound (1.3) in Theorem 1.2 when \(\alpha\gg\frac{1}{(\log T)^{\frac{1}{2k}-\varepsilon}}\), and the bounds (1.6), (1.7), (1.8) in Theorem 1.3. Recall that

\[u=\frac{\log\frac{1}{\alpha}}{\log\log T}.\]

The contribution from \(t\notin\mathcal{T}_{0}\) and from \(S_{1}(t)\) has already been bounded in Section 4 (see equations (4.7) and (4.8)). Now we focus on bounding the contribution from \(S_{2}(t)\). First assume that \(\alpha\gg\frac{1}{(\log T)^{\frac{1}{2k}-\varepsilon}}\) and \(k\geq 1/2\). We rewrite (4.9) as

\[\int_{T}^{2T}S_{2}(t)\,dt\ll T(\log\log T)^{k}\sum_{j=0}^{K-1}(K-j)\sqrt{\frac{1}{\beta_{j+1}}}\exp\bigg{(}\frac{\log\log T}{\beta_{j}}\Big{(}2ku-\frac{a(2d-1)}{r}\Big{)}\]
\[\qquad+\frac{\log(\beta_{j}\log T)}{\beta_{j}}\Big{(}\frac{a(2d-1)}{r}-2k\Big{)}+2k\Big{(}\log\frac{1}{\Delta_{j}\alpha}\Big{)}^{\gamma(\Delta_{j})}+O\Big{(}\frac{\log\log\log T}{\beta_{j}}\Big{)}\bigg{)}\]
\[\qquad\times\big{(}\log T^{\beta_{j}}\big{)}^{k^{2}b(\Delta_{j})^{2}(\log\frac{1}{\Delta_{j}\alpha})^{2\gamma(\Delta_{j})}}.\]

Note that

\[\frac{a(2d-1)}{r}-2k<0,\]

and using (4.3) and the fact that \(\alpha\gg\frac{1}{(\log T)^{\frac{1}{2k}-\varepsilon}}\), it follows that

\[2ku-\frac{a(2d-1)}{r}\leq-k\varepsilon.\]

Hence we get that

\[\int_{T}^{2T}S_{2}(t)\,dt\ll T(\log\log T)^{k}\sum_{j=0}^{K-1}(K-j)\sqrt{\frac{1}{\beta_{j+1}}}\exp\bigg{(}-\frac{k\varepsilon\log\log T}{\beta_{j}}+2k\Big{(}\log\frac{1}{\Delta_{j}\alpha}\Big{)}^{\gamma(\Delta_{j})}\]
\[\qquad\qquad\qquad+O\Big{(}\frac{\log\log\log T}{\beta_{j}}\Big{)}\bigg{)}\big{(}\log T^{\beta_{j}}\big{)}^{k^{2}b(\Delta_{j})^{2}(\log\frac{1}{\Delta_{j}\alpha})^{2\gamma(\Delta_{j})}}. \tag{5.1}\]

We first consider the contribution from those \(j\) for which \(\gamma(\Delta_{j})=1\). Let \(R_{1}\) denote this contribution. Using the fact that \(K\ll\log\log T\) and after a relabeling of the \(\varepsilon\), we have that

\[R_{1}\ll T(\log\log T)^{k}\sum_{j}\exp\bigg{(}-\frac{k\varepsilon\log\log T}{\beta_{j}}+2k\log\frac{1}{\beta_{j}\alpha\log T}+O\Big{(}\frac{\log\log\log T}{\beta_{j}}\Big{)}\]
\[\qquad\qquad+\Big{(}\frac{1}{4}+\varepsilon\Big{)}k^{2}\Big{(}\log\frac{1}{\beta_{j}\alpha\log T}\Big{)}^{2}\log(\beta_{j}\log T)\bigg{)},\]

where the sum over \(j\) is such that \(\gamma(\Delta_{j})=1\).
Since \(\alpha\gg\frac{1}{\log T}\), we have

\[\Big{(}\log\frac{1}{\beta_{j}\alpha\log T}\Big{)}^{2}\log(\beta_{j}\log T)\ll\Big{(}\log\frac{1}{\beta_{j}}\Big{)}^{2}\log\log T.\]

As \(\gamma(\Delta_{j})=1\), we have \(\beta_{j}\to 0\) in the sum in \(R_{1}\) above, and then it follows that

\[R_{1}=o\big{(}T(\log\log T)^{k}\big{)}. \tag{5.2}\]

Now we consider the contribution in (5.1) from those \(j\) with \(\gamma(\Delta_{j})=0\). Let \(R_{2}\) denote that term. We have that

\[R_{2}\ll T(\log\log T)^{k}\sum_{j}\exp\bigg{(}-\frac{k\varepsilon\log\log T}{\beta_{j}}+O\Big{(}\frac{\log\log\log T}{\beta_{j}}\Big{)}+k^{2}b(\Delta_{j})^{2}\log(\beta_{j}\log T)\bigg{)},\]

where the sum is over \(j\) such that \(\gamma(\Delta_{j})=0\). Keeping in mind the choices for \(\beta_{j}\) (equations (4.2) and (4.4)), it then follows that

\[R_{2}=\begin{cases}o(T(\log\log T)^{k})&\text{ if }\alpha\log T\to\infty,\\ o\Big{(}T(\log\log T)^{k}(\log T)^{k^{2}(\frac{1}{1-T^{-\alpha}})^{2}}\Big{)}&\text{ if }\alpha\asymp\frac{1}{\log T}.\end{cases} \tag{5.3}\]

Combining the bounds (4.7), (4.8), (5.2) and (5.3), the bound (1.3) follows when \(\alpha\gg\frac{1}{(\log T)^{\frac{1}{2k}-\varepsilon}}\) and \(k\geq 1/2\). Now assume that \(k<1/2\) and \(\alpha\gg\frac{1}{\log T}\). If \(\frac{a(2d-1)}{r}-2k\leq 0\), then the same argument as before works. Hence we may assume that \(\frac{a(2d-1)}{r}-2k>0\). We rewrite the bound (4.9) for \(S_{2}(t)\) as

\[\int_{T}^{2T}S_{2}(t)\,dt\ll T(\log\log T)^{k}\sum_{j=0}^{K-1}(K-j)\sqrt{\frac{1}{\beta_{j+1}}}\exp\bigg{(}-\frac{2k\log(\alpha\log T)}{\beta_{j}}+\frac{\log\beta_{j}}{\beta_{j}}\Big{(}\frac{a(2d-1)}{r}-2k\Big{)}\]
\[\quad+\frac{2}{\beta_{j}}\log\log\frac{1}{\beta_{j}\alpha\log T}+2k\log\frac{1}{\beta_{j}\alpha\log T}+k^{2}b(\Delta_{j})^{2}\Big{(}\log\frac{1}{\Delta_{j}\alpha}\Big{)}^{2\gamma(\Delta_{j})}\log(\beta_{j}\log T)+O\Big{(}\frac{1}{\beta_{j}}\Big{)}\bigg{)}. \tag{5.4}\]

In the equation above, we first consider the contribution from those \(j\) for which \(\gamma(\Delta_{j})=1\), i.e., those \(j\) for which \(\beta_{j}=o(\frac{1}{\alpha\log T})\). As before, we denote this contribution by \(R_{1}\). We let

\[f(x)=-\frac{2k\log(\alpha\log T)}{x}+\frac{\log x}{x}\Big{(}\frac{a(2d-1)}{r}-2k\Big{)}+\frac{2}{x}\log\log\frac{1}{x\alpha\log T}+2k\log\frac{1}{x\alpha\log T}\]
\[\qquad\quad+k^{2}\Big{(}\frac{1}{4}+\varepsilon\Big{)}\Big{(}\log\frac{1}{x\alpha\log T}\Big{)}^{2}\log(x\log T).\]

By taking the derivative, we see that when \(\frac{1}{\log T}\ll\alpha=o\big{(}\frac{\log\log T}{\log T}\big{)}\), the maximum of \(f(x)\) (when \(x=o(\frac{1}{\alpha\log T})\)) is attained at some \(x_{0}\asymp\frac{1}{\log\log T}\), while if \(\alpha\gg\frac{\log\log T}{\log T}\), the function \(f(x)\) is increasing on the interval under consideration. Hence, we get that

\[R_{1}=\begin{cases}o(T(\log\log T)^{k})&\text{ if }\alpha\gg\frac{\log\log T}{\log T},\\ O\Big{(}T\exp\Big{(}C_{1}(\log\log T)\big{(}\log\frac{\log\log T}{\alpha\log T}\big{)}^{2}\Big{)}\Big{)}&\text{ if }\frac{1}{\log T}\ll\alpha=o\big{(}\frac{\log\log T}{\log T}\big{)},\end{cases} \tag{5.5}\]

for some \(C_{1}>0\). Now we bound the contribution in (5.4) from those \(j\) for which \(\gamma(\Delta_{j})=0\). It is easy to see that in this case, the function in (5.4) is decreasing in \(j\), so

\[R_{2}=o(T(\log\log T)^{k}).\]

Combining the above with (4.7), (4.8) and (5.5), the bounds (1.6) and (1.7) follow. Now we assume that \(\frac{1}{(\log T)^{\frac{1}{2k}-\varepsilon}}\ll\alpha=o\big{(}\frac{1}{\log T}\big{)}\).
We rewrite the bound (4.9) for \(S_{2}(t)\) as

\[\int_{T}^{2T}S_{2}(t)\,dt\ll T(\log\log T)^{k}\sum_{j=0}^{K-1}(K-j)\sqrt{\frac{1}{\beta_{j+1}}}\exp\bigg{(}-\frac{2k\log(\alpha\log T)}{\beta_{j}}\]
\[\quad+\frac{\log\beta_{j}}{\beta_{j}}\Big{(}\frac{a(2d-1)}{r}-2k\Big{)}+\frac{2}{\beta_{j}}\log\log\frac{1}{\Delta_{j+1}\alpha}+2k\log\frac{1}{\Delta_{j}\alpha}+O\Big{(}(\log\log T)^{3}+\frac{1}{\beta_{j}}\Big{)}\bigg{)}.\]

In the sum over \(j\) above, the maximum is attained at \(j_{0}\) such that \(\beta_{j_{0}}\asymp(\alpha\log T)^{\frac{2k}{\frac{a(2d-1)}{r}-2k}}\). It then follows that

\[\int_{T}^{2T}S_{2}(t)\,dt\ll T\exp\Big{(}C_{2}\Big{(}\frac{1}{\alpha\log T}\Big{)}^{\frac{2k}{1-2k-k\varepsilon}}\Big{)},\]

for some \(C_{2}>0\) (and after a relabeling of the \(\varepsilon\)). Combining the above and (4.7), (4.8), the bound (1.8) follows when \(\frac{1}{(\log T)^{\frac{1}{2k}-\varepsilon}}\ll\alpha=o(\frac{1}{\log T})\).

## 6. Proof of Theorem 1.2, bound (1.3); some recursive estimates

Here, we will prove the bound (1.3) in the full range stated in Theorem 1.2, as well as the bound (1.4). To do that, we will use an inductive argument, which will be performed in Subsection 6.2. The first step of the argument is carried out in the next subsection.

### 6.1. The range \(\alpha\gg\frac{(\log\log T)^{\frac{4}{k}+\varepsilon}}{(\log T)^{\frac{1}{2k}}}\), \(k\geq 1/2\), the first step

We previously obtained the bound (1.3) in the region \(\alpha\gg\frac{1}{(\log T)^{\frac{1}{2k}-\varepsilon}}\) and \(k\geq 1/2\). Hence, we will assume that

\[\frac{(\log\log T)^{\frac{4}{k}+\varepsilon}}{(\log T)^{\frac{1}{2k}}}\ll\alpha=o\Big{(}\frac{1}{(\log T)^{\frac{1}{2k}-\varepsilon}}\Big{)}, \tag{6.1}\]

when \(k\geq 1/2\). Let

\[\alpha=\frac{(\log\log T)^{b}}{(\log T)^{\frac{1}{2k}}}, \tag{6.2}\]

where \(b\geq 4/k+\varepsilon\). From (6.1), we have that \(b=o(\frac{\log\log T}{\log\log\log T})\). We will show that for any \(\delta>0\), we have

\[\int_{T}^{2T}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\ll T\exp\bigg{(}\frac{(\log T)^{\frac{3(1+\delta)}{kb-1}}}{\exp\big{(}\frac{2\log\log T\log\log\log\log T}{(kb-1)\log\log\log T}\big{)}}\bigg{)}.\]

We choose \(\beta_{0},\ell_{0},s_{0}\) as in (4.1) and \(\beta_{j},\ell_{j},s_{j}\) are chosen as in (4.2). We choose \(a,d,r\) such that

\[\frac{a(2d-1)}{r}=1-\frac{n\log\log\log T}{\log\log T}, \tag{6.3}\]

where

\[n=(2kb-2)\Big{(}1-\frac{10\delta}{12(1+\delta)}\Big{)}. \tag{6.4}\]

For simplicity of notation, let \(x=\frac{\log\log\log T}{\log\log T}\). We can take

\[a=\frac{1-n(1-\frac{\delta}{24})x}{1-n(1-\frac{\delta}{12})x},\qquad d=1-\frac{n}{2}\Big{(}1-\frac{\delta}{12}\Big{)}x,\qquad r=\frac{1-n(1-\frac{\delta}{24})x}{1-nx}. \tag{6.5}\]

We choose \(\beta_{K}\) such that

\[\beta_{K}^{1-d}\Big{(}\frac{a^{d}r^{1-d}}{r^{1-d}-1}+\frac{2r}{r-1}\Big{)}\leq 1-a.\]

Again, the above inequality ensures that the conditions in Propositions 3.3 and 3.4 are satisfied. Note that the condition above can be re-expressed as

\[\beta_{K}^{1-d}\leq c_{1}(1-a)(r-1)(1-d),\]

for some constant \(c_{1}>0\). We then choose \(K\) such that \(\beta_{K}\) is the largest of the form in (4.2) such that

\[\beta_{K}\leq\frac{\exp\big{(}\frac{6\log\log T\log\log\log\log T}{n\log\log\log T}\big{)}}{(\log T)^{\frac{6}{n(1-\frac{\delta}{12})}}}. \tag{6.6}\]
If \(t\notin\mathcal{T}_{0}\), then we proceed as in Section 5 and similarly to equation (4.7), we get that

\[\int_{[T,2T]\setminus\mathcal{T}_{0}}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\ll T\Big{(}\frac{1}{1-(\log T)^{-2\alpha}}\Big{)}^{\frac{(1+\varepsilon)k\log T}{\log\log T}}\]
\[\qquad\times\sqrt{s_{0}}\exp\bigg{(}-(2d-1)s_{0}\log s_{0}+2s_{0}\log\Big{(}ke^{3/2}b(\Delta_{0})\Big{(}\log\frac{1}{\Delta_{0}\alpha}\Big{)}^{\gamma(\Delta_{0})}\sqrt{\log\log T^{\beta_{0}}}\Big{)}\bigg{)}\]
\[=o(T). \tag{6.7}\]

Now we suppose that \(t\in\mathcal{T}_{0}\). Similarly as in Section 5, using Proposition 3.4 we get that

\[\int_{T}^{2T}S_{1}(t)\,dt\ll T(\log\log T)^{k}\exp\Bigg{(}c_{1}\frac{(\log T)^{\frac{6}{n(1-\frac{\delta}{12})}}\log\log T}{\exp\big{(}\frac{6\log\log T\log\log\log\log T}{n(1-\frac{\delta}{12})\log\log\log T}\big{)}}\Bigg{)}\exp\Big{(}k^{2}(\log\log T)^{3}\Big{)}\]
\[\ll T\exp\Bigg{(}\frac{(\log T)^{\frac{6}{n(1-\frac{\delta}{12})}}}{\exp\big{(}\frac{4\log\log T\log\log\log\log T}{n\log\log\log T}\big{)}}\Bigg{)}, \tag{6.8}\]

for some \(c_{1}>0\) (note that the constant \(c_{1}\) can change from line to line). To bound the contribution from \(S_{2}(t)\), we proceed as in equation (4.9) and obtain that

\[\int_{T}^{2T}S_{2}(t)\,dt\ll T(\log\log T)^{k}\sum_{j=0}^{K-1}(K-j)\sqrt{\frac{1}{\beta_{j+1}}}\exp\bigg{(}\frac{\log\log T}{\beta_{j}}\Big{(}2ku-\frac{a(2d-1)}{r}\Big{)}\]
\[\quad+\frac{\log(\beta_{j}\log T)}{\beta_{j}}\Big{(}\frac{a(2d-1)}{r}-2k\Big{)}+\frac{2a}{r\beta_{j}}\log\Big{(}ke^{3/2}b(\Delta_{j+1})\Big{(}\log\frac{1}{\Delta_{j+1}\alpha}\Big{)}^{\gamma(\Delta_{j+1})}\Big{)}\]
\[\quad+\frac{2k}{\beta_{j}}\log\frac{1}{1-\varepsilon/2}+\frac{a(2d-1)}{r\beta_{j}}\log\frac{r}{a}+2k\Big{(}\log\frac{1}{\Delta_{j}\alpha}\Big{)}^{\gamma(\Delta_{j})}\bigg{)}\big{(}\log T^{\beta_{j}}\big{)}^{k^{2}b(\Delta_{j})^{2}(\log\frac{1}{\Delta_{j}\alpha})^{2\gamma(\Delta_{j})}}. \tag{6.9}\]

Since \(\alpha=\frac{(\log\log T)^{b}}{(\log T)^{\frac{1}{2k}}}\) and given (6.3), we get that

\[\int_{T}^{2T}S_{2}(t)\,dt\ll T(\log\log T)^{k}\sum_{j=0}^{K-1}\exp\bigg{(}\frac{\log\log\log T}{\beta_{j}}(n-2kb+2)+(\log\log T)^{3}+O\Big{(}\frac{1}{\beta_{j}}\Big{)}\bigg{)},\]

where we used the fact that \(K\ll\log\log T\) and the fact that

\[\log\log\frac{1}{\beta_{j}\alpha\log T}\leq\log\log\frac{1}{\beta_{0}\alpha\log T}\leq\frac{\log\log\log T}{2k}.\]

With the choice (6.4), we have that \(n-2kb+2\leq-\delta\), and then

\[\int_{T}^{2T}S_{2}(t)\,dt\ll T(\log\log T)^{k}\sum_{j=0}^{K-1}\exp\bigg{(}-\frac{\delta\log\log\log T}{\beta_{j}}+(\log\log T)^{3}+O\Big{(}\frac{1}{\beta_{j}}\Big{)}\bigg{)}.\]

In the sum over \(j\) above, the maximum is attained at \(j=K-1\), and given the choice (6.6) for \(\beta_{K}\), it follows that

\[\int_{T}^{2T}S_{2}(t)\,dt=o(T). \tag{6.10}\]

Combining equations (6.7), (6.8) and (6.10), it follows that

\[\int_{T}^{2T}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\ll T\exp\Bigg{(}\frac{(\log T)^{\frac{3(1+\delta)}{kb-1}}}{\exp\big{(}\frac{2\log\log T\log\log\log\log T}{(kb-1)\log\log\log T}\big{)}}\Bigg{)}. \tag{6.11}\]

### 6.2. The range \(\alpha\gg\frac{(\log\log T)^{\frac{4}{k}+\varepsilon}}{(\log T)^{\frac{1}{2k}}}\), \(k\geq 1/2\), a recursive bound

Here, we have the same setup as in the previous subsection. Namely, we assume (6.1) and (6.2). We will perform the same argument as before, but with a different choice of parameters.
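To orient the reader before the induction: each pass through the argument below replaces the exponent \((\frac{3}{kb-1})^{m-1}\) of \(\log T\) in the inductive hypothesis (6.12) by \((\frac{3}{kb-1})^{m}\), and the constraint (6.17) caps the number of admissible steps \(m\). The following rough numerical sketch (with arbitrary stand-in values for \(k\), \(b\), \(\delta\) and \(\log\log T\); it is purely illustrative and not part of the argument) indicates that after the maximal number of steps the \((\log T)^{\cdots}\) factor has become negligible against \(\exp\big{(}(1+\delta)k^{2}(\log\log T)^{3}\big{)}\), which is why the iteration terminates at the bound (6.27) below.

```python
import math

# Rough stand-in values (assumptions for illustration only; the regime is
# asymptotic, so we prescribe log log T directly rather than T itself).
k, b, delta = 1.0, 5.0, 0.1
loglogT = 1.0e6
logloglogT = math.log(loglogT)

grow = (k * b - 1.0) / 3.0                     # (kb-1)/3 > 1 in the range (6.1)
cap = delta * loglogT / (9.0 * logloglogT)     # right-hand side of (6.17)

# largest m with grow**(m-1) < cap, mimicking condition (6.17)
m = 1
while grow ** m < cap:
    m += 1

main = (1 + delta) * grow ** (-m) * loglogT    # log of the contracted (log T)^... factor
tail = (1 + delta) * k**2 * loglogT**3         # log of exp((1+delta) k^2 (loglog T)^3)
print(f"maximal step m = {m}")
print(f"log of (log T)^((1+delta)(3/(kb-1))^m) ~ {main:.3g}")
print(f"log of exp((1+delta) k^2 (loglog T)^3) ~ {tail:.3g}")
# after m steps the first factor is negligible against the second, giving (6.27)
```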
We suppose that at step \(m-1\), for any \(\delta>0\), we have the bound \[\int_{T}^{2T}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\ll T\exp\Bigg{(}\frac{ (\log T)^{(1+\delta)(\frac{3}{kb-1})^{m-1}}\log\log T}{\exp\big{(}\frac{2\cdot 3 ^{m-2}\log\log T\log\log\log\log T}{(kb-1)^{m-1}\log\log\log T}\big{)}}\Bigg{)} \exp\Big{(}(1+\delta)k^{2}(\log\log T)^{3}\Big{)}. \tag{6.12}\] Note that we proved the first step of the induction in Subsection 6.1 (see equation (6.11)). Using (6.12), we will show that \[\int_{T}^{2T}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\ll T\exp\left(\frac{( \log T)^{(1+\delta)(\frac{3}{kb-1})^{m}}\log\log T}{\exp\big{(}\frac{2\cdot 3^{m-1} \log\log T\log\log\log\log T}{(kb-1)^{m}\log\log\log T}\big{)}}\right)\exp\Big{(} (1+\delta)k^{2}(\log\log T)^{3}\Big{)}. \tag{6.13}\] Let \[\varepsilon^{\prime}=\frac{\delta(kb-1)\log\log T}{4kb(m-1)(\log\log T-2kb \log\log\log T+\frac{\delta(kb-1)\log\log\log T}{2(m-1)})}, \tag{6.14}\] and \[p=\frac{\log\log T}{\log\log T-2kb\varepsilon^{\prime}\log\log\log T},\qquad q =\frac{\log\log T}{2kb\varepsilon^{\prime}\log\log\log T}, \tag{6.15}\] so that \(1/p+1/q=1\). Let \[f=b(1-\varepsilon^{\prime}). \tag{6.16}\] We will perform the inductive argument as long as \[\Big{(}\frac{kpf-1}{3}\Big{)}^{m-1}<\frac{\delta\log\log T}{9\log\log\log T}. \tag{6.17}\] We choose \[\beta_{0}=\frac{3(2d-1)\big{(}1+\frac{\delta}{3}\big{)}(\frac{3}{kpf-1})^{m-1 }\exp\big{(}\frac{2\cdot 3^{m-2}\log\log T\log\log\log\log T}{(kpf-1)^{m-1} \log\log\log T}\big{)}}{4q(\log T)^{(1+\frac{\delta}{3})(\frac{3}{kpf-1})^{m- 1}}},\,s_{0}=\Big{[}\frac{1}{q\beta_{0}}\Big{]},\,\ell_{0}=2\Big{\lceil}\frac{ s_{0}^{d}}{2}\Big{\rceil}, \tag{6.18}\] where \[a=\frac{1-n(1-\frac{\delta}{12})x}{1-n(1-\frac{\delta}{6})x},\qquad d=1-\frac {n}{2}\Big{(}1-\frac{\delta}{6}\Big{)}x,\qquad r=\frac{1-n\big{(}1-\frac{ \delta}{12}\big{)}x}{1-nx}.\] As before, we have \[\frac{a(2d-1)}{r}=1-nx.\] Recall that \(x=\frac{\log\log\log T}{\log\log T}\) and we choose \[n=\frac{2(kb-1)(kpf-1)^{m-1}\big{(}1-\frac{\delta}{24}\big{)}}{3^{m-1}\big{(} 1+\frac{\delta}{3}\big{)}}. \tag{6.19}\] We also choose \(\beta_{K}\) such that \(K\) is the maximal integer for which \[\beta_{K}\leq\frac{\exp\big{(}\frac{6\log\log T\log\log\log\log T}{n\log\log \log T}\big{)}}{(\log T)^{\frac{6}{n(1-\frac{\delta}{24})}}}. \tag{6.20}\] If \(t\notin\mathcal{T}_{0}\), then there exists \(0\leq v\leq K\) such that \(\frac{ke^{2}}{\ell_{0}}|P_{0,v}(t)|>1\). Using Holder's inequality, we have \[\int_{[T,2T]\setminus\mathcal{T}_{0}}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt \leq\int_{T}^{2T}\Big{(}\frac{ke^{2}}{\ell_{0}}|P_{0,v}(t)|\Big{)}^{2s_{0}}| \zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\] \[\leq\Big{(}\frac{ke^{2}}{\ell_{0}}\Big{)}^{2s_{0}}\Big{(}\int_{T}^{2T}| \zeta(\tfrac{1}{2}+\alpha+it)|^{-2kp}\,dt\Big{)}^{\frac{1}{p}}\Big{(}\int_{T}^{2T }|P_{0,v}(t)|^{2s_{0}q}\,dt\Big{)}^{\frac{1}{q}}.\] For the first integral above, we will use the bound (6.12) with \(\delta\mapsto\delta/3\). Note that \[\alpha=\frac{(\log\log T)^{f}}{(\log T)^{\frac{1}{2kp}}}.\] Then using the recursive bound (6.12), we obtain that \[\int_{T}^{2T}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2kp}\ll T\exp\Bigg{(} \frac{(\log T)^{(1+\frac{\delta}{3})(\frac{3}{kpf-1})^{m-1}}\log\log T}{\exp \big{(}\frac{2\cdot 3^{m-2}\log\log T\log\log\log\log T}{(kpf-1)^{m-1}\log\log\log T} \big{)}}\Bigg{)}\] \[\times\exp\Big{(}\Big{(}1+\frac{\delta}{3}\Big{)}p^{2}k^{2}(\log \log T)^{3}\Big{)}. 
\tag{6.21}\]

Now using Proposition 3.2, we have that

\[\int_{T}^{2T}|P_{0,v}(t)|^{2s_{0}q}\,dt\ll T(s_{0}q)!b(\Delta_{0})^{2s_{0}q}\Big{(}\log\frac{1}{\Delta_{0}\alpha}\Big{)}^{2s_{0}q\gamma(\Delta_{0})}(\log\log T^{\beta_{0}})^{s_{0}q}.\]

Combining the bound above and (6.21) and using Stirling's formula, we get that

\[\int_{[T,2T]\setminus\mathcal{T}_{0}}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\ll T\exp\Bigg{(}\frac{(\log T)^{(1+\frac{\delta}{3})(\frac{3}{kpf-1})^{m-1}}\log\log T}{\exp\big{(}\frac{2\cdot 3^{m-2}\log\log T\log\log\log\log T}{(kpf-1)^{m-1}\log\log\log T}\big{)}}\Bigg{)}\]
\[\times\exp\Big{(}\Big{(}1+\frac{\delta}{3}\Big{)}pk^{2}(\log\log T)^{3}\Big{)}s_{0}^{\frac{1}{2q}}\exp\bigg{(}-(2d-1)s_{0}\log s_{0}\]
\[+2s_{0}\log\Big{(}ke^{3/2}\sqrt{q}b(\Delta_{0})\Big{(}\log\frac{1}{\Delta_{0}\alpha}\Big{)}^{\gamma(\Delta_{0})}\sqrt{\log\log T^{\beta_{0}}}\Big{)}\bigg{)}. \tag{6.22}\]

Recall the choice (6.18) for \(s_{0}\). Note that we have

\[\log\log\frac{1}{\Delta_{0}\alpha}\leq\log\log\log T,\qquad\log\log(\beta_{0}\log T)<\log\log\log T\]

and

\[\log q\leq\log\log\log T.\]

Using the three bounds above in (6.22), it follows that

\[\int_{[T,2T]\setminus\mathcal{T}_{0}}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\ll T\exp\Bigg{(}\frac{(\log T)^{(1+\frac{\delta}{3})(\frac{3}{kpf-1})^{m-1}}\log\log T}{\exp\big{(}\frac{2\cdot 3^{m-2}\log\log T\log\log\log\log T}{(kpf-1)^{m-1}\log\log\log T}\big{)}}\Bigg{)}\]
\[\times\exp\Big{(}\Big{(}1+\frac{\delta}{3}\Big{)}pk^{2}(\log\log T)^{3}\Big{)}\exp\Big{(}-(2d-1)s_{0}\log s_{0}+4s_{0}\log\log\log\log T+O(s_{0})\Big{)}.\]

By (6.17) we get

\[\int_{[T,2T]\setminus\mathcal{T}_{0}}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\ll T\exp\Bigg{(}-\frac{(\log T)^{(1+\frac{\delta}{3})(\frac{3}{kpf-1})^{m-1}}\log\log T}{4\exp\big{(}\frac{2\cdot 3^{m-2}\log\log T\log\log\log\log T}{(kpf-1)^{m-1}\log\log\log T}\big{)}}\Bigg{)}\]
\[\times\exp\Big{(}(1+\delta)k^{2}(\log\log T)^{3}\Big{)}\]
\[\ll T\exp\Big{(}(1+\delta)k^{2}(\log\log T)^{3}\Big{)}. \tag{6.23}\]

Now suppose that \(t\in\mathcal{T}_{0}\). Using Proposition 3.4 and proceeding as before, we get that

\[\int_{T}^{2T}S_{1}(t)\,dt\ll(\log\log T)^{k}\exp\bigg{(}c_{1}(\log T)^{\frac{3^{m}(1+\frac{\delta}{2})}{(kb-1)(kpf-1)^{m-1}(1-\frac{\delta}{24})^{2}}}\log\log T\]
\[\times\exp\Big{(}-\frac{3^{m}\log\log T\log\log\log\log T}{(kb-1)(kpf-1)^{m-1}\log\log\log T}\Big{)}\bigg{)}\exp\Big{(}(1+\delta)k^{2}(\log\log T)^{3}\Big{)}, \tag{6.24}\]

for some \(c_{1}>0\), and where we trivially bounded \(\gamma(\Delta_{K})\leq 1\). Now given the choice of parameters in (6.14), (6.15), (6.16), we have

\[kpf-1=(kb-1)\Big{(}1-\frac{\delta}{4(m-1)}\Big{)}+O\Big{(}\frac{\log\log\log T}{\log\log T}\Big{)}>(kb-1)\Big{(}1-\frac{7\delta}{24(m-1)}\Big{)}.\]

Using this in (6.24) leads to

\[\int_{T}^{2T}S_{1}(t)\,dt\ll T\exp\Bigg{(}\frac{(\log T)^{(1+\delta)(\frac{3}{kb-1})^{m}}\log\log T}{\exp\big{(}\frac{2\cdot 3^{m-1}\log\log T\log\log\log\log T}{(kb-1)^{m}\log\log\log T}\big{)}}\Bigg{)}\exp\Big{(}(1+\delta)k^{2}(\log\log T)^{3}\Big{)}. \tag{6.25}\]

To bound the contribution from \(S_{2}(t)\), we proceed as in (6.9). We rewrite

\[\int_{T}^{2T}S_{2}(t)\,dt\ll T(\log\log T)^{k}\sum_{j=0}^{K-1}\exp\bigg{(}\frac{\log\log\log T}{\beta_{j}}(n-2kb+2)-\frac{n\log\log\log T\log(\beta_{j}\log T)}{\beta_{j}\log\log T}\]
\[+\log\log T+k^{2}(\log\log T)^{3}+O\Big{(}\frac{1}{\beta_{j}}\Big{)}\bigg{)}, \tag{6.26}\]

where we used the fact that \(k\geq 1/2\).
Note that the maximum in the sum over \(j\) is attained either at \(j=0\) or \(j=K-1\). Now given the choices (6.18) and (6.19), the contribution from \(j=0\) is

\[\ll\exp\Big{(}\frac{\log\log\log T}{\beta_{0}}\Big{(}-\frac{\delta(kb-1)}{4}+\frac{n\log\big{(}\frac{kpf-1}{3}\big{)}^{m-1}}{\log\log T}\Big{)}+O\Big{(}\frac{1}{\beta_{0}}+(\log\log T)^{3}\Big{)}\Big{)},\]

and again using the choices (6.19) and (6.17), it follows that the contribution from \(j=0\) is negligible. For the contribution from \(j=K-1\), proceeding similarly as in the bound for \(S_{1}(t)\), it follows that

\[\int_{T}^{2T}S_{2}(t)\,dt\ll T\exp\Bigg{(}\frac{(\log T)^{(1+\delta)(\frac{3}{kb-1})^{m}}\log\log T}{\exp\big{(}\frac{2\cdot 3^{m-1}\log\log T\log\log\log\log T}{(kb-1)^{m}\log\log\log T}\big{)}}\Bigg{)}\exp\Big{(}(1+\delta)k^{2}(\log\log T)^{3}\Big{)}.\]

Combining the above, (6.23) and (6.25), the induction conclusion (6.13) follows. Now taking \(m\) maximal as in (6.17), we get that

\[\int_{T}^{2T}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\ll T\exp\Big{(}(1+\delta)k^{2}(\log\log T)^{3}\Big{)}. \tag{6.27}\]

### 6.3. The range \(\alpha\gg\frac{(\log\log T)^{\frac{4}{k}+\varepsilon}}{(\log T)^{\frac{1}{2k}}}\), \(k\geq 1/2\), once more

Here, we use the same setup as in Subsection 6.2. Once again, we assume (6.1) and (6.2). We will improve the bound (6.27). The proof is similar to the proof in the previous cases, so we will skip some of the details. As before, let

\[p=\frac{\log\log T}{\log\log T-2kb\delta\log\log\log T}\]

and

\[q=\frac{\log\log T}{2kb\delta\log\log\log T}, \tag{6.28}\]

so that \(1/p+1/q=1\). We choose

\[\beta_{0}=\frac{(6d-5)\log\log\log T}{(1+2\delta)pqk^{2}(\log\log T)^{3}},\quad s_{0}=\Big{[}\frac{1}{q\beta_{0}}\Big{]},\quad\ell_{0}=2\Big{[}\frac{s_{0}^{d}}{2}\Big{]}. \tag{6.29}\]

We choose \(\beta_{j},s_{j},\ell_{j}\) as in (4.2), and \(a,d,r\) are such that

\[\frac{a(2d-1)}{r}=1-\delta.\]

We also choose \(K\) maximal such that

\[\beta_{K}\leq c, \tag{6.30}\]

where \(c\) is a small constant as in (4.5). Note that the conditions in Propositions 3.3 and 3.4 are satisfied. We now proceed as before. If \(t\notin\mathcal{T}_{0}\), then as in Subsection 6.2, we have that

\[\int_{[T,2T]\setminus\mathcal{T}_{0}}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\leq\Big{(}\frac{ke^{2}}{\ell_{0}}\Big{)}^{2s_{0}}\Big{(}\int_{T}^{2T}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2kp}\,dt\Big{)}^{\frac{1}{p}}\Big{(}\int_{T}^{2T}|P_{0,v}(t)|^{2s_{0}q}\,dt\Big{)}^{\frac{1}{q}}\]
\[\ll T\exp\Big{(}(1+\delta)pk^{2}(\log\log T)^{3}\Big{)}\exp\bigg{(}-(2d-1)s_{0}\log s_{0}\]
\[\qquad\qquad+2s_{0}\log\Big{(}ke^{3/2}\sqrt{q}b(\Delta_{0})\Big{(}\log\frac{1}{\Delta_{0}\alpha}\Big{)}^{\gamma(\Delta_{0})}\sqrt{\log\log T^{\beta_{0}}}\Big{)}\bigg{)}.\]

Note that \(\gamma(\Delta_{0})=0\) with the choice of parameters (6.29). We deduce that

\[\int_{[T,2T]\setminus\mathcal{T}_{0}}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt\ll T\exp\Big{(}(1+\delta)pk^{2}(\log\log T)^{3}\Big{)}\]
\[\qquad\qquad\times\exp\Big{(}-(2d-1)s_{0}\log s_{0}+2s_{0}\log\log\log T+O(s_{0})\Big{)}\]
\[=o(T), \tag{6.31}\]

by (6.28) and the choice of parameters in (6.29). Using the choice of \(\beta_{K}\) in (6.30), we also get that

\[\int_{T}^{2T}S_{1}(t)\,dt\ll T(\log\log T)^{k}(\log T)^{k^{2}}. \tag{6.32}\]

To bound the contribution from \(S_{2}(t)\), we rewrite equation (6.9) using the fact that \(\gamma(\Delta_{j})=0\) for all \(j\) and that \(K\ll\log\log T\).
We have

\[\int_{T}^{2T}S_{2}(t)\,dt\ll T(\log\log T)^{k+1}\sum_{j=0}^{K-1}\exp\bigg{(}\frac{\log\log T}{\beta_{j}}\Big{(}1-2k-\frac{2kb\log\log\log T}{\log\log T}\Big{)}\\ +\frac{4\log\log\log T}{\beta_{j}}\Big{(}2k-\frac{a(2d-1)}{r}\Big{)}+k^{2}\log\log T+O\Big{(}\frac{1}{\beta_{j}}\Big{)}\bigg{)}.\]

In the sum over \(j\) above, note that if \(k=1/2\), then the exponential term is bounded by

\[\exp\Big{(}-\frac{c_{1}\log\log\log T}{\beta_{j}}+k^{2}\log\log T\Big{)},\]

for some \(c_{1}>0\). If \(k>1/2\), then the exponential term is bounded by

\[\exp\Big{(}-\frac{c_{1}\log\log T}{\beta_{j}}+k^{2}\log\log T\Big{)},\]

for some \(c_{1}>0\). In both cases, we obtain that

\[\int_{T}^{2T}S_{2}(t)\,dt=o\Big{(}T(\log\log T)^{k}(\log T)^{k^{2}}\Big{)}.\]

Combining the above, (6.32) and (6.31), the bound (1.3) follows.

### 6.4. The range \(\frac{1}{(\log T)^{\frac{1}{2k}}}\ll\alpha=o\big{(}\frac{(\log\log T)^{\frac{4}{k}+\varepsilon}}{(\log T)^{\frac{1}{2k}}}\big{)}\)

Here, we will prove the bound (1.4). We will only sketch the proof, since it is similar to the proof in the previous cases. We choose the parameters \(\beta_{j},\ell_{j},s_{j}\) as in (4.1), (4.2), and \(a,d,r\) as in (6.3) and (6.5), where

\[n=\frac{6}{1-\delta}. \tag{6.33}\]

We also choose \(\beta_{K}\) as in (6.6). We now proceed as in Subsection 6.1. As in (6.7), we have

\[\int_{[T,2T]\setminus\mathcal{T}_{0}}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt=o(T). \tag{6.34}\]

Also, as in (6.8), and keeping in mind the choice (6.33) for \(n\), we get that

\[\int_{T}^{2T}S_{1}(t)\,dt\ll T\exp\Bigg{(}\frac{\log T\log\log T}{\exp\big{(}\frac{2\log\log T\log\log\log\log T}{3\log\log\log T}\big{)}}\Bigg{)}. \tag{6.35}\]

To bound the contribution from \(S_{2}(t)\), we proceed as in equation (6.9). Since \(\alpha\gg\frac{1}{(\log T)^{\frac{1}{2k}}}\), we have

\[\int_{T}^{2T}S_{2}(t)\,dt\ll T(\log\log T)^{k}\sum_{j=0}^{K-1}\exp\bigg{(}\frac{\log\log\log T}{\beta_{j}}(n+2)+(\log\log T)^{3}+O\Big{(}\frac{1}{\beta_{j}}\Big{)}\bigg{)}.\]

The maximum in the sum above is attained when \(j=0\). So, keeping in mind the choice of \(\beta_{0}\) in (4.1), we obtain that

\[\int_{T}^{2T}S_{2}(t)\,dt\ll T\exp\Big{(}\frac{(4+\delta)\log T\log\log\log T}{\log\log T}\Big{)}, \tag{6.36}\]

after a relabeling of the \(\delta\). Combining (6.34), (6.35) and (6.36), the conclusion follows in this case.

## 7. Proof of Theorems 1.2 and 1.3 for small shifts \(\alpha\)

Here, we will prove the bounds (1.5) and (1.9). The proof will be similar to the one in the previous section, but we choose the parameters differently. Recall that

\[u=\frac{\log\frac{1}{\alpha}}{\log\log T}.\]

Let \(\delta>0\). We choose

\[\beta_{0}=\frac{(2ku+2d-1-\frac{a(2d-1)}{r})\log\log T}{(1+\delta)ku\log T},\quad s_{0}=\Big{[}\frac{1}{\beta_{0}}\Big{]},\quad\ell_{0}=2\Big{\lceil}\frac{s_{0}^{d}}{2}\Big{\rceil}, \tag{7.1}\]

where we pick

\[a=\frac{1-3k\varepsilon}{1-2k\varepsilon},\quad r=\frac{1}{1-2k\varepsilon},\quad d=\frac{2-7k\varepsilon}{2(1-3k\varepsilon)},\]

so that

\[\frac{a(2d-1)}{r}=1-4k\varepsilon. \tag{7.2}\]

We further pick \(\beta_{j},\ell_{j},s_{j}\) as in (4.2). We choose \(K\) to be maximal such that

\[\beta_{K}\leq c, \tag{7.3}\]

for \(c\) a small constant such that

\[c^{1-d}\Big{(}\frac{ra^{d}}{r^{1-d}-1}+\frac{2r}{r-1}\Big{)}\leq 1-a-\beta_{0}^{1-d}. \tag{7.4}\]

Note that the above ensures that the conditions in Propositions 3.2, 3.3 and 3.4 are satisfied.
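It may also be worth noting that the algebraic identities behind the various parameter choices, namely (4.3), (6.3) with (6.5), and (7.2), can be verified symbolically. A minimal sketch (assuming `sympy`; the variable `e` stands for \(k\varepsilon\) and `x` for \(\frac{\log\log\log T}{\log\log T}\), both hypothetical names for this check only):

```python
from sympy import symbols, simplify

e, n, x, delta = symbols('e n x delta', positive=True)

# Section 4 choices: a(2d-1)/r = 1 - k*eps, cf. (4.3)
a = (4 - 3*e) / (2*(2 - e)); r = 2 / (2 - e); d = (8 - 7*e) / (2*(4 - 3*e))
assert simplify(a*(2*d - 1)/r - (1 - e)) == 0

# Subsection 6.1 choices (6.5): a(2d-1)/r = 1 - n*x, cf. (6.3)
a = (1 - n*(1 - delta/24)*x) / (1 - n*(1 - delta/12)*x)
d = 1 - (n/2)*(1 - delta/12)*x
r = (1 - n*(1 - delta/24)*x) / (1 - n*x)
assert simplify(a*(2*d - 1)/r - (1 - n*x)) == 0

# Section 7 choices: a(2d-1)/r = 1 - 4*k*eps, cf. (7.2)
a = (1 - 3*e) / (1 - 2*e); r = 1 / (1 - 2*e); d = (2 - 7*e) / (2*(1 - 3*e))
assert simplify(a*(2*d - 1)/r - (1 - 4*e)) == 0

print("parameter identities (4.3), (6.3)/(6.5) and (7.2) verified")
```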
If \(t\notin\mathcal{T}_{0}\), then we proceed as before, and similarly to equation (4.7) we get that \[\int_{[T,2T]\setminus\mathcal{T}_{0}}|\zeta(\tfrac{1}{2}+\alpha+ it)|^{-2k}\,dt\ll T^{1+(1+\delta)ku}\\ \times\exp\bigg{(}-(2d-1)s_{0}\log s_{0}+2s_{0}\log\Big{(}ke^{3/2 }b(\Delta_{0})\Big{(}\log\frac{1}{\Delta_{0}\alpha}\Big{)}^{\gamma(\Delta_{0} )}\sqrt{\log\log T^{\beta_{0}}}\Big{)}\bigg{)}.\] Keeping in mind the choice of parameters (7.1), we obtain that \[\int_{[T,2T]\backslash\mathcal{T}_{0}}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}\,dt \ll T^{1+(1+\delta)\frac{2ku-\frac{a(2d-1)}{r}}{2ku-\frac{a(2d-1)}{r}+2d-1}}\exp \Big{(}O\Big{(}\frac{\log T\log\log\log T}{\log\log T}\Big{)}\Big{)}. \tag{7.5}\] Now if \(t\in\mathcal{T}_{0}\), then since \(\gamma(\Delta_{K})=1\), we use Proposition 3.4 and the expression (7.3) for \(\beta_{K}\) as before to get that \[\int_{T}^{2T}S_{1}(t)\,dt\ll T(\log T)^{O(1)}\exp\bigg{(}\Big{(}\frac{1}{4}+ \varepsilon\Big{)}k^{2}\Big{(}\log\frac{1}{\alpha}\Big{)}^{2}\log\log T\bigg{)}. \tag{7.6}\] Similarly as before (see equation (4.9)), we have \[\int_{T}^{2T}S_{2}(t)\,dt \ll T(\log\log T)^{k}\sum_{j=0}^{K-1}(K-j)\sqrt{\frac{1}{\beta_{ j+1}}}\exp\bigg{(}\frac{\log\log T}{\beta_{j}}\Big{(}2ku-\frac{a(2d-1)}{r} \Big{)}\] \[+\frac{\log(\beta_{j}\log T)}{\beta_{j}}\Big{(}\frac{a(2d-1)}{r} -2k\Big{)}+\frac{2a}{r\beta_{j}}\log\Big{(}e^{3/2}b(\Delta_{j+1})\Big{(}\log \frac{1}{\Delta_{j+1}\alpha}\Big{)}^{\gamma(\Delta_{j+1})}\Big{)}\] \[+\frac{2k}{\beta_{j}}\log\frac{1}{1-\varepsilon/2}\Big{)}\big{(} \log T^{\beta_{j}}\big{)}^{k^{2}b(\Delta_{j})^{2}(\log\frac{1}{\Delta_{j}})^ {2\gamma(\Delta_{j})}}.\] As \(K\ll\log\log T\), we further write the above as \[\int_{T}^{2T}S_{2}(t)\,dt\ll T\sum_{j=0}^{K-1}\exp\bigg{(}\frac{\log\log T}{ \beta_{j}}\Big{(}2ku-\frac{a(2d-1)}{r}\Big{)}+O\Big{(}\frac{\log T\log\log\log T }{\log\log T}\Big{)}\bigg{)}\] Since \(\alpha=o\big{(}\frac{1}{(\log T)^{\frac{1}{2k}-\varepsilon}}\big{)}\), we have that \[2ku-\frac{a(2d-1)}{r}\geq 2k\varepsilon.\] Hence the sum over \(j\) above achieves its maximum when \(j=0\). Using (7.1), we obtain that \[\int_{T}^{2T}S_{2}(t)\,dt\ll T^{1+(1+\delta)\frac{2ku-\frac{a(2d-1)}{r}}{2ku- \frac{a(2d-1)}{r}+2d-1}}\exp\Big{(}O\Big{(}\frac{\log T\log\log\log T}{\log \log T}\Big{)}\Big{)}. \tag{7.7}\] Combining equations (7.2), (7.5), (7.6) and (7.7) and after a relabeling of the \(\varepsilon\), the bounds (1.5) and (1.9) follow. ## 8. The asymptotic formula In this section we shall prove Theorem 1.4. We have \[\frac{1}{2\pi i}\int_{1-iT/2}^{1+iT/2}e^{z^{2}/2}X^{z}\frac{1}{ \zeta(\frac{1}{2}+\alpha+it+z)^{k}\zeta(\frac{1}{2}+\alpha-it+z)^{k}}\frac{dz} {z} \tag{8.1}\] \[=\sum_{m,n=1}^{\infty}\frac{\mu_{k}(m)\mu_{k}(n)}{(mn)^{1/2+ \alpha}}\Big{(}\frac{m}{n}\Big{)}^{-it}W\Big{(}\frac{mn}{X}\Big{)},\] where \[W(x)=\frac{1}{2\pi i}\int_{1-iT/2}^{1+iT/2}e^{z^{2}/2}x^{-z}\frac{dz}{z}, \tag{8.2}\] by writing the zeta-functions in (8.1) as Dirichlet series and integrating term-by-term. 
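The weight \(W\) in (8.2) acts as a smoothed indicator of \(mn\leq X\): it equals \(1+O(x)\) for \(x\leq 1\) and decays like \(x^{-1}\) beyond (these properties are recorded after (8.3) below). A quick numerical sketch of this behavior, assuming `mpmath` (the truncation height \(T=30\) here is an arbitrary stand-in, harmless because of the Gaussian factor \(e^{z^{2}/2}\)):

```python
from mpmath import mp, quad, exp, pi, mpf

mp.dps = 15

def W(x, T=30):
    # W(x) = (1/(2*pi*i)) * int_{1-iT/2}^{1+iT/2} e^{z^2/2} x^{-z} dz/z,
    # parametrized by z = 1 + it, so dz = i*dt and the 1/i cancels
    x = mpf(x)
    f = lambda t: exp((1 + 1j*t)**2 / 2) * x**(-(1 + 1j*t)) / (1 + 1j*t)
    return (quad(f, [-T/2, T/2]) / (2*pi)).real

for x in (0.01, 0.1, 1, 10, 100):
    print(x, W(x))   # ~1 for small x, then decaying roughly like 1/x
```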
On the other hand, by deforming the line of integration, the left hand side of (8.1) is equal to \[|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}+\frac{1}{2\pi i}\int_{-(1- \varepsilon)\alpha-iT/2}^{-(1-\varepsilon)\alpha+iT/2}e^{z^{2}/2}X^{z}\frac{1 }{\zeta(\tfrac{1}{2}+\alpha+it+z)^{k}\zeta(\tfrac{1}{2}+\alpha-it+z)^{k}}\frac {dz}{z}\] \[\qquad+O\Big{(}\frac{e^{-T^{2}/8}X}{T}\max_{\varepsilon\alpha\leq \sigma\leq 1+\alpha}\frac{1}{|\zeta(\tfrac{1}{2}+\sigma+i(t\pm T/2))|^{k}|\zeta( \tfrac{1}{2}+\sigma-i(t\mp T/2))|^{k}}\Big{)}.\] For \(t\in[T,2T]\), the \(O\)-term is \[\ll\frac{e^{-T^{2}/8}X}{T}T^{\frac{k\log\frac{1}{2\pi}}{\log\log T}}\ll e^{- T^{2}/8}XT^{O_{k}(1)},\] by Lemma 2.2. Hence, integrating over \(t\in[T,2T]\) we obtain that \[\int_{T}^{2T}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}dt=\sum_{m,n=1} ^{\infty}\frac{\mu_{k}(m)\mu_{k}(n)}{(mn)^{1/2+\alpha}}W\Big{(}\frac{mn}{X} \Big{)}\int_{T}^{2T}\Big{(}\frac{m}{n}\Big{)}^{-it}dt\] \[\qquad-\frac{1}{2\pi i}\int_{-(1-\varepsilon)\alpha-iT/2}^{-(1- \varepsilon)\alpha+iT/2}e^{z^{2}/2}X^{z}\int_{T}^{2T}\frac{1}{\zeta(\tfrac{1} {2}+\alpha+it+z)^{k}\zeta(\tfrac{1}{2}+\alpha-it+z)^{k}}dt\frac{dz}{z}\] \[\qquad+O\big{(}e^{-T^{2}/8}XT^{O_{k}(1)}\big{)}.\] Furthermore, by the Cauchy-Schwarz inequality the second term above is \[\ll_{\varepsilon}\frac{X^{-(1-\varepsilon)\alpha}}{\alpha}\int_{ -T/2}^{T/2}e^{-z^{2}/2}\bigg{(}\int_{T}^{2T}|\zeta(\tfrac{1}{2}+\varepsilon \alpha+i(t+z))|^{-2k}dt\bigg{)}^{1/2}\] \[\qquad\qquad\qquad\qquad\qquad\times\bigg{(}\int_{T}^{2T}|\zeta( \tfrac{1}{2}+\varepsilon\alpha-i(t-z))|^{-2k}dt\bigg{)}^{1/2}dz\] \[\ll_{\varepsilon}\frac{X^{-(1-\varepsilon)\alpha}}{\alpha}T(\log \log T)^{k}(\log T)^{k^{2}}\ll_{\varepsilon}TX^{-(1-\varepsilon)\alpha}(\log T )^{k^{2}+1},\] in view of (1.3) and (1.6) in Theorems 1.2 and 1.3, and the fact that \(\frac{1}{\alpha}\ll\min\Big{\{}\frac{(\log T)^{\frac{1}{2k}}}{(\log\log T)^{ \frac{4}{k}+\varepsilon}},\frac{\log T}{\log\log T}\Big{\}}\). Thus, \[\int_{T}^{2T}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}dt=\sum_{m,n=1} ^{\infty}\frac{\mu_{k}(m)\mu_{k}(n)}{(mn)^{1/2+\alpha}}W\Big{(}\frac{mn}{X} \Big{)}\int_{T}^{2T}\Big{(}\frac{m}{n}\Big{)}^{-it}dt \tag{8.3}\] \[\qquad\qquad\qquad+O\big{(}TX^{-(1-\varepsilon)\alpha}(\log T)^{ k^{2}+1}\big{)}+O\big{(}e^{-T^{2}/8}XT^{O_{k}(1)}\big{)}.\] We next consider the contribution of off-diagonal terms \(m\neq n\) on the right hand side of (8.3), which is \[\ll\sum_{m\neq n}\frac{d_{k}(m)d_{k}(n)}{(mn)^{1/2+\alpha}|\log m/n|}\Big{|}W \Big{(}\frac{mn}{X}\Big{)}\Big{|}.\] We note from (8.2) that \(W(x)\ll x^{-1}\) trivially and \[W(x)=1+O(x)+O\Big{(}\frac{e^{-T^{2}/8}}{xT}\Big{)}\] if \(x\leq 1\), by moving the contour to the \(-1\)-line, and so this contribution is bounded by \[\ll\sum_{\begin{subarray}{c}m\neq n\\ mn\leq X\end{subarray}}\frac{d_{k}(m)d_{k}(n)}{(mn)^{1/2+\alpha}|\log m/n|}+ \frac{e^{-T^{2}/8}X}{T}\sum_{\begin{subarray}{c}m\neq n\\ mn\leq X\end{subarray}}\frac{d_{k}(m)d_{k}(n)}{(mn)^{3/2+\alpha}|\log m/n|}\] \[\qquad\qquad+X\sum_{\begin{subarray}{c}m\neq n\\ mn>X\end{subarray}}\frac{d_{k}(m)d_{k}(n)}{(mn)^{3/2+\alpha}|\log m/n|}\] \[=E_{1}+E_{2},\] say, where \(E_{1}\) and \(E_{2}\) denote the sums with \(\frac{m}{n}\notin[1/2,2]\) and \(\frac{m}{n}\in[1/2,2]\), respectively. 
For \(E_{1}\) we have \(|\log m/n|\gg 1\), and we get \[E_{1} \ll\sum_{mn\leq X}\frac{d_{k}(m)d_{k}(n)}{(mn)^{1/2+\alpha}}+\frac{e^{-T^{2}/8}X}{T}\sum_{mn\leq X}\frac{d_{k}(m)d_{k}(n)}{(mn)^{3/2+\alpha}}+X\sum_{mn>X}\frac{d_{k}(m)d_{k}(n)}{(mn)^{3/2+\alpha}}\] \[\ll X^{1/2-\alpha}(\log X)^{2k-1}+\frac{e^{-T^{2}/8}X}{T}. \tag{8.4}\] For \(E_{2}\), we use the fact that \[\frac{d_{k}(m)d_{k}(n)}{(mn)^{\sigma}}\ll\frac{d_{k}(m)^{2}}{m^{2\sigma}}+\frac{d_{k}(n)^{2}}{n^{2\sigma}}, \tag{8.5}\] and we have \[E_{2} \ll\sum_{m\leq\sqrt{2X}}\frac{d_{k}(m)^{2}}{m^{1+2\alpha}}\sum_{\begin{subarray}{c}n\neq m\\ m/2\leq n\leq 2m\end{subarray}}\frac{1}{|\log m/n|}+\frac{e^{-T^{2}/8}X}{T}\sum_{m\leq\sqrt{2X}}\frac{d_{k}(m)^{2}}{m^{3+2\alpha}}\sum_{\begin{subarray}{c}n\neq m\\ m/2\leq n\leq 2m\end{subarray}}\frac{1}{|\log m/n|}\] \[\qquad\qquad+X\sum_{m>\sqrt{X/2}}\frac{d_{k}(m)^{2}}{m^{3+2\alpha}}\sum_{\begin{subarray}{c}n\neq m\\ m/2\leq n\leq 2m\end{subarray}}\frac{1}{|\log m/n|}\] \[\ll\log X\sum_{m\leq\sqrt{2X}}\frac{d_{k}(m)^{2}}{m^{2\alpha}}+\frac{e^{-T^{2}/8}X\log X}{T}\sum_{m\leq\sqrt{2X}}\frac{d_{k}(m)^{2}}{m^{2+2\alpha}}+X\sum_{m>\sqrt{X/2}}\frac{d_{k}(m)^{2}\log m}{m^{2+2\alpha}}\] \[\ll X^{1/2-\alpha}(\log X)^{k^{2}}+\frac{e^{-T^{2}/8}X\log X}{T}. \tag{8.6}\] We are left with the contribution of the diagonal terms \(m=n\) on the right hand side of (8.3). By (8.2) this is \[T\sum_{n=1}^{\infty}\frac{\mu_{k}(n)^{2}}{n^{1+2\alpha}}W\Big{(}\frac{n^{2}}{X}\Big{)}=\frac{T}{2\pi i}\int_{1-iT/2}^{1+iT/2}e^{z^{2}/2}X^{z}\sum_{n=1}^{\infty}\frac{\mu_{k}(n)^{2}}{n^{1+2\alpha+2z}}\frac{dz}{z}\\ =\frac{T}{2\pi i}\int_{1-iT/2}^{1+iT/2}e^{z^{2}/2}X^{z}\zeta(1+2\alpha+2z)^{k^{2}}\prod_{p}\bigg{(}1-\frac{1}{p^{1+2\alpha+2z}}\bigg{)}^{k^{2}}\bigg{(}1+\sum_{j=1}^{\infty}\frac{\mu_{k}(p^{j})^{2}}{p^{(1+2\alpha+2z)j}}\bigg{)}\frac{dz}{z}.\] We move the contour to the \(-(1-\varepsilon)\alpha\)-line, crossing a simple pole at \(z=0\). In doing so, we get that this is equal to \[T\zeta(1+2\alpha)^{k^{2}}\prod_{p}\bigg{(}1-\frac{1}{p^{1+2\alpha}}\bigg{)}^{k^{2}}\bigg{(}1+\sum_{j=1}^{\infty}\frac{\mu_{k}(p^{j})^{2}}{p^{(1+2\alpha)j}}\bigg{)}\\ +O\big{(}TX^{-(1-\varepsilon)\alpha}\alpha^{-(k^{2}+1)}\big{)}+O\big{(}e^{-T^{2}/8}XT^{O_{k}(1)}\big{)}.\] Thus, \[\int_{T}^{2T}|\zeta(\tfrac{1}{2}+\alpha+it)|^{-2k}dt=T\zeta(1+2\alpha)^{k^{2}}\prod_{p}\bigg{(}1-\frac{1}{p^{1+2\alpha}}\bigg{)}^{k^{2}}\bigg{(}1+\sum_{j=1}^{\infty}\frac{\mu_{k}(p^{j})^{2}}{p^{(1+2\alpha)j}}\bigg{)}\\ +O\big{(}TX^{-(1-\varepsilon)\alpha}(\log T)^{k^{2}+1}\big{)}+O\big{(}e^{-T^{2}/8}XT^{O_{k}(1)}\log X\big{)}+O\big{(}X^{1/2-\alpha}(\log X)^{k^{2}}\big{)},\] by combining the above with (8.3), (8.4) and (8.6). We choose \(X=T^{2}\); then the error terms above become \(T^{1-2(1-\varepsilon)\alpha}(\log T)^{k^{2}+1}\), and the conclusion follows after a relabeling of the \(\varepsilon\).
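To make the final step explicit, substituting \(X=T^{2}\) into the error terms of the last display gives \[TX^{-(1-\varepsilon)\alpha}(\log T)^{k^{2}+1}=T^{1-2(1-\varepsilon)\alpha}(\log T)^{k^{2}+1}\qquad\text{and}\qquad X^{1/2-\alpha}(\log X)^{k^{2}}=2^{k^{2}}T^{1-2\alpha}(\log T)^{k^{2}}\ll T^{1-2(1-\varepsilon)\alpha}(\log T)^{k^{2}+1},\] while the term \(e^{-T^{2}/8}XT^{O_{k}(1)}\log X\) decays faster than any power of \(T\) and is absorbed into the same bound.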
## 9. Proof of Theorem 1.5 ### Assuming RH: \(k\geq 1\) Following the arguments in [7, Chapter 17], it is standard from Perron's formula that \[\sum_{n\leq x}\mu_{k}(n)=\frac{1}{2\pi i}\int_{c-i[x]}^{c+i[x]}\frac{x^{s}}{\zeta(s)^{k}}\frac{ds}{s}+O\big{(}(\log x)^{k}\big{)} \tag{9.1}\] with \(c=1+\frac{1}{\log x}\). We deform the contour by replacing the line segment \(c+it\), \(|t|\leq[x]\), with a piecewise linear path consisting of a number of horizontal and vertical line segments, \[\bigcup_{j=1}^{J}\big{(}V_{j}\cup H_{j}\big{)}\bigcup V_{0},\] where \[V_{0}:\quad s=\frac{1}{2}+\frac{(\log\log x_{0})^{\frac{8}{k}+\varepsilon}}{(\log x_{0})^{\frac{1}{k}}}+it,\quad|t|\leq x_{0},\] \[V_{j}:\quad s=\frac{1}{2}+\frac{(\log\log(2^{j-1}x_{0}))^{\frac{8}{k}+\varepsilon}}{(\log(2^{j-1}x_{0}))^{\frac{1}{k}}}+it,\quad 2^{j-1}x_{0}\leq|t|\leq\min\{2^{j}x_{0},[x]\},\quad 1\leq j\leq J,\] \[H_{j}:\quad s=\sigma\pm i2^{j}x_{0},\quad\frac{1}{2}+\frac{(\log\log(2^{j}x_{0}))^{\frac{8}{k}+\varepsilon}}{(\log(2^{j}x_{0}))^{\frac{1}{k}}}\leq\sigma\leq\frac{1}{2}+\frac{(\log\log(2^{j-1}x_{0}))^{\frac{8}{k}+\varepsilon}}{(\log(2^{j-1}x_{0}))^{\frac{1}{k}}},\quad 1\leq j\leq J-1,\] \[H_{J}:\quad s=\sigma\pm i[x],\quad\frac{1}{2}+\frac{(\log\log(2^{J-1}x_{0}))^{\frac{8}{k}+\varepsilon}}{(\log(2^{J-1}x_{0}))^{\frac{1}{k}}}\leq\sigma\leq 1+\frac{1}{\log x}\] and \(x_{0}=\exp\big{(}(\log x)^{\frac{k}{k+1}}(\log\log x)^{\frac{k+8}{k+1}}\big{)}\), \(J=\lceil\log_{2}\frac{[x]}{x_{0}}\rceil\ll\log x\). We encounter no pole in doing so. We first consider the integral along \(V_{0}\). Let \[\alpha_{0}:=\frac{(\log\log x_{0})^{\frac{8}{k}+\varepsilon}}{(\log x_{0})^{\frac{1}{k}}}\qquad\text{and}\qquad T_{0}:=\exp\Big{(}\frac{\log x_{0}}{(\log\log x_{0})^{8}}\Big{)}.\] For \(T_{0}\leq T\leq x_{0}\) we have \(\frac{1}{(\log T)^{\frac{1}{k}}}\ll\alpha_{0}\ll\frac{(\log\log T)^{\frac{8}{k}+\varepsilon}}{(\log T)^{\frac{1}{k}}}\) and so \[\frac{1}{2\pi i}\int_{1/2+\alpha_{0}+iT/2}^{1/2+\alpha_{0}+iT}\frac{x^{s}}{\zeta(s)^{k}}\frac{ds}{s} \ll\frac{x^{1/2+\alpha_{0}}}{T}\int_{1/2+\alpha_{0}+iT/2}^{1/2+\alpha_{0}+iT}\Big{|}\frac{ds}{\zeta(s)^{k}}\Big{|} \ll x^{1/2+\alpha_{0}}\exp\Big{(}\log T(\log\log T)^{-1+\varepsilon}\Big{)},\] by (1.4). We hence get \[\frac{1}{2\pi i}\int_{1/2+\alpha_{0}+iT_{0}}^{1/2+\alpha_{0}+ix_{0}}\frac{x^{s}}{\zeta(s)^{k}}\frac{ds}{s} \ll x^{1/2+\alpha_{0}}\exp\Big{(}\log x_{0}(\log\log x_{0})^{-1+\varepsilon}\Big{)}\] \[=\sqrt{x}\exp\Big{(}\frac{\log x(\log\log x_{0})^{\frac{8}{k}+\varepsilon}}{(\log x_{0})^{\frac{1}{k}}}+\log x_{0}(\log\log x_{0})^{-1+\varepsilon}\Big{)}\] \[\ll\sqrt{x}\exp\Big{(}(\log x)^{\frac{k}{k+1}}(\log\log x)^{\frac{7}{k+1}+\varepsilon}\Big{)}, \tag{9.2}\] by dividing the segment of integration into dyadic intervals. The same bound holds for the integral along \(\int_{1/2+\alpha_{0}-ix_{0}}^{1/2+\alpha_{0}-iT_{0}}\). Furthermore, we note from Lemma 2.2 that \[|\zeta(s)|^{-1}\ll(|t|+2)^{(\frac{1}{2k}+\varepsilon)\frac{\log\log x_{0}}{\log\log(|t|+2)}} \tag{9.3}\] for \(s\in V_{0}\). So \[\frac{1}{2\pi i}\int_{1/2+\alpha_{0}-iT_{0}}^{1/2+\alpha_{0}+iT_{0}}\frac{x^{s}}{\zeta(s)^{k}}\frac{ds}{s} \ll\sqrt{x}\exp\Big{(}\frac{\log x(\log\log x_{0})^{\frac{8}{k}+\varepsilon}}{(\log x_{0})^{\frac{1}{k}}}+\frac{\log\log x_{0}\log T_{0}}{\log\log T_{0}}\Big{)}\ll\sqrt{x}\exp\Big{(}(\log x)^{\frac{k}{k+1}}(\log\log x)^{\frac{7}{k+1}+\varepsilon}\Big{)}. \tag{9.4}\]
Combining (9.2) and (9.4) we obtain that \[\frac{1}{2\pi i}\int_{V_{0}}\frac{x^{s}}{\zeta(s)^{k}}\frac{ds}{s}\ll\sqrt{x}\exp\Big{(}(\log x)^{\frac{k}{k+1}}(\log\log x)^{\frac{7}{k+1}+\varepsilon}\Big{)}.\] For the contribution from the vertical segments \(\cup_{j=1}^{J}V_{j}\), we deduce from (1.3) that it is bounded by \[\ll\sqrt{x}(\log x)^{\frac{k^{2}}{4}+\varepsilon}\sum_{j=0}^{J-1}\exp\Big{(}\frac{\log x(\log\log(2^{j}x_{0}))^{\frac{8}{k}+\varepsilon}}{(\log(2^{j}x_{0}))^{\frac{1}{k}}}\Big{)}\ll\sqrt{x}\exp\Big{(}\frac{\log x(\log\log x)^{\frac{8}{k}+\varepsilon}}{(\log x_{0})^{\frac{1}{k}}}\Big{)}\ll\sqrt{x}\exp\Big{(}(\log x)^{\frac{k}{k+1}}(\log\log x)^{\frac{7}{k+1}+\varepsilon}\Big{)}. \tag{9.5}\] For \(s\in H_{j}\), \(1\leq j\leq J-1\), again like (9.3) we have \[|\zeta(s)|^{-1}\ll(2^{j}x_{0})^{\frac{1}{2k}+\varepsilon}.\] So the contribution to (9.1) from the horizontal segments \(\cup_{j=1}^{J-1}H_{j}\) is \[\ll\sqrt{x}\sum_{j=0}^{J-2}(2^{j}x_{0})^{-1/2+\varepsilon}\exp\Big{(}\frac{\log x(\log\log(2^{j}x_{0}))^{\frac{8}{k}+\varepsilon}}{(\log(2^{j}x_{0}))^{\frac{1}{k}}}\Big{)}\ll\sqrt{x}x_{0}^{-1/2+\varepsilon}\exp\Big{(}\frac{\log x(\log\log x)^{\frac{8}{k}+\varepsilon}}{(\log x_{0})^{\frac{1}{k}}}\Big{)}\ll\sqrt{x}. \tag{9.6}\] We are left with the integral along \(H_{J}\), which is bounded by \[\max_{\frac{1}{2}+\frac{(\log\log x)^{\frac{8}{k}+\varepsilon}}{(\log x)^{\frac{1}{k}}}\leq\sigma\leq 1+\frac{1}{\log x}}\exp\Big{(}(\sigma-1)\log x-k\log|\zeta(\sigma\pm ix)|\Big{)}. \tag{9.7}\] For \(\frac{1}{2}+\frac{(\log\log x)^{\frac{8}{k}+\varepsilon}}{(\log x)^{\frac{1}{k}}}\leq\sigma\leq\frac{1}{2}+o(\frac{1}{\log\log x})\), Lemma 2.2 implies that this is \[\ll\exp\bigg{(}(\sigma-1)\log x+\frac{k\log x}{2\log\log x}\log\frac{1}{1-(\log x)^{1-2\sigma}}\bigg{)}\] \[\ll\exp\bigg{(}(\sigma-1)\log x+\frac{k\log x}{2\log\log x}\log\frac{1}{(2\sigma-1)\log\log x}+O_{k}\big{(}(2\sigma-1)\log x\big{)}+O_{k}\Big{(}\frac{\log x}{\log\log x}\Big{)}+O_{k}\Big{(}\frac{\log x}{(\log\log x)^{2}}\log\frac{1}{2\sigma-1}\Big{)}\bigg{)}, \tag{9.8}\] which is decreasing with respect to \(\sigma\), and hence \[\ll\exp\bigg{(}-\frac{(8+k)\log x\log\log\log x}{2\log\log x}+O_{k}\Big{(}\frac{\log x}{\log\log x}\Big{)}\bigg{)}\ll 1. \tag{9.9}\] For \(1-\frac{1}{\log\log x}\leq\sigma\leq 1+\frac{1}{\log x}\), the second estimate in Lemma 2.2 leads to a bound of size \[\ll\exp\Big{(}(\sigma-1)\log x+k\log\log\log x+O_{k}(1)\Big{)}\ll(\log\log x)^{k}. \tag{9.10}\] Finally, for \(\frac{1}{2}+O(\frac{1}{\log\log x})\leq\sigma\leq 1-\frac{1}{\log\log x}\), we use the last estimate in Lemma 2.2 to get the bound \[\ll\exp\bigg{(}(\sigma-1)\log x+\frac{(\log x)^{2-2\sigma}}{(1-\sigma)\log\log x}+\varepsilon\log x+O_{k}\bigg{(}\frac{(\log x)^{2-2\sigma}}{(1-\sigma)^{2}(\log\log x)^{2}}\bigg{)}\bigg{)}\ll x^{\varepsilon}. \tag{9.11}\] Combining the estimates we obtain the first part of the theorem for \(k\geq 1\). ### Assuming RH: \(k<1\) The arguments are similar to the previous subsection.
We replace the contour in (9.1) with \[\bigcup_{j=1}^{J}\big{(}V_{j}\cup H_{j}\big{)}\bigcup V_{0},\] where \[V_{0}:\quad s=\frac{1}{2}+\varepsilon\frac{\log\log x_{1}}{\log x_{1}}+it,\quad|t|\leq x_{1},\] \[V_{j}:\quad s=\frac{1}{2}+\varepsilon\frac{\log\log(2^{j-1}x_{1})}{\log(2^{j-1}x_{1})}+it,\quad 2^{j-1}x_{1}\leq|t|\leq\min\{2^{j}x_{1},[x]\},\quad 1\leq j\leq J,\] \[H_{j}:\quad s=\sigma\pm i2^{j}x_{1},\quad\frac{1}{2}+\varepsilon\frac{\log\log(2^{j}x_{1})}{\log(2^{j}x_{1})}\leq\sigma\leq\frac{1}{2}+\varepsilon\frac{\log\log(2^{j-1}x_{1})}{\log(2^{j-1}x_{1})},\quad 1\leq j\leq J-1,\] \[H_{J}:\quad s=\sigma\pm i[x],\quad\frac{1}{2}+\varepsilon\frac{\log\log(2^{J-1}x_{1})}{\log(2^{J-1}x_{1})}\leq\sigma\leq 1+\frac{1}{\log x}\] and \(x_{1}=\exp(\sqrt{\varepsilon\log x}\log\log x)\), \(J=\lceil\log_{2}\frac{[x]}{x_{1}}\rceil\ll\log x\). Again we encounter no pole in doing so. For the integral along \(V_{0}\), let \[\alpha_{1}:=\varepsilon\frac{\log\log x_{1}}{\log x_{1}}\qquad\text{and}\qquad T_{1}:=\exp\Big{(}\frac{\log x_{1}}{\log\log x_{1}}\Big{)}.\] If \(T_{1}\leq T\leq x_{1}\), then \(\frac{1}{\log T}\ll\alpha_{1}\ll\frac{\log\log T}{\log T}\), and hence \[\frac{1}{2\pi i}\int_{1/2+\alpha_{1}+iT/2}^{1/2+\alpha_{1}+iT}\frac{x^{s}}{\zeta(s)^{k}}\frac{ds}{s}\ll\frac{x^{1/2+\alpha_{1}}}{T}\int_{1/2+\alpha_{1}+iT/2}^{1/2+\alpha_{1}+iT}\Big{|}\frac{ds}{\zeta(s)^{k}}\Big{|}\ll x^{1/2+\alpha_{1}}\exp\Big{(}(\log\log T)^{1+\varepsilon}\Big{)},\] by (1.7). It follows that \[\frac{1}{2\pi i}\int_{1/2+\alpha_{1}+iT_{1}}^{1/2+\alpha_{1}+ix_{1}}\frac{x^{s}}{\zeta(s)^{k}}\frac{ds}{s} \ll x^{1/2+\alpha_{1}}\exp\Big{(}(\log\log x_{1})^{1+\varepsilon}\Big{)}\] \[=\sqrt{x}\exp\Big{(}\varepsilon\frac{\log x\log\log x_{1}}{\log x_{1}}+(\log\log x_{1})^{1+\varepsilon}\Big{)}\] \[\ll\sqrt{x}\exp\big{(}\varepsilon\sqrt{\log x}\big{)}, \tag{9.12}\] by relabelling \(\varepsilon\). The same bound holds for the integral along \(\int_{1/2+\alpha_{1}-ix_{1}}^{1/2+\alpha_{1}-iT_{1}}\). Also, by (9.3) we have \[\frac{1}{2\pi i}\int_{1/2+\alpha_{1}-iT_{1}}^{1/2+\alpha_{1}+iT_{1}}\frac{x^{s}}{\zeta(s)^{k}}\frac{ds}{s} \ll\sqrt{x}\exp\Big{(}\varepsilon\frac{\log x\log\log x_{1}}{\log x_{1}}+\Big{(}\frac{k}{2}+\varepsilon\Big{)}\frac{\log\log x_{1}\log T_{1}}{\log\log T_{1}}\Big{)}\ll\sqrt{x}\exp\big{(}\varepsilon\sqrt{\log x}\big{)}, \tag{9.13}\] by another relabelling of \(\varepsilon\). Combining (9.12) and (9.13) we obtain that \[\frac{1}{2\pi i}\int_{V_{0}}\frac{x^{s}}{\zeta(s)^{k}}\frac{ds}{s}\ll\sqrt{x}\exp\big{(}\varepsilon\sqrt{\log x}\big{)}. \tag{9.14}\] The same bound holds for the other integrals. Indeed, similar to (9.5) and (9.6) we have \[\sum_{j=1}^{J}\frac{1}{2\pi i}\int_{V_{j}}\frac{x^{s}}{\zeta(s)^{k}}\frac{ds}{s} \ll\sqrt{x}(\log x)^{\frac{k^{2}}{4}+\varepsilon}\sum_{j=0}^{J-1}\exp\Big{(}\varepsilon\frac{\log x\log\log(2^{j}x_{1})}{\log(2^{j}x_{1})}\Big{)}\ll\sqrt{x}(\log x)^{\frac{k^{2}}{4}+1+\varepsilon}\exp\Big{(}\varepsilon\frac{\log x\log\log x_{1}}{\log x_{1}}\Big{)}\ll\sqrt{x}\exp\big{(}\varepsilon\sqrt{\log x}\big{)} \tag{9.15}\] and \[\sum_{j=1}^{J-1}\frac{1}{2\pi i}\int_{H_{j}}\frac{x^{s}}{\zeta(s)^{k}}\frac{ds}{s} \ll\sqrt{x}\sum_{j=0}^{J-2}(2^{j}x_{1})^{\frac{k}{2}-1+\varepsilon}\exp\Big{(}\varepsilon\frac{\log x\log\log(2^{j}x_{1})}{\log(2^{j}x_{1})}\Big{)}\ll\sqrt{x}x_{1}^{-1/2+\varepsilon}(\log x)\exp\Big{(}\varepsilon\frac{\log x\log\log x_{1}}{\log x_{1}}\Big{)}\ll\sqrt{x}\exp\big{(}\varepsilon\sqrt{\log x}\big{)}. \tag{9.16}\]
Similarly to (9.7), (9.8), (9.9), (9.10) and (9.11) we get \[\frac{1}{2\pi i}\int_{H_{J}}\frac{x^{s}}{\zeta(s)^{k}}\frac{ds}{s}\ll\max_{\frac{1}{2}+\varepsilon\frac{\log\log x}{\log x}\leq\sigma\leq 1+\frac{1}{\log x}}\exp\Big{(}(\sigma-1)\log x-k\log|\zeta(\sigma\pm ix)|\Big{)}\] \[\ll\begin{cases}\exp\Big{(}\frac{(k-1)\log x}{2}-\frac{k\log x\log\log\log x}{\log\log x}+O_{k}\big{(}\frac{\log x}{\log\log x}\big{)}\Big{)}&\text{if }\frac{1}{2}+\varepsilon\frac{\log\log x}{\log x}\leq\sigma\leq\frac{1}{2}+o(\frac{1}{\log\log x}),\\ (\log\log x)^{k}&\text{if }1-\frac{1}{\log\log x}\leq\sigma\leq 1+\frac{1}{\log x},\\ x^{\varepsilon}&\text{if }\frac{1}{2}+O(\frac{1}{\log\log x})\leq\sigma\leq 1-\frac{1}{\log\log x},\end{cases}\] \[\ll x^{\varepsilon},\] by Lemma 2.2. This, together with (9.14), (9.15) and (9.16), establishes the first part of the theorem for \(k<1\). ### Assuming Conjecture 1.1 We replace the contour in (9.1) by \(H_{1}\cup V_{0}\), where \[V_{0}:\quad s=\frac{1}{2}+\alpha_{2}+it,\quad|t|\leq[x],\] \[H_{1}:\quad s=\sigma\pm i[x],\quad\frac{1}{2}+\alpha_{2}\leq\sigma\leq 1+\frac{1}{\log x}\] for some \(\frac{1}{\log x}\leq\alpha_{2}\leq 1\) to be chosen later. We encounter no pole in doing so. On one hand, by Conjecture 1.1 we get \[\frac{1}{2\pi i}\int_{V_{0}}\frac{x^{s}}{\zeta(s)^{k}}\frac{ds}{s}\ll\sqrt{x}x^{\alpha_{2}}\Big{(}\frac{1}{\alpha_{2}}\Big{)}^{\frac{k^{2}}{4}}. \tag{9.17}\] On the other hand, we have \[\frac{1}{2\pi i}\int_{H_{1}}\frac{x^{s}}{\zeta(s)^{k}}\frac{ds}{s}\ll\max_{\frac{1}{2}+\alpha_{2}\leq\sigma\leq 1+\frac{1}{\log x}}\exp\Big{(}(\sigma-1)\log x-k\log|\zeta(\sigma\pm i[x])|\Big{)}.\] Like in (9.8) and (9.9), this is \[\ll\exp\bigg{(}(\sigma-1)\log x+\frac{k\log x}{2\log\log x}\log\frac{1}{(2\sigma-1)\log\log x}+O_{k}\big{(}(2\sigma-1)\log x\big{)}+O_{k}\Big{(}\frac{\log x}{\log\log x}\Big{)}+O_{k}\Big{(}\frac{\log x}{(\log\log x)^{2}}\log\frac{1}{2\sigma-1}\Big{)}\bigg{)}\] \[\ll\exp\bigg{(}-\frac{\log x}{2}+\frac{k\log x\log\frac{1}{\alpha_{2}}}{2\log\log x}-\frac{k\log x\log\log\log x}{2\log\log x}+O_{k}\big{(}\alpha_{2}\log x\big{)}+O_{k}\Big{(}\frac{\log x}{\log\log x}\Big{)}+O_{k}\Big{(}\frac{\log x\log\frac{1}{\alpha_{2}}}{(\log\log x)^{2}}\Big{)}\bigg{)} \tag{9.18}\] for \(\frac{1}{2}+\alpha_{2}\leq\sigma\leq\frac{1}{2}+o(\frac{1}{\log\log x})\), and as in (9.10) and (9.11), it is \[\ll(\log\log x)^{k}+x^{\varepsilon}\] for \(\frac{1}{2}+O(\frac{1}{\log\log x})\leq\sigma\leq 1+\frac{1}{\log x}\). In view of (9.18), we choose \[\alpha_{2}=\left\{\begin{array}{ll}\frac{1}{\log x}&\mbox{if $k\leq 2$},\\ \frac{1}{(\log x)^{\frac{2}{k}}(\log\log x)^{1-\varepsilon}}&\mbox{if $k>2$},\end{array}\right.\] and then \[\frac{1}{2\pi i}\int_{H_{1}}\frac{x^{s}}{\zeta(s)^{k}}\frac{ds}{s}\ll\sqrt{x}.\] Combining with (9.17) we obtain the second part of the theorem.
2301.06468
Msanii: High Fidelity Music Synthesis on a Shoestring Budget
In this paper, we present Msanii, a novel diffusion-based model for synthesizing long-context, high-fidelity music efficiently. Our model combines the expressiveness of mel spectrograms, the generative capabilities of diffusion models, and the vocoding capabilities of neural vocoders. We demonstrate the effectiveness of Msanii by synthesizing tens of seconds (190 seconds) of stereo music at high sample rates (44.1 kHz) without the use of concatenative synthesis, cascading architectures, or compression techniques. To the best of our knowledge, this is the first work to successfully employ a diffusion-based model for synthesizing such long music samples at high sample rates. Our demo can be found https://kinyugo.github.io/msanii-demo and our code https://github.com/Kinyugo/msanii .
Kinyugo Maina
2023-01-16T15:18:26Z
http://arxiv.org/abs/2301.06468v1
# Msanii: High Fidelity Music Synthesis on a Shoestring Budget ###### Abstract In this paper, we present Msanii, a novel diffusion-based model for synthesizing long-context, high-fidelity music efficiently. Our model combines the expressiveness of mel spectrograms, the generative capabilities of diffusion models, and the vocoding capabilities of neural vocoders. We demonstrate the effectiveness of Msanii by synthesizing tens of seconds (_190 seconds_) of _stereo_ music at high sample rates (_44.1 kHz_) without the use of concatenative synthesis, cascading architectures, or compression techniques. To the best of our knowledge, this is the first work to successfully employ a diffusion-based model for synthesizing such long music samples at high sample rates. Our demo can be found at https://kinyugo.github.io/msanii-demo and our code at https://github.com/Kinyugo/msanii. ## 1 Introduction Music is a universal language that elicits emotions and connects people from diverse cultures, and is an integral part of society. For decades, researchers have been investigating whether computers can capture the creative process behind music creation and the potential implications for music and artificial intelligence. In recent years, the field of generative modeling has seen significant growth with various techniques, including generative adversarial networks (GANs) Goodfellow et al. (2020), variational autoencoders (VAEs) Kingma and Welling (2013), normalizing flows Rezende and Mohamed (2015), autoregressive models Dhariwal et al. (2020), and diffusion models Sohl-Dickstein et al. (2015); Ho et al. (2020); Kingma et al. (2021), driving progress in various fields. These techniques have achieved human-level performance in tasks such as image generation Rombach et al. (2022); Karras et al. (2020); Dhariwal and Nichol (2021); Saharia et al. (2022); Ramesh et al. (2022), speech generation Kong et al. (2020); Shen et al. (2017), and text generation Brown et al. (2020); Scao et al. (2022), and have advanced music generation Dhariwal et al. (2020); Rouard and Hadjeres (2021); Engel et al. (2019); Marafioti et al. (2019), among other areas. However, efficient high-fidelity music synthesis remains a challenging task in machine learning due to the high dimensionality of audio signals, which makes it difficult for models to learn the long-range structure of music Dhariwal et al. (2020); Dieleman et al. (2018). To address this issue, it is common to learn a lower-dimensional representation of the audio signal, which can reduce computational complexity and allow models to better capture the salient features of music related to fidelity Dhariwal et al. (2020); Dieleman et al. (2018). As an alternative to learning a lower-dimensional representation, Time-Frequency (TF) representations, such as mel spectrograms, provide a powerful and intuitive way to represent features in audio signals. Mel spectrograms, which are a type of TF feature, have been widely used in various applications, including natural language processing Shen et al. (2017), voice conversion Hwang et al. (2020), and singing voice synthesis Liu et al. (2022). They have also been applied successfully to music synthesis Engel et al. (2019); Marafioti et al. (2019); Pasini and Schluter (2022). Mel spectrograms are particularly appealing for music synthesis due to their low resolution, which allows them to capture important musical characteristics while minimizing computational complexity. Autoregressive models and GANs are popular choices for music synthesis, but they each have their own challenges.
Autoregressive models, which have been widely used in the raw waveform domain Dhariwal et al. (2020); Dieleman et al. (2018), are often slow at inference. GANs, which have frequently been employed for music synthesis using TF representations Engel et al. (2019); Marafioti et al. (2019); Pasini and Schluter (2022), can suffer from unstable training, and the adversarial training of multiple networks can lead to low sample diversity while being computationally expensive. In contrast, diffusion-based models offer fast inference compared to autoregressive models, a simple training procedure, and have recently outperformed GANs in terms of quality Dhariwal and Nichol (2021). This makes diffusion-based models an attractive choice for music synthesis. In this paper, we propose a novel approach for music synthesis using mel spectrograms that leverages the benefits of diffusion-based modeling. By combining the expressiveness of mel spectrograms with our novel U-Net architecture and diffusion models, we are able to synthesize minutes of high-fidelity music at a high sample rate. Our method represents a significant advance in the field of music synthesis, as it generates long samples of high-quality music without relying on concatenative synthesis, cascading architectures, or compression techniques. Additionally, we show that our model, Msanii, can be used to solve other audio tasks, such as audio inpainting and style transfer, without the need for retraining. The main contributions of this paper are: * We introduce Msanii, a novel diffusion-based model for long-context, high-fidelity music synthesis in the mel spectrogram domain. To the best of our knowledge, this is the first work to successfully employ a diffusion-based model for synthesis of minutes of audio at high sample rates (44.1 kHz) in the TF domain. * We demonstrate the effectiveness of Msanii by synthesizing tens of seconds (_190 seconds_) of _stereo_ music at a high sample rate (_44.1 kHz_). * We show that Msanii can be used to solve other audio tasks, such as interpolation, style transfer, inpainting and outpainting, without the need for retraining. The rest of the paper is organized as follows: in Section 2 we review related work in music synthesis and diffusion-based models. In Section 3 we describe our proposed method, including the architecture and training of Msanii. In Section 4 we present our experimental setup. In Section 5 we present our results. In Section 6 we discuss potential future work. Finally, in Section 7 we summarize our contributions. **DISCLAIMER:** This paper is a work in progress and has not been finalized. The results presented in this paper are subject to change and should not be considered final. ## 2 Background The high dimensionality of audio signals presents a significant challenge for music synthesis in the raw waveform domain. To accurately represent an audio sample, it is necessary to discretize the continuous signal into a large number of samples, which requires a high sample rate (thousands of samples per second). For music at CD quality, this sample rate is typically 44.1 kHz. This means that a \(\sim\)3-minute-long audio sample will consist of \(\sim\)8 million samples. This complexity increases with the number of channels, as the total number of samples \(T\) becomes \(T=duration\times sample\ rate\times channels\).
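To make these magnitudes concrete, here is a quick back-of-the-envelope computation in plain Python; the figures match those quoted above.

```python
def num_samples(duration_s: float, sample_rate: int, channels: int) -> int:
    """Total number of raw waveform values needed to represent an audio clip."""
    return int(duration_s * sample_rate * channels)

# A ~3-minute mono clip at CD quality: roughly 8 million samples.
print(num_samples(180, 44_100, 1))  # 7938000

# The 190-second stereo setting targeted by Msanii.
print(num_samples(190, 44_100, 2))  # 16758000
```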
The computational demands of synthesizing long audio samples are further compounded by the need to capture a wide range of musical structures, such as timbre, harmony, and melody, as well as to ensure global coherence in terms of form and texture. To address these challenges, it is common to use a lower-dimensional representation of the audio signal that captures important musical features while minimizing computational complexity. Additionally, it is necessary to employ an expressive but efficient generative model. ### 2.1 Mel Spectrograms To address the computational complexity of our task, we propose the use of a lower-dimensional yet expressive representation of audio: Mel Spectrograms. Mel spectrograms are a popular representation of audio used in tasks such as speech synthesis Shen et al. (2017), voice conversion Hwang et al. (2020), and music synthesis Vasquez and Lewis (2019). They are derived from the magnitude spectrogram of the Short-Time Fourier Transform (STFT) and encode frequency in a way that is more perceptually relevant to human hearing. However, the conversion from raw audio to mel spectrograms is not perfectly invertible due to the loss of phase information in the magnitude STFT spectrogram. To reconstruct audio from mel spectrograms, neural vocoders such as MelGAN Kumar et al. (2019), ISTFTNet Kaneko et al. (2022), MCNN Arik et al. (2018) and Phase Gradient Di Giorgi et al. (2022) have been developed to approximate both the magnitude and phase from the mel spectrogram. However, designing a vocoder that reconstructs both the magnitude and phase while remaining lightweight can be challenging. As an alternative, we propose reconstructing only the magnitude spectrogram and approximating the phase using traditional methods. This approach has been demonstrated in previous work such as Adversarial Audio Synthesis Marafioti et al. (2019) and MelNet Vasquez and Lewis (2019). In particular, we use the Griffin-Lim algorithm Griffin and Lim (1984); Perraudin et al. (2013). To reconstruct the magnitude spectrogram, we use a combination of the Spectral Convergence Loss and Log-Magnitude Loss. The Spectral Convergence Loss is defined as: \[\frac{\||STFT(s)|-|STFT(\hat{s})|\|_{F}}{\||STFT(s)|\|_{F}} \tag{1}\] where \(|.|\) represents the magnitude, \(\|.\|_{F}\) is the Frobenius norm, and \(s\) and \(\hat{s}\) are the ground truth and reconstructed signals, respectively. This loss focuses on the large magnitude components of the spectrograms. The Log-Magnitude Loss is defined as: \[\|\log(|STFT(s)|+\epsilon)-\log(|STFT(\hat{s})|+\epsilon)\|_{1} \tag{2}\] where \(\epsilon\) is a small constant value added to prevent taking the logarithm of zero, and \(\|.\|_{1}\) is the \(L_{1}\) norm. This loss focuses on the small magnitude components of the spectrograms.
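A minimal PyTorch sketch of the two losses above; the magnitude spectrograms are assumed precomputed, and the \(\epsilon\) value and the mean reduction are our choices rather than values specified in the paper.

```python
import torch

def spectral_convergence_loss(mag: torch.Tensor, mag_hat: torch.Tensor) -> torch.Tensor:
    # Eq. (1): Frobenius-norm ratio, emphasizing large-magnitude bins.
    return torch.linalg.norm(mag - mag_hat) / torch.linalg.norm(mag)

def log_magnitude_loss(mag: torch.Tensor, mag_hat: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # Eq. (2): L1 distance in log space, emphasizing small-magnitude bins.
    return torch.mean(torch.abs(torch.log(mag + eps) - torch.log(mag_hat + eps)))
```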
### 2.2 Diffusion Inspired by the recent successes of diffusion models Sohl-Dickstein et al. (2015); Ho et al. (2020); Kingma et al. (2021) in solving audio tasks Rouard and Hadjeres (2021); Kong et al. (2020), we chose to employ them for the task of synthesizing mel spectrograms. Diffusion models can be thought of as a Markovian Hierarchical Variational Autoencoder Luo (2022). They define a Markov chain of steps to slowly add random noise to the data and then learn the reverse process to synthesize data samples from noise. #### 2.2.1 Forward Process Given a real data distribution \(q(\mathbf{x}_{0})\), we draw a sample \(\mathbf{x}_{0}\) from the distribution, \(\mathbf{x}_{0}\sim q(\mathbf{x}_{0})\). Then, we define a forward noising process \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1})\) that gradually adds Gaussian noise to the sample according to a predefined schedule \(\{\beta_{t}\in(0,1)\}_{t=1}^{T}\), where \(\beta_{1}<\beta_{2}<\cdots<\beta_{T}\). Specifically, the sample distribution at each time step is given by: \[q(\mathbf{x}_{t}|\mathbf{x}_{t-1})=\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{t}}\mathbf{x}_{t-1},\beta_{t}\mathbf{I}) \tag{3}\] And the distribution over the entire sequence of samples \(\mathbf{x}_{1:T}\) given the initial sample \(\mathbf{x}_{0}\) is given by: \[q(\mathbf{x}_{1:T}|\mathbf{x}_{0})=\prod_{t=1}^{T}q(\mathbf{x}_{t}|\mathbf{x}_{t-1}) \tag{4}\] As \(T\rightarrow\infty\), the distribution of \(\mathbf{x}_{T}\) approaches the standard Gaussian distribution. To sample from the forward distribution, we can draw a sample \(\mathbf{x}_{t}\) at each time step \(t\) from a conditional Gaussian with mean \(\mu_{t}=\sqrt{1-\beta_{t}}\mathbf{x}_{t-1}\) and variance \(\sigma^{2}=\beta_{t}\) as follows: \[\mathbf{x}_{t}=\sqrt{1-\beta_{t}}\mathbf{x}_{t-1}+\sqrt{\beta_{t}}\mathbf{\epsilon}\quad\text{where }\mathbf{\epsilon}\sim\mathcal{N}(\mathbf{0},\mathbf{I}) \tag{5}\] The forward noising process in diffusion models has the property that, using the reparameterization trick, we can sample \(\mathbf{x}_{t}\) at any arbitrary timestep. This property is described in Song et al. (2020); Luo (2022); Weng (2021). Let \(\alpha_{t}=1-\beta_{t}\) and \(\bar{\alpha}_{t}=\prod_{i=1}^{t}\alpha_{i}\). The process can be expressed as follows: \[\begin{split}\mathbf{x}_{t}&=\sqrt{\alpha_{t}}\mathbf{x}_{t-1}+\sqrt{1-\alpha_{t}}\mathbf{\epsilon}_{t-1}\\ &=\sqrt{\alpha_{t}\alpha_{t-1}}\mathbf{x}_{t-2}+\sqrt{1-\alpha_{t}\alpha_{t-1}}\bar{\mathbf{\epsilon}}_{t-2}\\ &=\cdots\\ &=\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\mathbf{\epsilon}\end{split} \tag{6}\] where \(\mathbf{\epsilon}_{t-1},\mathbf{\epsilon}_{t-2},\cdots\sim\mathcal{N}(\mathbf{0},\mathbf{I})\). Here \(\bar{\mathbf{\epsilon}}_{t-2}\) merges two Gaussian distributions with different variances, \(\mathcal{N}(\mathbf{0},\sigma_{1}^{2}\mathbf{I})\) and \(\mathcal{N}(\mathbf{0},\sigma_{2}^{2}\mathbf{I})\); the merged distribution has standard deviation \[\sqrt{(1-\alpha_{t})+\alpha_{t}(1-\alpha_{t-1})}=\sqrt{1-\alpha_{t}\alpha_{t-1}} \tag{7}\]
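The closed-form sampling of Eq. (6) is what makes training efficient. A minimal PyTorch sketch; a linear \(\beta\) schedule is shown for brevity, whereas Msanii uses the Glide cosine schedule (see Section 4.2.2).

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # linear schedule, for illustration only
alpha_bar = torch.cumprod(1.0 - betas, dim=0)  # \bar{alpha}_t

def forward_diffusion_sample(x0: torch.Tensor, t: torch.Tensor):
    """Sample x_t ~ q(x_t | x_0) in one shot (Eq. 6), returning the noise as the target."""
    noise = torch.randn_like(x0)
    ab = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))  # broadcast over non-batch dims
    xt = torch.sqrt(ab) * x0 + torch.sqrt(1.0 - ab) * noise
    return xt, noise
```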
#### 2.2.2 Reverse Process To generate a sample from a Gaussian noise input \(\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\), we reverse the forward process by sampling from \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\). However, \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) depends on the entire dataset, so we learn an approximate model \(p_{\theta}\): \[p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})=\mathcal{N}(\mathbf{x}_{t-1};\mu_{\theta}(\mathbf{x}_{t},t),\Sigma_{\theta}(\mathbf{x}_{t},t)) \tag{8}\] \[p_{\theta}(\mathbf{x}_{0:T})=p(\mathbf{x}_{T})\prod_{t=1}^{T}p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t}) \tag{9}\] The reverse conditional probability becomes tractable when it is conditioned on \(\mathbf{x}_{0}\): \[q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t-1};\tilde{\mu}_{t}(\mathbf{x}_{t},\mathbf{x}_{0}),\tilde{\Sigma}_{t}(\mathbf{x}_{t},\mathbf{x}_{0})) \tag{10}\] Similar to Ho et al. (2020), we fix the variance to: \[\Sigma_{\theta}(\mathbf{x}_{t},t)=\sigma_{t}^{2}\mathbf{I}=\tilde{\beta}_{t}\mathbf{I} \tag{11}\] and learn only the mean \(\mu_{\theta}\). This gives us: \[q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t-1};\tilde{\mu}_{t}(\mathbf{x}_{t},\mathbf{x}_{0}),\tilde{\beta}_{t}\mathbf{I}) \tag{12}\] Using Bayes' rule, we can derive the following equations: \[\tilde{\mu}_{t}(\mathbf{x}_{t},\mathbf{x}_{0})=\frac{\sqrt{\alpha_{t}}(1-\bar{\alpha}_{t-1})}{1-\bar{\alpha}_{t}}\mathbf{x}_{t}+\frac{\sqrt{\bar{\alpha}_{t-1}}\beta_{t}}{1-\bar{\alpha}_{t}}\mathbf{x}_{0} \tag{13}\] \[\tilde{\beta}_{t}=\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\cdot\beta_{t} \tag{14}\] By substituting \(\mathbf{x}_{0}=\frac{1}{\sqrt{\bar{\alpha}_{t}}}(\mathbf{x}_{t}-\sqrt{1-\bar{\alpha}_{t}}\mathbf{\epsilon}_{t})\) into the above equation for \(\tilde{\mu}_{t}(\mathbf{x}_{t},\mathbf{x}_{0})\), we get: \[\tilde{\mu}_{t}(\mathbf{x}_{t},\mathbf{x}_{0})=\frac{1}{\sqrt{\alpha_{t}}}\Big{(}\mathbf{x}_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\mathbf{\epsilon}_{t}\Big{)} \tag{15}\] This equation allows us to compute the mean of the distribution \(q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})\) given \(\mathbf{x}_{t}\) and \(\mathbf{x}_{0}\). The variance of this distribution is fixed to \(\tilde{\beta}_{t}\mathbf{I}\), where \(\tilde{\beta}_{t}\) is given by the equation \(\tilde{\beta}_{t}=\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\cdot\beta_{t}\). We can then sample from this distribution to obtain a sample of \(\mathbf{x}_{t-1}\) given \(\mathbf{x}_{t}\) and \(\mathbf{x}_{0}\). This process can be repeated to generate samples of \(\mathbf{x}_{t-2},\mathbf{x}_{t-3},\dots,\mathbf{x}_{0}\) given \(\mathbf{x}_{T}\). #### 2.2.3 Loss Function As \(\mathbf{x}_{t}\) is available during training, we can use it to reparameterize the Gaussian mean term and predict \(\mathbf{\epsilon}_{t}\) given \(\mathbf{x}_{t}\) and \(t\). This is done with the following equation: \[\tilde{\mu}_{t}(\mathbf{x}_{t},\mathbf{x}_{0})=\frac{1}{\sqrt{\alpha_{t}}}\Big{(}\mathbf{x}_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\mathbf{\epsilon}_{\theta}(\mathbf{x}_{t},t)\Big{)} \tag{16}\] We then use the simplified loss term as described by Ho et al. (2020), resulting in the following equations: \[L_{t}=\mathbb{E}_{t\sim[1,T],\mathbf{x}_{0},\mathbf{\epsilon}_{t}}[\|\mathbf{\epsilon}_{t}-\mathbf{\epsilon}_{\theta}(\mathbf{x}_{t},t)\|_{2}^{2}]=\mathbb{E}_{t\sim[1,T],\mathbf{x}_{0},\mathbf{\epsilon}_{t}}[\|\mathbf{\epsilon}_{t}-\mathbf{\epsilon}_{\theta}(\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\mathbf{\epsilon}_{t},t)\|_{2}^{2}] \tag{17}\]
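The reverse process and the simplified objective translate almost line-for-line into code. A sketch follows, where the noise-prediction network `eps_model(x_t, t)` is a stand-in for the U-Net described in Section 3.

```python
import torch

@torch.no_grad()
def reverse_step(eps_model, xt, t, betas, alpha_bar):
    """One ancestral step x_t -> x_{t-1} using Eqs. (12)-(16)."""
    alpha_t = 1.0 - betas[t]
    eps = eps_model(xt, t)
    mean = (xt - (1.0 - alpha_t) / torch.sqrt(1.0 - alpha_bar[t]) * eps) / torch.sqrt(alpha_t)
    if t == 0:
        return mean
    beta_tilde = (1.0 - alpha_bar[t - 1]) / (1.0 - alpha_bar[t]) * betas[t]  # Eq. (14)
    return mean + torch.sqrt(beta_tilde) * torch.randn_like(xt)

def training_loss(eps_model, x0, alpha_bar):
    """Simplified epsilon-prediction objective, Eq. (17)."""
    t = torch.randint(0, alpha_bar.numel(), (x0.shape[0],), device=x0.device)
    ab = alpha_bar[t].view(-1, *([1] * (x0.dim() - 1)))
    noise = torch.randn_like(x0)
    xt = torch.sqrt(ab) * x0 + torch.sqrt(1.0 - ab) * noise
    return torch.mean((noise - eps_model(xt, t)) ** 2)
```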
## 3 Architecture Motivated by the success of the patch embedding tokenization scheme in the Vision Transformer (ViT) Dosovitskiy et al. (2020), we propose a similar tokenization scheme for audio based on mel spectrograms. Specifically, we view mel spectrograms as a sequence of tokens, where the sequence length is equal to the number of time frames and the dimensionality of each token is equal to the number of mel frequencies. This is similar to taking a patch along the frequency dimension, with a patch size equal to the number of mel frequencies. In contrast to other methods that treat mel spectrograms as images, our approach offers efficiency gains by reducing the context size. For a mel spectrogram with dimensions \(channels\times frequencies\times time\ frames\), our context size is reduced to \(channels\times time\ frames\). Furthermore, we process channels independently, allowing our method to be applicable to an arbitrary number of channels. These considerations have been incorporated into the design of our U-Net (see Section 3.1) and Neural Vocoder (see Section 3.2). ### 3.1 U-Net U-Nets Ronneberger et al. (2015) are a popular choice for image segmentation tasks due to their ability to accurately model fine details and localize features through the use of skip connections. These neural networks have also been widely applied to diffusion modeling Nichol and Dhariwal (2021); Song et al. (2020); Ho et al. (2020). In this work, we propose a U-Net architecture that combines the strengths of U-Nets with those of transformers Vaswani et al. (2017), which are known for their ability to capture long-range dependencies through self-attention mechanisms. The resulting model is able to capture both local and global context while remaining efficient (see Figure 1). We present the components of the U-Net below. #### 3.1.1 Input and Output Layers The input layer (tokenization layer) receives a 2D spectrogram \(\mathbf{X}\in\mathbb{R}^{c\times f\times l}\), where \(c\) is the channel dimension (e.g., mono, stereo), \(f\) is the frequency dimension, and \(l\) is the time frames dimension. It reshapes and linearly projects it along the frequency dimension to obtain \(\mathbf{H}\in\mathbb{R}^{d\times c\times l}\), where \(\mathbf{H}\) is the output and \(d\) is the frequency dimension after projection. The output layer (detokenization layer) expects an input \(\mathbf{H}\in\mathbb{R}^{d\times c\times l}\) and performs the inverse actions of the input layer, linearly projecting the input and reshaping it into \(\mathbf{Y}\in\mathbb{R}^{c\times f\times l}\), where \(\mathbf{Y}\) is the output (see Figure 1). Figure 1: Proposed Msanii Architecture. #### 3.1.2 Residual Block The input to our residual block is a 2D latent feature \(\mathbf{H}\in\mathbb{R}^{d\times c\times l}\) and a timestep embedding feature \(\mathbf{T}\in\mathbb{R}^{k\times 1\times 1}\), where \(d\) is the frequency dimension, \(c\) is the channel dimension, \(l\) is the time frames dimension, and \(k\) is the timestep embedding dimension. We first apply a pre-normalization layer to the input features, then perform a \(3\times 3\) padded convolution without changing the number of features, ensuring that the input and output features have the same dimensions. While the U-Net is not sensitive to the specific type of normalization used, we have found that feature normalization performs better than weight normalization. This may be because feature normalization forces the pre-activations to be Gaussian, resulting in output that is also Gaussian, which is useful for modeling noise that is typically Gaussian. We chose to use Instance Norm Ulyanov et al. (2016) in our implementation, as it has been observed to produce audio with fewer metallic artifacts in our Neural Vocoder (see Section 3.2). After normalizing and projecting the input features, we project the timestep embedding features using a \(1\times 1\) convolution to match the feature dimensions of the latent features. We then sum the timestep embedding features and latent features and feed them into a Multi-Layer Perceptron (MLP). Our MLP is based on ConvNeXt Liu et al. (2022b) with a minor modification: we apply the GELU non-linearity before the first convolution layer. Finally, we apply the residual connection to the outputs of the MLP, and optionally project the residual features to match the dimensions of the MLP outputs.
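A hedged sketch of the residual block just described; the ConvNeXt-style MLP's expansion ratio and other unspecified details are our assumptions, not the paper's.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim: int, t_dim: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(dim)                          # feature pre-normalization
        self.conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1)   # 3x3 padded convolution
        self.t_proj = nn.Conv2d(t_dim, dim, kernel_size=1)          # 1x1 timestep-embedding projection
        self.mlp = nn.Sequential(                                   # ConvNeXt-style MLP, GELU before first conv
            nn.GELU(),
            nn.Conv2d(dim, 4 * dim, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(4 * dim, dim, kernel_size=1),
        )

    def forward(self, h: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
        # h: (batch, d, c, l); t_emb: (batch, t_dim, 1, 1), broadcast over (c, l).
        x = self.conv(self.norm(h))
        x = x + self.t_proj(t_emb)
        return h + self.mlp(x)  # residual connection
```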
#### 3.1.3 Linear Attention The input to our linear attention block is a 2D latent feature tensor \(\mathbf{H}\in\mathbb{R}^{d\times c\times l}\). We first apply Instance Normalization to the input tensor before performing the attention operations. While the choice of attention mechanism does not significantly affect the performance of the U-Net, we have found that Linear Attention Katharopoulos et al. (2020) performs well due to its linear computational complexity. Unlike some transformers Vaswani et al. (2017); Dosovitskiy et al. (2020); Katharopoulos et al. (2020); Liu et al. (2022b), we have observed that placing the attention layer after the residual block leads to better global coherence. Therefore, we adopt the post-attention mechanism for all of our tasks. Note that the U-Net is not sensitive to the specific attention mechanism used, which may be because attention layers are used in the deeper layers, where the context size is already small enough for global context to be easily captured. #### 3.1.4 Downsampling and Upsampling Layers For downsampling the input features, we use a single \(1\times 3\) convolution layer with a stride of \(1\times 2\) and padding of \(0\times 1\). This effectively reduces the time dimension of the input features by a factor of 2 while preserving the other dimensions. For upsampling, we use a single \(1\times 4\) transposed convolution layer with a stride of \(1\times 2\) and padding of \(0\times 1\). This effectively increases the time dimension of the input features by a factor of 2 while preserving the other dimensions. ### 3.2 Neural Vocoder Our neural vocoder design is inspired by ISTFTNet Kaneko et al. (2022) and features a simple structure. It takes in an input mel spectrogram \(\mathbf{X}\in\mathbb{R}^{c\times f_{m}\times l}\), where \(f_{m}\) is the frequency dimension of the mel spectrogram, and passes it through an input layer (see Section 3.1.1). The mel spectrogram is then processed through a single residual block (see Section 3.1.2), without the timestep embedding features, and finally through an output layer (see Section 3.1.1). This produces an STFT spectrogram \(\mathbf{Y}\in\mathbb{R}^{c\times f_{s}\times l}\), where \(f_{s}\) is the frequency dimension of the magnitude STFT spectrogram. After the output layer, we apply an exponential activation to transform the magnitude spectrogram from log-space to linear-space (see Figure 1). ## 4 Experiments ### 4.1 Dataset The dataset used for this work is the POP909 dataset Wang et al. (2020), which consists of 909 MIDI files of popular pop songs. We synthesize 44.1 kHz, stereo audio from the MIDI files using FluidSynth Newmarch (2017). ### 4.2 Training Details #### 4.2.1 Data Preprocessing To synthesize mel spectrograms from the raw audio, we use an STFT window size of 2048 with a hop size of 1024, and apply 128 mel filterbanks to the resulting magnitude spectrogram. Since diffusion models typically operate on data in the range \([-1,1]\) and expect the data to be Gaussian, we propose learning data-specific preprocessing techniques such as moving average parameters for standard scaling (see Algorithm 1) and min-max scaling (see Section A for more details).
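The referenced Algorithm 1 is not reproduced in this excerpt; the following is a minimal sketch of what moving-average standard scaling typically looks like, with the momentum-based update being our assumption.

```python
import torch

class RunningStandardScaler:
    """Tracks running mean/variance over minibatches and standardizes with them."""
    def __init__(self, momentum: float = 0.01):
        self.momentum = momentum
        self.mean = None
        self.var = None

    def update(self, x: torch.Tensor) -> None:
        m, v = x.mean(), x.var()
        if self.mean is None:
            self.mean, self.var = m, v
        else:
            self.mean = (1 - self.momentum) * self.mean + self.momentum * m
            self.var = (1 - self.momentum) * self.var + self.momentum * v

    def transform(self, x: torch.Tensor) -> torch.Tensor:
        return (x - self.mean) / torch.sqrt(self.var + 1e-8)

    def inverse_transform(self, z: torch.Tensor) -> torch.Tensor:
        # Undo the scaling, e.g., before vocoding generated spectrograms.
        return z * torch.sqrt(self.var + 1e-8) + self.mean
```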
#### 4.2.2 Model Training For our U-Net model, we set the width to 256 and use 2 U-Net blocks per resolution before applying downsampling/upsampling layers. This results in a total of 14 U-Net blocks in the encoder and decoder, yielding a model with 49.8 million parameters. We set the timestep dimensionality to 128 and train the model for 110,000 steps using the Adam optimizer with a linear warmup of 500 steps. The value of \(\beta_{1}\) for the Adam optimizer is set to 0.5, and the learning rate is set to 0.0002. We also train an exponential moving average version of the U-Net. The audio is limited to a length of 8,387,584 samples (190 seconds), chosen so that the time dimension of the resulting spectrogram is divisible by the number of downsampling layers. We use a batch size of 4 and train the model on a single GPU with 16 GB of memory. The specific GPU used may vary depending on availability. For the Neural Vocoder model, we set the width to 256 and use a single residual block between the input and output layers. This results in a model with 1.4 million parameters. We train the model for 40,000 steps using the Adam optimizer with a linear warmup of 500 steps. The value of \(\beta_{1}\) for the Adam optimizer is set to 0.5, and the learning rate is set to 0.0002. The audio is limited to a length of 523,264 samples (11 seconds), chosen so that the time dimension of the resulting spectrogram is divisible by the number of downsampling layers. We use a batch size of 8 and train the model on a single GPU with 16 GB of memory. The specific GPU used may vary depending on availability. Both of our models are trained using 16-bit floating point precision. For diffusion, we use the diffusers library von Platen et al. (2022) implementation of the DDIM algorithm Song et al. (2020) with 1000 timesteps and the cosine noise schedule from Glide Nichol et al. (2021) (see Section B for more details). ### 4.3 Evaluation Our model is currently under active development, so we have performed manual evaluation of the samples. In this evaluation, we have focused on the long-term coherence and harmony of the generated samples. We have randomly generated samples with different seeds, rather than cherry-picking specific samples. However, we have not yet implemented any quantitative metrics for evaluating the performance of the model. ## 5 Results ### 5.1 Sampling To evaluate the quality of the generated samples, we use subjective evaluations by human listeners. We generate our samples using 200 steps of DDIM and 200 iterations of Griffin-Lim. Overall, we observe that the generated samples have good long-term coherence, with the ability to maintain coherence for approximately 3 minutes. The samples also display diverse structures, including repeating patterns throughout the entire song. However, the generated samples do exhibit some degradation in quality compared to human-generated music, particularly in terms of realism and naturalness. This may be due in part to the use of Griffin-Lim for phase reconstruction. One particularly notable strength of the generated samples is their diversity, despite being trained on a relatively small dataset. This suggests that the model is able to learn generalizable patterns from the training data. We also observe that the model struggles with global coherence early on in training, and that the loss does not show significant improvement as training progresses. However, we do notice that with longer training, the model is able to achieve better global coherence, and overall sample quality improves. We do not observe any significant improvements by increasing the number of DDIM sampling steps. This suggests that training the model with a shorter noise schedule may lead to faster sampling. Further experimentation will be necessary to confirm this possibility.
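At inference time the pipeline is: DDIM sampling in mel space, the neural vocoder for magnitudes, then Griffin-Lim for phase. A hedged sketch using the diffusers scheduler API; `unet` and `vocoder` stand in for the trained models, and the tensor shape is our assumption.

```python
import torch
import torchaudio
from diffusers import DDIMScheduler

scheduler = DDIMScheduler(num_train_timesteps=1000, beta_schedule="squaredcos_cap_v2")
scheduler.set_timesteps(200)  # 200 DDIM steps, as in the text

x = torch.randn(1, 2, 128, 8192)  # (batch, channels, mel bins, time frames) -- assumed shape
for t in scheduler.timesteps:
    with torch.no_grad():
        eps = unet(x, t)                      # hypothetical noise-prediction U-Net
    x = scheduler.step(eps, t, x).prev_sample

mag = vocoder(x)                              # mel -> linear magnitude STFT (Section 3.2)
griffin_lim = torchaudio.transforms.GriffinLim(n_fft=2048, hop_length=1024,
                                               n_iter=200, power=1.0)
waveform = griffin_lim(mag)                   # phase reconstruction
```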
### 5.2 Audio-to-Audio (Style Transfer) Starting with a mel spectrogram \(\mathbf{x}_{0}\), we use Equation (6) to add noise at a desired timestep \(t\), resulting in a noised mel spectrogram \(\mathbf{x}_{t}\). We then employ the reverse diffusion process (described in Subsection 2.2.2) to generate variations of the original audio that are more similar to our training data. Our findings indicate that at low noise levels, the generated audio maintains the structure of the original audio while adapting the instruments and vocals to match those in the training data. However, the generated audio is too noisy at these low noise levels. On the other hand, when the noise level is higher, the generated audio more closely resembles the training data, but the structure of the original audio is largely lost. We also note that percussive sounds, such as drums, are less sensitive to increases in noise levels. Even at high noise levels, the overall structure of the generated audio is preserved. Figure 2: Illustrated, from left to right, are samples generated with 200 DDIM steps and 200 Griffin-Lim iterations. ### 5.3 Interpolation To perform audio interpolation using diffusion models, we start with two mel spectrograms \(\mathbf{x}_{0}^{1}\) and \(\mathbf{x}_{0}^{2}\) and use Equation (6) to add noise at the desired time step \(t\), resulting in noised mel spectrograms \(\mathbf{x}_{t}^{1}\) and \(\mathbf{x}_{t}^{2}\). We then interpolate the two noised spectrograms as follows: \(\mathbf{x}_{t}=\gamma\mathbf{x}_{t}^{1}+(1-\gamma)\mathbf{x}_{t}^{2}\), where \(\gamma\in[0,1]\) is the interpolation ratio. We apply the reverse diffusion process (described in Subsection 2.2.2) to generate interpolations of the two audio sources that are more similar to our training data. We find that percussive sounds tend to be more prominent in the generated audio, even when the interpolation ratio is low. Similar to the results of the audio-to-audio task (see Subsection 5.2), we observe that at low noise levels the musical structure of the original audio is preserved, but the generated audio is noisy. On the other hand, at high noise levels, the musical structure of the generated audio closely resembles that of the training data. ### 5.4 Inpainting In inpainting, we use a binary mask to identify the sections of an audio signal that should be kept and those that should be removed. We then apply the RePaint algorithm Lugmayr et al. (2022), as implemented by von Platen et al. (2022), to fill in the masked sections. However, our results show that the inpainted sections often lack rhythm and do not accurately capture the melody and harmony of the original audio. The inpainted audio sounds like a completely novel sample and does not resemble the original audio at all. This leads to a sudden change in the musical structure of the song in the inpainted sections. We are not sure why this issue occurs. Further experimentation will be necessary to investigate and address this problem. Figure 4: Original samples for interpolation, featuring a piano sample on the left and a drum loop on the right. Figure 5: Illustrated, from left to right, are interpolation samples generated with a fixed noise timestep of \(t=200\) and varying ratios of the original samples: 0.1, 0.5, and 0.9, representing the proportion of the first original sample in each interpolated sample, respectively. Figure 3: Illustrated, from left to right, are audio-to-audio samples of a drum loop with increasing noise timesteps, \(t\): \(t=0\), \(t=100\), \(t=500\), and \(t=900\).
Figure 6: Illustrated, from left to right, are interpolation samples generated with a fixed ratio of 0.5 and increasing noise timesteps \(t\): \(t=100\), \(t=500\), and \(t=900\), respectively. Figure 7: Illustrated, from left to right: an original drum loop sample, and three inpainted versions generated using different seeds. The inpainted section in each sample is highlighted by the red box. ### 5.5 Outpainting Outpainting involves extending the audio beyond the original recording by filling in additional sections with synthesized audio. To perform outpainting, we use the same algorithm as for inpainting (see Subsection 5.4). We take half of the original audio and concatenate it with an empty spectrogram, then specify a mask to inpaint the empty part. Unfortunately, our results are sub-optimal. The outpainted sections often sound different from the original audio and often lack rhythm. This is more pronounced because the different spans of audio that are outpainted result in regions of sudden change, making the audio sound like multiple different sources concatenated together. We are not sure why this issue occurs. Further experimentation will be necessary to investigate and address this problem. ## 6 Future Work There are several directions in which this work could be extended to improve the quality and usefulness of the approach in a music production setting. Some potential avenues for future research include: ### Conditional Generation In this work, we have focused on unconditional generation, meaning that there is little to no control over the synthesized samples. However, in a music production setting, it is important to allow some control over the generated music. Possible approaches for adding controllability to the synthesis process include: * Conditioning the model on factors such as lyrics, mood, or MIDI input. This could allow users to specify the content or style of the generated music. * Allowing users to provide feedback during synthesis, either through explicit input or by interacting with the generated audio. This could allow users to shape the output in real-time or guide the synthesis process towards their desired results. ### Evaluation in Realistic Settings Our model has not yet been evaluated in a realistic setting, so it is important to determine its usefulness and effectiveness in music production. Possible approaches for evaluating the model include: * Conducting user studies or case studies to assess the usefulness and effectiveness of the techniques in a music production setting. This could involve gathering feedback from music producers, musicians, or other industry professionals. * Comparing the model's output to that of human musicians or existing music production tools, using metrics such as subjective quality or task-specific performance. This could help to identify areas where the model excels or falls short compared to human or established standards. ### Generalization to Other Audio Tasks While this work has focused on music synthesis, we believe that the model could potentially be applied to other audio processing tasks as well. Some possible areas for exploration include: * Classification tasks, such as genre classification or instrument recognition. This could help to determine the model's ability to learn and represent musical concepts. * Audio restoration tasks, such as noise reduction or audio enhancement. This could help to evaluate the model's ability to manipulate and improve audio signals.
### Improvements to Model Components There are several ways in which the performance of the model and its components could be improved: * Addressing the suboptimal performance of inpainting and outpainting for interactive audio design. This could involve redesigning the inpainting algorithm or incorporating new techniques for synthesizing audio. * Scaling the model to improve its performance. This could help evaluate the scaling laws of the model. * Incorporating new advancements in diffusion model sampling to allow for near real-time synthesis. This could help to make the model more responsive and interactive in a music production setting. Figure 8: Illustrated from left to right: an original drum loop sample, and three outpainted versions generated using different seeds. The start of the outpainted section in each sample is highlighted by the red line. ### Expanding the Range of Generated Music In this work, we have focused on generating music from a single instrument. However, there are many ways in which the model could be expanded to generate a wider range of music: * Examining how the model handles more complex data with multiple instruments and vocals. This could allow the model to generate a wider range of musical styles and textures. * Exploring the use of multi-track or stem generation, allowing users to have explicit control over each generated instrument. This could be particularly useful in a music production setting. * Investigating the use of the model for generating music in different styles or genres, such as electronic, classical, or world music. This could help to assess the model's ability to learn and represent diverse musical traditions. Overall, there is significant potential for further research on our approach and its applications in the field of music synthesis AI, and we believe that these techniques have the potential to have a significant impact in the field. ## 7 Conclusion In this work, we have introduced Msanii, a novel diffusion-based model for synthesizing long-context, high-fidelity music efficiently. By combining the expressiveness of mel spectrograms, the generative capabilities of diffusion models, and the vocoding capabilities of neural vocoders, Msanii is able to generate high quality audio. Our results demonstrate Msanii's capabilities for generating minutes of coherent audio efficiently, and we have also discussed the potential for Msanii to be used in various applications in the field of music production. With its strong performance and versatility, we believe that Msanii has significant potential for further research and development. Overall, Msanii shows promise as a powerful tool for music synthesis and production. ## 8 Acknowledgement We would like to express our gratitude to OpenAI for providing access to ChatGPT, which has been instrumental in revising our paper. The use of ChatGPT has greatly improved the clarity and readability of our work, and we are grateful for the assistance it has provided.
2304.05967
Angler: Helping Machine Translation Practitioners Prioritize Model Improvements
Machine learning (ML) models can fail in unexpected ways in the real world, but not all model failures are equal. With finite time and resources, ML practitioners are forced to prioritize their model debugging and improvement efforts. Through interviews with 13 ML practitioners at Apple, we found that practitioners construct small targeted test sets to estimate an error's nature, scope, and impact on users. We built on this insight in a case study with machine translation models, and developed Angler, an interactive visual analytics tool to help practitioners prioritize model improvements. In a user study with 7 machine translation experts, we used Angler to understand prioritization practices when the input space is infinite, and obtaining reliable signals of model quality is expensive. Our study revealed that participants could form more interesting and user-focused hypotheses for prioritization by analyzing quantitative summary statistics and qualitatively assessing data by reading sentences.
Samantha Robertson, Zijie J. Wang, Dominik Moritz, Mary Beth Kery, Fred Hohman
2023-04-12T16:43:39Z
http://arxiv.org/abs/2304.05967v1
# Angler: Helping Machine Translation Practitioners Prioritize Model Improvements ###### Abstract. Machine learning (ML) models can fail in unexpected ways in the real world, but not all model failures are equal. With finite time and resources, ML practitioners are forced to prioritize their model debugging and improvement efforts. Through interviews with 13 ML practitioners at Apple, we found that practitioners construct small targeted test sets to estimate an error's nature, scope, and impact on users. We built on this insight in a case study with machine translation models, and developed Angler, an interactive visual analytics tool to help practitioners prioritize model improvements. In a user study with 7 machine translation experts, we used Angler to understand prioritization practices when the input space is infinite, and obtaining reliable signals of model quality is expensive. Our study revealed that participants could form more interesting and user-focused hypotheses for prioritization by analyzing quantitative summary statistics and qualitatively assessing data by reading sentences. ## CCS Concepts * **Human-centered computing \(\rightarrow\) Visualization systems and tools; Interactive systems and tools.** * **Computing methodologies \(\rightarrow\)_Machine learning_; _Artificial intelligence_.** ## Keywords Model evaluation, machine translation, visual analytics ## 1. Introduction In machine learning (ML), out-of-sample evaluation metrics are used to approximate how well a model will perform in the real world. However, numerous high-profile failures have demonstrated that aggregate performance metrics only estimate how a model will perform _most of the time_ and obscure harmful failure modes (e.g. (12; 29; 58; 81)). In response, researchers have explored how to anticipate model failures before they impact end users. For example, disaggregated error analysis has helped identify errors that impact people with marginalized identities. Prominent examples include facial recognition models failing to recognize women with dark skin tones (14) or translation models perpetuating gender stereotypes (118). However, subgroups where a model fails can be highly contextual, specific, and may not match any social category (e.g., "men wearing thin framed glasses" (18) or "busy/cluttered workspace" (30)). It remains an open challenge for ML practitioners to detect _which_ specific use case scenarios are likely to fail out of a possibly infinite space of model inputs, and to prioritize _which_ failures have the greatest potential for harm (7; 43). With finite time and resources, where should machine learning practitioners spend their model improvement efforts? **In this work, we aim to help practitioners detect and prioritize underperforming subgroups where failures are most likely to impact users.** Towards this goal, we contribute the following research: * **A formative interview study with 13 ML practitioners at Apple** to understand their process for prioritizing and diagnosing potentially under-performing subgroups (§ 3). Practitioners rely on the model type, usage context, and their own values and experiences to judge error importance. To test _suspected_ issues, practitioners collect similar data to form **challenge sets**. Using a challenge set, practitioners rely on a combination of signals from model performance and usage patterns to gauge the prevalence and severity of a failure case.
* **Angler** (Fig. 1), an open-source1 interactive visualization tool for supporting error prioritization for machine translation (MT) (SS 4). Since our research centers on the issue of _prioritization_ (rather than specific error identification), we chose an ML domain where practitioners cannot directly observe model errors. MT developers do not speak all the languages that their translation models support. They rely on proxy metrics like BLEU (83) to estimate model performance but ultimately depend on human expert translators to obtain any ground truth. Since gathering feedback from human translators is expensive and time-consuming, careful allocation of annotation resources is crucial. To help MT practitioners prioritize suspected error cases that most align with user needs, we worked with an industry MT product team to develop Angler. By adapting familiar visualization techniques such as _overview + detail_, _brushing + linking_, and _animations_, Angler allows MT practitioners to explore and prioritize potential model performance issues by combining multiple model metrics and usage signals.
* **A user study of 7 MT practitioners using Angler** to assess the relative importance of potentially under-performing subgroups (SS 5). MT practitioners completed a realistic exercise to allocate a hypothetical budget for human translators. Observing MT practitioners using Angler revealed how they use their intuition, values, and expertise to prioritize model improvements. Direct inspection of data showed the potential to encourage more efficient allocation of annotation resources than would have been possible by solely relying on quantitative metrics. While rule-based error analysis allowed participants to more successfully find specific model failure patterns, exploring data grouped by topic encouraged practitioners to think about how to improve support for specific use cases. The study also prompted discussion for future data collection and helped practitioners imagine new features for translation user experiences.

Footnote 1: Angler code: [https://github.com/apple/ml-translate-vis](https://github.com/apple/ml-translate-vis).

No model is perfect, and large production models have a daunting space of potential error cases. Prioritization of subgroup analysis is a practical challenge that impacts model end users. By exploring prioritization in the context of MT, where there are no reliable quality signals for previously unseen model inputs, we highlight the value of flexible visual analytics systems for guiding choices and trade-offs. Our findings support the potential for mixed-initiative approaches: automatic visualizations and challenge sets help reveal areas of model uncertainty, while human ML practitioners use their judgment to decide where to spend time and money on deeper investigation.

## 2. Related Work

This research builds on substantial prior work across general ML evaluation practices, visualization tooling for ML error analysis, and a broad body of work from our target domain, machine translation.

### 2.1. ML Evaluation and Error Analysis

First, we review standard evaluation practices in ML, and discuss how visualization tools can support ML error discovery.

#### 2.1.1. How do Practitioners Evaluate ML Models?
Standard practice in machine learning is to evaluate models by computing aggregate performance metrics on held-out test sets before using them in the real world (offline evaluation) (Beng et al., 2015; Chen et al., 2016). The goal of using held-out test sets, i.e., data that was not used during model development, is to estimate how well the model will generalize to real-world use cases. However, offline evaluations are limited. For example, held-out datasets can be very different from real usage data (Rao et al., 2016; Wang et al., 2017), as data in the wild is often noisy (Rao et al., 2016) and the real world is ever-changing (Wang et al., 2017). Held-out datasets also tend to contain the same biases as the training data, so they cannot detect potentially harmful behaviors of the model (Wang et al., 2017; Wang et al., 2017). While summarizing a model's performance in aggregate metrics is undeniably useful, it is insufficient for ensuring model quality.

To overcome these limitations, researchers have proposed additional approaches to help discover model weaknesses (e.g., (Beng et al., 2015; Chen et al., 2016; Wang et al., 2017)). For example, practitioners can apply subgroup analysis to discover fairness issues (Krause et al., 2016), use perturbed adversarial examples to evaluate a model's robustness to noise (Chen et al., 2016; Chen et al., 2016; Wang et al., 2017), create rule-based unit tests to detect errors (Chen et al., 2016; Wang et al., 2017), and conduct interactive error analysis to expand known failure cases (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). ML practitioners also continuously monitor a deployed model's performance and distribution shifts over time (Chen et al., 2016). We build on this work by focusing on the question of _prioritization_: how ML practitioners judge where to spend their time and resources among many possible model failure cases. This understanding can help inform the design of future tooling and techniques for surfacing model issues that are more attuned to urgency or severity.

#### 2.1.2. Visualization Tools for Supporting Error Discovery

Interactive visualization is a powerful method for helping ML developers explore and interpret their models (Chen et al., 2016; Wang et al., 2017). While many visualizations have been built to help practitioners evaluate models over time, one area of recent work has focused on designing and developing analytic tools for ML error discovery (e.g., (Beng et al., 2015; Chen et al., 2016; Wang et al., 2017; Wang et al., 2017)). For example, FairVis (FairVis, 2017) uses visualizations to help ML developers discover model bias by investigating known subgroups and exploring similar groups in tabular data. Similarly, Visual Auditor (Wang et al., 2017) automatically surfaces underperforming subgroups and leverages graph visualizations to help practitioners interpret the relationships between subgroups and discover new errors. For image data, explAIner (Beng et al., 2015) combines interactive visualization and post-hoc ML explanation techniques (e.g., (Beng et al., 2015; Chen et al., 2016)) to help practitioners diagnose problems with image classifiers. For text data, Seq2Seq-Vis (FairVis, 2017) helps practitioners debug sequence-to-sequence models by visualizing the model's internal mechanisms throughout each of its inference stages. The success of these recent visual ML diagnosis systems highlights the outstanding potential of applying visualization techniques to help ML developers detect errors.
Instead of visualizing a model's internals (e.g., (Beng et al., 2015; Chen et al., 2016)), we treat ML models as black-box systems and focus on probing their behaviors on different data subsets. While prior systems have focused on ML applications like image captioning where errors are directly observable (FairVis, 2017; FairVis, 2017), we designed Angler for the more challenging modeling domain where practitioners cannot always spot-check errors and must rely on proxy metrics to estimate the likelihood of an error.

### 2.2. Evaluating Machine Translation Models

Evaluating translation quality is extremely nuanced and difficult (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). Language can mean different things and be written in different ways by different people at different times. There are also often multiple "correct" translations of the same input (Wang et al., 2017). The gold standard for machine translation evaluation is to have professional translators directly judge the quality of model outputs, for instance by rating translation fluency and adequacy (Fair, 2017). There are also automatic metrics for machine translation--such as BLEU (Wang et al., 2017), ChrF (Wang et al., 2017), and METEOR (Chen et al., 2016)--which measure the similarity between a candidate text translation (model output) and one or more reference translations. Intuitively, these metrics apply different heuristics to measure token overlap between two sentences. While these metrics are less reliable and nuanced than human judgment (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), they are intended to correlate as much as possible with human judgments and are widely used for comparing the aggregate performance of different MT models.

An overarching challenge in MT evaluation is that it is especially resource intensive. Both human and automatic evaluation depend on the expertise of human translators, either to directly judge translation quality or to generate reference translations. Since translators have high levels of expertise and are often difficult to find for rare language pairs (Wang et al., 2017; Wang et al., 2017), it is expensive to evaluate translation quality if the input data does not already have a reference translation (e.g., users' requests to a model). In addition, it is difficult to maintain consistent quality within and across human evaluators (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). Since evaluating the quality of translations from real-world use cases requires human annotation, online monitoring and debugging of MT models presents a resource allocation problem. In this work, we explore how interactive visualization of online model usage might help MT practitioners prioritize data for human evaluation.

#### 2.2.1. Subgroups in Machine Translation

More recently, researchers have explored how to systematically identify specific kinds of errors in MT models (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). Many of these are language-dependent challenge sets that probe the syntactic competence of MT models (Chen et al., 2016; Chen et al., 2016; Wang et al., 2017). For example, Isabelle et al. introduce a dataset of difficult sentences designed to test linguistic divergence phenomena between English and French. Stanovsky et al.
analyzed sentences with known stereotypical and non-stereotypical gender-role assignments in MT, which falls within a broader body of work on detecting gender bias in MT (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). While these approaches deepen our understanding of specific model failure modes, it is unclear how different errors impact end users of MT models. As a recent survey suggests (Wang et al., 2017), most of these challenge test sets are created either manually by MT experts or automatically with linguistic rule-based heuristics (e.g., (Beng et al., 2015; Chen et al., 2016; Wang et al., 2017)). An alternative approach has been to examine the performance of MT models in specific domains like law (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017) or healthcare (Wang et al., 2017; Wang et al., 2017). These domain-specific challenge sets are deeply informed by knowledge of a particular use case, but are limited in scope. It is difficult for researchers to develop broader challenge sets guided by real users' needs because we lack a clear understanding of how people use MT models, and how they can be impacted by errors. In this work, we strive to narrow this gap by working with an industry MT team to understand how practitioners might prioritize model improvements based on their users' needs.

#### 2.2.2. Visualization Tools for Evaluation in Machine Translation

There is a growing body of research focused on designing visual analytics tools for MT researchers and engineers (e.g., (61; 84)). For example, with the Chinese Room system (1), MT practitioners can interactively decode and correct translations by visualizing the source and translation tokens side by side. Similarly, NMTVis (78), SoftAlignments (105), and NeuralMonkey (41) use interactive visualization techniques such as parallel coordinate plots, node-link diagrams, and heatmaps to help MT practitioners analyze attention weights and verify translation results. MT researchers also use visual analytics tools (e.g., (56; 80; 131)) to better understand MT evaluation metrics such as BLEU, ChrF, and METEOR scores. For example, iBLEU (72) allows researchers to visually compare BLEU scores between two MT models at both corpus and sentence levels. VEMV (119) uses an interactive table view and bar charts to help researchers compare MT models across various metric scores. In contrast, our work focuses on evaluation based on _usage_ of a deployed model. We use interactive visualization as a probe to understand how practitioners prioritize model evaluation when reliable quality signals are expensive to obtain. Our findings provide insight into what kind of information practitioners need to assess potential model failures with respect to their impact on users.

## 3. Interview Study: Prioritizing Model Improvements

This work began in close collaboration with an industry machine translation team, with the goal of helping them prioritize model debugging and improvement resources on problems that had the greatest potential to impact end users. From initial conversations with team members, we learned that their existing process for identifying and addressing problems was largely driven by specific errors (e.g., bug reports, or biases surfaced in academic research), or based on random sampling of online requests. Further, this process was limited to team members with the technical expertise to conduct one-off data analyses.
To gain insight into a broader range of existing approaches to prioritization, we turned to practitioners across other ML domains. We conducted a semi-structured interview study with 13 ML practitioners at Apple. In this section, we describe how practitioners identify and solve specific issues with ML models that impact the user experience. First, we discuss how practitioners navigate a large space of possible failure cases (SS 3.2). Next, we describe how they build challenge sets to assess the cause, scope, and severity of an issue, which then informs which issues they address and how they fix them (SS 3.3). At each stage, we highlight how practitioners bring a range of approaches and perspectives to the task of prioritization. We synthesize these findings into four design implications for tooling to support prioritization in model debugging (SS 3.4).

### 3.1. Data Collection and Analysis

We recruited practitioners through internal mailing lists related to ML and through snowball sampling at Apple. Each interview lasted between 30 and 45 minutes. We recorded the interviews when participants gave permission, and otherwise took detailed notes. The study was approved by our internal IRB.

We recruited practitioners who have worked on developing and/or evaluating models that are embedded in user-facing tools and products. Incorrect, offensive, or misleading model predictions are detrimental to users' experiences with these models. Therefore, engineers and data scientists who are working on evaluating and improving user-facing ML models are more likely to consider how their models shape users' experiences than other kinds of ML practitioners. Indeed, we found in our interviews that participants often considered how different kinds of model failures may impact end users. An overview of each participant's primary ML application is shown in Table 1.

Two authors conducted inductive qualitative analysis of the interview data. One author conducted three rounds of open coding, synthesizing and combining codes each round (75). Next, a second author took the code book in development and independently coded two interviews, adding new codes where relevant and noting disagreements. These two authors then discussed these transcripts and codes to ensure mutual understanding and shared interpretation of the codes, and converged on a final code book. Lastly, they used this code book to code half of the transcripts each.

### 3.2. Sourcing Potential Issues

Out of the many ways an ML model could fail, we found that practitioners want to prioritize those that are most consequential for end users. Participants discussed three approaches to find such issues: (1) analyzing errors reported by users; (2) brainstorming potential errors in collaboration with domain experts; and (3) comparing usage patterns against model training data to find areas where the distributions of these two data sources differ.

#### 3.2.1. User Testing and User-Reported Errors

Six participants discussed identifying potential issues through direct feedback from users or from user testing (P3, P4, P5, P7, P10, P13). Even _ad hoc_ testing with a small sample of people can reveal issues that are not surfaced in standard offline evaluations. For example, P5 once _"just showed the [model-driven] app to other people"_ and found a _"weird edge case"_ where the model always (and often erroneously) classified images containing hands as a certain output class.
This was an error that was not surfaced in offline model evaluations because the test data was drawn from the same distribution as the training data, and both contained a spurious correlation between images with hands and this particular output class. _"That's why you do your user testing"_ (P5).

| Participant | ML Application | Role |
| --- | --- | --- |
| P1 | Business forecasting | Data Scientist |
| P2 | Multiple NLP tasks | Data Scientist |
| P3 | Image segmentation | ML Engineer |
| P4 | ML Tooling | ML Engineer |
| P5 | Image classification | Data Scientist |
| P6 | Image classification | Research Scientist |
| P7 | Various CV tasks | ML Manager |
| P8 | Image classification | ML Engineer |
| P9 | Resource use forecasting | ML Engineer |
| P10 | Image segmentation | Research Scientist |
| P11 | Recommender systems | ML Manager |
| P12 | Image captioning | Robustness Analyst |
| P13 | Gesture recognition | Research Scientist |

Table 1. Primary type of machine learning application that each interview participant works on.

User feedback outside of user testing can be difficult to source. In some settings, failures are detectable from signals in usage data, e.g., whether a user accepts or rejects a suggested word from a predictive text model [P5]. More often, real users need to take additional steps to report errors, which they are unlikely to do for every error they encounter: _"I think it takes a lot of [effort] and willingness to go and file these things"_ [P4]. In the most difficult case, users are not _able_ to assess prediction quality themselves (e.g., if someone is using MT to translate to or from a language they do not know). In such contexts, direct user feedback is particularly rare.

#### 3.2.2. Brainstorming with Domain Experts

Another approach is to brainstorm potential failure modes with people who hold specialized knowledge of what may be both _important_ to users and _challenging_ for the model [P1, P4, P9, P12, P13].

_"Sometimes we involve partners, like other teams or providers who are specialized in the area, to attack the model, make the model fail."_ -- P4

P13 works on gesture recognition models and followed this approach of brainstorming potential errors. P13 had a deep understanding of how their model worked, and thus what kinds of inputs might be difficult to classify accurately. They then collaborated with designers and accessibility experts, who have a deep understanding of users' needs, to identify how the model's weaknesses lined up with _realistic_ and important use cases. Sometimes ML practitioners have built this kind of expertise themselves over years of working with a similar model type [P1], or through precedent with prior reported model failures [P7]. P4 and P12 pointed to academic research (e.g., work published at venues focused on fairness and ethics in AI), the press, and social media as additional helpful sources of potential failure cases. These sources, while not necessarily directly related to any specific model they are working on, can help practitioners understand patterns in ML failures more systematically, and anticipate high-stakes failures.

Brainstorming is particularly useful for identifying _types_ of failures that could impact users (102; 104). However, it is difficult to translate these types into actual failure cases and keep them up to date (102; 10).

_"We can go with things like what's known as the OVS list--offensive, vulgar, slur.
Those are quite obvious, but things can be more subtly offensive... Frankly, there are ways to be offensive that we just simply probably haven't anticipated, and language evolves and slang becomes apparent, and even the global situation changes and things that weren't offensive before could become offensive."_ -- P7

#### 3.2.3. Identifying Usage Patterns with Low Training Data Coverage

A third approach is to identify suspected areas of weakness for the model by looking for differences between how users are interacting with a model and the data with which that model was trained. We use the term **coverage** to describe how well a model's training data "covers" the space of inputs that the model receives after deployment (i.e., use cases). The coverage of a particular use case is a measure of how much the model "knows" about those kinds of inputs, and can be used as a proxy for model performance when other quality signals are not available.

_"We know that the [training] datasets that we have, however large they are, they don't cover the entire space. So wherever we don't have coverage we don't expect the [model] performance to be that great."_ -- P3

To detect coverage issues, practitioners monitor online data to see how people are using a model, and compare that against their training data:

_"If it is a classification task, you were expecting to have a very balanced dataset, but online [you see] that almost 90% of the traffic is coming for one class. That means your offline [data] was not representative of what is going to happen in an online setting. So, by monitoring and looking at all data distributions, you will get a sense of those discrepancies."_ -- P2

Considering coverage allows practitioners to move beyond the kinds of failures that they already know of or suspect based on past experience, and to identify new failure modes that they were not previously aware of [P7].

### 3.3. Creating Challenge Sets to Validate and Evaluate Issues

When practitioners identify reported or suspected failures, they still need to determine whether this is a systematic problem with the model and, if so, assess the scope and severity of the error. Participants first wanted to understand whether a potential failure is a one-off error (as can be expected given that ML is probabilistic) or a more systematic problem [P4, P7, P12]. We found that practitioners shared a common general approach of:

_"Collecting more similar data and testing the model behavior and seeing if it's systemically failing."_ -- P7

Practitioners in our sample referred to these curated data subsets as _aggressor sets_, _golden test sets_, and _stress tests_. In the remainder of this paper, we refer to these kinds of datasets as **challenge sets**. Challenge sets differ from standard test sets in ML because they are designed to target a specific failure case, and are thus often more reflective of how people really use a model in practice. As P6 described, _"that kind of test, while we call it stress test, is probably closer to what happens in reality than when you do random sampling for testing."_ Creating these sets can be challenging. In particular, it might not be immediately clear what kind of "similar" data will replicate a failure mode, and the axis of similarity that matters might not be annotated explicitly in the data.

_"The length of the beard seemed to play a role [in a failure mode].
It [the dataset] was just annotated as has beard or not, and not so much the length."_ -- P6

Once practitioners have built a challenge set and determined that a suspected failure case is indeed a systematic problem with a model, they can then conduct quantitative and qualitative analyses of the challenge set to deeply understand the cause of the issue and how it might impact users. This understanding is critical to prioritizing issues to solve and informs the choice of solution.

#### 3.3.1. Assessing the Cause of the Problem

Practitioners look for patterns in challenge sets to understand the potential cause of a problem. A first step is usually to compare the challenge set to the model's training data to identify coverage issues or other data problems, e.g., spurious correlations. This analysis could be a simple process of _"manually going through [the challenge set] and looking for any general trends"_ [P5], although models with larger output spaces or high-dimensional data may require more sophisticated techniques like embedding space visualization and dimensionality reduction [P7].

#### 3.3.2. Assessing the Impact of the Problem on Users

Practitioners also want to assess the impact of the problem on users to judge its urgency. Model failures might be prioritized if they impact many users, happen frequently [P1], or produce a negative user experience [P7, P11, P12]. In this way, prioritization is _"not just a pure data science question"_ [P1], but involves considering different and possibly conflicting perspectives and values. For example, in P7's work, _"certain mispredictions could be more offensive than others,"_ so when gathering feedback from quality annotators, they ask annotators to _"exercise some judgment"_ and specifically flag anything they feel are _"potentially offensive or egregious mistakes."_ Practitioners might also prioritize improvements to ensure no subpopulation of users is experiencing particularly poor performance compared to others:

_"What we want to do is reduce the length of the tail end of users that have poor experience and talk more about, how can we bring these people up and what is it about [their use context] that causes the models to perform poorly."_ -- P11

#### 3.3.3. Assessing Potential Solutions

The choice of an appropriate solution depends on understanding the scope and nature of the problem, and discussing these with reference to how the issues impact end users. Often, problems are the result of poor coverage and can be addressed by increasing training data in a specific area and retraining the model [P9]. For some participants, this was the default approach: _"the answer is pretty much always going to be more representative data across all classes"_ [P5]. However, our findings highlight a wider range of approaches that practitioners can take when they deeply understand the nature and stakes of a problem. For example, three participants talked about strategies to augment the model's output space. This could mean adding or removing classes from a classification taxonomy [P4], preventing specific outputs using a block list or an additional classification model, or hard-coding outputs for certain inputs using a lookup table [P4, P7, P12].
Other approaches included improving annotation quality [P7, P10], removing problematic data from the training set [P5], changing the user interaction with the model to control the input environment in production [P8], adjusting the model architecture or loss function [P12], or adding additional data pre-processing steps [P5]. These approaches differ in complexity, cost, and effectiveness. The choice of solution is not solely based on technical and resource constraints, but could involve negotiating trade-offs, considering conflicting values, and accounting for the urgency of the error. For example, practitioners might select a fix that is faster to implement if an error impacts many users or is particularly offensive. Such decisions require input from stakeholders with a broad range of expertise. Therefore, ML practitioners must be able to discuss problems with reference to business metrics and user experience, and in terms that are accessible to stakeholders without ML expertise [P1].

### 3.4. Design Implications

Our findings demonstrate how challenge sets allow practitioners to develop a deep understanding of a problem's cause, scope, and impact on users. This understanding is necessary to effectively prioritize resources on the most egregious and urgent model failures. Existing tooling for model evaluation and debugging has largely focused on _identifying_ model weaknesses rather than _prioritizing resources_ on weaknesses with the greatest potential impact on users. Based on the practices uncovered through our interview study, we developed four design implications for tooling to support prioritization:

* **D1.** Compare usage patterns to training data to support exploration of suspected model weaknesses in addition to known errors.
* **D2.** Build collections of similar data (challenge sets) to assess and prioritize problems, and allow users to compare challenge sets.
* **D3.** Provide information about model performance and usage patterns to surface issues that matter most to users.
* **D4.** Since prioritization is not solely a technical question and does not have a singular solution, account for prioritization subjectivity, and make the tools easy to use for stakeholders with diverse backgrounds.

Interactive visualizations have successfully helped ML practitioners discover ML errors (SS 2.1.2) and understand model behaviors (SS 2.2.2). Interactive visualization techniques are especially useful for exploring data to support hypothesis generation and serendipitous discoveries (D1), comparing and contrasting slices of data (D2), analyzing data from multiple perspectives (D3), and supporting collaborative interpretation of data among stakeholders with diverse skill sets (D4). For these reasons, visual analytics is a promising choice for supporting prioritization.

The remainder of this paper focuses on Angler (Fig. 1), a visual analytics tool to support prioritization in the context of machine translation (MT). While our interview study revealed common practices across ML domains, prioritization depends on measures of prediction quality and insight into usage patterns, both of which are extremely specific to a particular model. Therefore, to understand these practices more deeply and begin to explore what tooling support for prioritization might look like, it is useful to choose a specific ML task as a case study.
We chose MT because it poses unique challenges that make prioritization both especially important and especially difficult: it is difficult and expensive to attain reliable measures of prediction quality (21); MT models accept open-ended input from users, opening a vast space of possible failures; and we know relatively little about how people use MT models in the real world (64).

## 4. Designing Angler: Exploring Machine Translation Usage with Challenge Sets

Given the design implications (D1-D4) described in SS 3.4, we present Angler (Fig. 1), an interactive visualization tool that helps MT developers prioritize model improvements by exploring and curating challenge sets. Angler leverages both usage logs and training data to help users discover model weaknesses (D1, SS 4.1). Angler introduces two novel techniques to automatically surface challenge sets and expand challenge sets with similar data (D2, SS 4.1). Angler uses the _overview + detail_ design pattern (Brock et al., 2016) to tightly integrate two major components: the _Table View_, which summarizes challenge sets as table rows (SS 4.2), and the _Detail View_, which enables users to explore one challenge set in depth with different attributes over time (D3, SS 4.3). Finally, to lower the barrier for different stakeholders to easily prioritize model improvements (D4), Angler allows users to conduct quantitative and qualitative analyses without needing to write custom code or manipulate complex data and model pipelines. We developed Angler with modern web technologies so that anyone can access it without installation overhead, and we open-source our implementation (SS 4.4).

We designed and developed Angler in conversation with an industry MT product team. To contextualize our design in the team's practices and gain iterative feedback, we used one of the team's MT models (English \(\rightarrow\) Spanish), with a sample of their training data and usage logs. Usage logs are only available from users who have opted in. For privacy and security reasons, members of our research team required special permissions and security protocols to access this data. Therefore, we cannot show Angler with the original model or dataset from our design process. Moreover, to demonstrate how Angler can support many different MT models and language pairs, we instead describe the Angler interface in this section using a public MT model (English \(\rightarrow\) Simplified Chinese) and public datasets (SS 4.4).

### 4.1. Subgroup Analysis through Challenge Sets

In Angler, we introduce two novel techniques to surface interesting subsets from the usage logs and training data. We automatically extract _challenge sets_ by sampling data that either fails our model performance unit tests (SS 4.1.1) or involves topics the model is less familiar with (SS 4.1.2).

#### 4.1.1. Unit Test Failures

The current state-of-the-art approach to building challenge sets for machine translation is to build rule-based unit tests (SS 2.2.1). In line with this practice, the first type of challenge set that we include in Angler extends the team's existing suite of unit tests to identify unexpected model behavior (Fig. 2). These unit tests use regex search to find patterns in a source-translation pair and verify that each match meets some pre-defined rules. For example, when a source includes an emoji, we expect the translation to have the same emoji. Similarly, when a source does not contain offensive, vulgar, or slur (OVS) words, we expect the translation not to include OVS words either.
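To make this concrete, the following is a minimal sketch of two such unit tests; the regex range, the word lists, and the function names are illustrative stand-ins rather than the team's actual test suite:

```python
import re

# A rough emoji/pictograph character range; a production test would use a
# vetted pattern or library rather than this hand-picked range.
EMOJI = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

def emoji_test(source: str, translation: str) -> bool:
    """Pass if the translation carries over exactly the emoji in the source."""
    return sorted(EMOJI.findall(source)) == sorted(EMOJI.findall(translation))

# Placeholder OVS (offensive/vulgar/slur) word lists: hypothetical stand-ins
# for the language-specific lists the real unit test matches against.
SOURCE_OVS = {"badword"}
TARGET_OVS = {"palabrota"}

def ovs_test(source: str, translation: str) -> bool:
    """Fail only when a clean source yields a translation containing OVS words."""
    source_has_ovs = any(w in source.lower() for w in SOURCE_OVS)
    target_has_ovs = any(w in translation.lower() for w in TARGET_OVS)
    return source_has_ovs or not target_has_ovs

# Sentence pairs that fail a test become members of that test's challenge set.
pairs = [("Good morning! ☀️", "¡Buenos días! ☀️"), ("See you 👋", "Hasta luego")]
emoji_challenge_set = [(s, t) for s, t in pairs if not emoji_test(s, t)]
```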
Some unit tests are language-specific: consider translating English to Spanish; if a source is a question, we expect the translation output to contain both the "¿" and "?" characters. For Simplified Chinese, however, we would expect the translation output to end with the fullwidth "?" character instead. For our English \(\rightarrow\) Chinese demo in this section, we apply 5 unit tests to both usage logs and training data (D1), and collect data samples that fail any unit test into challenge sets (Fig. 2). Each unit test corresponds to one challenge set.

#### 4.1.2. Unfamiliar Topics

While unit tests can reveal some straightforward errors, they do not offer insight into issues of **coverage**, which our interview participants highlighted as critical to identifying failure modes that are highly consequential for end users (SS 3.2.3). In the context of MT, coverage refers to how much a translation model "knows" from its training data about a particular topic or way of speaking. For example, coverage is a major concern with _domain-specific language_: doctors use domain-specific phrases to talk about medicine, while video game players use language specific to their game. Coverage can be improved by collecting more training data to give the model more exposure to that particular language pattern. Extending existing techniques for building challenge sets in MT, we sought to help MT practitioners prioritize _which_ domains may need better coverage based on what their users are requesting.

To identify topics that are not well represented in the training data, we first use a sentence transformer (Krizhevsky et al., 2017) to extract high-dimensional latent representations of sentences in the training data. This latent representation is trained to cluster sentences with similar meanings close together in high-dimensional space. We then apply UMAP (Wang et al., 2018), a dimensionality reduction method, to project the latent representation into 2D. We choose UMAP instead of other dimensionality reduction techniques, such as PCA (Petersson et al., 2019) and t-SNE (Srivastava et al., 2017), because UMAP is faster and better preserves the data's global structure (Wang et al., 2018). We use cosine similarity to measure the distance between two samples in the high-dimensional space, as previous work has shown that cosine distance provides better and more consistent results than Euclidean distance (Srivastava et al., 2017). Following the suggested practice in applying UMAP (Wang et al., 2018), we fine-tune UMAP's hyperparameters n_neighbors and min_dist through a grid search of about 200 parameter combinations; we choose hyperparameters that spread out the training samples in the 2D space while maintaining local clusters. With about 47,000 training sentences and a latent representation size of 768, it takes about 50 seconds on average to fit a UMAP model with one parameter combination on a MacBook Pro laptop.

We use Kernel Density Estimation (KDE) (Krizhevsky et al., 2017) to estimate the training data's distribution. For the KDE, we choose a standard multivariate Gaussian kernel with a Silverman bandwidth \(\mathbf{H}\) (Srivastava et al., 2017). It takes only about 1 second to fit a KDE model on the 47,000 training sentences' 2D representations. Then, we use this trained KDE model to compute the _familiarity score_ (FA) (Zhu et al., 2018) for each sentence from the usage logs.
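As a minimal sketch of the embedding and projection steps described above (the model name and the hyperparameter values here are illustrative assumptions, not our exact configuration):

```python
from sentence_transformers import SentenceTransformer
from umap import UMAP

# A tiny stand-in corpus; the real pipeline embeds ~47,000 training sentences.
sentences = [
    "How do I treat a fever?", "Best remedy for a headache",
    "The invoice is attached.", "Where is the train station?",
] * 8  # repeated only so UMAP has enough points to run on this toy example

# Embed sentences into a 768-d latent space (the exact model is an assumption).
embeddings = SentenceTransformer("all-mpnet-base-v2").encode(sentences)

# Project to 2D with cosine distance; in practice, n_neighbors and min_dist
# would come from the ~200-combination grid search described above.
coords_2d = UMAP(n_neighbors=15, min_dist=0.1, metric="cosine",
                 random_state=42).fit_transform(embeddings)
```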
We define the familiarity score (Equation 1) of a sentence from the usage log as the log-likelihood of observing that sentence's UMAP 2D coordinate \((x,y)\) under the training data's UMAP distribution \([(x_{1},y_{1}),(x_{2},y_{2}),\ldots,(x_{n},y_{n})]\). This concept of familiarity can be generalized to other data types and ML domains, and has been shown to be a powerful tool for debugging data (Zhu et al., 2018).

\[\text{FA}(x,y)=\log\left(\frac{1}{n}\sum_{i=1}^{n}\frac{\exp\left(-\frac{1}{2}\begin{bmatrix}x-x_{i}&y-y_{i}\end{bmatrix}\mathbf{H}^{-1}\begin{bmatrix}x-x_{i}\\ y-y_{i}\end{bmatrix}\right)}{2\pi\sqrt{|\mathbf{H}|}}\right) \tag{1}\]

Computing FA is slow when the training data is large (i.e., when \(n\) is large in Equation 1), because the algorithm needs to iterate through all \(n\) points in the training data for each sentence from the usage logs. Therefore, to accelerate the FA computation, we apply a 2D binning approximation approach. We first pre-compute the log-likelihoods over a 2D grid \(\mathbf{F}\in\mathbb{R}^{200\times 200}\) spanning the training data's UMAP 2D space, constrained by the range of the training data's UMAP coordinates. Then, to approximate the FA of a sentence, we only need to (1) locate the cell \(\mathbf{F}_{i,j}\) in the grid that the sentence falls into, and (2) look up the pre-computed log-likelihood associated with that cell \(\mathbf{F}_{i,j}\). If a sentence falls outside the 2D grid, we extrapolate its FA by using the log-likelihood associated with the closest grid cell. Note that one can choose a grid density other than \(200\times 200\); we tune the grid density (\(d=200\)) to balance computation time and approximation accuracy. Our binning approximation is scalable to large usage logs (\(m\)) and training data (\(n\)), as it decreases the FA computation's time complexity from a quadratic time \(\mathcal{O}(kmn)\) to a linear time \(\mathcal{O}(m+d^{2}kn)\), where \(k\) is the dimension of the UMAP space (\(k=2\) in our case) and \(d\) is the grid density (\(d=200\)). In addition, we use the KDE implementation from _Scikit-Learn_ (Krizhevsky, 2012), which leverages a KD Tree (Krizhevsky, 2012) for more efficient distance computation. With a tree-based KDE, our FA computation method has a logarithmic time complexity \(\mathcal{O}(m+d^{2}k\log(n))\) on average and a linear time complexity \(\mathcal{O}(m+d^{2}kn)\) in the worst case.

After estimating the FAs for all sentences in the usage logs, we use BERTopic (Krizhevsky, 2012) to build a topic model on a sample of 50,000 sentences from the usage logs with the lowest FA, and select the 100 largest topics from this model. To estimate the model's performance on these topics, we need labeled training data. Therefore, we extend each extracted topic set with a sample of training sentences that are close to the topic set in the high-dimensional space (D2). To reduce the computational cost of this search, we randomly sample 15 "seed sentences" from each topic and add any sentences from the training data that are close to at least one of the "seed sentences" in the high-dimensional space (threshold \(\ell_{2}<0.6\), selected through manual inspection). We have tuned the number of "seed sentences" to balance the computational cost and the number of close training sentences that we can find. Finally, we have fixed the random seeds for random sampling, UMAP computation, and BERTopic, so that our topic results are reproducible.
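To illustrate the familiarity computation with the grid-based approximation, here is a compact sketch that uses small random stand-in arrays in place of the real UMAP projections and a fixed bandwidth in place of the Silverman rule; only the scikit-learn calls are real APIs, and everything else is illustrative:

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
train_2d = rng.normal(size=(2_000, 2))  # stand-in: UMAP projections of training sentences
logs_2d = rng.normal(size=(500, 2))     # stand-in: UMAP projections of usage-log sentences

# Fit a Gaussian KDE on the training projections (the density in Equation 1);
# scikit-learn's KernelDensity uses a KD tree internally for faster queries.
kde = KernelDensity(kernel="gaussian", bandwidth=0.5).fit(train_2d)

# Pre-compute log-likelihoods over a d x d grid spanning the training data.
d = 200
gx = np.linspace(train_2d[:, 0].min(), train_2d[:, 0].max(), d)
gy = np.linspace(train_2d[:, 1].min(), train_2d[:, 1].max(), d)
grid = np.stack(np.meshgrid(gx, gy, indexing="ij"), axis=-1).reshape(-1, 2)
F = kde.score_samples(grid).reshape(d, d)  # F[i, j]: log-likelihood at cell (i, j)

# Approximate each usage-log sentence's FA with a grid-cell lookup; clipping
# assigns out-of-grid sentences the closest cell, as described above.
ix = np.clip(np.searchsorted(gx, logs_2d[:, 0]), 0, d - 1)
iy = np.clip(np.searchsorted(gy, logs_2d[:, 1]), 0, d - 1)
familiarity = F[ix, iy]  # one FA value per usage-log sentence
```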
#### 4.1.3. Limitations

Identifying new model failure modes and collecting examples to replicate the failure is extremely challenging. Developing automatic, expert-driven, and crowd-sourced methods for identifying failures is an active area of research in machine learning and human-computer interaction (Krizhevsky, 2012; Krizhevsky, 2012; Krizhevsky, 2012; Krizhevsky, 2012; Krizhevsky, 2012; Krizhevsky, 2012). Compared to prior research, it is especially difficult to automatically identify MT model failures because there are no explicit, interpretable features or metadata on which to slice data into subgroups, and automatic evaluation metrics are very noisy. Further, prior work largely focuses on identifying failure modes by comparing predictions to ground-truth labels (e.g., (Krizhevsky, 2012; Krizhevsky, 2012)), which does not give practitioners insight into failure modes that impact end users but are not yet represented in offline, labeled datasets. Our goal in this work is to understand how practitioners prioritize their resources across many potential failure modes, and what information they need to do so. We generate example challenge sets to guide this exploration using pattern-matching rules (the current state-of-the-art in MT) and topic modeling on areas of low coverage. However, further research is needed to evaluate and extend these methods.

Figure 2. We create a suite of regex-based unit tests to detect translation errors without the need for a ground-truth translation. For example, some tests check if the source and translation contain the same special words (e.g., emoji, URLs, and Roman numerals). Some tests check punctuation (e.g., question marks and exclamation points). Another test validates that a translation does not contain profane words if its source does not contain any profane words, by matching language-specific offensive, vulgar, slur word lists. In the translation columns, each row lists two examples that pass (top) and fail (bottom) the unit test.

While we did not conduct a formal evaluation of our challenge sets in this work, both kinds of sets are certainly imperfect in terms of error identification: there are perfect translations included in the challenge sets, and there are translation errors in the larger data that are not included in any challenge set. Given the large space of possible inputs and the probabilistic nature of machine learning, we cannot expect to ever have methods that identify all possible failures with perfect accuracy. Thus, there is a need for interactive visualization tools that support practitioners in exploring and making sense of _potential_ failure modes and in prioritizing development and annotation resources under uncertainty.

### 4.2. Table View

When users launch Angler, they first see the _Table View_ listing all pre-computed challenge sets in a table (Fig. 3A). Each challenge set can contain samples from the training data and the usage logs, color-coded as orange and green respectively throughout Angler. We name challenge sets based on their construction methods (Fig. 4). For challenge sets created by unit test failures, we name them "mismatch-[_unit test name_]". For challenge sets created by unfamiliar topics, we name them "topic-[_top-4 keywords_]".
Figure 3. (A) The _Table View_ summarizes all challenge sets in a table, where users can compare the model's performance and familiarity across these sets. To help users interpret aggregate metrics, the _Table View_ visualizes the distributions of metrics and sets' compositions as sparkline-like charts. (B) The _Set Preview_ presents more details of a challenge set after a user clicks a row. The _Sample Sentences_ (left) lists 100 randomly selected source sentences from this set with translations. The _Keywords_ (right) visualizes the most representative words in this set, where a darker background indicates higher representativeness.

These keywords are the same as the keywords shown in the _Set Preview_ (Fig. 3-B). In addition to the names of challenge sets, the _Table View_ provides five metrics associated with each set:

* _Train Count_ and _Log Count_: the number of training and usage log samples in the set.
* _ChrF_: a measure of the model's performance on the training samples in the set. ChrF is the F-score based on character \(n\)-gram overlap between the hypothesis translation produced by the model and a validated reference translation (Zhou et al., 2017). We use the open-source SacreBLEU implementation of ChrF (Zhou et al., 2017); see the sketch below.
* _Familiarity_: a measure of how familiar the usage log samples in the set are to the model, by reference to the training data distribution (SS 4.1.2).
* _Train Ratio_: the percentage of samples in the set that are training samples.

Users can sort challenge sets by any of these metrics by clicking the sorting button.
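As a concrete illustration of how a ChrF score can be computed with SacreBLEU (the sentence pairs below are invented examples, not data from our corpus):

```python
from sacrebleu.metrics import CHRF

chrf = CHRF()  # default character n-gram F-score settings

# Sentence-level score: one model output vs. one human reference translation.
hyp = "El zorro marrón rápido salta."
ref = "El rápido zorro marrón salta."
print(chrf.sentence_score(hyp, [ref]).score)

# Corpus-level score over a challenge set's training samples, which is the
# aggregate shown in the Table View. References are passed as one list per
# reference stream, aligned index-by-index with the hypotheses.
hyps = ["El zorro marrón rápido salta.", "Hola mundo"]
refs = [["El rápido zorro marrón salta.", "¡Hola, mundo!"]]
print(chrf.corpus_score(hyps, refs).score)
```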
#### 4.3.7. Overlap Spotlight

The design of the _Overlap Spotlight_ (Fig. 6-6) is similar to the _Source Spotlight_. Instead of encoding the source category, the y-axis here represents other challenge sets. For example, for a challenge set created by unit test failures, the y-axis in its _Overlap Spotlight_ represents other challenge sets created by unfamiliar topics. As unfamiliar topics are strictly non-overlapping, this view only shows overlap with challenge sets of the other type. By cross-referencing the two challenge set types (unit test failures and unfamiliar topics), this visualization can help users explore syntactic errors within semantic topics, and vice versa.

#### 4.3.8. Sentence List

The _Sentence List_ (Fig. 5F) shows all sentences in the challenge set that satisfy the currently applied filters. Users can click a sentence to see the model's translation. To further fine-tune a challenge set, users can click the Edit button and remove unhelpful sentences from the set. Finally, users can click the Export button to export the sentences shown in the list along with their translations and attributes; users can then easily share these sentences with colleagues and human annotators (D4).

Figure 5. After a user selects a challenge set from the _Table View_, Angler presents the _Detail View_ to help the user further analyze this set from diverse perspectives. (A) The Header shows the name and statistics associated with this challenge set. (B) The Filter Panel helps users keep track of the currently active filters. (C) The Timeline visualizes the usage log count over time, allowing users to focus on traffic data from a particular time window. (D) The Thumbnails and (E) Spotlight visualize diverse representations of sentences in this set--users can click different _Thumbnails_ to switch the _Spotlight_, on which users can further filter sentences with particular attributes. (F) The Sentence List displays all sentences that meet the active filters, where users can inspect translations, search words, and remove sentences from this set.

### 4.4. Open-source Implementation

Angler is an open-source interactive visualization system built with _D3.js_ (Han et al., 2018): users with diverse backgrounds (D4) can easily access our tool directly in their web browsers without installing any software or writing code. We use the standard NLP suite for data processing (e.g., _NLTK_ (Tran et al., 2019), _Scikit-learn_ (Tran et al., 2019)) and topic modeling (e.g., _BERTopic_ (Tran et al., 2019), _UMAP_ (Tran et al., 2019)). We first implemented Angler with an industry-scale MT model (English \(\rightarrow\) Spanish) and real training data and usage logs. For the demonstrations in this paper, we use the public MT model _OPUS-MT_ (English \(\rightarrow\) Simplified Chinese) (Rendle et al., 2019) and its training data (Rendle et al., 2019). To simulate usage logs, we augment a sample of the model's test data (Rendle et al., 2019) with publicly available sources that emulate realistic use cases that can be difficult for MT models: social media (Han et al., 2018), conversation (Rendle et al., 2019), and scientific articles (Rendle et al., 2019).

### 4.5. Usage Scenario

We present a hypothetical usage scenario to illustrate how Angler can help Priya, an MT engineer, explore usage logs and guide new training data acquisition. The first part of this usage scenario, in which the user explores and selects challenge sets of interest, is informed by real user interactions that we observed in the user study (SS 5). The second phase of the scenario describes how we envision extending Angler in future work to help practitioners use the datasets they collect with Angler to improve model performance.

Priya works on improving an English-Chinese translation model, and she only speaks English. Priya first applies the challenge set extraction pipeline (SS 4.1) to the training data and usage logs from the past 6 months. The pipeline yields 100 challenge sets from unfamiliar topics and 6 challenge sets from unit test failures. After Priya opens Angler to load the extracted challenge sets in a web browser, she sees the _Table View_ (Fig. 1A) summarize all 106 sets in a table with a variety of statistics. Priya wants to prioritize subsets of data where the model may not perform well, but which are important to the end users of the model, e.g., because they occur frequently in the usage logs or represent a high-stakes use case. To focus on data on which the current MT model may not perform well, Priya sorts challenge sets in ascending order by their mean ChrF scores by clicking the sort button. After inspecting the top rows and their _Set Previews_, the topic-headache set draws Priya's attention: the MT model performs poorly on this set (its mean ChrF score is only 0.39), and the set involves high-stakes medical topics where MT quality is critical (observed from the _Keywords_ in the _Set Preview_). To learn more about this challenge set, Priya opens its _Detail View_ (Fig. 1B). Priya notices that the number of usage logs is consistent across the past nine months (from July 2021 to March 2022) in the _Timeline_ (Fig. 1-B). She then clicks the _Embedding Thumbnail_ (Fig. 5D) to switch the _Spotlight_ from the default _Keywords_ view (Fig. 6-1) to the _Embedding_ view (Fig. 6-2).
Figure 6. The _Detail View_ presents six options for the _Spotlight_ to help users explore a challenge set from diverse perspectives. (1) The _Keywords_ shows the most representative words in a set. (2) The _Embedding_ uses a zoomable scatter plot with contour backgrounds to help users explore the high-dimensional representations of sentences in a set. (3) The _ChrF Distribution_ allows users to inspect and filter training sentences by their ChrF scores. (4) The _Familiarity_ helps users filter usage logs by the model's familiarity scores. (5) The _Source Distribution_ visualizes the usage log sources as a horizontal histogram where users can filter usage logs from particular sources. (6) The _Set Overlap_ allows users to see sentences that are also in other challenge sets.

Through zooming and hovering over the scatter plot, Priya finds that most sentences from this set form a cluster in the high-dimensional representation space, and all of these sentences are about health issues. She is surprised to see that people are using the model to communicate about health concerns, and wonders whether the training data covers this use case. To explore this, Priya opens the _Familiarity Distribution Spotlight_ (Fig. 6-4) and brushes the histogram to select the region with low familiarity scores. The _Timeline_, the charts in the _Thumbnails_, and the _Sentence List_ update in real time to focus on the usage logs with a familiarity score in the selected range. Browsing the sentences in the _Sentence List_, Priya realizes that many of these unfamiliar sentences are about fever. She worries that wrong translations about fevers could pose a health risk to users. Therefore, Priya decides to prioritize improving her model's performance on this challenge set; she clicks the Export button to save all sentences along with their translations from this challenge set.

The current Angler prototype was designed to explore what information practitioners need to prioritize subsets of data to send to annotators. Priya follows a similar process to identify and export a few other challenge sets of high importance, and sends all of this data to human annotators to acquire additional training data. The human annotators speak both English and Chinese; they write reference Chinese translations for given English sentences by directly editing the translations produced by the model.

In the future, Angler could be extended to allow Priya to continue her analysis after the data has been annotated. For example, Priya could check a few sentences from the health-related challenge set whose reference translations are significantly different from the translations produced by her model. At this point, Priya might find that her model has made several serious translation errors. For example, her model translates the input sentence "The fever burns you out" into a Chinese sentence (Fig. 1-B3) that means "fever will burn you to death." After retraining the MT model with the newly annotated data, Priya would hope to see that the model's ChrF scores and familiarity scores on the original challenge sets have significantly improved. In this case, Priya would schedule her new MT model for deployment in the next software release cycle, now with better support for a safety-critical use case.

## 5. User Study

We used Angler in a user study with seven people who contribute to machine translation development at Apple, as ML engineers (E1-3) and in user experience-focused roles such as product management, design, or analytics (UX1-4).
Our goal in this study was to understand how users with different expertise would use Angler to explore and prioritize challenge sets. We were also interested in whether exploring challenge sets using Angler could help practitioners uncover new insights about their models and identify new ways to improve their models in line with users' needs. Our goal was not to measure whether Angler can support prioritization more effectively than another tool, e.g., by finding more translation errors, but rather to explore what kind of information is useful to practitioners, and how the process of exploring challenge sets could shape future evaluation practices. The study was approved by our internal IRB.

Each session was conducted over video chat and lasted between 45 minutes and one hour. With participants' consent, each session was recorded and transcribed for analysis. During each study session, we introduced Angler with a tutorial demonstrating each view (Fig. 3-Fig. 6). Angler showed training and usage data from the team's own translation model, for the language pair English \(\rightarrow\) Spanish. Next, we sent the participant a one-time secure link that allowed them to access the tool in their own browser, and asked them to share their screen while they explored the tool. For the remainder of the session, we asked the participant to think aloud as they completed three tasks:

1. (**T1**) First, we asked the participant to navigate to the _Detail View_ of a unit test challenge set that targeted mismatches in numbers between source and output translations, and to discuss what they saw.
2. (**T2**) Second, we asked the participant to choose a topic-based challenge set that was interesting to them, explain their choice, and again explore the _Detail View_ to learn more.
3. (**T3**) Finally, we gave the participant a hypothetical budget of 2,000 sentences that they could choose to have evaluated by expert human translators, and asked them to explain how they would allocate that budget. The evaluation by professional translators could involve rating the quality of model-produced translations and/or correcting the translations to create gold-standard reference translations that could be used for future model training.

We analyzed the transcripts following a qualitative data analysis procedure similar to that of the formative interview study (SS 3.1). One author conducted two rounds of open coding, synthesizing and combining codes each round (Yin et al., 2017). Next, a second author took the code book in development and independently coded all of the transcripts, adding new codes where relevant and noting disagreements. These two authors then discussed and resolved disagreements, and converged on a final coding scheme.

We found that **T1** and **T2** mainly served as a way for participants to acclimate to Angler's interface and understand the two types of challenge sets. Although participants confirmed that **T3** was a realistic task for the team, most participants did not do the task as we had originally planned. We report our findings regarding how participants picked which challenge sets they deemed _important_ for model improvements, but we do not report on their fictional budget allocations, because the majority of participants were resistant to allocating concrete (even completely hypothetical) numbers. We discuss this tension further in the limitations (SS 5.3).
All three tasks required participants to prioritize among the available challenge sets, but our findings focus largely on participants' judgments during **T3**, where they spent the majority of the study time. As discussed in Section 3, the team's existing approaches to model debugging and improvement were either one-off, focused analyses, which do not require prioritization between issues, or random samples of usage logs, which implicitly prioritize use cases based on frequency of requests. Participants informally compared what they could do with Angler to these existing practices. Our goal in this study was to explore the space of possibilities for visual analytics tools to support prioritization, rather than quantify the relative benefit of our specific prototype compared to existing practices.

### Results: Prioritizing under Uncertainty

Participants had to rely on imperfect and incomplete metrics to estimate the quality of translations. All participants knew a little Spanish, but not enough to spot-check the quality of a translation in most cases. Though Angler shows sentences from usage data and model training data, only the training data had _reference translations_ certified as correct by human translators. Sentences from the usage logs have no quality annotations (SS 4.1). Thus, the ChrF quality estimation for any challenge set was based on the limited data with reference translations. This evaluation setup is far from ideal, yet it is realistic relative to what MT practitioners ordinarily encounter. Participants knew that no single metric was a reliable source of quality information, so they weighed multiple signals and still recognized that human annotation would generally be required to get a reliable measure of quality. Three participants discussed how they incorporated uncertainty when interpreting metrics like average ChrF to ensure they were getting meaningful estimates of quality [E1, E2, UX4]: _"You might get a low metric or a low familiarity score, but the smaller the sample is the more likely it is [that] there's gonna be some noise in there that's kind of moving the metric."_ -- UX4 Future iterations of the tool could use re-sampling methods to estimate confidence intervals for challenge set summary statistics to make this uncertainty more explicit, as in the sketch below. Despite the available metrics in Angler being uncertain proxies for model performance, participants nonetheless used the metrics to judge the relative importance of a challenge set: _"It's better to have a statistical way, I mean, [rather than] just by what I'm thinking, right?"_ -- E2 All participants tended to rank the challenge sets by potential risk of model failure by combining low ChrF, low familiarity, and low train-ratio. Low ChrF indicates that the limited _training data_ that falls within the challenge set _might be poorly translated_ (Krishnan et al., 2018). Low familiarity and low train-ratio are proxies for where the training data set _might lack coverage_ of the usage data. Low familiarity suggests that the user request data that falls within the challenge set is semantically different from the overall training data. Low train-ratio indicates that the challenge set represents a subgroup that is much more represented in the user requests than in the training data. Familiarity and train-ratio are calculated in related ways and are therefore correlated (SS 4.1). As a first pass, most participants used the sorting feature in the _Table View_ (Fig. 3) to rank which challenge sets scored the lowest on one or more of these three metrics [E1, E2, UX1, UX3, UX4].
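Picking up the re-sampling suggestion above, the following is a minimal sketch (not part of Angler; it assumes the `sacrebleu` package) of bootstrapping a confidence interval for a challenge set's average sentence-level ChrF score:

```python
# Bootstrap a confidence interval for a challenge set's mean ChrF score.
# `pairs` is a hypothetical list of (model_translation, reference) tuples.
import random
from sacrebleu.metrics import CHRF

chrf = CHRF()

def bootstrap_chrf_ci(pairs, n_resamples=1000, alpha=0.05, seed=0):
    rng = random.Random(seed)
    scores = [chrf.sentence_score(hyp, [ref]).score for hyp, ref in pairs]
    means = []
    for _ in range(n_resamples):
        resample = [scores[rng.randrange(len(scores))] for _ in scores]
        means.append(sum(resample) / len(resample))
    means.sort()
    lo = means[int((alpha / 2) * n_resamples)]
    hi = means[int((1 - alpha / 2) * n_resamples) - 1]
    return sum(scores) / len(scores), (lo, hi)
```

A wide interval on a small challenge set would signal exactly the kind of noise UX4 warned about.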
Four of these five participants additionally considered set size, with a preference for larger sets. While participants could have stopped at this point, given the opportunity to explore further, none of the practitioners relied solely on the available metrics. Next, we discuss other ways that participants used Angler to explore the data beyond aggregate metrics and decide which sets to annotate.

#### 5.1.1. Estimating Meaningful Use Cases

From the list of challenge sets in the _Table View_ sorted by metrics, three participants chose to prioritize topics that appeared to represent a "meaningful," coherent use case [UX1, E2, UX2]. This is partly because the BERTopic (Rosen et al., 2018) model tends to generate some "topics" with little meaning, e.g., the topic topic-haha_lol_so_you versus the more meaningful topic-health_nursing_care_medical in Fig. 3, and partly because participants needed to make judgments on the value of improving the model on various sentence types, since some challenge sets mostly contained data that appeared to be fraud, spam, or automated messages. Participants demonstrated the value of being able to directly read sentences in the _Detail View_ (Fig. 5) to make these judgments: _"Yeah, a lot of these are spam. [...] As I'm kind of going through them, it's like a lot of spam, a lot of porn and a lot of things that are like, automated messages. So I would use my discretion, of course, and wouldn't just use the numbers."_ -- UX1 E1 knew from prior experience that _"it's always good to look at what the data actually are [...] besides looking at the high level statistics."_ They had seen in the past that even keyword summaries can be misleading and obscure complexity that is apparent when directly inspecting the data: _"From my past experience, sometimes we have seen some data contain some keywords and we imagine them to be, for example, articles, but looking at the actual example, they are kind of fraud messaging. [...] Combining them together as a single dedicated targeted test would not make too much sense for us to understand the performance on it."_ -- E1 Some of the data contained explicit sexual content and profanity, to which participants ascribed different value. UX1 argued for prioritizing model improvement resources towards use cases that aligned with their organization's values (such as supporting small business owners), _over_ explicit content. UX4 was far more accepting of explicit content, arguing that if users were translating that content, there was no reason to treat it differently.

#### 5.1.2. Estimating Impact on Users

Participants assessed how severe the consequences of specific model failures would be for end users. They considered the stakes of the interaction mediated by the model [UX1, UX2, UX3], whether a failure was especially sensitive, e.g., offensive outputs, and how likely a user was to be misled if they were to receive an erroneous translation of this nature, such as an incorrect date [UX2]. In task **T1**, all participants looked at number mismatch translations (Fig. 2). All participants skimmed the raw translations to focus on specific sub-cases of number translation, for instance how the model converts Roman numerals, dates, or currency. Even though they could not read the Spanish translation, E2, UX2, and UX4 talked about wanting to find "obvious" errors where they could clearly see numbers changing from English to Spanish, e.g., 1,100 dollars becoming 100 dollars.
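As an illustration of the number-mismatch unit test that **T1** is built on, here is a rough sketch of such a rule-based check; the extraction and normalization rules are our own simplifying assumptions, not Angler's exact logic:

```python
# Flag sentence pairs whose source and translation do not contain the same
# multiset of numbers (so "1,100 dollars" -> "100 dolares" is caught).
import re
from collections import Counter

NUM_RE = re.compile(r"\d[\d,.]*")

def extract_numbers(text):
    # Drop thousands separators and trailing periods so "1,100" == "1100".
    return Counter(tok.replace(",", "").rstrip(".") for tok in NUM_RE.findall(text))

def number_mismatch(source, translation):
    return extract_numbers(source) != extract_numbers(translation)

assert number_mismatch("It costs 1,100 dollars.", "Cuesta 100 dolares.")
assert not number_mismatch("Chapter 7 begins.", "El capitulo 7 comienza.")
```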
An obvious error may not mislead a user, but it could degrade their trust in the translation product. Participants dug into specific sub-cases through filtering and search in Angler to get a sense for the severity of an error: _"It's a really nice way of quickly getting into the patterns to see whether or not we're looking at something like a serious problem with translation or if it's just kind of surface level formatting issues."_ -- UX4

#### 5.1.3. Estimating Complexity of the Error

Practitioners wanted to prioritize annotation resources on more complex kinds of failures, rather than those that could be solved internally without additional annotations [10, 11]. For example, issues with translating numbers could be identified within the team by using regular expressions to match numbers between sources and translations. For pattern-based failures, such as translating numbers or translating automated messages, E1 proposed trying data augmentation techniques first. Data augmentation increases the number of samples by creating variations on existing data; e.g., the sentence "I ate breakfast on Sunday" can be duplicated to create a sentence for each weekday, such as "I ate breakfast on Monday": _"If it's a lack of data issue, it should be very easy to augment the data for this particular example."_ -- E1 While they used train-ratio and familiarity metrics to identify potential coverage issues, directly inspecting the data gave them insight into whether a problem was complex enough to warrant annotation. E3, E2, and UX2 used the _Embedding_ view to develop more nuanced hypotheses about how customers' use of the model differs from the training data. Even within an apparently similar topic, participants used clusters of usage data with comparatively less training data to estimate subtopics that may need better coverage [10, 11, 12]. They used their experience to hypothesize why or why not an area of low coverage might be difficult for the model. For instance, a cluster with a lot of domain-specific language may be best improved by paying for additional annotations [11]. While we had initially prompted practitioners to budget annotation resources between the challenge sets we gave them, more often we found that they wanted to prioritize subgroups _within those sets_ to optimize annotation for the most complex and impactful issues. We observed that practitioners were able to form more interesting and user-focused hypotheses for prioritization when they combined summary statistics with qualitative assessment of the data by reading sentences. Their use of Angler was promising evidence for the strength of a visual analytics approach in MT prioritization. At the same time, practitioners demanded more features for flexibly creating challenge sets and exploring more analytics lenses than the Angler prototype supported. We next discuss strengths and limitations of the tool to inform next steps.

### Results: Angler Strengths and Usefulness

#### 5.2.1. Develop Intuition for Model Behavior

Neural machine translation models are large language models that are not easily understandable even to those who have developed them [50]. We found that exploring data with Angler helped practitioners develop a deeper understanding of how their models work.
UX4 said that _"a lot of what I like to do is just develop my like, mental model of how our translation models are working."_ Exploring challenge sets by the quality metrics gave E2 _"some insights into the weaknesses of the model."_ Given that practitioners' intuitions about model weaknesses guide their future debugging effort (SS 3.2.2), this can bring value beyond identifying specific failures in the moment.

#### 5.2.2. Develop Intuition for Translation Usage

Participants used the topic-based challenge sets to improve their understanding of how customers use their translation products. The _Keyword Spotlight_, _Sentence List_, and _Source Spotlight_ were especially useful for participants to develop hypotheses about how people use the model and the context where they are using it [10, 11, 12, 13, 14]. UX1 works in a user experience-focused role and said that _"it helps us inform feature development when we understand the conversation topics that people are using [the model for]."_ While browsing the unit test challenge set based on mismatches in numbers between source and output, E3 imagined potential future features that could give users greater control over how different number formats are handled in translation, e.g., when to use Roman numerals or convert date formats.

#### 5.2.3. Develop a Shared Interdisciplinary Understanding

UX-focused participants expressed excitement for using a visual analytics tool like Angler to broadly explore the use cases surrounding potential failure modes--as opposed to a purely metrics-driven report that does not allow them to develop their sense of the use context. UX1 and E3 pointed out that being able to describe specific use cases makes it easier to engage cross-functional teams in planning and prioritizing product improvements and developments. UX1 wanted to spend time using Angler to _"generate some insights about each of [the topics] in human understandable terms"_ that they could then present to other team members. _"Our [team members] come from a variety of backgrounds [...] not all of them are engineers. So it's like, could I translate the high level findings here into something that they could understand in a brief?"_ -- UX1 Using Angler, UX2 and UX3 even learned new insights about where the model performs unexpectedly well on specific kinds of inputs: _"the date formatting changed. I didn't even know if that's something that we do"_ -- UX2. Discussing improvements in terms of use cases and specific customer needs not only supports internal cross-functional collaboration, but also makes it easier to acquire new data targeting specific topics from external vendors [10].

### Results: Angler Limitations and Usability Issues

As we mentioned in SS 5, most participants did not assign concrete numbers in the annotation budgeting task **T3**. This was partially a limitation of the study setup, since we provided participants with a long list of pre-made challenge sets in Angler, and there was not enough time for a participant to closely examine all of them. However, in many cases participants also wanted more information and analytics features to drive their prioritization than Angler provides--or else wanted to refine the challenge sets we pre-made. A major design takeaway for future work is that although Angler is _already_ a relatively complex visualization tool, far more lenses and interactions were desired to complete the MT annotation prioritization task. We discuss these key areas for improvement next.
#### 5.3.1. Provide More Context and Comparison

Participants wanted to contextualize the data they were seeing in each challenge set with reference to the overall distribution of data in the training data and usage logs. For example, UX3 and UX4 wanted to know the average familiarity score over all of the usage logs in order to interpret familiarity scores on specific challenge sets. Other participants suggested additional useful reference points, for instance, understanding the overall ChrF score distribution by language pair [E2, UX3] or understanding the overall distribution of topics in the training data and usage logs [UX2, UX4]. E3 suggested that it would be helpful to be able to more easily compare challenge sets: _"It's not so easy for me to compare them [challenge sets]. So it would be great if I can somehow select a cluster and compare them side by side."_ -- E3

#### 5.3.2. Support Authoring and Refining Challenge Sets

Our focus in this work was on how visual analytics tools could support the process of prioritizing specific areas for improving MT models. Therefore, we chose two plausible methods for constructing challenge sets to use as examples in the study. While participants found those sets interesting, there was a clear need for future tooling to support challenge set _creation_ in addition to exploration. Participants wanted to search all of the data by specific terms [UX3, E3] and refine the unit test logic to better capture specific types of errors [E1, E2, UX2, E3, UX4]. UX4 even asked if they could onboard their own datasets that they already had available to explore them with Angler's visualizations.

#### 5.3.3. Offer Advanced Export Options for Custom Analysis

Two participants [UX1, E2] expressed a desire to conduct their own analyses on the data, e.g., conduct a custom analysis over time [UX1] or experiment with the underlying topic model [E2]. While Angler offers a simple export option, these options could be made more sophisticated to support users with advanced skills in building on the default visualizations.

#### 5.3.4. Expand Filtering and Sorting Capabilities

Several participants found issues with the data filtering and sorting features that made it difficult to organize and prioritize data in the way they wished [UX1, E1, UX3, E3]. For example, two participants expected to be able to filter to all sentences containing _all_ of the keywords selected, but the tool returned _any_ such sentences [UX3, E3]. Two participants also struggled to keep track of which filters they had applied when navigating between views [E1, UX3], suggesting the potential to make the ability to view and remove filters more prominent in the interface. Two participants wanted to sort the _Table View_ by multiple columns to be able to organize and prioritize sets by multiple metrics or factors, e.g., find large sets within the sets with the lowest average ChrF score [UX1, E1].

#### 5.3.5. Validate and Extend Challenge Set Creation Methods

Grouping data using a topic model was a useful way for practitioners to explore data and better understand use cases for the model. However, it is not clear whether sentences that lie close together in a latent embedding space trained to capture similarity of meaning are also likely to be similarly difficult for an MT model. There are certainly other dimensions by which to group data. As UX3 described, _"When I think about trying to determine where we're doing poorly, there are a lot of dimensions you can look at. Topic is one, right?
But there could be other dimensions, like how long the sentence is."_ -- UX3

In this paper, we found that exploring challenge sets has the potential to help practitioners prioritize their model evaluation and development resources on issues that are important to end users. An important direction for future work is to validate that the challenge sets presented indeed represent areas where the translation model's performance is relatively weaker. For instance, rather than asking practitioners to allocate hypothetical budgets, they could allocate real budgets and have professional translators evaluate the selected challenge sets to see whether practitioners were able to identify new model failure cases using Angler. In general, we designed Angler to be agnostic to the method of generating challenge sets. Thus, research developing and evaluating methods for generating challenge sets can proceed in parallel with efforts to improve visual analytics support for exploring, comparing, and prioritizing them.

## 6. Discussion and Future Work

To conclude, we discuss our findings in the broader context of tooling for ML evaluation and debugging and highlight directions for future work.

### Trade-offs Between Automation and Human Curation

Angler encourages MT practitioners to inspect training data and usage logs so that they can better understand how end users use MT models and detect model failures. Practitioners can then annotate related data samples and retrain the model to address detected failures. Since exploring raw data is a manual and tedious process, we introduce an approach that uses unit tests and topic modeling to automatically surface interesting challenge sets (SS 4.1). Our approach yields many challenge sets, but it still takes time for MT practitioners to inspect and fine-tune these sets. One might argue that we should automate the whole pipeline and have human raters annotate all extracted challenge sets. However, annotating MT data is expensive [34, 76]. In our study, some challenge sets revealed MT errors that are trivial, and MT experts hesitated to spend the budget annotating those sets (SS 5.1.1). Beyond data acquisition prioritization, our mixed-initiative approach can also help users interact with raw usage logs and gain insights into the real-life use cases of MT models. Future researchers can use Angler as a research instrument to further study the trade-offs between automation and human curation for challenge set creation. To further reduce human effort, researchers can surface challenge sets more precisely before presenting them to MT practitioners.

### Generalization to Different Model Types

We situate Angler in the MT context, as it is particularly challenging to discover failures for MT models due to the scarcity of ground truth and the high cost of human annotators (SS 2.2). However, our method is generalizable to different model types. In our formative interview study, we found that it is common practice to use challenge sets to test and monitor NLP, computer vision, and time-series models (SS 3.3). ML practitioners can adapt our _unit tests_ and _topic modeling_ (clustering) approach to surface challenge sets for other model types. Consider an image classification model; practitioners could define perturbation-based unit tests to detect model weaknesses. For example, we would expect a model to give the same prediction when the input image is rotated, resized, or shown under different lighting (Wang et al., 2019; Wang et al., 2019).
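For instance, a minimal sketch of such a perturbation-based unit test might look as follows; the specific perturbations and the `predict` stand-in (any function mapping an image to a label) are illustrative assumptions, using the Pillow imaging library:

```python
# Check that a classifier's prediction is stable under small perturbations.
from PIL import Image

def perturbations(img):
    yield "rotate+5", img.rotate(5)
    yield "rotate-5", img.rotate(-5)
    w, h = img.size
    yield "downsample", img.resize((w // 2, h // 2)).resize((w, h))

def failed_perturbation_tests(img, predict):
    base = predict(img)
    return [name for name, p in perturbations(img) if predict(p) != base]
```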
Then, we can create challenge sets by collecting images where the model's prediction changes after adding image perturbations. Similar to topic modeling, practitioners could use embedding clustering (Wang et al., 2019; Wang et al., 2019) and sub-group analysis (Wang et al., 2019; Wang et al., 2019) to identify unfamiliar images from both usage logs and training data. For example, if an image classifier team receives a user's complaint about a misclassification, they can use embedding-based image search (Wang et al., 2019) to identify similar images and create a challenge set. Finally, researchers can adapt Angler's _overview+detail_ design (SS 4) and open-source implementation (SS 4.4) to summarize extracted challenge sets and allow practitioners to explore and curate potentially error-prone images from different perspectives.

### Unit Tests for Machine Learning

Researchers have argued that unit tests can help pay down the technical debt in ML systems (Wang et al., 2019; Wang et al., 2019). There are many different ways to apply unit tests to an ML system. For example, practitioners can write unit tests to validate data quality (Wang et al., 2019; Wang et al., 2019), verify a model's behavior (Wang et al., 2019; Wang et al., 2019), and maintain ML operations (MLOps) (Wang et al., 2019; Wang et al., 2019). In Angler, we design simple rule-based unit tests, such as "if the source does not contain offensive words, then the translation should not either." We then apply these tests to the training data and usage logs to surface challenge sets (SS 4.1.1). Since they were intended to be a proof of concept, our unit tests were blunt and imperfect. Still, MT practitioners in the evaluation study especially appreciated the unit tests, as they are powerful for detecting glaring translation mistakes and yet easy to adopt in the current model development workflow (SS 5.2.2). Therefore, we see rich research opportunities to study unit tests for ML systems. For example, future researchers could extend our unit tests to support MT data validation and MLOps in general. Researchers could also adapt Angler to design future interactive tools that allow ML practitioners to easily write, organize, and maintain unit tests for ML systems.

### Broader Impact

To overcome the limitations of aggregate metrics on held-out test sets (SS 2.1, SS 2.2), Angler uses _real usage logs_ to help MT practitioners gain a better understanding of how their models are used and prioritize model failures. Drawing on usage data raises privacy and security concerns. All of the authors have received training and a license from their institution to use usage logs for this research. Researchers adapting Angler should carefully consider the ethical implications of their choice of data source (Bartos et al., 2019). Before collecting usage logs, researchers need to obtain consent from the users (Wang et al., 2019) and compensate them when applicable (Bartos et al., 2019). Usage logs must be de-identified before viewing them with Angler. Finally, we encourage researchers and developers to thoroughly document their process of adopting Angler with new models and datasets (Wang et al., 2019), including how they approach these ethical considerations.

## 7. Conclusion

In this work, we present Angler, an open-source interactive visualization system that empowers MT practitioners to prioritize model improvements by exploring and curating challenge sets.
To inform the design of the system, we conducted a formative interview study with 13 ML practitioners to explore current practices in evaluating ML models and prioritizing evaluation resources. Through a user study with 7 MT stakeholders across engineering and user experience-focused roles, we revealed how practitioners prioritize their efforts based on an understanding of how problems could impact end users. We hope our work can inspire future researchers to design human-centered interactive tools that help ML practitioners improve their models in ways that enrich and improve the user experience.

###### Acknowledgements.

We thank our colleagues at Apple for their time during the interview study and for evaluating our system. We especially thank Danielle Olson, who sparked early ideas and connections for pursuing this line of research, and Yanchao Ni for lending his expertise in machine translation for our work.
2301.11114
Unitary anchored planar algebras
In our previous article [arXiv:1607.06041], we established an equivalence between pointed pivotal module tensor categories and anchored planar algebras. This article introduces the notion of unitarity for both module tensor categories and anchored planar algebras, and establishes the unitary analog of the above equivalence. Our constructions use Baez's 2-Hilbert spaces (i.e., semisimple $C^*$-categories equipped with unitary traces), the unitary Yoneda embedding, and the notion of unitary adjunction for dagger functors between 2-Hilbert spaces.
André Henriques, David Penneys, James Tener
2023-01-26T14:07:08Z
http://arxiv.org/abs/2301.11114v2
# Unitary anchored planar algebras

###### Abstract

In our previous article [arXiv:1607.06041], we established an equivalence between pointed pivotal module tensor categories and anchored planar algebras. This article introduces the notion of unitarity for both module tensor categories and anchored planar algebras, and establishes the unitary analog of the above equivalence. Our constructions use Baez's 2-Hilbert spaces (i.e., semisimple C\({}^{*}\)-categories equipped with unitary traces), the unitary Yoneda embedding, and the notion of unitary adjunction for dagger functors between 2-Hilbert spaces.

###### Contents

* 1 Introduction
  * 1.1 Motivations for this article
    * 1.1.1 Enriched subfactor theory
    * 1.1.2 Internal structure of bicommutant categories
    * 1.1.3 Unitary higher categories
  * 1.2 Outline
  * 1.3 Acknowledgements
* 2 Preliminaries
  * 2.1 The unitary Yoneda lemma and unitary adjunctions
  * 2.2 Unitary dual functors and spherical states
  * 2.3 Involutive functors
  * 2.4 Involutive lax monoidal functors
* 3 Unitary module tensor categories
  * 3.1 Module tensor categories
  * 3.2 Unitary module tensor categories
  * 3.3 The map \(i^{\dagger}\) and a formula for \(\operatorname{coev}^{\dagger}_{\mathbf{Tr}_{\mathcal{V}}(c)}\)
  * 3.4 Unitarity of the traciator
* 4 Unitary anchored planar algebras
  * 4.1 Planar algebras
  * 4.2 Anchored planar algebras
  * 4.3 Unitary anchored planar algebras
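As a reading aid for the abstract above, one plausible way to formalize the unitary structures it mentions (the paper's Section 2 gives the precise definitions; the trace convention below is our assumption) is that the unitary traces make every hom-space of a 2-Hilbert space a Hilbert space, and a unitary adjunction is an adjunction whose natural isomorphism is unitary for these inner products:

\[\langle f,g\rangle_{\mathcal{C}(a,b)}:=\operatorname{Tr}_{a}\!\left(g^{\dagger}\circ f\right),\qquad\mathcal{D}(Fa,b)\;\xrightarrow{\;\cong\;}\;\mathcal{C}(a,Gb)\quad\text{unitary for all }a,b.\]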
2302.08115
Tunable optical multistability induced by a single cavity mode in cavity quantum electrodynamics system
A tunable optical multistability scheme based on a single cavity mode coupled with two separate atomic transitions in an atom-cavity system is proposed and demonstrated. Under the collective strong coupling condition, multiple polariton eigenstates of the atom-cavity system are produced. The threshold and optical multistability curve can be tuned freely by system parameters in a broadband range. Moreover, a certain bistability region of the system is split to two bistability regions due to destructive quantum interference induced by an extra weak control field. Compared to traditional optical multistabilities created by two or more light fields, the proposed optical multistability scheme has compactness and is easy to be miniaturized. The proposed scheme is useful for manufacturing integrated application of multi-state all-optical logic devices and constructing basic elements of all-optical communication networks.
Liyong Wang, Yinxue Zhao, Jiajia Du
2023-02-16T06:40:54Z
http://arxiv.org/abs/2302.08115v3
# Tunable optical multistability induced by a single cavity mode in cavity quantum electrodynamics system

###### Abstract

We study tunable optical multistability in a single cavity mode coupled with multi-level atoms under strong collective coupling conditions. Two or three separate atomic transitions are excited by an appropriate input cavity field, and multiple polariton eigenstates are produced simultaneously. We show that in the nonlinear excitation regime with a moderate input field, optical multistability is created and can be manipulated by a weak control field coupled to the atoms from free space. The threshold and multistability shape can be tuned by system parameters over a broad range. Under certain conditions, a bistability region can be split into two bistability regions due to destructive quantum interference induced by the weak control field. The proposed scheme can be used as a multistate passive optical device, and is useful for applications in multi-state all-optical logic devices and all-optical communication networks.

## 1 Introduction

Over the past decades, optical bistability and optical multistability, which are manifested by the nonlinear interaction of optical media and light fields in a resonator, have been extensively studied theoretically and experimentally due to their importance for practical applications in all-optical switching [1, 2, 3, 4], all-optical coding elements [5, 6], optical memories [7, 8, 9, 10], quantum information processing [11, 12], etc. [13, 14]. Optical multistabilities may exhibit two distinct forms: one leads to three or more output values for a given input light intensity, and the other shows multiple simultaneous bistability regions in the system's input-output curve. With the rapid increase of data usage, it is an essential requirement that all-optical logical devices work in parallel with multiple channels to transport and transfer data in all-optical communication networks [15, 16]. Optical multistability may be used to realize multistate quantum elements such as all-optical switching [17], all-optical coding and routing [18], all-optical logic gates [19, 20], all-optical transistors [5, 21], etc. Compared to optical bistability, optical multistability is more difficult to obtain and control, since it usually requires high-order nonlinearity or multiple nonlinearity regions in the absorptive or dispersive spectrum of a physical optical system. It has been shown that optical multistability can be realized with three-level \(\Lambda\)-type [22] or V-type atomic systems [23]. These schemes require a strong off-resonant input field to create high-order nonlinearity [24], and the tunability of the multistability is limited. Optical tristabilities based on degenerate Zeeman sublevels created by extra magnetic fields have also been reported, but the performance of these tristabilities is limited by the Zeeman splittings, which depend on the intensities of the extra magnetic fields [25, 26, 27]. In view of this, schemes involving multi-level atoms (such as a diamond-type configuration [28], an N-type configuration [29], and a Y-type configuration [30]) interacting with two or more light fields have been proposed for realizing optical multistability. These schemes enhance the tunability of optical multistability via quantum coherence and interference in multilevel atom systems, but the extra light fields increase the system complexity. In recent years, optical multistabilities based on the spontaneously generated coherence (SGC) effect have been reported [31, 32].
However, the SGC is difficult to produce in a practical physical system due to the requirements of near-degenerate levels and nonorthogonal dipole matrix elements [29, 33]. Here we propose a new scheme for optical multistability based on a single cavity mode coupled with multi-level atoms in a cavity quantum electrodynamics (CQED) system, in which two or more atomic transitions are excited by a single cavity mode under the collective strong coupling condition. Optical multistability is induced by a single input signal field without the assistance of extra light fields, magnetic fields, etc. The nonlinear input-output curve and its threshold can be tuned and controlled by system parameters. The proposed scheme has a simple configuration and may be used as a multistate passive optical device. Hence, the scheme should be useful for applications in multistate quantum logic devices [34], multistate all-optical switches [35, 36, 37], etc. The article is organized as follows. In section 2, we describe the theoretical model in which a single cavity mode couples with three-level atoms and with four-level atoms, respectively, in an atom-cavity system. In section 3, we first analyze the optical multistability induced by a single cavity mode coupled with three-level atoms and its manipulation with an additional control field; we then analyze the optical multistability created by a single cavity mode coupled with four-level atoms. Finally, in section 4 we summarize the results and present the conclusion.

## 2 Theoretical Model

Fig. 1(a) shows the schematic setup. A unidirectional optical ring cavity consists of four mirrors M\({}_{i}\) (i = 1-4). R and T denote the reflection and transmission coefficients of mirrors M\({}_{1}\) and M\({}_{2}\), respectively, with R+T = 1. The reflectivities of mirrors M\({}_{3}\) and M\({}_{4}\) are 1. An input signal field \(E_{\text{in}}^{s}\) is injected into the cavity from the left side of cavity mirror M\({}_{1}\). The single cavity mode circulates in the ring cavity and interacts with an atomic ensemble inside the optical cavity [38, 39]; the output field \(E_{T}^{s}\) then exits from the right side of cavity mirror M\({}_{2}\). A free-space control field E\({}_{c}\) interacts with the atomic ensemble in the optical cavity. The CQED system is excited in the nonlinear regime when the optical pumping effect induced by the signal field cannot be neglected and the populations of atoms in the excited states are nonzero [40].

### Optical multistability in the coupled cavity and three-level atom system

As shown in Fig. 1(b), a single cavity mode couples three energy levels of the atoms under the strong collective coupling condition [40, 41, 42]. The input signal field \(E_{\mathrm{in}}^{s}\) is detuned from the atomic transition \(\left|1\right\rangle\rightarrow\left|3\right\rangle\) by \(\Delta_{p}=\omega_{p}-\omega_{31}\). The cavity mode detuning is defined as \(\Delta_{c}=\omega_{cav}-\omega_{31}\). Two separate atomic transitions \(\left|1\right\rangle\rightarrow\left|2\right\rangle\) and \(\left|1\right\rangle\rightarrow\left|3\right\rangle\) with Rabi frequencies \(\Omega_{1}\) and \(\Omega_{2}\) are excited simultaneously by the single cavity mode. \(\delta_{23}\) is the frequency separation of the two atomic excited states \(\left|2\right\rangle\) and \(\left|3\right\rangle\).
A weak free-space control field \(E_{c}\) with Rabi frequency \(\Omega_{c}\) couples the atomic transition \(\left|4\right\rangle\rightarrow\left|3\right\rangle\), and the control field detuning is defined as \(\Delta=\omega_{c}-\omega_{34}\). The Rabi frequencies are defined as \(\Omega_{1}=\mu_{12}E_{in}^{s}/\left(2\hbar\right)\), \(\Omega_{2}=\mu_{13}E_{in}^{s}/\left(2\hbar\right)\) and \(\Omega_{c}=\mu_{43}E_{c}/\left(2\hbar\right)\). \(\sigma_{mn}^{(i)}\) (m, n = 1-4) is the atomic operator for the \(i\)th atom. The interaction Hamiltonian of the CQED system can be written as:

\[H=-\hbar\sum_{i=1}^{N}\left(\Omega_{c}\hat{\sigma}_{34}^{(i)}+\Omega_{1}\hat{\sigma}_{21}^{(i)}+\Omega_{2}\hat{\sigma}_{31}^{(i)}\right)+H.C. \tag{1}\]

Under the rotating wave approximation, the equations of motion for the atoms follow from \(\dfrac{d\hat{\rho}}{dt}=\dfrac{1}{i\hbar}\left[\hat{H},\hat{\rho}\right]+\hat{L}\hat{\rho}\) [43], where \(\hat{L}\) denotes the dissipation superoperator.

Figure 1: Schematic diagram of optical multistability induced by a single cavity mode in the CQED system. (a) A unidirectional optical ring cavity contains an atomic ensemble confined in a magneto-optical trap (MOT) setup. (b) Atomic energy-level configuration for a single cavity mode coupled with three-level atoms. (c) Atomic energy-level configuration for a single cavity mode coupled with four-level atoms.

The equations of motion for the atoms are obtained as:

\[\dot{\sigma}_{11}=\gamma_{21}\sigma_{22}+\gamma_{31}\sigma_{33}+\gamma_{41}\sigma_{44}+i\Omega_{1}^{*}\sigma_{21}-i\Omega_{1}\sigma_{12}+i\Omega_{2}^{*}\sigma_{31}-i\Omega_{2}\sigma_{13}, \tag{2}\]

\[\dot{\sigma}_{22}=-\gamma_{2}\sigma_{22}+i\Omega_{1}\sigma_{12}-i\Omega_{1}^{*}\sigma_{21}, \tag{3}\]

\[\dot{\sigma}_{33}=-\gamma_{3}\sigma_{33}+i\Omega_{2}\sigma_{13}-i\Omega_{2}^{*}\sigma_{31}+i\Omega_{c}\sigma_{43}-i\Omega_{c}^{*}\sigma_{34}, \tag{4}\]

\[\dot{\sigma}_{44}=\gamma_{24}\sigma_{22}+\gamma_{34}\sigma_{33}-\gamma_{41}\sigma_{44}+i\Omega_{c}^{*}\sigma_{34}-i\Omega_{c}\sigma_{43}, \tag{5}\]

\[\dot{\sigma}_{23}=-[(\gamma_{2}+\gamma_{3})/2-i\delta_{23}]\sigma_{23}+i\Omega_{1}\sigma_{13}-i\Omega_{2}^{*}\sigma_{21}-i\Omega_{c}^{*}\sigma_{24}, \tag{6}\]

\[\dot{\sigma}_{42}=-[\gamma_{2}/2+i(\Delta-\Delta_{p})]\sigma_{42}-i\Omega_{1}^{*}\sigma_{41}+i\Omega_{c}^{*}\sigma_{32}, \tag{7}\]

\[\dot{\sigma}_{43}=-(\gamma_{3}/2+i\Delta)\sigma_{43}+i\Omega_{c}^{*}(\sigma_{33}-\sigma_{44})-i\Omega_{2}^{*}\sigma_{41}, \tag{8}\]

\[\dot{\sigma}_{12}=-[\gamma_{2}/2+i(\Delta_{p}+\delta_{23})]\sigma_{12}+i\Omega_{1}^{*}(\sigma_{22}-\sigma_{11})+i\Omega_{2}^{*}\sigma_{32}, \tag{9}\]

\[\dot{\sigma}_{13}=-(\gamma_{3}/2+i\Delta_{p})\sigma_{13}+i\Omega_{2}^{*}(\sigma_{33}-\sigma_{11})+i\Omega_{1}^{*}\sigma_{23}-i\Omega_{c}^{*}\sigma_{14}, \tag{10}\]

\[\dot{\sigma}_{14}=-[\gamma_{41}/2+i(\Delta-\Delta_{p})]\sigma_{14}+i\Omega_{1}^{*}\sigma_{24}+i\Omega_{2}^{*}\sigma_{34}-i\Omega_{c}\sigma_{13}, \tag{11}\]

where \(\gamma_{i}\) denotes the spontaneous decay rate of the atom from the upper state \(\left|i\right\rangle\) to the ground state \(\left|1\right\rangle\), and \(\gamma_{41}\) denotes the spontaneous decay rate between atomic states \(\left|4\right\rangle\) and \(\left|1\right\rangle\). We consider a closed atomic system, so \(\sigma_{11}+\sigma_{22}+\sigma_{33}+\sigma_{44}=1\).
Under the slowly varying envelope approximation, the dynamic equation of the signal field governed by Maxwell's equation is:

\[\frac{\partial E^{s}}{\partial t}+c\frac{\partial E^{s}}{\partial z}=i\frac{\omega_{p}}{2\varepsilon_{0}}P\left(\omega_{p}\right) \tag{12}\]

where \(P\left(\omega_{p}\right)=N(\mu_{12}\sigma_{12}+\mu_{13}\sigma_{13})\) is the polarization induced by the three-level atomic excitation by the single cavity mode. The dipole matrix elements for the atomic transitions \(\left|1\right\rangle\rightarrow\left|2\right\rangle\) and \(\left|1\right\rangle\rightarrow\left|3\right\rangle\) are assumed to be identical, i.e., \(\mu_{12}=\mu_{13}=\mu\). N is the effective number density of atoms in the cavity, \(\varepsilon_{0}\) is the permittivity of free space, and \(c\) is the speed of light in vacuum. In the steady state, the left-hand sides of Eqs. (2-11) are zero. The boundary conditions relating the input field \(E_{in}^{s}\) and the output field \(E_{T}^{s}\) of the CQED system are [12]:

\[E^{s}(0)=\sqrt{T}E_{in}^{s}+RE^{s}(\mathrm{L}) \tag{13}\]

\[E_{T}^{s}=\sqrt{T}E^{s}(\mathrm{L}) \tag{14}\]

where L is the length of the atomic ensemble. Solving Eqs. (12-14) in the mean-field limit [22, 23], the input-output relationship of the CQED system is given as:

\[y=2x-i(C_{1}\sigma_{12}+C_{2}\sigma_{13}) \tag{15}\]

where \(C_{1}=LN\omega_{p}\mu_{12}^{2}/2\hbar c\varepsilon_{0}\mathrm{T}\) and \(C_{2}=LN\omega_{p}\mu_{13}^{2}/2\hbar c\varepsilon_{0}\mathrm{T}\) are the cooperativity parameters. \(C_{1}=C_{2}=C\) since the dipole matrix elements \(\mu_{12}\) and \(\mu_{13}\) are assumed to be identical. Here \(x=\mu E_{T}^{s}/\left(2\hbar\sqrt{T}\right)\) and \(y=\mu E_{in}^{s}/\left(2\hbar\sqrt{T}\right)\). The input field intensity \(I_{in}=\left|y\right|^{2}\) and the output field intensity \(I_{T}=\left|x\right|^{2}\) can be obtained from Eq. (15).

### Optical multistability in the coupled cavity and four-level atom system

Fig. 1(c) shows that a single cavity mode can couple with four-level atoms under an appropriate frequency detuning of the input signal field [44]. The detuning of the input signal field \(E_{in}^{s}\) is defined as \(\Delta_{p}=\omega_{p}-\omega_{41}\) and the cavity mode detuning is defined as \(\Delta_{c}=\omega_{cav}-\omega_{41}\) in this case. Three separate atomic transitions \(\left|1\right\rangle\rightarrow\left|2\right\rangle\), \(\left|1\right\rangle\rightarrow\left|3\right\rangle\) and \(\left|1\right\rangle\rightarrow\left|4\right\rangle\) with Rabi frequencies \(\Omega_{1}\), \(\Omega_{2}\) and \(\Omega_{3}\) are excited simultaneously by the cavity mode under the strong collective coupling condition. Here \(\Omega_{1}=\mu_{12}E_{in}^{s}/\left(2\hbar\right)\), \(\Omega_{2}=\mu_{13}E_{in}^{s}/\left(2\hbar\right)\) and \(\Omega_{3}=\mu_{14}E_{in}^{s}/\left(2\hbar\right)\). The interaction Hamiltonian of the CQED system in this case is:

\[H=-\hbar\sum_{i=1}^{N}\left(\Omega_{1}\hat{\sigma}_{21}^{(i)}+\Omega_{2}\hat{\sigma}_{31}^{(i)}+\Omega_{3}\hat{\sigma}_{41}^{(i)}\right)+H.C.
\tag{16}\]

Under the rotating wave approximation, the equations of motion for the atoms are given by:

\[\dot{\sigma}_{11}=\sum_{i=2}^{4}\left[\gamma_{i}\sigma_{ii}+i(\Omega_{i-1}\sigma_{i1}-\Omega_{i-1}^{*}\sigma_{1i})\right], \tag{17}\]

\[\dot{\sigma}_{22}=-\gamma_{2}\sigma_{22}-i(\Omega_{1}\sigma_{21}-\Omega_{1}^{*}\sigma_{12}), \tag{18}\]

\[\dot{\sigma}_{33}=-\gamma_{3}\sigma_{33}-i(\Omega_{2}\sigma_{31}-\Omega_{2}^{*}\sigma_{13}), \tag{19}\]

\[\dot{\sigma}_{44}=-\gamma_{4}\sigma_{44}-i(\Omega_{3}\sigma_{41}-\Omega_{3}^{*}\sigma_{14}), \tag{20}\]

\[\dot{\sigma}_{12}=-\left[\frac{\gamma_{2}}{2}+i(\Delta_{p}+\delta_{24})\right]\sigma_{12}+i\Omega_{1}^{*}(\sigma_{22}-\sigma_{11})+i\Omega_{2}^{*}\sigma_{32}+i\Omega_{3}^{*}\sigma_{42}, \tag{21}\]

\[\dot{\sigma}_{13}=-\left[\frac{\gamma_{3}}{2}+i(\Delta_{p}+\delta_{34})\right]\sigma_{13}+i\Omega_{2}^{*}(\sigma_{33}-\sigma_{11})+i\Omega_{1}^{*}\sigma_{23}+i\Omega_{3}^{*}\sigma_{43}, \tag{22}\]

\[\dot{\sigma}_{14}=-\left(\frac{\gamma_{4}}{2}+i\Delta_{p}\right)\sigma_{14}+i\Omega_{3}^{*}(\sigma_{44}-\sigma_{11})+i\Omega_{1}^{*}\sigma_{24}+i\Omega_{2}^{*}\sigma_{34}, \tag{23}\]

\[\dot{\sigma}_{23}=-\left[\frac{\gamma_{2}+\gamma_{3}}{2}-i\delta_{23}\right]\sigma_{23}+i\Omega_{1}^{*}\sigma_{13}-i\Omega_{2}\sigma_{21}, \tag{24}\]

\[\dot{\sigma}_{34}=-\left[\frac{\gamma_{3}+\gamma_{4}}{2}-i\delta_{34}\right]\sigma_{34}+i\Omega_{2}^{*}\sigma_{14}-i\Omega_{3}\sigma_{31}, \tag{25}\]

\[\dot{\sigma}_{24}=-\left[\frac{\gamma_{2}+\gamma_{4}}{2}-i\delta_{24}\right]\sigma_{24}+i\Omega_{1}^{*}\sigma_{14}-i\Omega_{3}\sigma_{21}, \tag{26}\]

where \(\delta_{mn}\) (m, n = 1-4) is the frequency separation between the two atomic states \(\left|m\right\rangle\) and \(\left|n\right\rangle\). Similarly, the input-output relationship of the CQED system is given by

\[y=2x-i(C_{1}\sigma_{12}+C_{2}\sigma_{13}+C_{3}\sigma_{14}) \tag{27}\]

The cooperativity parameters are \(C_{1}=LN\omega_{p}\mu_{12}^{2}/2\hbar c\varepsilon_{0}\mathrm{T}\), \(C_{2}=LN\omega_{p}\mu_{13}^{2}/2\hbar c\varepsilon_{0}\mathrm{T}\) and \(C_{3}=LN\omega_{p}\mu_{14}^{2}/2\hbar c\varepsilon_{0}\mathrm{T}\). The dipole matrix elements for the atomic transitions \(\left|1\right\rangle\rightarrow\left|2\right\rangle\), \(\left|1\right\rangle\rightarrow\left|3\right\rangle\) and \(\left|1\right\rangle\rightarrow\left|4\right\rangle\) are assumed to be identical, i.e., \(\mu_{12}=\mu_{13}=\mu_{14}=\mu\), thus \(C_{1}=C_{2}=C_{3}=C\). The relationship between the input field intensity \(I_{in}=\left|y\right|^{2}\) and the output field intensity \(I_{T}=\left|x\right|^{2}\) of the CQED system can then be re-written as \(y=2x-i\,C(\sigma_{12}+\sigma_{13}+\sigma_{14})\).

## 3 Results

### Numerical results of multistability for the coupled cavity and three-level atom system

Unlike the basic normal-mode splitting in a two-level atom-cavity system, two separate atomic transitions \(\left|1\right\rangle\rightarrow\left|2\right\rangle\) and \(\left|1\right\rangle\rightarrow\left|3\right\rangle\) are excited simultaneously by the single cavity mode under the conditions of strong collective coupling.

Figure 2: The input-output curves of the CQED system for a single cavity mode coupled with three-level atoms, without the control field (\(\Omega_{c}\)=0). The common system parameters are \(\delta_{23}=12\Gamma\), \(\Delta_{c}=-\delta_{23}/2\), \(\Delta_{p}=0\). The solid blue line, the dash-dotted green line, and the dashed red line correspond to C=90, 180, and 380, respectively.
Fig. 2 shows the nonlinear input-output relation of the CQED system. When the control light is absent (\(\Omega_{c}\)=0), optical multistability is produced. As the cooperativity parameter \(C\) increases, both the upper and lower thresholds of the optical multistability increase. The lower threshold increases faster than the upper threshold, so the multistability area is enlarged accordingly. The multistability curve can also be influenced by the energy-level separation \(\delta_{23}\) of the two upper atomic states, the input field detuning \(\Delta_{p}\), etc. Fig. 3(a) plots the multistability curves for different \(\delta_{23}\) values. It shows that as \(\delta_{23}\) increases, the lower threshold decreases while the upper threshold increases. As a result, the lower bistability area increases and the upper bistability area decreases. Fig. 3(b) plots the multistability curves for several values of the input field detuning \(\Delta_{p}\). The multistability area enlarges as the input field frequency is tuned to the middle of the two upper atomic states, because the excitation of the two atomic transitions by the single cavity mode is enhanced. The thresholds of the multistability also increase, as the absorption of the cavity field by the atomic medium is strengthened. However, the nonlinear input-output curve changes from multistable to bistable as \(\Delta_{p}\) is tuned away from the two upper excited states, as shown by the dashed red line in Fig. 3(b): the transition \(\left|1\right\rangle\rightarrow\left|2\right\rangle\) can be neglected when the input field detuning \(\Delta_{p}\) is sufficiently large, and the CQED system then behaves as a two-level atom-cavity system.

Figure 3: (a) With \(\Delta_{p}\)=0, the output light intensity \(I_{T}\) is plotted versus the input light intensity \(I_{in}\) for several \(\delta_{23}\) values (the solid blue line, the dash-dotted green line, and the dashed red line correspond to \(\delta_{23}\)=12\(\Gamma\), 8\(\Gamma\), 4\(\Gamma\), respectively). (b) With \(\delta_{23}\)=12\(\Gamma\), the output light intensity \(I_{T}\) is plotted versus the input light intensity \(I_{in}\) for several \(\Delta_{p}\) values (the solid blue line, the dash-dotted green line, and the dashed red line correspond to \(\Delta_{p}\)=0, -3\(\Gamma\), 3\(\Gamma\), respectively).

When a control field is injected from free space and couples the atomic transition \(\left|4\right\rangle\rightarrow\left|3\right\rangle\), destructive quantum interference is induced in the CQED system, in which the cavity mode couples two atomic transitions and three polariton states are created. An electromagnetically-induced-transparency-like dip occurs at one of the polariton resonance peaks when the double resonance condition \(\Delta\)=\(\Delta_{p}\) is satisfied. Fig. 4 shows the input-output relation of the four-level atoms excited by a single cavity mode and a control field (see Fig. 1(b)). In Fig. 4(a), the control field frequency is tuned to be resonant with the atomic transition \(\left|4\right\rangle\rightarrow\left|3\right\rangle\). Due to the destructive quantum interference induced by the control field, the original lower bistability region in Fig. 3 is split into two bistability regions, and there are three bistability regions in the input-output curve of the CQED system. In Fig. 4(a), as the input field detuning \(\Delta_{p}\) increases, the threshold and area of the lower bistability region increase, while the upper bistability region does not change. That is, the destructive quantum interference induced by a weak control field changes the nonlinear properties of the system at the lower input intensities, near the polariton state where the double resonance condition is satisfied, while the nonlinear properties of the system at the higher input intensities are not affected. In Fig. 4(b), the threshold of the lower bistability region decreases as the control field intensity \(\Omega_{c}\) increases, which is attributed to the decreased absorption of the signal field by the atomic medium. Based on this, a broadband three-state all-optical switch can be designed and manipulated over a wide range of operating parameters.
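To make the origin of these curves concrete, the following is a minimal numerical sketch (not from the paper) of the standard parametric method for tracing a multistable input-output curve: sweep the intracavity amplitude \(x\), solve the steady-state master equation for the three-level scheme of Fig. 1(b) without the control field, and recover the required input \(y\) from Eq. (15). It assumes the QuTiP package; the parameter values, sign conventions, and the identification of \(\sigma_{12},\sigma_{13}\) with particular density-matrix elements are illustrative assumptions rather than the authors' exact numerics.

```python
import numpy as np
import qutip as qt

gamma2 = gamma3 = 1.0   # decay rates of |2>, |3> (units of Gamma)
delta23 = 12.0          # upper-state splitting, delta_23 = 12*Gamma
Delta_p = 0.0           # signal detuning from |1> -> |3>
C = 90.0                # cooperativity parameter

k1, k2, k3 = (qt.basis(3, i) for i in range(3))
c_ops = [np.sqrt(gamma2) * k1 * k2.dag(),   # |2> -> |1> decay
         np.sqrt(gamma3) * k1 * k3.dag()]   # |3> -> |1> decay

def io_point(x):
    """Given intracavity amplitude x, return (I_in, I_T) via Eq. (15)."""
    drive = k2 * k1.dag() + k3 * k1.dag()   # mu12 = mu13, so equal Rabi rates
    H = -(Delta_p + delta23) * k2 * k2.dag() - Delta_p * k3 * k3.dag() \
        - x * (drive + drive.dag())
    rho = qt.steadystate(H, c_ops).full()
    sigma12, sigma13 = rho[0, 1], rho[0, 2]  # coherences (convention assumed)
    y = 2 * x - 1j * C * (sigma12 + sigma13)
    return abs(y) ** 2, x ** 2

curve = [io_point(x) for x in np.linspace(0.05, 25, 500)]
```

Because the sweep runs over the output amplitude, the resulting list of \((I_{in}, I_{T})\) pairs traces the full S-shaped curve, including the branches where \(I_{T}\) is multivalued in \(I_{in}\).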
### Numerical results for the coupled cavity and four-level atom system

When a single cavity mode couples with four-level atoms in the CQED system under the strong collective coupling condition, three separate atomic transitions \(\left|1\right\rangle\rightarrow\left|2\right\rangle\), \(\left|1\right\rangle\rightarrow\left|3\right\rangle\) and \(\left|1\right\rangle\rightarrow\left|4\right\rangle\) are excited simultaneously. Fig. 5 shows the input-output relation of such a CQED system. There are three bistability regions in the input-output curve. In Fig. 5(a), as the input field detuning \(\Delta_{p}\) increases, the threshold of the upper bistability region decreases and the thresholds of the two lower bistability regions increase. This is because the strengths of the two lower atomic transitions \(\left|1\right\rangle\rightarrow\left|2\right\rangle\) and \(\left|1\right\rangle\rightarrow\left|3\right\rangle\) decrease as the input field detuning \(\Delta_{p}\) increases. In Fig. 5(b), all the thresholds of the three bistability regions increase as the atomic energy-level separations \(\delta_{23}\) and \(\delta_{34}\) increase. Based on the multi-bistability effect exhibited by a single cavity field, the proposed CQED system behaves as a multistate passive device, may be useful for multi-state all-optical switching that provides multiple signal outputs with varying field intensities, and may find applications in the construction of all-optical communication networks [45, 46] and quantum teleportation networks [15, 16].

Figure 5: The input-output curve of a single cavity mode coupled with four-level atoms in the CQED system without the control field (\(\Omega_{c}\)=0). (a) The output light intensity \(I_{T}\) versus the input light intensity \(I_{in}\) for three sets of \(\Delta_{p}\) values (the solid blue line, \(\Delta_{p}\)=-10\(\Gamma\); the dash-dotted green line, \(\Delta_{p}\)=-12.5\(\Gamma\); and the dashed red line, \(\Delta_{p}\)=-15\(\Gamma\)). Other parameters are \(\Delta_{c}\)=0, \(\delta_{23}\)=5\(\Gamma\), and \(\delta_{34}\)=10\(\Gamma\). (b) The output light intensity \(I_{T}\) versus the input light intensity \(I_{in}\) for three different sets of parameter values. The solid blue line corresponds to \(\delta_{23}\)=5\(\Gamma\), \(\delta_{34}\)=10\(\Gamma\), and \(\Delta_{p}\)=-12.5\(\Gamma\); the dash-dotted green line corresponds to \(\delta_{23}\)=4\(\Gamma\), \(\delta_{34}\)=6\(\Gamma\), and \(\Delta_{p}\)=-8\(\Gamma\); and the dashed red line corresponds to \(\delta_{23}\)=2\(\Gamma\), \(\delta_{34}\)=4\(\Gamma\), and \(\Delta_{p}\)=-5\(\Gamma\).

## 4 Conclusion

In conclusion, we propose a broadband and tunable optical multistability scheme based on a CQED system consisting of a single cavity mode and multi-level atoms. Multi-bistabilities are exhibited in the input-output curves of the cavity output field, and their thresholds can be tuned by varying the system parameters. We also showed that by inducing destructive quantum interference with an extra weak control field, an initial bistability region can be split into two bistability regions, which may be useful for realizing broadband multi-throw all-optical switching. The proposed scheme can be realized experimentally in real atomic systems with moderate system parameter requirements [47, 48].
For example, one can use the \({}^{85}\)Rb D2 line to form two separate atomic transitions coupled by a single cavity mode, \(\left|{}^{2}S_{1/2},\,F{=}2\right\rangle\rightarrow\left|{}^{2}P_{3/2},\,F{=}1\right\rangle\) and \(\left|{}^{2}S_{1/2},\,F{=}2\right\rangle\rightarrow\left|{}^{2}P_{3/2},\,F{=}2\right\rangle\), with the transition \(\left|{}^{2}S_{1/2},\,F{=}3\right\rangle\rightarrow\left|{}^{2}P_{3/2},\,F{=}2\right\rangle\) of the \({}^{85}\)Rb D2 line coupled by a weak control field, which then forms the coupling scheme depicted in Fig. 1(b). The \({}^{85}\)Rb D2 line can also be used to realize the scheme depicted in Fig. 1(c), in which the four atomic states coupled by a single cavity mode are designated as \(\left|1\right\rangle=\left|{}^{2}S_{1/2},\,F{=}2\right\rangle\), \(\left|2\right\rangle=\left|{}^{2}P_{3/2},\,F{=}1\right\rangle\), \(\left|3\right\rangle=\left|{}^{2}P_{3/2},\,F{=}2\right\rangle\), and \(\left|4\right\rangle=\left|{}^{2}P_{3/2},\,F{=}3\right\rangle\). The proposed optical multistability scheme based on a single cavity mode coupled with multi-level atoms in a CQED system can be made into a compact module [49, 50] and forms a cascaded package, which makes it compatible with integrated manufacturing applications. The proposed optical multistability scheme may also be designed as a multistate passive optical device, which may be used for applications such as multistate all-optical switching [51, 52, 53], all-optical memory [54, 55], all-optical communications [45, 46], and all-optical quantum logic elements [56].

## Acknowledgments

This work was supported by Wuhan University of Science and Technology, the Postdoctoral Applied Research Program of Qingdao (Grant No. 62350079311135), and the Postdoctoral Applied Innovation Program of Shandong (Grant No. 62350070311227).

## Conflict of interest

The authors declare that there are no conflicts of interest related to this article.

## Data availability statement

The data that support the findings of this study are available upon reasonable request from the authors.
2305.04788
Fair and Efficient Allocation of Indivisible Chores with Surplus
We study fair division of indivisible chores among $n$ agents with additive disutility functions. Two well-studied fairness notions for indivisible items are envy-freeness up to one/any item (EF1/EFX), and the standard notion of economic efficiency is Pareto optimality (PO). There is a noticeable gap between the results known for both EF1 and EFX in the goods and chores settings. The case of chores turns out to be much more challenging. We reduce this gap by providing slightly relaxed versions of the known results on goods for the chores setting. Interestingly, our algorithms run in polynomial time, unlike their analogous versions in the goods setting. We introduce the concept of $k$ surplus which means that up to $k$ more chores are allocated to the agents and each of them is a copy of an original chore. We present a polynomial-time algorithm which gives EF1 and PO allocations with $(n-1)$ surplus. We relax the notion of EFX slightly and define tEFX which requires that the envy from agent $i$ to agent $j$ is removed upon the transfer of any chore from agent $i$'s bundle to agent $j$'s bundle. We give a polynomial-time algorithm that in the chores case for $3$ agents returns an allocation which is either proportional or tEFX. Note that proportionality is a very strong criterion in the case of indivisible items, and hence both notions we guarantee are desirable.
Hannaneh Akrami, Bhaskar Ray Chaudhury, Jugal Garg, Kurt Mehlhorn, Ruta Mehta
2023-05-08T15:41:04Z
http://arxiv.org/abs/2305.04788v3
# Fair and Efficient Allocation of Indivisible Chores with Surplus ###### Abstract We study fair division of indivisible chores among \(n\) agents with additive disutility functions. Two well-studied fairness notions for indivisible items are envy-freeness up to one/any item (EF1/EFX), and the standard notion of economic efficiency is Pareto optimality (PO). There is a noticeable gap between the results known for both EF1 and EFX in the goods and chores settings. The case of chores turns out to be much more challenging. We reduce this gap by providing slightly relaxed versions of the known results on goods for the chores setting. Interestingly, our algorithms run in polynomial time, unlike their analogous versions in the goods setting. We introduce the concept of \(k\) surplus which means that up to \(k\) more chores are allocated to the agents and each of them is a copy of an original chore. We present a polynomial-time algorithm which gives EF1 and PO allocations with \((n-1)\) surplus. We relax the notion of EFX slightly and define tEFX which requires that the envy from agent \(i\) to agent \(j\) is removed upon the transfer of any chore from agent \(i\)'s bundle to agent \(j\)'s bundle. We give a polynomial-time algorithm that in the chores case for 3 agents returns an allocation which is either proportional or tEFX. Note that proportionality is a very strong criterion in the case of indivisible items, and hence both notions we guarantee are desirable. ## 1 Introduction Fair division of a set of indivisible items among agents is a fundamental area with applications in various multi-agent settings. The items can be either goods (providing positive utility) or chores (providing negative utility). The case of goods has been vastly studied [3]. On the other hand, the case of chores is relatively new. In both settings, given a set \(N=[n]\) of \(n\) agents and a set \(M=[m]\) of \(m\) items, the goal is to find an allocation \(X=\langle X_{1},\ldots,X_{n}\rangle\) satisfying some fairness and efficiency criteria where agent \(i\) receives the bundle \(X_{i}\) for all \(i\in[n]\). In this paper, we focus on fair division of chores when each agent \(i\) has a disutility function \(d_{i}:2^{M}\rightarrow\mathbb{R}_{\geq 0}\) which indicates how much agent \(i\) dislikes each subset \(S\subseteq M\) of the chores. We assume that each \(d_{i}\) is additive, i.e., \(d_{i}(S)=\sum_{j\in S}d_{i}(\{j\})\). Envy-freeness is one of the most accepted notions of fairness. In the chores setting, allocation \(X\) is envy-free if for every pair of agents \(i\) and \(j\), \(d_{i}(X_{i})\leq d_{i}(X_{j})\). However, envy-freeness is too strong to be satisfied.1 Hence, to obtain positive results we need to relax the fairness notion. Therefore, we study envy-freeness up to one item (EF1), envy-freeness up to transferring any item (tEFX) and proportionality as our fairness criteria. For efficiency, we consider (fractional) Pareto optimality (fPO). Footnote 1: For example, consider division of one chore among two agents. ### EF1 and fPO with Surplus for \(n\) Agents An allocation \(X=\langle X_{1},\ldots,X_{n}\rangle\) is Pareto optimal (PO), if there exists no allocation \(Y=\langle Y_{1},\ldots,Y_{n}\rangle\) such that \(d_{i}(Y_{i})\leq d_{i}(X_{i})\) for all agents \(i\) and for some agent \(j\), \(d_{j}(Y_{j})<d_{j}(X_{j})\). For Pareto optimality, we assume \(Y\) is an integral allocation. A stronger notion is fractional Pareto optimality (fPO) which allows \(Y\) to be a fractional allocation. 
In a fractional allocation \(y=\langle y_{1},\ldots,y_{n}\rangle\), \(y_{i,c}\) is the fraction of chore \(c\in[m]\) allocated to agent \(i\) with \(\sum_{i\in[n]}y_{i,c}=1\) and \(y_{i}=(y_{i,1},\ldots,y_{i,m})\) is \(i\)'s bundle. Then \(d_{i}(y_{i})=\sum_{c\in[m]}y_{i,c}\cdot d_{i}(\{c\})\) is the disutility of agent \(i\) in the fractional allocation. Fractional Pareto optimality (fPO). Allocation \(x\) is fractionally Pareto optimal or fPO, if there exists no fractional allocation \(y\) such that \(d_{i}(y_{i})\leq d_{i}(x_{i})\) for all \(i\) and for some agent \(j\), \(d_{j}(y_{j})<d_{j}(x_{j})\). Envy-freeness up to one chore (EF1). Allocation \(X=\langle X_{1},\ldots,X_{n}\rangle\) is EF1 if for all \(i,j\in N\), \(d_{i}(X_{i})\leq d_{i}(X_{j})\) or there exists a chore \(c\in X_{i}\) such that \(d_{i}(X_{i}\setminus\{c\})\leq d_{i}(X_{j})\). EF1 is defined for the goods setting analogously, with the difference that the good should be removed from the bundle of the envied agent [13]. For both goods and chores settings, EF1 allocations are known to exist, and they can also be computed in polynomial time [32, 9]. However, the outputs of these algorithms are not guaranteed to be efficient. Satisfying EF1 and PO simultaneously turns out to be a challenging problem. In the goods setting under additive valuations, Caragiannis et al. [15] proved that any allocation with maximum Nash welfare is EF1 and PO. Later, Barman et al. [6] gave a pseudopolynomial-time algorithm for computing an EF1 and PO allocation, which was recently improved to output an EF1 and fPO allocation [28]. In the case of chores on the other hand, the existence of EF1 and PO allocations is a big open problem. Similar results on chores are known for very limited settings of bivalued disutilities [29, 22], three agents [30] and when chores are divided into two types [4]. In this paper, we make progress in this line of work by proving that given additive disutilities, there exists an EF1 and fPO allocation with \((n-1)\) surplus. The analogue of surplus in the goods setting is charity, which is a well-accepted concept, and it means that some goods might remain unallocated. Caragiannis et al. [14] introduced the notion of EFX with charity. Many follow-up papers proved relaxations of envy-freeness with charity [20, 8, 2, 33, 7]. In the chores setting, by "\(k\) surplus", we mean that all the chores are allocated, and at most \(k\) extra chores are allocated to the agents, and each of these chores is a copy of an original chore. One motivation behind defining the concept of surplus for chores is the lack of progress on the original problem for over half a decade. It may well be that an allocation that is both EF1 and PO does not always exist, and in that case, the concept of surplus seems a good alternative. Moreover, duplicating chores makes sense for many applications. For instance, consider the task of distributing papers among reviewers. The goal is to have all papers reviewed and also be fair toward the reviewers. To this end, it does no harm if a few papers are reviewed more than needed. Another practical scenario is when the chores are going to be repeated. Consider the case where the same set of chores needs to be done every month. This can happen in households, corporations, etc. In this case, duplicating some chore \(c\) \(k\) times means that we already decide which agents should do \(c\) in the following \(k\) months. 
Thus, when planning for the next \(k\) months, we can remove \(c\) from the set of chores that need to be assigned. Our first main result is formally stated in Theorem 1. **Theorem 1**.: _Given additive disutilities, there exists an allocation with at most \((n-1)\) surplus which is EF1 and fPO. Moreover, it can be computed in polynomial time._ Note that the allocation in Theorem 1 being fPO means that no allocation with the same surplus fractionally Pareto dominates it. Our approach is based on rounding of competitive equilibrium with equal incomes (CEEI). Since there is no polynomial-time algorithm known for computing a CEEI, we round a \((1-\epsilon)\)-approximate CEEI for \(\epsilon=\frac{1}{5nm}\), which can be computed in polynomial time [17]. By integrally assigning chores which are fractionally allocated in the \((1-\epsilon)\)-CEEI, we guarantee that the final allocation is fPO. However, the main challenge here is to achieve the EF1 guarantee with at most \(n-1\) surplus, which requires careful rounding. ### tEFX or Proportionality for 3 Agents The discrepancy between known results for the goods and chores settings carries over even for instances with a small number of agents. In the goods setting, EFX allocations always exist for 3 agents with additive utilities [19]. However, the analogous problem for chores is open. An allocation \(X=\langle X_{1},\ldots,X_{n}\rangle\) is EFX if for all agents \(i\) and \(j\) and all chores \(c\in X_{i}\), \(d_{i}(X_{i}\setminus\{c\})\leq d_{i}(X_{j})\). The existence of EFX allocations for chores has been studied in the very limited settings of 3 agents with bivalued disutilities [37] and also when agents have the same ordinal preferences on the chores [31]. Let us briefly discuss the technique to obtain EFX for three agents in the goods setting and why it fails in the chores setting. In [19], the high-level idea is to start with an empty allocation and at each step, allocate some unallocated goods to some agents, possibly take away some goods from them or move the bundles among the agents while guaranteeing that the partial allocation is EFX at the end of each step. Basically, the algorithm moves in the space of partial EFX allocations, improving a sophisticated potential function at each step, and terminates when it reaches a complete allocation. This algorithm relies on involved concepts such as _champion-graphs_ and _half-bundles_. In the goods setting, by allocating more goods, we make progress in the sense of improving agents' utilities. However, in the chores setting, by allocating more chores, we make the agents less happy. Therefore, it is not easy to adapt the algorithm and come up with a potential function which improves after more chores get allocated. In fact, the existence of allocations satisfying even weaker notions of fairness than EFX, like tEFX, is open for the chores setting even when \(n=3\). Yin and Mehta [36] proved the existence of a tEFX allocation for three agents if two of them have additive disutility functions and the ratio of their highest to lowest cost is bounded by two. Envy-freeness up to transferring any chore (tEFX). An allocation is tEFX if no agent \(i\) envies another agent \(j\) after transferring any chore from \(i\)'s bundle to \(j\)'s bundle. Formally, allocation \(X\) is tEFX if for all agents \(i\) and \(j\) and any chore \(c\in X_{i}\), \(d_{i}(X_{i}\setminus\{c\})\leq d_{i}(X_{j}\cup\{c\})\). We note that given additive utility/disutility functions, tEFX is stronger than EF2X studied in [2]. 
EF2X guarantees that any envy is removed upon the removal of any two items from the envied/envious bundle. Recently, Akrami et al. [1] gave an alternative proof for the existence of EFX allocations for three agents in the goods setting which overcomes the mentioned barrier. We use similar techniques, and instead of moving in the space of partial fair allocations and terminating when reaching a complete allocation, we move in the space of complete allocations and stop when we reach a fair allocation. Our technique resembles the cut-and-choose protocol used for fairly allocating items among two agents. In cut and choose, whether the resource is divisible or indivisible, one agent divides it into two parts so that she finds both parts fair. Then the second agent chooses her favorite part and the remaining part goes to the first agent. A similar idea for the case of three agents would be to find a partition \((X_{1},X_{2},X_{3})\) such that agent \(1\) finds \(X_{1}\) and \(X_{2}\) fair and agent \(2\) finds \(X_{2}\) and \(X_{3}\) fair. This way the third agent can choose her favorite bundle and the remaining bundles can be fairly allocated to the two remaining agents. An allocation \(X\) is proportional if for every agent \(i\), \(d_{i}(X_{i})\leq d_{i}(M)/n\). Note that proportionality is too strong to be satisfied when chores are indivisible.2 We show that given any instance comprising three agents with additive disutilities, in polynomial time one can find an allocation that is either proportional or tEFX; the choice of alternative is made by the algorithm. Note that the EFX result for \(3\) agents in the goods setting is existential and although the approach is constructive, the algorithm is not polynomial. Our second main result is stated in Theorem 2. Footnote 2: Again consider the counter example of two agents and one chore. **Theorem 2**.: _Given an instance comprising three agents with additive disutilities, and a set of indivisible chores, there exists an allocation \(X\), such that for all \(i\in[3]\)_ * _either_ \(d_{i}(X_{i})\leq 1/3\cdot d_{i}(M)\)_, or_ * _for all_ \(c\in X_{i}\) _and_ \(j\in[3]\)_, we have_ \(d_{i}(X_{i}\setminus\{c\})\leq d_{i}(X_{j}\cup\{c\})\)_._ _Furthermore, such an allocation can be determined in polynomial time._ We remark that although our result does not fully settle the existence of tEFX allocations in the chores setting, the guarantees in Theorem 2 are indeed desirable, especially given that no relaxation of envy-freeness other than EF1 is currently known to exist in the chores setting. Proportionality is a very desirable property of an allocation and is often unattainable in the discrete setting. In fact, the discrete fair division protocol used in Spliddit3, prior to the Nash-welfare maximization algorithm in 20154, first checks for a proportional allocation and, only if proportional allocations are unattainable, attempts to find relaxations of envy-freeness. There is also research in discrete fair division that attempts to give as many agents as possible their proportional share [24], whilst satisfying certain relaxations of classical fairness notions. Footnote 3: spliddit.org Footnote 4: This is elaborated in the Introduction of [15]. ### Further Related Work The notion of CEEI has a long history dating back to classical theories in microeconomics [25]. When agents have linear utilities, the set of CEEI with goods is known to be convex, and the equilibrium prices are unique [23]. 
Such properties have facilitated the formulation of several polynomial time algorithms [21, 35]. In contrast, CEEI with chores forms a non-convex disconnected set [10] and admits several equilibrium prices. Branzei and Sandomirskiy [12] give a polynomial-time algorithm when the number of agents or the number of goods is constant, which was later extended in [27, 26] to the case of mixed manna containing both goods and chores. Later, Chaudhury et al. [16] gave a complementary pivot algorithm for finding a CEEI for the case of mixed manna, which runs fast in practice and is provably polynomial-time when the number of agents or the number of items (goods and chores) is constant. Recently, Boodaghians et al. [11] and Chaudhury et al. [18] have given polynomial time algorithms for computing \((1-\varepsilon)\)-CEEI. However, the complexity of finding an exact CEEI in the chores setting is open. Moreover, Fisher markets that admit integral equilibria are studied in [5]. ## 2 Preliminaries An instance of discrete fair division with chores is given by the tuple \(\langle N,M,\mathcal{D}\rangle\), where \(N=[n]\) is the set of \(n\) agents, \(M=[m]\) is the set of \(m\) indivisible chores and \(\mathcal{D}=(d_{1}(\cdot),\ldots,d_{n}(\cdot))\), where each \(d_{i}:2^{M}\to\mathbb{R}_{\geq 0}\) is the disutility function of agent \(i\). For all agents \(i\), \(d_{i}\) is assumed to be _normalized_, i.e., \(d_{i}(\emptyset)=0\), and _monotone_, i.e., \(d_{i}(S\cup\{c\})\geq d_{i}(S)\) for all \(S\subseteq M\) and \(c\notin S\). A function \(f:2^{M}\to\mathbb{R}_{\geq 0}\) is said to be _additive_ if \(f(S)=\sum_{s\in S}f(\{s\})\) for all \(S\subseteq M\). For ease of notation, we use \(c\) instead of \(\{c\}\). For \(\oplus\in\{\leq,\geq,<,>\}\), we use \(S\oplus_{i}T\) for \(d_{i}(S)\oplus d_{i}(T)\). Fisher market. In the Fisher market setting for chores, in addition to a set \(N\) of agents, a set \(M\) of chores and a disutility profile \(\mathcal{D}\), each agent \(i\) has an initial liability \(\ell_{i}>0\) which specifies how much money this agent should earn in the market. We denote the Fisher market instance by \(F=\langle N,M,\mathcal{D},\mathcal{L}\rangle\) where \(\mathcal{L}=(\ell_{1},\ldots,\ell_{n})\). Given the instance \(F\), a market outcome is a pair \(\langle x,p\rangle\) of a fractional allocation and a payment vector. For all agents \(i\) and chores \(c\), \(x_{i,c}\) denotes what fraction of \(c\) is assigned to \(i\) and \(p_{c}\) denotes the price of chore \(c\). The income of agent \(i\) from market outcome \(\langle x,p\rangle\) is \(p(x_{i})=\sum_{c\in M}x_{i,c}p_{c}\). We can also treat integral bundles as vectors with \(0\) and \(1\) entries. Given payment vector \(p\), the pain per buck of agent \(i\) for chore \(c\) is \(d_{i}(c)/p_{c}\). We denote the minimum pain per buck of agent \(i\) at payment \(p\) by \(\text{MPB}_{i}\), i.e., \(\text{MPB}_{i}=\min_{c\in M}d_{i}(c)/p_{c}\). 
**Definition 1**.: _Given a Fisher market instance \(F\), a market outcome \(\langle x,p\rangle\) is a Fisher market equilibrium if_ * _the market clears, i.e., for all chores_ \(c\in M\)_,_ \(\sum_{i\in[n]}x_{i,c}=1\)_, and_ * _for all agents_ \(i\)_,_ \(\sum_{c\in M}x_{i,c}\cdot p_{c}=\ell_{i}\)_, and_ * _all agents only receive chores with minimum pain per buck, i.e., for all agents_ \(i\) _and chores_ \(c\)_, if_ \(x_{i,c}>0\)_, then_ \(d_{i}(c)/p_{c}=\text{MPB}_{i}\)_._ If for all agents \(i\), \(\ell_{i}=1\), then a Fisher equilibrium is called a competitive equilibrium with equal incomes or CEEI. Bogomolnaia et al. [10] proved that a CEEI always exists when agents have linear disutilities. For goods, any Fisher equilibrium is fPO [34]. The same holds true for chores, as essentially the same argument shows. **Proposition 1**.: _Given additive disutilities, any Fisher equilibrium is fractionally Pareto Optimal._ Proof.: Let \(\langle x,p\rangle\) be a Fisher equilibrium and let \(y\) be any other allocation. Then \[\sum_{i\in[n]}\frac{d_{i}(x_{i})}{\text{MPB}_{i}} =\sum_{i\in[n]}\sum_{c\in M}x_{i,c}p_{c}\] \[=\sum_{c\in M}p_{c}\] \[=\sum_{i\in[n]}\sum_{c\in M}y_{i,c}p_{c}\] \[\leq\sum_{i\in[n]}\sum_{c\in M}\frac{y_{i,c}d_{i}(c)}{\text{MPB}_{i}}\] \[=\sum_{i\in[n]}\frac{d_{i}(y_{i})}{\text{MPB}_{i}}.\] Hence, it cannot be the case that \(d_{i}(y_{i})\leq d_{i}(x_{i})\) for all \(i\in[n]\) with one strict inequality. Given a market outcome \(\langle x,p\rangle\), the payment graph of \(x\) is a weighted bipartite (undirected) graph with one part consisting of nodes corresponding to the \(n\) agents and one part consisting of nodes corresponding to the \(m\) chores. We denote the payment graph of \(x\) by \(G_{\langle x,p\rangle}\). There is an edge between agent \(i\) and chore \(c\) if and only if \(x_{i,c}>0\). For any edge \(\{i,c\}\) in \(G_{\langle x,p\rangle}\), the weight of \(\{i,c\}\) is \(e_{i,c}=x_{i,c}\cdot p_{c}\), which is the earning of agent \(i\) from chore \(c\) in this market. For any graph \(G\), we denote the set of edges of \(G\) by \(E(G)\). There is no known polynomial time algorithm for computing a CEEI. However, Boodaghians et al. [11] gave an exterior point algorithm to compute a \((1-\epsilon)\)-CEEI in polynomial time. The running time was improved by a combinatorial algorithm in [17]. Namely, a \((1-\epsilon)\)-CEEI can be computed in time polynomial in the size of the input and \(\frac{1}{\epsilon}\). In a \((1-\epsilon)\)-CEEI, the income of each agent is between \(1-\epsilon\) and \(1+\epsilon\). We formally define \((1-\epsilon)\)-CEEI below. **Definition 2**.: _Given a Fisher market \(F\), a market outcome \(\langle x,p\rangle\) is a \((1-\epsilon)\)-CEEI, for an \(\epsilon\in[0,1]\), if_ * _the market clears, i.e., for all chores_ \(c\in M\)_,_ \(\sum_{i\in[n]}x_{i,c}=1\)_, and_ * _for all agents_ \(i\)_,_ \(1-\epsilon\leq\sum_{c\in M}x_{i,c}\cdot p_{c}\leq 1+\epsilon\)_, and_ * _all agents only receive chores with minimum pain per buck, i.e., for all agents_ \(i\) _and chores_ \(c\)_, if_ \(x_{i,c}>0\)_, then_ \(d_{i}(c)/p_{c}=\text{MPB}_{i}\)_._ Similar to envy-freeness and its relaxations, we can define payment envy-freeness and its relaxations. In particular, given a payment vector \(p=(p_{1},\ldots,p_{m})\) for the chores, an integral allocation \(X\) is _payment envy-free up to one chore_ or pEF1, if for all agents \(i\) and \(j\), either \(X_{i}=\emptyset\) or there exists a chore \(c\in X_{i}\) such that \(p(X_{i}\setminus c)\leq p(X_{j})\). 
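To make these notions concrete, the following minimal sketch checks pEF1 with respect to a payment vector, and EF1 with respect to additive disutilities (Proposition 2 below relates the two). This is an illustration, not code from the paper; the data layout (bundles as sets of chore indices, disutilities as a matrix) is an assumption made for the example.

```python
def is_pEF1(bundles, prices):
    """pEF1: for all agents i, j, either X_i is empty or some chore
    c in X_i satisfies p(X_i \\ c) <= p(X_j)."""
    pay = [sum(prices[c] for c in B) for B in bundles]
    for i, Bi in enumerate(bundles):
        if not Bi:
            continue  # an empty bundle trivially satisfies the condition
        for j in range(len(bundles)):
            if i != j and not any(pay[i] - prices[c] <= pay[j] for c in Bi):
                return False
    return True


def is_EF1(bundles, d):
    """EF1 for chores with additive disutilities: d[i][c] is agent i's
    disutility for chore c. Either agent i does not envy agent j, or the
    envy vanishes after removing some chore from i's own bundle."""
    for i, Bi in enumerate(bundles):
        own = sum(d[i][c] for c in Bi)
        for j, Bj in enumerate(bundles):
            if i == j:
                continue
            other = sum(d[i][c] for c in Bj)
            if own <= other:
                continue
            if not any(own - d[i][c] <= other for c in Bi):
                return False
    return True
```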
**Proposition 2** (Lemma 3.5 in Ebadian _et al._, 2022).: _If an integral allocation \(X\) is pEF1 with respect to payment vector \(p\) and \(\langle X,p\rangle\) is a Fisher equilibrium, then \(X\) is EF1._ Proof.: Consider any two agents \(i,j\) such that \(X_{i}\neq\emptyset\). Let \(c\in X_{i}\) be such that \(p(X_{i}\setminus c)\leq p(X_{j})\). We have \[X_{i}\setminus c =_{i}\operatorname{MPB}_{i}\cdot p(X_{i}\setminus c) (\langle X,p\rangle\text{ is a Fisher equilibrium})\] \[\leq\operatorname{MPB}_{i}\cdot p(X_{j}) (X\text{ is pEF1})\] \[=\operatorname{MPB}_{i}\cdot\sum_{c\in X_{j}}\frac{p(c)}{d_{i}(c)}\cdot d_{i}(c)\] \[\leq_{i}X_{j}. (\operatorname{MPB}_{i}=\min_{c\in M}d_{i}(c)/p(c))\] ## 3 EF1 + fPO + Surplus In this section, we prove that after introducing at most \(n-1\) chores, an allocation exists which is EF1 and fPO at the same time. Each of these new chores is a copy of an existing chore. Moreover, we compute such an allocation in polynomial time. The high-level idea is to first consider a fractional allocation \(x\) which admits a \((1-\epsilon)\)-CEEI for \(\epsilon=\frac{1}{5nm}\). Then to each agent, we fully allocate some of the chores that are fractionally allocated to her in \(x\). This way, each agent only receives her MPB chores and therefore the allocation is fPO. Furthermore, we guarantee that each agent earns at least \(1-\epsilon\) amount of money and that there exists a chore whose removal drops the earned money below \(1-\epsilon\). This way, we can also guarantee the EF1 property for the allocation. In order to achieve such an allocation, we allocate some chores to multiple agents and hence we need multiple copies of some of the chores. However, we prove that the number of required copies does not exceed \(n-1\). Basically, our algorithm introduces at most \(n-1\) copies of the existing chores and finds an integral Fisher equilibrium where each agent earns \(1-\epsilon\) amount of money up to one chore. **Lemma 1**.: _Given any Fisher equilibrium \(\langle x,p\rangle\) for a Fisher market \(F\), there exists a polynomial time algorithm \(\operatorname{\mathsf{makeAcyclic}}(x,p)\) that computes allocation \(y\) such that \(\langle y,p\rangle\) is a Fisher equilibrium for \(F\) and \(G_{\langle y,p\rangle}\) is acyclic._ Proof.: We define \(\operatorname{\mathsf{makeAcyclic}}(x,p)\) as follows. If \(G_{\langle x,p\rangle}\) is acyclic then return \(\langle x,p\rangle\). Otherwise, as long as \(G_{\langle x,p\rangle}\) has a cycle do the following. Let \(C=(a_{1},c_{1},\dots,a_{t},c_{t},a_{1})\) be a cycle in \(G_{\langle x,p\rangle}\) where the \(a_{i}\) correspond to the agent nodes and the \(c_{i}\) to the chore nodes. Let us denote the earning of agent \(i\) from chore \(c\) in allocation \(x\) as \(e_{i,c}^{x}=x_{i,c}\cdot p_{c}\). Without loss of generality, assume \(e_{a_{1},c_{1}}^{x}\) is minimum among all \(e_{i,j}^{x}\) where \((i,j)\) is an edge in \(C\), and let \(e=e_{a_{1},c_{1}}^{x}\). Now consider the allocation \(y\) where for all \(i\in[t]\), \(e_{a_{i},c_{i}}^{y}=e_{a_{i},c_{i}}^{x}-e\) and \(e_{a_{i},c_{i-1}}^{y}=e_{a_{i},c_{i-1}}^{x}+e\) (indices taken cyclically, so \(c_{0}=c_{t}\)). For all other pairs \((i,j)\), \(e_{i,j}^{y}=e_{i,j}^{x}\). At the end of each iteration of detecting a cycle and computing \(y\), set \(x\gets y\). First we prove that \(\operatorname{\mathsf{makeAcyclic}}(x,p)\) terminates in polynomial time. Let \(x\) be the allocation at the beginning of each iteration of detecting a cycle and \(y\) be the allocation at the end of the iteration. 
Note that \(E(G_{\langle y,p\rangle})\subsetneq E(G_{\langle x,p\rangle})\) since the edge \((a_{1},c_{1})\) exists in \(G_{\langle x,p\rangle}\) but not in \(G_{\langle y,p\rangle}\). Since at each step the number of edges decreases and each step terminates in polynomial time, the procedure terminates in polynomial time and in the end \(G_{\langle y,p\rangle}\) is acyclic. Now we prove the final \(\langle y,p\rangle\) is a Fisher equilibrium by induction. Note that at the beginning \(\langle x,p\rangle\) is a Fisher equilibrium. Now assuming \(\langle x,p\rangle\) is a Fisher equilibrium at the beginning of an iteration of removing an edge, we prove that at the end of that iteration \(\langle y,p\rangle\) is a Fisher equilibrium too. Note that for each chore \(c\), \(\sum_{i\in[n]}e_{i,c}^{y}=\sum_{i\in[n]}e_{i,c}^{x}=p_{c}\). Thus, all chores are fully allocated in \(y\). Also, for each agent \(i\), \(\sum_{c\in M}e_{i,c}^{y}=\sum_{c\in M}e_{i,c}^{x}=\ell_{i}\). Moreover, for all agents \(i\) and chores \(c\), if \(y_{i,c}>0\), then \(x_{i,c}>0\). Therefore, in \(y\), as in \(x\), agents only receive MPB chores. This means that \(\langle y,p\rangle\) is also a Fisher equilibrium. Now we explain Algorithm 1. Given instance \(\mathcal{I}\), let \(\epsilon=\frac{1}{5nm}\) and \(\langle x,p\rangle=\texttt{approxCEEI}(\mathcal{I},\epsilon)\) be the \((1-\epsilon)\)-CEEI computed in polynomial time by [17]. First we run \(\texttt{makeAcyclic}(x,p)\) to make \(G_{\langle x,p\rangle}\) acyclic. Then, we compute the integral allocation \(Y\) as follows. Our algorithm consists of two phases. We start with \(G=G_{\langle x,p\rangle}\) and during Phase 1, we alter \(G\). At each point in time, let \(y\) be such that \(G\) is the payment graph of \(\langle y,p\rangle\) (i.e. \(G=G_{\langle y,p\rangle}\)). Let \(N_{v}\) be the set of the neighbors of node \(v\) in \(G\). Phase 1. Start from an empty allocation \(Y\) and run Phase 1 as long as there is an unallocated chore \(c^{*}\) such that \(|N_{c^{*}}|=1\). Phase 1 of the algorithm consists of two steps. Basically, as long as there exists an unallocated chore \(c^{*}\) with \(|N_{c^{*}}|=1\), run Step 1 and then Step 2. Step 1. For all unallocated chores \(c\) with \(|N_{c}|=1\), let \(i_{c}\) be the agent such that \(N_{c}=\{i_{c}\}\). Then add \(c\) to \(Y_{i_{c}}\). Step 2. For all agents \(i\) and chores \(c\) such that \(\{i,c\}\in E(G)\), if for all chores \(c^{\prime}\in Y_{i}\cup\{c\}\), \(p((Y_{i}\cup c)\setminus c^{\prime})>1-\epsilon\), then distribute the earning of agent \(i\) from chore \(c\) equally among the other neighbors of \(c\) and remove the edge \(\{i,c\}\) from \(G\). Recall that \(e_{j,c}=x_{j,c}p_{c}\) is the earning agent \(j\) receives from chore \(c\) in the market outcome \(\langle x,p\rangle\). Formally, for all \(j\in N_{c}\setminus\{i\}\), we set \[e_{j,c}\gets e_{j,c}+\frac{y_{i,c}\cdot p_{c}}{|N_{c}|-1}.\] Phase 2. The second phase starts when for all unallocated chores \(c\), \(|N_{c}|\neq 1\). In Lemma 2 we prove the case \(|N_{c}|=0\) is not possible and therefore for all remaining chores \(c\), \(|N_{c}|>1\). Each of the connected components of \(G\) is a tree. For each of the trees \(T\) do the following. Take an arbitrary agent \(i_{0}\) in \(T\) and consider \(T\) rooted at \(i_{0}\). For agent \(i_{0}\), as long as \(p(Y_{i_{0}})<1-\epsilon\), keep adding chores from \(N_{i_{0}}\setminus Y_{i_{0}}\) to \(Y_{i_{0}}\). Then iterate on the agents in \(T\) in a breadth-first order and for each agent \(i\) do the following. 
Let \(c_{i}\) be the chore corresponding to the parent of agent \(i\) in \(T\). If \(c_{i}\) is not allocated yet, add it to \(Y_{i}\), i.e., \(Y_{i}\gets Y_{i}\cup\{c_{i}\}\). Then, keep adding the chores in \(N_{i}\setminus(Y_{i}\cup\{c_{i}\})\) to \(Y_{i}\) until \(p(Y_{i})\geq 1-\epsilon\) or until we run out of chores. Note that all chores in \(N_{i}\setminus(Y_{i}\cup\{c_{i}\})\) correspond to children nodes of agent \(i\) in \(T\). If at the end of this process \(p(Y_{i})<1-\epsilon\), add a copy of \(c_{i}\) to \(Y_{i}\). Algorithm 1 shows the pseudocode of our algorithm. In the rest of this section we prove that the final allocation \(Y\) is pEF1 and fPO with at most \((n-1)\) surplus. **Observation 1**.: _For all agents \(i\), \(p(y_{i})\geq 1-\epsilon\) at any time during Phase \(1\)._ Proof.: The proof is by induction. At the beginning of the algorithm, \(y=x\) and thus the claim holds. Now fix an agent \(i\) and let \(y\) be the allocation such that \(G=G_{\langle y,p\rangle}\) before deleting an edge \(e\) and \(y^{*}\) be the allocation such that \(G\setminus\{e\}=G_{\langle y^{*},p\rangle}\). Assuming \(p(y_{i})\geq 1-\epsilon\), we prove \(p(y_{i}^{*})\geq 1-\epsilon\). If \(e\) is not incident to \(i\), then \(p(y_{i}^{*})\geq p(y_{i})\) and thus the claim holds. If \(e\) is incident to \(i\), then \(p((Y_{i}\cup c)\setminus c^{\prime})>1-\epsilon\) for all \(c^{\prime}\in Y_{i}\cup\{c\}\). Therefore, \(p(Y_{i})=p((Y_{i}\cup c)\setminus c)>1-\epsilon\). Note that all chores in \(Y_{i}\) are incident to \(i\) in \(G_{\langle y^{*},p\rangle}\). Therefore, \(p(y_{i}^{*})\geq p(Y_{i})>1-\epsilon\). **Observation 2**.: _For all agents \(i\), \(p(y_{i})\leq 1+(2n-1)\epsilon\) at any time during Phase \(1\)._ Proof.: At the beginning of the algorithm, since \(y=x\), we have \(\sum_{i\in N}p(y_{i})\leq(1+\epsilon)n\). Allocation \(y\) changes during Phase \(1\) when an edge is deleted in Step 2. Upon the deletion of edge \(\{i,c\}\), \(y_{i,c}\cdot p_{c}\) is distributed among the neighbors of \(c\) (in case any such neighbors exist). Therefore, the value of \(\sum_{i\in N}p(y_{i})\) cannot increase during Phase 1. Thus, for all agents \(i\) at any point during Phase 1 we have \[(1+\epsilon)n \geq\sum_{j\in N}p(x_{j})\geq\sum_{j\in N}p(y_{j})\] \[\geq(1-\epsilon)(n-1)+p(y_{i}).\] (by Observation 1) Therefore, \(p(y_{i})\leq 1+(2n-1)\epsilon\). **Lemma 2**.: _Before the execution of Phase \(2\), for all unallocated chores \(c\), \(N_{c}\neq\emptyset\)._ Proof.: Towards a contradiction, assume at some point during Phase 1, \(\{i,c\}\) is the only edge incident to \(c\) and it gets deleted. Let \(y\) be such that \(G=G_{\left\langle y,p\right\rangle}\) just before deleting \(\{i,c\}\). Note that during Phase 1, as long as a chore has an incident edge, it remains fully allocated. Therefore, \(y_{i,c}=1\) since \(i\) is the only neighbor of \(c\). Also, \(p(Y_{i})>1-\epsilon\) (otherwise \(\{i,c\}\) would not be deleted). We have \[1+(2n-1)\epsilon \geq p(y_{i})\] (by Observation 2) \[\geq p(Y_{i})+y_{i,c}\cdot p_{c}\] \[\geq(1-\epsilon)+p_{c}.\] (by Observation 1) Thus, \(p_{c}\leq 2n\epsilon\). Together with Observation 2 we get \[p(Y_{i}\cup c)\leq 1+(2n-1)\epsilon+2n\epsilon<1+4n\epsilon. \tag{1}\] Let \(c^{*}\) be a chore with maximum \(p_{c^{*}}\) in \(Y_{i}\cup\{c\}\). By the pigeonhole principle, \(p_{c^{*}}\geq\frac{p(Y_{i}\cup c)}{|Y_{i}\cup\{c\}|}\geq\frac{p(Y_{i}\cup c)}{m}\). 
Thus \[p((Y_{i}\cup c)\setminus c^{*}) \leq\frac{m-1}{m}\cdot p(Y_{i}\cup c)\] \[<\frac{m-1}{m}\cdot(1+4n\epsilon)\] (by Inequality (1)) \[\leq 1-\epsilon,\] (since \(\epsilon=\frac{1}{5nm}\)) which contradicts the edge \(\{i,c\}\) getting deleted. Therefore, for all chores \(c\), at least one incident edge of \(c\) remains until the end of Phase 1. **Observation 3**.: _All the chores in \(M\) are allocated in \(Y\)._ Proof.: By Lemma 2, at the beginning of Phase 2 no unallocated chore is isolated in \(G\). In Phase 2, all the chores that are the parent of some agent in \(T\) get allocated. Moreover, the leaf chores in \(T\) got allocated in Phase 1. Hence, all the chores are allocated in \(Y\). **Observation 4**.: _The number of copied chores in \(Y\) is at most \(n-1\)._ Proof.: In Phase 1, no chore is allocated more than once. Consider the step in which we allocate chores to agent \(i\) when iterating on \(T\) in breadth-first order. Note that except \(c_{i}\), all the chores that we allocate to \(i\) are her children nodes. Since we run BFS on \(T\), these children chores had not been assigned to any other agent before. Therefore, for each non-root agent, we might need to copy one chore, namely her parent node. Thus, the number of copied chores is at most \(n-1\). **Observation 5**.: _For all agents \(i\), \(p(Y_{i})\geq 1-\epsilon\)._ Proof.: Fix an agent \(i\). Since \(\langle x,p\rangle\) is a \((1-\epsilon)\)-CEEI, \(p(x_{i})\geq 1-\epsilon\). Note that if at some iteration of Step 2, an adjacent edge of \(i\) is deleted, then \(p(Y_{i})\geq 1-\epsilon\). Now assume no adjacent edge of \(i\) is deleted. Let \(X_{i}=\{c\in M|x_{i,c}>0\}\). We have \(p(X_{i})\geq p(x_{i})\geq 1-\epsilon\). Note that all the chores in \(X_{i}\) which are not added to \(Y_{i}\) in Phase 1 are either children of \(i\) in \(T\) or her parent node. In either of the cases, as long as \(p(Y_{i})<1-\epsilon\), we add these chores to \(Y_{i}\). If we stop before adding all the chores in \(X_{i}\) to \(Y_{i}\), it means that the condition \(p(Y_{i})\geq 1-\epsilon\) is satisfied. Otherwise we have \(Y_{i}=X_{i}\) and thus \(p(Y_{i})\geq 1-\epsilon\). **Observation 6**.: _For all agents \(i\), there exists a chore \(c\in Y_{i}\) such that \(p(Y_{i}\setminus c)<1-\epsilon\)._ Proof.: Consider \(Y\) at the end of Phase 1. By Observation 2, \(p(Y_{i})\leq p(y_{i})\leq 1+(2n-1)\epsilon\). Let \(c\) be the chore with maximum \(p_{c}\) in \(Y_{i}\). We have \[p(Y_{i}\setminus c) \leq\frac{m-1}{m}\cdot p(Y_{i}) (p_{c}\geq p(Y_{i})/m\text{ by the pigeonhole principle})\] \[\leq\frac{m-1}{m}\cdot(1+(2n-1)\epsilon) (\text{by Observation 2})\] \[\leq 1-\epsilon. (\text{since }\epsilon=\tfrac{1}{5nm})\] Therefore, there exists a chore \(c\in Y_{i}\) such that \(p(Y_{i}\setminus c)\leq 1-\epsilon\) before the execution of Phase 2. Also, there exists a chore \(c\in Y_{i}\cup\{c_{i}\}\) such that \(p((Y_{i}\cup\{c_{i}\})\setminus c)<1-\epsilon\). Otherwise, the edge \((i,c_{i})\) would be deleted before Phase 2. So if in Phase 2 no chore is added to \(Y_{i}\) or only \(c_{i}\) is added to \(Y_{i}\), the claim holds. Otherwise, let \(c\) be the last chore added to \(Y_{i}\). Since we stop adding chores to \(Y_{i}\) the moment \(p(Y_{i})\geq 1-\epsilon\), we have \(p(Y_{i}\setminus c)<1-\epsilon\). Now we are ready to prove Theorem 1. **Theorem 1**.: _Given additive disutilities, there exists an allocation with at most \((n-1)\) surplus which is EF1 and fPO. 
Moreover, it can be computed in polynomial time._ Proof.: Let \(Y\) be the output of Algorithm 1. Let \(M^{\prime}\) be the set of copied chores that are allocated in \(Y\) in addition to the chores in \(M\). First we prove that \(\langle Y,p\rangle\) is a Fisher equilibrium for the market given by \(\langle N,M\cup M^{\prime},\mathcal{D},(p(Y_{1}),\dots,p(Y_{n}))\rangle\). By Observation 3, the market clears. Since \(\langle x,p\rangle\) is a \((1-\epsilon)\)-CEEI for \(\langle N,M,\mathcal{D}\rangle\), for each agent \(i\), all the chores in \(X_{i}=\{c\in M|x_{i,c}>0\}\) are MPB chores. Since \(Y_{i}\subseteq X_{i}\), all the chores in \(Y_{i}\) are also MPB chores. In the end, it is clear that each agent \(i\) earns \(p(Y_{i})\). So all the conditions of a Fisher equilibrium hold for \(\langle Y,p\rangle\). Now we prove each of the properties for \(Y\) separately. EF1. By Observations 5 and 6, \(Y\) is pEF1. Since \(\langle Y,p\rangle\) is a Fisher equilibrium, by Proposition 2, \(Y\) is EF1. fPO. By Proposition 1, every Fisher equilibrium is fPO. \((n-1)\) surplus. By Observation 3, all the chores in \(M\) are allocated and by Observation 4, the size of the surplus is at most \(n-1\). Now we prove that Algorithm 1 terminates in polynomial time. The subroutine makeAcyclic runs in \(\operatorname{poly}(n,m)\) and \(\texttt{approxCEEI}(x,\epsilon)\) runs in \(\operatorname{poly}(n,m)\) for \(\epsilon=\tfrac{1}{5nm}\). Step 1 can be executed at most \(m\) times since in each iteration of Step 1 a chore gets allocated. Step 2 can be executed at most \(m+n-1\) times since in each iteration of Step 2 an edge gets deleted. Phase 2 is a BFS subroutine which terminates in \(\operatorname{poly}(n,m)\). Therefore, the total running time of Algorithm 1 is polynomial with respect to \(n\) and \(m\). Remark. The bound \(n-1\) on the size of the surplus is tight for Algorithm 1. Consider the instance with \(n\) agents and one chore \(c\) with disutility \(1\) for all the agents. Then any \((1-\epsilon)\)-CEEI (for \(\epsilon=\tfrac{1}{5n}\)) allocates some fraction of \(c\) to all of the agents and Algorithm 1 copies \(c\) \(n-1\) times and allocates one copy to each agent. ## 4 Fairness Among Three Agents Given an allocation \(X=\langle X_{1},X_{2},\ldots,X_{n}\rangle\), we say that an agent \(i\) _strongly envies_ an agent \(j\) if and only if \(X_{i}\setminus c>_{i}X_{j}\cup c\) for some \(c\in X_{i}\). Thus, an allocation is a tEFX allocation if there is no strong envy between any pair of agents. We now introduce certain concepts that will be useful in this section. **Definition 3** (tEFX feasibility).: _Given a partition \(X=(X_{1},X_{2},\ldots,X_{n})\) of \(M\), a bundle \(X_{k}\) is tEFX-feasible to agent \(i\) if and only if for all chores \(c\in X_{k}\) and all \(j\in[n]\),_ \[X_{k}\setminus c\leq_{i}X_{j}\cup c.\] _Therefore an allocation \(X=\langle X_{1},X_{2},\ldots,X_{n}\rangle\) is tEFX if and only if for each agent \(i\), \(X_{i}\) is tEFX-feasible._ Note that when agents have additive disutility functions, \(X_{k}\) is tEFX-feasible for agent \(i\) if and only if for all \(j\in[n]\), \(X_{k}\setminus c^{*}\leq_{i}X_{j}\cup c^{*}\) for \(c^{*}=\operatorname{argmin}_{c\in X_{k}}d_{i}(c)\). EFX-feasibility is defined in the same way. Formally, given a partition \(X=(X_{1},X_{2},\ldots,X_{n})\) of \(M\), a bundle \(X_{k}\) is EFX-feasible to agent \(i\) if and only if for all chores \(c\in X_{k}\) and all \(j\in[n]\), \(X_{k}\setminus c\leq_{i}X_{j}\). 
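Under additive disutilities, the shortcut noted after Definition 3 (checking only \(c^{*}=\operatorname{argmin}_{c\in X_{k}}d_{i}(c)\)) reduces tEFX-feasibility to a constant number of comparisons per bundle pair, since \(X_{k}\setminus c\leq_{i}X_{j}\cup c\) rewrites to \(d_{i}(X_{k})-d_{i}(X_{j})\leq 2d_{i}(c)\). A minimal illustrative sketch, using the same data layout as the EF1 checker above (not code from the paper):

```python
def is_tEFX_feasible(k, i, bundles, d):
    """Definition 3 under additivity: X_k is tEFX-feasible for agent i
    iff d_i(X_k) - d_i(X_j) <= 2 * d_i(c) for all j and all c in X_k;
    the binding chore is the one in X_k that agent i dislikes least."""
    Xk = bundles[k]
    if not Xk:
        return True  # an empty bundle can never cause strong envy
    c_min = min(d[i][c] for c in Xk)
    own = sum(d[i][c] for c in Xk)
    return all(own - sum(d[i][c] for c in Xj) <= 2 * c_min
               for j, Xj in enumerate(bundles) if j != k)


def is_tEFX(bundles, d):
    """An allocation is tEFX iff each agent's own bundle is tEFX-feasible for her."""
    return all(is_tEFX_feasible(i, i, bundles, d) for i in range(len(bundles)))
```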
Restricting attention to non-degenerate instances is no loss of generality and simplifies the arguments; this is true both for the allocation of goods and of chores. Here, non-degeneracy means that no two distinct bundles of chores are valued the same by any agent. Chaudhury et al. [19] showed that to prove the existence of EFX allocations in the goods setting, when agents have additive valuations, it suffices to show the existence of EFX allocations for all non-degenerate instances. We adapt their approach and in Appendix A we show that the same claim holds, even when agents have additive disutilities and the notion of fairness is tEFX. _Henceforth, in the rest of this section we assume that the given instance is non-degenerate, implying that every agent has positive disutility for every chore._ In this section we prove Theorem 2. We start with an allocation which is EFX assuming all agents' disutility functions are \(d_{1}\). During the algorithm we maintain a partition \((X_{1},X_{2},X_{3})\) of the chores such that all the following invariants hold. **Invariant 1**.: \(X_{1}\) _and \(X_{2}\) are tEFX-feasible for agent \(1\)._ **Invariant 2**.: _For all \(i\in[2]\) and \(c\in X_{i}\), \(X_{i}\setminus c\leq_{1}X_{3}\)._ **Invariant 3**.: \(X_{3}\) _is tEFX-feasible for agent \(3\)._ We use the potential function \(\Phi(X)=|X_{1}|+|X_{2}|\). Each iteration of our algorithm updates the allocation such that the new allocation is proportional or tEFX, or satisfies all the invariants and has a smaller potential value. Since the value of the potential is at most \(m\), the number of iterations is at most \(m\). Li et al. [31] proved that when agents have an identical ordering on the chores, an EFX allocation can be computed in polynomial time. For completeness, we prove Lemma 3. **Lemma 3**.: _When all agents have the additive disutility function \(d\), Algorithm 2 returns an EFX allocation in time \(\mathcal{O}(m\log m)\)._ Proof.: The proof is by induction on the number of allocated chores. In the beginning, the empty allocation is EFX. Now assume the allocation is EFX right before allocating chore \(c_{i}\) to agent \(j\). It suffices to prove that agent \(j\) does not strongly envy any other agent. For all chores \(c\in X_{j}\cup\{c_{i}\}\) and all agents \(j^{\prime}\neq j\) we have \[d((X_{j}\cup\{c_{i}\})\setminus\{c\})=d(X_{j})+d(c_{i})-d(c)\leq d(X_{j})\leq d(X_{j^{\prime}}),\] where the equality uses the additivity of \(d\), the first inequality uses \(d(c_{i})\leq d(c)\), and the second inequality uses \(j=\operatorname{argmin}_{\ell\in[n]}d(X_{\ell})\). Sorting the chores according to their disutility takes \(\mathcal{O}(m\log m)\) time. We keep the pairs \((d(X_{i}),X_{i})\) in a priority queue, which takes \(\mathcal{O}(n\log n)\) time to initialize. Then each round of allocating a chore requires a _delete-min_ action (\(\mathcal{O}(\log n)\)) and an _insert_ (\(\mathcal{O}(\log n)\)). Hence, Algorithm 2 terminates in time \(\mathcal{O}(m\log m+n\log n+m\log n)=\mathcal{O}(m\log m)\). ```
Input: Instance \(\mathcal{I}=([n],M,d)\)
Output: allocation \(X\)
\(X\leftarrow\langle\emptyset,\emptyset,\ldots,\emptyset\rangle\)
Let \(d(c_{1})\geq d(c_{2})\geq\ldots\geq d(c_{m})\)
for \(i\gets 1\) to \(m\) do
    Let \(j=\operatorname{argmin}_{\ell\in[n]}d(X_{\ell})\)
    \(X_{j}\gets X_{j}\cup\{c_{i}\}\)
Return \(X\)
``` **Algorithm 2** EFX-Identical Initially, we run Algorithm 2 with \(d=d_{1}\) to obtain allocation \(X\). Note that all of \(X_{1}\), \(X_{2}\) and \(X_{3}\) are EFX-feasible for agent 1. 
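For reference, Algorithm 2 translates directly into a few lines of Python, with a binary heap playing the role of the priority queue from the proof of Lemma 3. Representing the common disutility as a plain list of per-chore costs is an assumption made for this sketch.

```python
import heapq

def efx_identical(d, n):
    """Algorithm 2 (EFX-Identical): d is the common additive disutility,
    given as a list of per-chore costs; n is the number of agents.
    Chores are handed out in non-increasing cost order, each time to a
    currently least-loaded bundle. Returns a list of n bundles (sets)."""
    order = sorted(range(len(d)), key=lambda c: -d[c])  # d(c_1) >= ... >= d(c_m)
    heap = [(0.0, j) for j in range(n)]                 # (d(X_j), j) pairs
    heapq.heapify(heap)
    bundles = [set() for _ in range(n)]
    for c in order:
        load, j = heapq.heappop(heap)                   # j = argmin_l d(X_l)
        bundles[j].add(c)
        heapq.heappush(heap, (load + d[c], j))
    return bundles

# For the three-agent procedure above, this is invoked with n = 3 and d = d_1.
```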
Without loss of generality, assume \(X_{3}\leq_{3}X_{1}\leq_{3}X_{2}\), i.e., \(d_{3}(X_{3})\leq d_{3}(X_{1})\leq d_{3}(X_{2})\). Then, since all bundles are EFX-feasible for agent 1, Invariants 1 and 2 hold and since \(X_{3}\) is the favorite bundle of agent 3, Invariant 3 holds too. If \(X_{1}\) or \(X_{2}\) is tEFX-feasible for agent 3, we can allocate a tEFX-feasible bundle to each of the agents. Without loss of generality assume \(X_{2}\) is also tEFX-feasible for agent 3. Then we let agent 2 pick her favorite bundle. If she picks \(X_{2}\), we assign \(X_{1}\) to agent 1 and \(X_{3}\) to agent 3. If agent 2 picks \(X_{1}\), then we assign \(X_{2}\) to agent 1 and \(X_{3}\) to agent 3. The case that agent 2 picks \(X_{3}\) is symmetric. Now we assume that \(X_{3}\) is the only tEFX-feasible bundle for agent 3. Let \(c_{1}=\text{\it argmin}_{c\in X_{1}}d_{3}(c)\). Then the algorithm moves \(c_{1}\) from \(X_{1}\) to \(X_{3}\). Let \(X_{1}^{\prime}=X_{1}\setminus c_{1}\), \(X_{2}^{\prime}=X_{2}\) and \(X_{3}^{\prime}=X_{3}\cup c_{1}\). The next step of the algorithm depends on whether \(X_{2}^{\prime}\) is tEFX-feasible for agent 1 or not. In Lemma 4 we show that if \(X_{2}^{\prime}\) is tEFX-feasible for agent 1 then \(X^{\prime}\) satisfies all the invariants. **Observation 7**.: _Let \(c_{1}=\text{\it argmin}_{c\in X_{1}}d_{3}(c)\). If \(X_{3}\) is the only tEFX-feasible bundle for agent \(3\) and \(X_{1}\leq_{3}X_{2}\), then \(X_{1}\setminus c_{1}>_{3}X_{3}\cup c_{1}\)._ Proof.: Assume otherwise. For all \(c\in X_{1}\) we have \[X_{1}\setminus c \leq_{3}X_{1}\setminus c_{1} (c_{1}\leq_{3}c\text{ and additivity of }d_{3})\] \[\leq_{3}X_{3}\cup c_{1}\] \[\leq_{3}X_{3}\cup c. (c_{1}\leq_{3}c\text{ and additivity of }d_{3})\] Since \(X_{1}\leq_{3}X_{2}\), \(X_{1}\) is tEFX-feasible for agent 3 which is a contradiction. **Lemma 4**.: _If \(X_{2}^{\prime}\) is tEFX-feasible for agent \(1\), then Invariants 1, 2 and 3 hold._ Proof.: For all \(c\in X_{1}^{\prime}\) and \(i\in\{2,3\}\) we have \[X_{1}^{\prime}\setminus c \leq_{1}X_{1}\setminus c (X_{1}^{\prime}\subset X_{1})\] \[\leq_{1}X_{i}\cup c (X_{1}\text{ is tEFX-feasible for agent 1})\] \[\leq_{1}X_{i}^{\prime}\cup c. (X_{i}\subseteq X_{i}^{\prime})\] Therefore, Invariant 1 holds. Also, for all \(i\in[2]\) and \(c\in X_{i}^{\prime}\) \[X_{i}^{\prime}\setminus c \leq_{1}X_{i}\setminus c (X_{i}^{\prime}\subseteq X_{i})\] \[\leq_{1}X_{3} (\text{Invariant 2 holds for }X)\] \[\leq_{1}X_{3}^{\prime}. (X_{3}\subset X_{3}^{\prime})\] Thus, Invariant 2 holds. By Observation 7, we have \(X_{1}^{\prime}>_{3}X_{3}^{\prime}\). Also, \(X_{2}^{\prime}=_{3}X_{2}\geq_{3}X_{1}\geq_{3}X_{1}^{\prime}\). Hence, \(X_{3}^{\prime}\) is the favorite bundle of agent 3 and is tEFX-feasible for her. Therefore, Invariant 3 holds as well. After moving \(c_{1}\), we have \(\Phi(X^{\prime})=|X_{1}^{\prime}|+|X_{2}^{\prime}|=|X_{1}|-1+|X_{2}|<\Phi(X)\). Thus, if \(X_{2}^{\prime}\) is tEFX-feasible for agent 1, by Lemma 4 all the invariants hold and also the potential function decreases. Now we assume that \(X_{2}^{\prime}\) is not tEFX-feasible for agent 1. As long as the second bundle is not tEFX-feasible for agent 1, keep moving chores from \(X_{2}^{\prime}\) to \(X_{1}^{\prime}\) in non-decreasing order of \(d_{1}(\cdot)\). Formally, let \(X_{2}^{\prime}=\{c_{1}^{\prime},c_{2}^{\prime},\ldots,c_{k}^{\prime}\}\) and \(c_{1}^{\prime}\leq_{1}c_{2}^{\prime}\leq_{1}\ldots\leq_{1}c_{k}^{\prime}\). 
Then \(Y_{1}=X_{1}^{\prime}\cup\{c_{1}^{\prime},\ldots,c_{\ell}^{\prime}\}\) and \(Y_{2}=X_{2}^{\prime}\setminus\{c_{1}^{\prime},\ldots,c_{\ell}^{\prime}\}\) such that \(Y_{1}<_{1}Y_{2}\) and \(Y_{1}\cup c_{\ell+1}^{\prime}\geq_{1}Y_{2}\setminus c_{\ell+1}^{\prime}\). Note that \(\ell\geq 1\). Let \(Y_{3}=X_{3}^{\prime}\). **Lemma 5**.: _Invariants 1 and 2 hold for \(Y\)._ Proof.: We have \[Y_{1} <_{1}Y_{2}\leq_{1}X_{2}^{\prime}\setminus c_{1}^{\prime} (Y_{2}\subseteq X_{2}^{\prime}\setminus c_{1}^{\prime})\] \[\leq_{1}X_{3}^{\prime} (\text{Invariant 2 holds for }X^{\prime}\text{ by Lemma 4})\] \[=_{1}Y_{3} (Y_{3}=X_{3}^{\prime})\] Therefore, Invariant 2 holds. We also know that for all \(c^{\prime}\in Y_{2}\), \(c^{\prime}\geq_{1}c_{\ell+1}^{\prime}\). Hence, for all \(c^{\prime}\in Y_{2}\), \[Y_{1}\cup c^{\prime}\geq_{1}Y_{1}\cup c_{\ell+1}^{\prime}\geq_{1}Y_{2}\setminus c_{\ell+1}^{\prime}\geq_{1}Y_{2}\setminus c^{\prime}.\] Since \(Y_{1}<_{1}Y_{2}\), Invariant 1 holds too. Now if \(Y_{3}\) is tEFX-feasible for agent 3, then all the invariants hold and \(\Phi(Y)=|Y_{1}|+|Y_{2}|=|X_{1}^{\prime}|+|X_{2}^{\prime}|=|X_{1}|+|X_{2}|-1<\Phi(X)\). In Section 4.1, we prove that if \(Y_{3}\) is not tEFX-feasible for agent 3, we can obtain a proportional allocation. ### Proportional Allocation When \(Y_{3}\) Is Not tEFX-feasible for Agent \(3\) In the following observations, we prove that \(Y_{1}\) and \(Y_{2}\) are proportional for agent 1, and \(Y_{2}\) and \(Y_{3}\) are proportional for agent 3. Then without any further modification of the bundles, we allocate these bundles to the agents such that the final allocation is proportional. **Observation 8**.: \(d_{3}(Y_{3})<d_{3}(M)/3\) Proof.: We have \[X_{2}^{\prime}=_{3}X_{2}\geq_{3}X_{1} (X_{2}^{\prime}=X_{2})\] \[\geq_{3}X_{1}^{\prime} (X_{1}^{\prime}=X_{1}\setminus c_{1})\] \[>_{3}X_{3}^{\prime} (\text{Observation 7})\] \[=_{3}Y_{3}. (Y_{3}=X_{3}^{\prime})\] Hence, \(d_{3}(X_{3}^{\prime})<d_{3}(X_{1}^{\prime})\) and \(d_{3}(X_{3}^{\prime})<d_{3}(X_{2}^{\prime})\). By additivity of \(d_{3}(\cdot)\), we get that \(d_{3}(X_{3}^{\prime})<d_{3}(M)/3\). **Observation 9**.: _If \(Y_{3}\) is not tEFX-feasible for agent \(3\), then \(d_{3}(Y_{2})<d_{3}(M)/3\)._ Proof.: We have \[Y_{1}\geq_{3}X_{1}^{\prime} (X_{1}^{\prime}\subset Y_{1})\] \[>_{3}X_{3}^{\prime} (\text{Observation 7})\] \[=_{3}Y_{3}. (Y_{3}=X_{3}^{\prime})\] Since \(Y_{3}\) is not tEFX-feasible for agent \(3\), it cannot be her favorite bundle. Since \(d_{3}(Y_{1})>d_{3}(Y_{3})\), we have \(d_{3}(Y_{2})<d_{3}(Y_{3})\). By Observation 8, \(d_{3}(Y_{3})<d_{3}(M)/3\). Hence, \(d_{3}(Y_{2})<d_{3}(M)/3\). Finally, in Observation 10, we prove that \(d_{1}(Y_{1})\leq d_{1}(M)/3\) and \(d_{1}(Y_{2})\leq d_{1}(M)/3\). **Observation 10**.: \(d_{1}(Y_{1})\leq d_{1}(M)/3\) _and \(d_{1}(Y_{2})\leq d_{1}(M)/3\)._ Proof.: Consider the allocation \(\langle X_{1}\cup c_{1}^{\prime},X_{2}\setminus c_{1}^{\prime},X_{3}\rangle\). Note that since Invariants 1 and 2 hold for \(X\), we have \(d_{1}(X_{2}\setminus c_{1}^{\prime})\leq d_{1}(X_{1}\cup c_{1}^{\prime})\) and \(d_{1}(X_{2}\setminus c_{1}^{\prime})\leq d_{1}(X_{3})\). By additivity of \(d_{1}\), we have \(d_{1}(X_{2}\setminus c_{1}^{\prime})\leq d_{1}(M)/3\). Now it suffices to prove that \(d_{1}(Y_{1})\leq d_{1}(X_{2}\setminus c_{1}^{\prime})\) and \(d_{1}(Y_{2})\leq d_{1}(X_{2}\setminus c_{1}^{\prime})\). Note that \(d_{1}(Y_{1})<d_{1}(Y_{2})\) and \(Y_{2}\subseteq X_{2}\setminus c_{1}^{\prime}\). 
Therefore, we have \(d_{1}(Y_{1})<d_{1}(Y_{2})\leq d_{1}(X_{2}\setminus c_{1}^{\prime})\). At this stage of the algorithm, by Observation 10 we have that \(d_{1}(Y_{1})\leq d_{1}(M)/3\) and \(d_{1}(Y_{2})\leq d_{1}(M)/3\). Also by Observations 8 and 9, we have \(d_{3}(Y_{3})<d_{3}(M)/3\) and \(d_{3}(Y_{2})<d_{3}(M)/3\). Now we let agent 2 pick her favorite bundle. Let it be \(Y_{i}\). Clearly, \(d_{2}(Y_{i})\leq d_{2}(M)/3\). As already argued before, no matter which bundle agent 2 chooses, we can allocate one of \(Y_{1}\) or \(Y_{2}\) to agent 1 and one of \(Y_{2}\) or \(Y_{3}\) to agent 3. Therefore, we obtain a proportional allocation. ## 5 Conclusion We have introduced the concept of \(k\) surplus and showed the existence of an allocation that is both EF1 and fPO with at most \(n-1\) surplus in the case of indivisible chores. Furthermore, such an allocation can be computed in polynomial time. A natural open question is whether there exists an allocation that is both EF1 and fPO with fewer than \(n-1\) surplus chores. Our second result shows the existence of an allocation of indivisible chores among 3 agents that is either tEFX or proportional. Since proportionality is a very strong guarantee, which is not possible to satisfy for every instance, this result is the first non-trivial result for a slight relaxation of EFX for 3 agents. A natural open question is whether EFX allocations exist for 3 agents.
2310.18117
Do we need scan-matching in radar odometry?
There is a current increase in the development of "4D" Doppler-capable radar and lidar range sensors that produce 3D point clouds where all points also have information about the radial velocity relative to the sensor. 4D radars in particular are interesting for object perception and navigation in low-visibility conditions (dust, smoke) where lidars and cameras typically fail. With the advent of high-resolution Doppler-capable radars comes the possibility of estimating odometry from single point clouds, foregoing the need for scan registration which is error-prone in feature-sparse field environments. We compare several odometry estimation methods, from direct integration of Doppler/IMU data and Kalman filter sensor fusion to 3D scan-to-scan and scan-to-map registration, on three datasets with data from two recent 4D radars and two IMUs. Surprisingly, our results show that the odometry from Doppler and IMU data alone gives similar or better results than 3D point cloud registration. In our experiments, the average position error can be as low as 0.3% over 1.8 and 4.5 km trajectories. That allows accurate estimation of 6DOF ego-motion over long distances also in feature-sparse mine environments. These results are useful not least for applications of navigation with resource-constrained robot platforms in feature-sparse and low-visibility conditions such as mining, construction, and search & rescue operations.
Vladimír Kubelka, Emil Fritz, Martin Magnusson
2023-10-27T13:04:00Z
http://arxiv.org/abs/2310.18117v1
# Do we need scan-matching in radar odometry? ###### Abstract There is a current increase in the development of "4D" Doppler-capable radar and lidar range sensors that produce 3D point clouds where all points also have information about the radial velocity relative to the sensor. 4D radars in particular are interesting for object perception and navigation in low-visibility conditions (dust, smoke) where lidars and cameras typically fail. With the advent of high-resolution Doppler-capable radars comes the possibility of estimating odometry from single point clouds, foregoing the need for scan registration which is error-prone in feature-sparse field environments. We compare several odometry estimation methods, from direct integration of Doppler/IMU data and Kalman filter sensor fusion to 3D scan-to-scan and scan-to-map registration, on three datasets with data from two recent 4D radars and two IMUs. Surprisingly, our results show that the odometry from Doppler and IMU data alone gives similar or better results than 3D point cloud registration. In our experiments, the average position error can be as low as 0.3% over 1.8 and 4.5 km trajectories. That allows accurate estimation of 6DOF ego-motion over long distances also in feature-sparse mine environments. These results are useful not least for applications of navigation with resource-constrained robot platforms in feature-sparse and low-visibility conditions such as mining, construction, and search & rescue operations. Index Terms: 4D Radar, Radar Odometry, Mobile robot, Localization. ## I Introduction Rapid development in millimeter wave imaging radars, driven by the automotive industry, has enabled localization and mapping in environments where we expect deteriorated visibility conditions and dirt deposition on the sensors. Autonomous vehicles in the mining industry, construction, or search & rescue are example applications that demand such capability. Modern imaging radars, similarly to 3D lidars, provide 3D scans of the surroundings. They are additionally able to estimate the radial velocity of each sensed 3D point by leveraging precise phase measurements of the returning signal. This Doppler velocity, as we further denote it, has proven to be advantageous for odometry methods, aiding in the segmentation of dynamic and static objects [1], as well as introducing more constraints to the ego-motion estimation [2, 3]. Moreover, the velocity measurement comes without the need to perform data association, which can be challenging in feature-sparse environments, such as underground mines. In recent years, several approaches to radar odometry and simultaneous localization and mapping (SLAM) have emerged. Motivated by the problem of developing a SLAM system for an underground mining environment, we compare several representative radar odometry estimation methods. To that end, we deploy them on three datasets that include two distinct modern imaging radars. Two datasets have been recorded with our mobile sensor rig: in an underground mine (Fig. 1) and in an outdoor testing site for large wheel loaders (Figs. 3 and 4). The third dataset has been published by Zhang et al. [4] and represents a structured urban environment. Surprisingly, using the simplest method of directly fusing the Doppler-based radar ego-velocity with the orientation provided by an inertial measurement unit (IMU), we are able to achieve localization drift as low as 0.3% over \(4.5\,\mathrm{km}\) and \(1.8\,\mathrm{km}\) trajectories from the mine and the outdoor testing site. 
We find this experimental result useful for designing localization and mapping systems for the mentioned applications and worth spreading in the robotics community for further investigation. Moreover, we make our dataset publicly available at [https://github.com/kubelvla/mine-and-forest-radar-dataset](https://github.com/kubelvla/mine-and-forest-radar-dataset) as the high-grade radars are still difficult to obtain. ## II Related Work In this section, we briefly review radar odometry algorithms based on 2D scanning radars. Then, more closely relevant to this article, we focus on the state of the art in modern 4D radar odometry. Fig. 1: Detail from the Kvarntorp mine environment captured by two sensor modalities, lidar and radar. The radar modality suffers from limited field of view (FOV), lower resolution and fewer returns. It is however more suitable for low-visibility conditions expected in mining. Classical 2D scanning radars usually provide the scans in the form of _spectral images_, which encode signal intensity in the radial direction for every azimuth they measure during a single radar scan. Cen et al. [5, 6] proposed methods to extract radar key points from such spectral images and showed how their approach improves the scan matching. Later works such as Barnes et al. [7] include machine-learning techniques to improve the key point detection. Burnett et al. [8] show the importance of addressing motion distortion and Doppler shift in the radar data. Contrary to searching for key points, Adolfsson et al. [9] approach the problem from the point cloud perspective, focusing on local geometry that could better constrain the matching process. Park et al. [10] avoid point matching altogether by applying the Fourier-Mellin transform to find correlation between subsequent radar scans. The rapid development of 4D Doppler-capable imaging radars opens new possibilities in object detection, motion estimation and localization. The surveys by Venon et al. [11] and Zhou et al. [12] provide a comprehensive overview of the state of the art in this research direction. The mmWave sensors by Texas Instruments have gained a lot of attention, as they are lightweight and still provide tens of 3D points with Doppler velocity. Doer and Trommer [2] propose a loosely-coupled extended Kalman filter (EKF) filtering method, which fuses radar ego-velocity, inertial and barometric measurements to track the pose of an unmanned aerial vehicle (UAV). They develop a RANSAC-based least squares optimization algorithm to extract the radar ego-velocity from the radar data. In their later work [13], they incorporate global navigation satellite system (GNSS) measurements as well. In our work, we adopt their open-source implementation to perform experiments with our sensor suite. Contrary to a filtering approach, Kramer et al. [14] propose a sliding window optimization algorithm that fuses Doppler and inertial measurements to track the pose of a mobile sensor rig. They verify their results in underground and outdoor environments. Focusing on the UAV application, Michalczyk et al. [15, 3] propose a tightly-coupled, EKF-based radar odometry. Their approach includes point matching, where the perceived distance between the matched points, together with the Doppler velocity, serves as the residual vector for the EKF. Lu et al. [16] take a different direction: both radar and inertial measurements are processed by a deep neural network (DNN) to estimate the pose of a mobile agent. 
They use a convolutional neural network (CNN) to extract features from the radar data and a recurrent network to analyze the inertial data. The features are fused in subsequent DNN stages to produce pose estimates.

As 4D imaging radars evolve and their resolution increases, classical 3D scan-matching methods become feasible. Zhuang et al. [17] fuse inertial data, Doppler-based ego-motion estimates and scan-to-submap constraints with an iterative EKF to obtain radar odometry. In a separate module, they complete the system into a full SLAM solution by performing a generalized ICP (GICP)-based loop closure and global map optimization. They use a Continental ARS548 radar with \(300\,\mathrm{m}\) range and \(0.3\,\mathrm{m}\) resolution that produces approximately 400 points per scan. Zhang et al. [4] choose a classical SLAM approach with their Oculii Eagle radar sensor (approximately 5000 points per scan in their public dataset). They modify the SLAM framework from Koide et al. [18] by adding a modified GICP matching algorithm that takes into account the specific spatial uncertainties in radar point clouds. Since their work and dataset are open-source, we include them in our experimental evaluation.

## III Radar odometry variants

From the variety of radar odometry methods, we choose a representative set that is available open-source, is applicable to our sensors, and spans from simple sensor fusion to advanced scan-matching.

### _Doppler velocity and IMU_

The simplest approach to pose estimation tested in this work exploits the orientation provided by an IMU and the ego-velocity as measured by a Doppler-capable radar sensor. The ego-velocity is first transformed from the coordinate frame of the moving platform to the world coordinate frame based on the IMU attitude. It is then numerically integrated assuming constant velocity between consecutive radar scans. This way, a trajectory expressed in the world coordinate frame is generated. Further in the text, we refer to this approach as _IMU+Doppler_.

Fig. 2: The pick-up truck driving through the Mine with the multi-sensor rig attached to the roof (top). The sensor rig detail (bottom).

Fig. 3: Aerial view of the Forest environment (left, source: Google Maps\({}^{\text{\textregistered}}\)) and the Volvo CE wheel loader equipped with the sensor rig (right).

Since the radar does not directly provide the ego-velocity measurement but rather the radial components of its detected target velocities, it is necessary to robustly process this information to estimate the ego-velocity of the radar. For this purpose, we deploy the approach and implementation1 by Doer and Trommer [2]. Their _3-Point RANSAC-LSQ_ ego-motion estimation method applies RANSAC to the underlying least squares optimization problem (eq. 27-32 in [2]). This algorithm is highly efficient: the average processing time for one radar scan is \(10\,\mathrm{ms}\) in our dataset.

Footnote 1: [https://github.com/christopherdoer/reve](https://github.com/christopherdoer/reve)
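To make the estimation problem concrete, the sketch below shows how an ego-velocity can be recovered from per-point radial velocities with a RANSAC-wrapped least-squares fit, and how the result is dead-reckoned with the IMU attitude as described above. This is an illustrative reconstruction under our own naming and thresholds, not the reference implementation of [2]: a static target observed along a unit direction \(\vec{u}_i\) satisfies \(v_{r,i} = -\vec{u}_i\cdot\vec{v}_{\rm ego}\), and moving objects are rejected as RANSAC outliers.

```python
import numpy as np

def ego_velocity_lsq(directions, v_radial, iters=100, thresh=0.1, seed=0):
    """Illustrative ego-velocity estimate from Doppler returns.

    directions: (N, 3) unit vectors from the radar to each target.
    v_radial:   (N,) measured radial velocities. A static target obeys
                v_radial = -directions @ v_ego, so moving objects show
                up as outliers and are rejected by a small RANSAC loop.
    """
    rng = np.random.default_rng(seed)
    best = np.zeros(len(v_radial), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(v_radial), size=3, replace=False)
        v, *_ = np.linalg.lstsq(-directions[idx], v_radial[idx], rcond=None)
        inliers = np.abs(directions @ v + v_radial) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    # Refine on all inliers with ordinary least squares.
    v_ego, *_ = np.linalg.lstsq(-directions[best], v_radial[best], rcond=None)
    return v_ego

def dead_reckon(p, R_world_body, v_ego_body, dt):
    """IMU+Doppler step: rotate the body-frame ego-velocity into the
    world frame using the IMU attitude R, then integrate assuming
    constant velocity between consecutive radar scans."""
    return p + R_world_body @ v_ego_body * dt
```

Rotating with the IMU attitude before integrating is what allows the magnetometer-aided heading of the IMU to bound the long-term drift of this otherwise open-loop scheme.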
### _Extended Kalman Filter fusion_

In contrast to direct Doppler+IMU fusion, employing an EKF allows more principled handling of noise in the sensor measurements and provides pose confidence estimates. We benefit from the implementation2 by Doer and Trommer [2], which combines their 3-Point RANSAC-LSQ ego-motion estimation with inertial and barometric measurements. The sensor measurements are fused in a loosely-coupled manner by an EKF. There are several extensions of the algorithm available. We choose the original _ekf-rio_ version as it does not require a precise radar trigger signal, which we unfortunately do not get from our radar. In that case, the algorithm applies the incoming ego-motion measurements with a lag of approximately \(100\,\mathrm{ms}\), which can impede the state estimation quality, especially during highly dynamic motion. Moreover, we omit the barometer measurements as our sensor rig lacks this sensor. The results we obtain here therefore represent a lower bound on the odometry quality achievable by the filtering methods. We refer to this approach as _EKF_ in the following text.

Footnote 2: [https://github.com/christopherdoer/rio](https://github.com/christopherdoer/rio)

It is noteworthy that the works of Michalczyk et al. [3, 15] report improvements upon [2] by employing tightly-coupled EKF filtering for radar-inertial odometry. They are able to achieve localization drift below 1%. It remains an interesting question how the tightly-coupled algorithm handles high-grade radar scans with thousands of targets.

### _Point-to-plane Iterative Closest Point with local map_

The high resolution of the tested radars allows us to test methods originally developed for the registration of lidar point clouds. For testing the scan-to-submap matching variant, we use the _norlab-icp-mapper3_, which is open-source and highly configurable. It supports a range of iterative closest point (ICP) variants, from which we choose the _point-to-plane_ variant, as it generally performs well in structured and semi-structured environments. This mapper does not support map optimization by loop-closure identification; rather, it builds a monolithic map and thus behaves as a lidar odometry method. The mapper is set to add new points into the map up to a maximum density defined by the minimum distance between points, which is \(0.1\,\mathrm{m}\) in our experiments. Point-to-plane ICP also requires estimation of normal vectors based on the local geometry around each point in the map; in our experiments, we use the 15 nearest points for this estimation. Also, preliminary tests have shown that this mapper requires a prior motion estimate when deployed on the radar data. We thus provide the Doppler+IMU pose as the prior in all experiments.

Footnote 3: [https://github.com/norlab-ulaval/norlab_icp_mapper](https://github.com/norlab-ulaval/norlab_icp_mapper)

The ICP algorithm in the mapper offers full 6-degrees-of-freedom (DOF) pose estimation, or a constrained 4-DOF pose estimation. In the 4-DOF variant, only position and heading are optimized in the point cloud registration; the other two DOF are adopted directly by the mapper from the IMU-provided orientation. In this work, we test both variants and refer to them as _ICP_ and _ICP 4DOF_.
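The point-to-plane objective minimized by this mapper can be summarized in a few lines. The sketch below is didactic only (it is not the norlab-icp-mapper code, and all helper names are ours): normals are obtained from a local plane fit to the \(k\) nearest map points, and each transformed scan point is penalized by its distance to the tangent plane of its nearest map point.

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(map_pts, k=15):
    """Normal of each map point = eigenvector of the local covariance
    with the smallest eigenvalue (plane fit to the k nearest points)."""
    tree = cKDTree(map_pts)
    _, idx = tree.query(map_pts, k=k)
    normals = np.empty_like(map_pts, dtype=float)
    for i, nb in enumerate(idx):
        q = map_pts[nb] - map_pts[nb].mean(axis=0)
        w, v = np.linalg.eigh(q.T @ q)
        normals[i] = v[:, 0]          # smallest-eigenvalue direction
    return normals

def point_to_plane_error(scan_pts, map_pts, normals, R, t):
    """Sum of squared point-to-plane distances for a pose (R, t):
    each scan point is matched to its nearest map point and the
    residual is projected onto that point's normal."""
    tree = cKDTree(map_pts)
    moved = scan_pts @ R.T + t
    _, j = tree.query(moved, k=1)
    res = np.einsum('ij,ij->i', normals[j], moved - map_pts[j])
    return float(np.sum(res ** 2))
```

An actual registration loop would re-linearize and minimize this error over \((R, t)\), optionally freezing roll and pitch to obtain the 4-DOF variant discussed above.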
### _Scan-to-scan matching variants_

The final group of radar odometry variants tested in this work employs scan-to-scan matching, which is often used in front-end modules of larger SLAM frameworks. Zhang et al. [4] successfully apply this approach in a SLAM framework with a modern imaging radar (Oculii Eagle). Since their implementation of the SLAM framework is open-source4, we include it here for testing their radar odometry with our radar dataset. Moreover, they provide one session from their dataset, which in turn allows us to test all the other approaches with the Oculii Eagle radar. Their radar odometry front-end is highly configurable, allowing users to choose from several other scan-matching algorithms. We choose to test their adaptive probability distribution-GICP (APDGICP) variant of GICP. Their scan-matching method can function without a prior motion estimate, yet we modify the code to include the option to use the Doppler+IMU odometry prior. This makes the comparison with the scan-to-submap-matching variants fair. When providing the prior, we refer to the method as _APDGICP IMU Prior_, and as _APDGICP_ otherwise. We also choose to test their implementation of the normal distributions transform (NDT) scan-matching algorithm, as it is often used in lidar odometry solutions. For NDT, we always use the Doppler+IMU prior and refer to it as _NDT_ in the evaluation.

Fig. 4: Top view of a yellow wheel loader navigating a forest road. From its suite of sensors, the lidar, radar and front camera are shown. The dark black points denote the active lidar scan, which is in contrast with the colored, front-facing radar scan.

Fig. 5: Assembled radar map of the Car Park environment (left), denoted by black points, with a live radar scan (red points). The sensor rig (bottom right) was manually pushed along the trajectory. The data and figures are adopted from [4].

## IV Experiments and Analysis

This section first introduces the two environments used for recording the experimental trajectories and then one experiment adopted from the dataset published by Zhang et al. [4]. The details about the sensor suites used to record the experiments are provided as well. Subsequently, the performance of the discussed radar odometry approaches is studied by means of the absolute pose error (APE) and relative pose error (RPE) metrics, which are widely used for this purpose.

### _Environment and sensor setup_

Motivated by research towards SLAM in harsh environmental conditions, two field datasets were recorded: one in the Kvarntorp research mine outside of Örebro, Sweden, and one at an outdoor testing site for Volvo Construction Equipment wheel loaders and dumpers in Eskilstuna, Sweden.

**The Kvarntorp test mine** provides a model environment for underground mining industry applications. A \(4500\,\mathrm{m}\)-long run was recorded with a sensor rig attached to the roof of a pick-up truck, as shown in Fig. 2. The average speed was \(21\,\mathrm{km}\,\mathrm{h}^{-1}\), which is close to the maximum safely drivable speed in the mine. Fig. 1 gives a general impression of the underground tunnels. At some locations, the tunnels are straight with no side-tunnels, and these sections are generally the most demanding for any kind of SLAM, regardless of the modality. On the other hand, locations with side-tunnels provide a large number of geometrical constraints a SLAM algorithm can benefit from. Fig. 1 shows such an area, with two modalities presented: lidar in greyscale points and radar in colored points. For reference, the view from an RGB camera is also provided. Further in the text, we will refer to this experiment as _Mine_.

**The Eskilstuna outdoor testing site** is used by Volvo CE for development and testing of their products, including the large wheel loaders, as shown in Fig. 3. For our experiments, the sensor rig used in Mine was moved from the pick-up truck to the Volvo wheel loader. A \(1800\,\mathrm{m}\)-long trajectory was recorded which took the wheel loader through open space and on a forest road (see Fig. 4). The trajectory was a loop, repeated twice, with an average speed of \(13.6\,\mathrm{km}\,\mathrm{h}^{-1}\). A precise RTK-GPS reference was recorded, but for the purposes of this study, we base our metrics on a lidar-SLAM-based reference, which provides the full 6-DOF pose. We only confirm here that the positioning from the RTK-GPS agrees with our reference SLAM results. Further on in the text, we will refer to this testing site as _Forest_.

Fig. 6: The Mine environment recorded with the Sensrad Hugin radar. The 4DOF ICP is omitted from this plot for clarity; its vertical drift would be limited compared to the standard ICP shown in red. Similarly, the scan-to-scan matching odometries are not shown due to their fast divergence.

Fig. 7: The translation component of APE in the Mine for all discussed odometry variants.

Fig. 8: The translation component of APE in the Forest for all discussed odometry variants.

Fig. 9: The Car Park experiment from [4] recorded with the Oculii Eagle radar. Only selected odometry variants are shown for better clarity.

Fig. 10: The APE values for the Car Park environment for all discussed odometry variants.

**The sensor suite** used in the experiments is detailed in Fig. 2. The sensors are attached to a metal rig and connected to an Intel NUC computer that runs Ubuntu with ROS installed. Raw data are saved to ROS bag files for later processing. The suite consists of three radars, one lidar, three cameras and an IMU. The radar used in Mine and Forest is the Sensrad Hugin A3 radar, with horizontal and vertical FOV of \(80^{\circ}\) and \(30^{\circ}\), respectively. Thanks to its configuration of 48 \(\times\) 48 transmitting and receiving antennas, the horizontal and vertical resolution is \(1.25^{\circ}\) and \(1.7^{\circ}\). The radar is operated in _short range_ settings, which implies a maximum range of \(42\,\mathrm{m}\) but grants the highest range resolution of \(0.1\,\mathrm{m}\). The frame rate is \(16\,\mathrm{Hz}\), and the scans contain approximately 10000 points in our environments. For reference localization, an Ouster OS1-32 lidar is used. The lidar frame rate is \(10\,\mathrm{Hz}\), and all points are time-stamped under PTP synchronization with the master computer. Finally, inertial data are recorded by an Xsens MTi-30 IMU at a \(400\,\mathrm{Hz}\) rate. The IMU runs its own attitude estimation using the _VRU General_ profile, which does not use magnetometer data to absolutely reference the heading angle. Yet, the magnetometer measurements are still used to estimate gyro biases and thus limit the heading drift down to \(3^{\circ}\,\mathrm{h}^{-1}\) in ideal conditions.

**The Car Park** trajectory is a part of the dataset recorded by Zhang et al. [4]. In their setup, they used the Oculii Eagle radar, which provides a FOV of \(120^{\circ}\times 30^{\circ}\) (horizontal, vertical) and a resolution of \(0.5^{\circ}\), \(1^{\circ}\) and \(0.16\,\mathrm{m}\) (horizontal, vertical, range). In the Car Park experiment, the radar scans contain approximately 5000 points. The range of the sensor is over \(350\,\mathrm{m}\), and the manufacturer indicates that adaptive modulation is used to boost resolution while maintaining long range. From the point clouds provided in the dataset, it is apparent that some enhancement is applied by the sensor software. Zhang's sensor rig, shown in Fig. 5, also includes a lidar, a barometer, a camera and two IMUs: a standalone Vectornav IMU and the internal IMU of the lidar sensor. For testing the odometry variants that require inertial data, we use the Vectornav IMU measurements. The trajectory of the Car Park experiment is a rectangle recorded by a hand-pushed trolley with the sensor rig attached to it.
The environment is a parking lot between buildings at a university campus. The pre-computed ground-truth localization based on a lidar SLAM solution is available in the dataset and used by us.

### _Odometry performance evaluation_

To compare the performance of the radar odometry variants presented in Section III, we use the widely adopted APE and RPE metrics, in the _Evo_ library implementation [19]. APE, together with trajectory plots, provides an initial, general idea about the behavior of each odometry variant for the given sensory and environmental combination, but is susceptible to the multiplicative, nonlinear effect of the accumulated attitude error. RPE complements this metric by indicating the rate of error accumulation. For APE, we provide its translation component, as the rotation error is apparent in the accompanying trajectory plots. For RPE, we provide an overall statistic for both the translation and rotation components. The ground truth for evaluating the odometry error comes from lidar-based SLAM. The lidar map, and subsequently the reference localization, was created by the open-source _HDL Graph Slam_ [18] implementation.

Fig. 6 and Fig. 7 demonstrate the performance of the discussed radar odometries in the **Mine** experiment. The scan-to-scan matching **APDGICP** variants (with and without our prior provided) together with NDT are not suitable for the type of output the Hugin radar provides. We omit them from the Fig. 6 plot since they randomly diverge, as can be seen in the APE plot. We attribute this to the low density and high variance in subsequent radar scans, which causes the scan-to-scan matching approach to quickly diverge. The behavior is similar when we provide the more accurate IMU+Doppler prior estimate. The main source of error, as we later show with RPE, is the strong drift in attitude. The scan-to-submap matching represented by the **ICP** variants (4DOF and 6DOF) performs better in the Mine experiment, although the drift is much stronger compared to what would be expected with lidar odometry in similar environments (e.g., refer to the SLAM results of the DARPA Subterranean robotic challenge [20]). Constraining the ICP to 4DOF reduces the vertical drift and results in overall lower APE. Comparable results are obtained from the **EKF** approach, which is free from the scan-matching problems but suffers from abrupt changes in the measured Doppler velocity. As long as the ground is smooth, the localization drift is comparable to lidar odometry drift rates, i.e., below 1%. Once the truck hits a bump, the EKF reacts with inappropriate corrections, which can be observed at 180 and 250 seconds in the Mine experiment. As the Hugin radar does not provide a measurement trigger signal, we assume that the measurement lag causes a mismatch with the inertial measurements. The radar scans are time-stamped; however, the available EKF implementation does not recompute past states, relying instead on timely trigger signals and the _state cloning_ technique. This problem can be partially alleviated by increasing the measurement uncertainty in the Doppler velocity, which makes the estimated trajectory smoother but also reduces the EKF's capacity to quickly estimate sensor biases and to sense minor motions. We denote this altered variant as _Doppler dampened_ in the plots. Surprisingly, the simplest **IMU+Doppler** approach shows the best results. The drift is minimal, comparable to the best state-of-the-art lidar odometry techniques.
We attribute this to the high accuracy of the Doppler velocity values from the Hugin radar, and to the capability of the particular IMU unit to suppress the heading drift by benefiting from the magnetometer measurements. The downside is that, contrary to the other techniques, it does not provide any confidence estimate.

The results from the **Forest** experiment follow the trend of the Mine experiment. Fig. 8 shows that the scan-to-scan techniques diverge immediately and the scan-to-submap ICP drifts at a rate similar to that in the Mine experiment. The main difference is in the behavior of the Doppler dampened EKF. Thanks to the slower pace and the overall stability of the large and heavy wheel loader, it does not suffer from the abrupt Doppler velocity jolts and closely follows the simple IMU+Doppler odometry.

In the **Car Park** experiment from the dataset based on the Oculii Eagle radar, we see a different trend. Fig. 9 shows that the simpler methods, i.e., **IMU+Doppler** and **EKF**, suffer from vertical and heading drift. We assume that this is mainly due to the type of IMU used by [4] when recording the dataset. The Doppler dampened EKF is omitted from the trajectory plot because it immediately diverges due to accelerometer bias, which takes a minute to estimate (see Fig. 10). Moreover, the Doppler velocity estimation is less accurate in this dataset, which affects the smoothness of the trajectory. We assume that the scan enhancement process inside the sensor may affect the quality of the Doppler velocity values. On the other hand, all the scan-matching techniques perform well. The longer range and the adaptive modulation in the Eagle radar make the task of scan matching more reliable. In fact, all variants of ICP, APDGICP and NDT perform similarly and stay within \(10\,\mathrm{m}\) in APE, as shown in Fig. 10.

We summarize the performance of the odometry methods with the two distinct radars in Fig. 11 using the RPE metric. For clarity, we omit the sub-variants in this plot as their RPE does not differ substantially. The trajectories are divided into \(1\,\mathrm{m}\) and \(10\,\mathrm{m}\) steps for the RPE evaluation. The plot shows the distribution of the translation and rotation errors, with the median value given directly in the plot. The translation error is expressed as a percentage of the step; the rotation error is left as an absolute value, so the longer steps yield larger rotational errors. Also note that in translation, we observe higher relative errors in the \(1\,\mathrm{m}\) steps. Given the already high accuracy of all the methods, we are approaching the noise in the ground-truth localization based on the lidar SLAM. This is also why we do not consider single-frame-sized steps, for which a more accurate reference would be necessary. The **Mine** and **Forest** experiments with the Hugin radar are mainly affected by the noise in the estimated attitude, as the translation error differences between the methods are not as pronounced as the resulting APE. The rotation part of the RPE metric confirms the trend seen in APE. The raw orientation provided by the Xsens IMU supports the highly accurate Doppler velocity and leads to highly accurate results. The **Car Park** experiment reveals that the translation is worse for the methods depending on the Doppler velocity (IMU+Doppler, EKF). In the rotation errors, we see the limiting effect of the scan matching, which prevents larger errors from accumulating, contrary to IMU+Doppler and EKF.
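For reference, the step-wise RPE statistic used above can be reproduced with a few lines of bookkeeping. The sketch below follows the usual definition (comparing relative motions over segments of a given ground-truth path length); it is our own simplified stand-in, not the Evo implementation, and the pose containers and helper names are assumptions.

```python
import numpy as np

def relative_pose_errors(T_gt, T_est, step=1.0):
    """RPE over fixed-length steps.

    T_gt, T_est: lists of 4x4 ground-truth / estimated poses at the
    same timestamps. For each index pair (i, j) whose ground-truth
    path length is ~`step` metres, the relative motions are compared:
        E = inv(inv(G_i) @ G_j) @ (inv(P_i) @ P_j)
    Translation error is reported as a percentage of the step,
    rotation error as an absolute angle in degrees.
    """
    d = [np.linalg.norm(T_gt[k + 1][:3, 3] - T_gt[k][:3, 3])
         for k in range(len(T_gt) - 1)]
    lengths = np.concatenate([[0.0], np.cumsum(d)])  # cumulative path
    t_err, r_err = [], []
    j = 0
    for i in range(len(T_gt)):
        while j < len(T_gt) and lengths[j] - lengths[i] < step:
            j += 1
        if j == len(T_gt):
            break
        dG = np.linalg.inv(T_gt[i]) @ T_gt[j]
        dP = np.linalg.inv(T_est[i]) @ T_est[j]
        E = np.linalg.inv(dG) @ dP
        t_err.append(100.0 * np.linalg.norm(E[:3, 3]) / step)
        angle = np.arccos(np.clip((np.trace(E[:3, :3]) - 1) / 2, -1, 1))
        r_err.append(np.degrees(angle))
    return np.array(t_err), np.array(r_err)
```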
## V Conclusions

In this work, we have compared several radar odometry estimation methods on three datasets recorded in underground and outdoor environments with two distinct modern imaging radars. With the Oculii Eagle radar, the scan-matching methods achieved higher accuracy than the filtering methods. On the other hand, thanks to the highly accurate Doppler velocity measurement of the Sensrad Hugin radar, the simplest sensor fusion method, IMU+Doppler, achieves only 0.3% position drift in the Mine and Forest experiments. This makes the method suitable for resource-constrained machines operating in harsh environments, such as heavy machinery in the mining industry. In future work, we will investigate the source of the inaccuracy of the Doppler velocity in the Eagle radar and extend the radar odometry into a full SLAM solution.

Fig. 11: RPE values for the two distinct sensor setups. Each pair of violin plots represents the step sizes used for evaluating RPE, \(1\,\mathrm{m}\) and \(10\,\mathrm{m}\) respectively. The median RPE value is shown directly in each plot.
2308.02305
Topological constraints on the dynamics of vortex formation in a two-dimensional quantum fluid
We present experimental and theoretical results on formation of quantum vortices in a laser beam propagating in a nonlinear medium. Topological constraints richer than the mere conservation of vorticity impose an elaborate dynamical behavior on the formation and annihilation of vortex-antivortex pairs. We identify two such mechanisms, both described by the same fold-Hopf bifurcation. One of them is particularly efficient although it is not observed in the context of liquid helium films or stationary systems because it relies on the compressible nature of the fluid of light we consider and on the non-stationarity of its flow.
Thibault Congy, Pierre Azam, Robin Kaiser, Nicolas Pavloff
2023-08-04T13:15:45Z
http://arxiv.org/abs/2308.02305v2
# Topological constraints on the dynamics of vortex formation in a two-dimensional quantum fluid

###### Abstract

We present experimental and theoretical results on formation of quantum vortices in a laser beam propagating in a nonlinear medium. Topological constraints richer than the mere conservation of vorticity impose an elaborate dynamical behavior on the formation and annihilation of vortex/anti-vortex pairs. We identify two such mechanisms, both described by the same fold-Hopf bifurcation. One of them is particularly efficient, although it is not observed in the context of liquid helium films or stationary linear systems, because it relies on the finite compressibility and on the non-stationarity of the fluid of light we consider.

It has long been understood that the propagation of light in a nonlinear medium can be described as a (dispersive) hydrodynamic phenomenon. This approach, pioneered in the 60's [1; 2; 3; 4; 5; 6] and further developed in the 90's [7; 8; 9; 10], yielded remarkable successes: observation of bright [11; 12; 13], dark [14; 15; 16; 17], cavity [18; 19] and oblique [20; 21] solitons, of wave breaking and dispersive shock waves [22; 23; 24; 25; 26], of quantized vortices [27; 28; 29; 30; 31; 32; 33; 34; 35; 36] and of superfluid flow of light [37; 38]. An extreme hydrodynamic-like behavior is the turbulent regime, in which typical observables display scale-invariant power-law spectra in momentum space. In the present study we focus on two-dimensional configurations, similar to those already studied in the field of Bose-Einstein condensates, where quantum vortex proliferation but also robust vortex structures have been observed [39; 40; 41; 42]. Although their role in the different types of power laws which have been predicted and/or observed [43; 44; 39] is not fully elucidated [45; 46; 47], there is no doubt that understanding the dynamics of vortex formation is crucial for unraveling the mechanisms leading to quantum turbulence. Recent studies have demonstrated the efficiency of optical platforms for studying this subject [48; 49; 50; 51; 52; 53; 54]. In the present work we use a nonlinear optical platform [55; 56; 26] for studying the formation and annihilation of vortices and of other, less conspicuous features, such as saddles and phase extrema, which also carry a topological charge. Although the existence of these other critical points has a long history [57; 58], their role in enforcing topological constraints [59; 60; 61; 62] is often overlooked. The detection tool at our disposal is able to simultaneously record the intensity and the phase of the light emerging from a nonlinear medium, making it possible to reconstruct the streamlines of the flow of the fluid of light, as illustrated in Fig. 1. This enables us to investigate the formation mechanisms of vortices and critical points. In particular, we experimentally demonstrate for the first time a scenario of vortex/anti-vortex formation first proposed by Nye _et al._ in 1988 [63] and identify a new one, which appears simpler and presumably more efficient in the context of turbulence of a compressible quantum fluid.

Consider a quantum fluid described by a scalar order parameter of the form

\[\psi(\vec{r}\,)=A(\vec{r}\,)\exp[iS(\vec{r}\,)], \tag{1}\]

defined in the plane \((\vec{r}=x\,\vec{e}_{x}+y\,\vec{e}_{y})\).
In such a system the formation of vortices is constrained by topological rules: it is for instance well known that in the absence of externally imparted angular momentum, vortices typically appear in pairs with opposite quantized vorticity. This scenario is enriched by other constraints [63] originating from the fact that, to any closed curve \(C\) of the plane, are associated not one, but _two_ topological indices: the vorticity \(I_{\rm V}(C)=\frac{1}{2\pi}\oint_{C}\mathrm{d}S\) and also the Poincaré-Hopf index \(I_{\rm P}(C)=\frac{1}{2\pi}\oint_{C}\mathrm{d}\theta\), where \(\theta\) is a polar angle of the "velocity field" \(\vec{v}(\vec{r})=\vec{\nabla}S\) and in both cases the integral is performed clockwise. \(I_{\rm V}\) and \(I_{\rm P}\) are (positive or negative) integers. This stems from the fact that along a closed contour the phase \(S\) of the order parameter (1) and the orientation \(\theta\) of the velocity must both vary by integer multiples of \(2\pi\) [64].

Figure 1: Experimental intensity pattern and streamlines (in red) of the beam at the exit of the nonlinear vapor. Dark regions are of lesser intensity. One distinctly discerns two vortex/anti-vortex pairs and also a saddle located close to the origin.

If \(S\) is regular and well-defined in the interior of \(C\), then \(I_{\rm V}(C)=0\). This value does not change unless a vortex [65] crosses \(C\). To each vortex one can associate a vorticity and a Poincaré-Hopf index by integrating over a small circle around the vortex core. This leads to \(I_{\rm P}=+1\) and typically [66] \(I_{\rm V}=\pm 1\) for each vortex. Besides vortices, other points are also associated with a finite Poincaré-Hopf index: those at which the velocity of the flow vanishes. They are known as critical points or equilibria. For a potential flow such as ours, where the phase \(S\) is the velocity potential, they are of two types: phase extrema (local maxima or local minima) and phase saddles. For an extremum \(I_{\rm P}=+1\) and for a saddle \(I_{\rm P}=-1\) [67], while for both \(I_{\rm V}=0\) [68]. Similarly to what occurs for the vorticity, the value \(I_{\rm P}(C)\) does not change unless a critical point with nonzero Poincaré-Hopf index (a vortex, a saddle or an extremum) crosses \(C\).

The above topological considerations are generic and apply to any system described by a complex scalar order parameter. The physical implementation we consider in this Letter consists in the propagation of a linearly polarized laser beam of wavelength \(\lambda_{0}=2\pi/k_{0}=780\) nm in a cell filled with a nonlinear medium consisting of a natural Rb vapor at a temperature \(T\approx 120^{\circ}\)C. Within the paraxial approximation, denoting as \(z\) the coordinate along the beam axis and \(\vec{r}\) the transverse coordinate, this propagation is described by a complex scalar field \(\psi(\vec{r},z)\) which obeys a generalized nonlinear Schrödinger equation [69] where \(z\) plays the role of an effective time:

\[i\,\partial_{z}\psi=-\frac{1}{2n_{0}k_{0}}(\partial_{x}^{2}+\partial_{y}^{2})\psi+k_{0}n_{2}|\psi|^{2}\,\psi-\frac{i}{2\,\Lambda_{\rm abs}}\,\psi, \tag{2}\]

\(|\psi|^{2}\) being the intensity, expressed in W.mm\({}^{-2}\). \(\Lambda_{\rm abs}\) describes the effects of absorption: if \(\mathcal{T}\) denotes the coefficient of energy transmission, then \(\Lambda_{\rm abs}=-z_{\rm max}/\ln(\mathcal{T})\), where \(z_{\rm max}=7\) cm is the total length of propagation through the vapor.
\(n_{0}\) is the refractive index of the medium and \(n_{2}\) is the nonlinear Kerr coefficient. The values of the parameters are \(\mathcal{T}=0.16\), \(n_{0}=1\) and \(n_{2}=2.2\times 10^{-4}\) W\({}^{-1}\).mm\({}^{2}\) [70]. For studying the above-discussed topological constraints we use a specifically designed incident light pattern which consists of the superposition of a main Gaussian beam (wide and isotropic) with an auxiliary one, more tightly focused and anisotropic. The initial amplitude can accordingly be written as

\[\psi(\vec{r},0)=\sqrt{I_{1}}\exp\left(-\frac{r^{2}}{w_{2}^{2}}\right)+\sqrt{I_{2}}\exp\left(-\frac{x^{2}}{w_{x}^{2}}-\frac{y^{2}}{w_{y}^{2}}\right)\exp\{i\,\varphi_{2}(\vec{r}\,)\}, \tag{3}\]

where \(r=|\vec{r}\,|\), \(w_{2}=1.1\) mm, \(w_{x}=0.55\) mm, \(w_{y}=0.08\) mm and \(I_{1}=I_{2}=0.4\) W.mm\({}^{-2}\). The initial phase of the auxiliary beam reads \(\varphi_{2}(\vec{r})=-k_{0}r^{2}/R_{2}+\Phi_{2}\), where \(R_{2}=-0.5\) m is the initial curvature of the wavefront of the auxiliary beam and \(\Phi_{2}\) is the global phase difference between the auxiliary and the main beam. An antiphase relationship (\(\Phi_{2}=\pi\)) corresponds to an intensity dip induced by the narrow auxiliary beam on the wide main one. We image the beam pattern at the exit of the cell for different initial phase differences \(\Phi_{2}\). This is performed thanks to a wave-front sensor which captures the amplitude and phase of the near field at the output of the nonlinear medium. As exemplified in Fig. 1, this offers the possibility to simultaneously measure the intensity and phase of the field, corresponding to the output optical fluid intensity \(|\psi|^{2}\) and velocity \(\vec{v}\) in our experiment.

Figure 2: Comparison of experimental measurements (left plots) and simulations (right plots) of the beam intensity pattern at the exit of the vapor (\(z=7\) cm). The initial amplitude is given by (3) with \(\Phi_{2}=0.96\,\pi\) in panels (a) and (b); \(\Phi_{2}=\pi\) in panels (c) and (d); and \(\Phi_{2}=1.05\,\pi\) in panels (e) and (f). The red rectangle in panel (b) marks the location of a vortex/anti-vortex pair whose formation is analysed below, see Fig. 3.

Fig. 2 compares the experimental and theoretical intensity profiles \(|\psi(x,y,z_{\rm max})|^{2}\) at the exit of the cell. In panels (a) and (b), eight vortices distributed symmetrically with respect to the horizontal and vertical axes are observed, which have been created during the nonlinear propagation within the cell. When increasing the initial phase difference \(\Phi_{2}\) between the main and auxiliary beams, the vortices close to the \(y\) axis get even closer [panels (c) and (d)] and eventually merge [panels (e) and (f)]. The agreement between the experimental and numerical results displayed in Fig. 2 is excellent, especially if one considers that there are no free parameters: all the constants of the model have been determined by independent experimental measurements [70]. This validates the use of the nonlinear Schrödinger equation (2) for studying the intermediate steps (\(0<z<z_{\rm max}\)) which are not accessible in our experiment. The dynamics of the critical points during the propagation within the nonlinear vapor can be very complex, but it always fulfills the previously stated topological requirements. For instance, in numerical simulations, we have observed the concomitant appearance of a phase saddle and of a phase extremum, a process which preserves the total Poincaré-Hopf index.
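Such bookkeeping can be verified directly on a sampled complex field. The snippet below is our own minimal implementation (not taken from any published code): it sums phase differences, wrapped to \((-\pi,\pi]\), around each elementary plaquette of the grid, so that the local values of \(I_{\rm V}\) and \(I_{\rm P}\) are read off as integers, with the overall sign set by the loop orientation.

```python
import numpy as np

def winding(angle):
    """Winding number of an angle field around each grid plaquette.
    `angle` has shape (ny, nx); the differences along the closed loop
    (i,j)->(i,j+1)->(i+1,j+1)->(i+1,j)->(i,j) are wrapped to (-pi, pi]
    before summing, so the result is an integer multiple of 2*pi."""
    wrap = lambda a: np.angle(np.exp(1j * a))
    d1 = wrap(angle[:-1, 1:] - angle[:-1, :-1])   # bottom edge
    d2 = wrap(angle[1:, 1:] - angle[:-1, 1:])     # right edge
    d3 = wrap(angle[1:, :-1] - angle[1:, 1:])     # top edge (reversed)
    d4 = wrap(angle[:-1, :-1] - angle[1:, :-1])   # left edge (reversed)
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

def indices(psi, dx=1.0):
    """Local vorticity I_V (winding of S = arg(psi)) and Poincare-Hopf
    index I_P (winding of the velocity orientation, with v = grad(S)).
    Phase gradients use wrapped finite differences so the 2*pi jumps of
    S do not contaminate v; periodic boundaries are assumed here only
    for the simplicity of np.roll."""
    wrap = lambda a: np.angle(np.exp(1j * a))
    S = np.angle(psi)
    vx = wrap(np.roll(S, -1, axis=1) - np.roll(S, 1, axis=1)) / (2 * dx)
    vy = wrap(np.roll(S, -1, axis=0) - np.roll(S, 1, axis=0)) / (2 * dx)
    return winding(S), winding(np.arctan2(vy, vx))
```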
In a similar way, the topological rules impose that the annihilation of a vortex/anti-vortex pair be associated with the simultaneous disappearance of two saddles, in order to ensure the conservation not only of \(I_{\rm V}\) but also of \(I_{\rm P}\). This is the process at play in the disappearance of the two pairs of central vortices observed in Fig. 2 when going from the top to the bottom row. We will not go into the particulars of this mechanism here (see however the discussion in [70]), because it has been described in detail by the Bristol team [63] and also because it is seldom observed in our investigation. In the following we describe an alternative mechanism of vortex formation, much more often encountered in our setting: two phase extrema collide and annihilate each other, giving birth to a vortex/anti-vortex pair. During this process the total Poincaré-Hopf index and total vorticity keep the values 2 and 0, respectively. This mechanism is at the origin of the formation of the two vortices in the red square of Fig. 2(b). Numerically computed intermediate beam structures leading to the output pattern shown in Figs. 2(a) and 2(b) are presented in Fig. 3. A phase minimum (white dot) approaches a phase maximum (red dot), pinching a low-density region. The two extrema annihilate each other, giving birth to a vortex/anti-vortex pair [cyan diamonds in Fig. 3(b)]. The fact that the two vortices have opposite vorticity is clearly seen from the orientation of the streamlines in the vicinity of each of them. After their formation, the two vortices slowly drift apart, eventually reaching in Fig. 3(c) the configuration identified by a red rectangle in Fig. 2(b).

Figure 3: Snapshots of simulations of the intensity pattern at several propagation distances within the nonlinear vapor. The initial profile is (3) with \(\Phi_{2}=0.96\,\pi\). The corresponding final intensity pattern is represented in Fig. 2(b), of which plot (c) is a blow-up. Regions of low intensity are dark. The oriented curves are the streamlines spanned by the vector field \(\vec{v}(\vec{r}\,)\). A red (white) circle locates a phase maximum (minimum), i.e., a stable (unstable) node. Cyan diamonds are vortices. The green square is a saddle which plays no part in the vortex formation mechanism.

The structure of the flow, encoded in the velocity field \(\vec{v}(\vec{r}\,)\), can be interpreted within the theory of dynamical systems by considering streamlines (red lines in Figs. 1 and 3) as trajectories of a two-dimensional system:

\[\frac{\mathrm{d}\vec{r}}{\mathrm{d}\gamma}=\vec{v}(\vec{r}\,), \tag{4}\]

with \(\gamma\) an arbitrary parametrization of the trajectory. In the terminology of dynamical systems, phase extrema are known as nodes (stable or unstable) and saddles as saddle points [71]. Although vortices are not equilibria of the velocity field, the streamlines encircling a vortex are closed trajectories, and vortices can be seen as "centers" of the dynamical system (4). Within this framework, the change of topology of the flow can be viewed as a bifurcation of (4): for instance, the above-mentioned concomitant appearance of a saddle point (phase saddle) and of a node (phase extremum) is described by a so-called saddle-node bifurcation. Along the same lines, the mechanism described previously, and displayed in Fig. 3, appears in the fold-Hopf bifurcation [72; 73], for which a generic normal form is given explicitly in [70].
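This bifurcation is straightforward to explore numerically. The snippet below, a purely illustrative sketch of ours, integrates streamlines of the normal-form field written out in Eq. (5) just below; flipping the sign of \(\mu\) (at \(\sigma=1\)) reproduces the annihilation of the two nodes and the appearance of the two centers displayed in Fig. 4.

```python
import numpy as np
from scipy.integrate import solve_ivp

def v_fold_hopf(_, r, mu, sigma=1.0):
    """Planar normal-form field v = (-2*sigma*x*y, mu + sigma*x**2 - y**2)."""
    x, y = r
    return [-2.0 * sigma * x * y, mu + sigma * x * x - y * y]

def streamline(r0, mu, sigma=1.0, gamma_max=5.0):
    """Trajectory of dr/dgamma = v(r) starting from r0; gamma is an
    arbitrary parametrization along the streamline."""
    sol = solve_ivp(v_fold_hopf, (0.0, gamma_max), r0,
                    args=(mu, sigma), max_step=0.01)
    return sol.y  # shape (2, n_steps): x(gamma), y(gamma)

# For sigma = 1: when mu > 0 the field has two nodes at (0, +/-sqrt(mu));
# when mu < 0 the nodes have annihilated and closed orbits encircle the
# two centers at (+/-sqrt(-mu), 0), the loci of the vortex/anti-vortex pair.
for mu in (+0.5, -0.5):
    xy = streamline([0.3, 0.2], mu)  # plot xy[0] vs xy[1] for a portrait
```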
For the present discussion it suffices to consider the system (4) with the specific form:

\[\vec{v}=\vec{v}_{\rm fH}(\vec{r})\equiv-2\sigma xy\,\vec{e}_{x}+(\mu+\sigma x^{2}-y^{2})\,\vec{e}_{y}, \tag{5}\]

where \(\sigma=\pm 1\) is fixed, and \(\mu\in\mathbb{R}\) is a parameter of the bifurcation. The phase portrait of the dynamical system (4),(5) for \(\sigma=1\) and two different values of \(\mu\) (before and after the bifurcation) is shown in Fig. 4. In this case, the stable and unstable nodes (red and white dots, respectively) which exist when \(\mu>0\) annihilate when \(\mu\) becomes negative to form two centers (represented by cyan diamonds); note that the latter are not singularities but true equilibria of the velocity field (5). The transition observed in the phase portraits of Fig. 4 by varying \(\mu\) qualitatively compares to the flow patterns obtained in Figs. 3(a) and 3(b) by varying \(z\).

The velocity field (5) is not that of a potential flow, as it should be for a quantum fluid. However, it is possible to derive a potential flow which shares the same phase portrait. The corresponding velocity field reads (see [70])

\[\vec{v}=\vec{\nabla}S_{\rm fH},\quad S_{\rm fH}(\vec{r})\equiv\arg\big{[}x^{2}+\sigma(y^{2}+\mu)+i\sigma y\big{]}. \tag{6}\]

The system (4),(6) is not only a gradient flow, it also obeys the Onsager-Feynman quantization condition [64]. In particular, the centers of (4),(5) are replaced by singularities (where \(S_{\rm fH}\) is ill-defined) that are encircled by closed orbits along which the circulation of \(\vec{\nabla}S_{\rm fH}\) is \(\pm 2\pi\) [as depicted in Fig. 4(b)], i.e., quantum vortices. It is important to note that \(S_{\rm fH}\) is not the phase of a wavefunction which exactly obeys the nonlinear Schrödinger equation (2). However, varying \(\mu\) in (6) effectively reproduces the local flow pattern of a \(z\)-varying wavefunction solving (2). Besides, \(S_{\rm fH}\) fulfills all the requirements expected from the phase of the order parameter (1) of a two-dimensional quantum fluid.

It is interesting to remark that the normal form (5), once modified to derive from the velocity potential (6) as just explained, also describes, when \(\sigma=-1\), the scenario of vortex annihilation presented in [63], which we henceforth denote as the Bristol mechanism: two vortices and two saddle points annihilate when \(\mu\) goes from positive to negative, yielding a featureless flow. Notably, the model wavefunction given in [63] reduces to \(\psi=x^{2}-y^{2}-\mu-iy\) close to the bifurcation point (see [70]), i.e., its phase is \(S_{\rm fH}\) with \(\sigma=-1\), validating the analogy presented here: the normal form of the fold-Hopf bifurcation provides an approximate theoretical model of the Bristol mechanism.

The mechanism of vortex formation illustrated in Figs. 3 and 4, although generic, cannot be observed in the special case of an incompressible two-dimensional quantum fluid, such as is commonly used to model, for instance, liquid helium films. Indeed, in such a system the phase \(S\) is a harmonic function which, by the maximum principle, can have neither maxima nor minima: the only possible critical points with zero velocity are saddles and no phase extrema occur, contrary to what is observed in Fig. 3 (see an extended discussion of this point in [70]). Phase extrema are also forbidden in a stationary (i.e., \(z\)-independent in our case) system, as proven in Ref. [63], but nothing prevents their formation in a \(z\)-dependent configuration.
Indeed, such extrema have been theoretically considered [59] and experimentally observed in a random linear speckle pattern [74], but were found to be relatively scarce, being outnumbered in a ratio 14:1 by saddles. Although our use of a specific initial condition (3) prevents a systematic statistical study (cf. [59] for a detailed discussion), we also observe that phase extrema are less numerous than saddles. This corresponds to physical intuition: extrema are typically born in saddle-node bifurcations which create an equal number of extrema and saddles, whereas pairs of saddles can be additionally created through the Bristol mechanism. More significantly, the new mechanism of vortex formation we have identified and observed in many instances efficiently diminishes the number of extrema. As a result, when vortices proliferate, saddles tend to be more numerous than extrema.

Figure 4: Phase portraits of the dynamical system (4),(5) [or equivalently (4),(6)] with \(\sigma=1\) for two different values of the bifurcation parameter \(\mu\). The color scale corresponds here to the phase \(S_{\rm fH}\in[-\pi,\pi]\) (light yellow corresponds to \(S_{\rm fH}=\pi\) and dark green to \(S_{\rm fH}=-\pi\)). Note that the position of the \(2\pi\)-jump of \(S_{\rm fH}(\vec{r})\) [dashed line in panel (b)] is arbitrary and fixed by the choice of the constant of integration in (6).

In conclusion, we emphasize that our experimental apparatus pertains to a new generation of optical techniques which enable a precise measurement of both the intensity and the phase of a light sheet [32; 35; 52; 53; 54; 36]. As demonstrated in the present Letter, this offers the possibility of an accurate and simple localization not only of vortices but also of other critical points, such as saddles. This enabled us to obtain evidence of several (topologically constrained) mechanisms of formation of vortices and of associated singular points in the time domain, with an account of the evolution of the streamlines. As far as vortex formation is concerned, we experimentally demonstrated a scenario proposed more than 30 years ago (the Bristol mechanism). We also identified a new scenario, simpler and more common in our setting, in which two nodes collide and give birth to a vortex/anti-vortex pair. This process requires a non-stationary flow and a system with finite (or zero) compressibility. We showed that the two mechanisms of vortex formation (Bristol and node collision) pertain to the same fold-Hopf type of bifurcation. We demonstrated that the corresponding normal form can be enriched in order to account for the quantum nature of our system. As a final remark, we stress that our study illustrates the efficiency of tools from the theory of dynamical systems for investigating the route to turbulence. This opens the path to a new line of research devoted to the statistical study of node and saddle dynamics in a turbulent quantum fluid.

TC and NP would like to thank the Isaac Newton Institute (INI) for Mathematical Sciences for support and hospitality during the programme "Dispersive hydrodynamics: mathematics, simulation and experiments, with applications in nonlinear waves", when part of the work on this paper was undertaken. The work of TC was partially supported by the Simons Fellowships during the INI Programme.
2301.01307
The on-orbit performance of the Colorado Ultraviolet Transit Experiment (CUTE) Mission
We present the on-orbit performance of the Colorado Ultraviolet Transit Experiment ($CUTE$). $CUTE$ is a 6U CubeSat that launched on September 27th, 2021 and is obtaining near-ultraviolet (NUV, 2480 A -- 3306 A) transit spectroscopy of short-period exoplanets. The instrument comprises a 20 cm $\times$ 8 cm rectangular Cassegrain telescope, an NUV spectrograph with a holographically ruled aberration-correcting diffraction grating, and a passively cooled, back-illuminated NUV-optimized CCD detector. The telescope feeds the spectrograph through an 18$'$ $\times$ 60$''$ slit. The spacecraft bus is a Blue Canyon Technologies XB1, which has demonstrated $\leq$ 6$''$ jitter in 56% of $CUTE$ science exposures. Following spacecraft commissioning, an on-orbit calibration program was executed to characterize the $CUTE$ instrument's on-orbit performance. The results of this calibration indicate that the effective area of $CUTE$ is $\approx$ 19.0 -- 27.5 cm$^{2}$ and that the average intrinsic resolution element is 2.9 A across the bandpass. This paper describes the measurement of the science instrument performance parameters as well as the thermal and pointing characteristics of the observatory.
Arika Egan, Nicholas Nell, Ambily Suresh, Kevin France, Brian Fleming, A. G. Sreejith, Julian Lambert, Nicholas DeCicco
2023-01-03T19:00:04Z
http://arxiv.org/abs/2301.01307v1
# The on-orbit performance of the Colorado Ultraviolet Transit Experiment (\(CUTE\)) Mission

###### Abstract

We present the on-orbit performance of the Colorado Ultraviolet Transit Experiment (\(CUTE\)). \(CUTE\) is a 6U CubeSat that launched on September 27th, 2021 and is obtaining near-ultraviolet (NUV, 2480 A - 3306 A) transit spectroscopy of short-period exoplanets. The instrument comprises a 20 cm \(\times\) 8 cm rectangular Cassegrain telescope, an NUV spectrograph with a holographically ruled aberration-correcting diffraction grating, and an NUV-optimized CCD detector. The telescope feeds the spectrograph through an 18\({}^{\prime}\) \(\times\) 60\({}^{\prime\prime}\) slit. The detector is a passively cooled, back-illuminated NUV-enhanced CCD. The spacecraft bus is a Blue Canyon Technologies XB1, which has demonstrated \(\leq\) 6\({}^{\prime\prime}\) jitter in 56% of \(CUTE\) science exposures. Following spacecraft commissioning, an on-orbit calibration program was executed to characterize the \(CUTE\) instrument's on-orbit performance. The results of this calibration indicate that the effective area of \(CUTE\) is \(\approx\) 19.0 - 27.5 cm\({}^{2}\) and that the average intrinsic resolution element is 2.9 A across the bandpass. This paper describes the measurement of the science instrument performance parameters as well as the thermal and pointing characteristics of the observatory.

Exoplanet atmospheres (487) -- Flux calibration (544) -- Hot Jupiters (753) -- Near ultraviolet astronomy (1094) -- Space telescopes (1547) -- Ultraviolet telescopes (1743) -- Spectroscopy (1558) -- Exoplanet atmospheric composition (2021) -- Transmission spectroscopy (2133) -- Space observatories (1543) -- Astronomical instrumentation (799)

## 1 Introduction

The Colorado Ultraviolet Transit Experiment (\(CUTE\)) is a small satellite currently obtaining near-ultraviolet transit spectroscopy of short-period exoplanets around bright stars. \(CUTE\)'s spacecraft body, measuring 11.2 cm \(\times\) 23.7 cm \(\times\) 36.2 cm, is a 6U CubeSat, where a CubeSat is a class of small satellites with external dimensions set by some multiple of a single standardized unit (U) measuring 10 cm \(\times\) 10 cm \(\times\) 10 cm. \(CUTE\) is NASA's first ultraviolet astronomy CubeSat and the first grant-funded small satellite dedicated to the characterization of exoplanetary atmospheres. The mission was developed at the Laboratory for Atmospheric and Space Physics (LASP) at the University of Colorado Boulder. This paper describes the in-flight instrument performance, while a companion paper in this issue, France et al., details the \(CUTE\) science and mission overview. Two forthcoming papers will discuss the spacecraft and instrument commissioning (A. Suresh et al. 2023, in preparation) and the data reduction pipeline (Sreejith et al. 2022).
This paper is organized as follows: Section 1.1 provides the science motivation for the \(CUTE\) mission; Section 1.2 provides a small overview of spacecraft testing and the first year of operations; Section 2 describes the spacecraft and payload; Section 3 details the instrument performance, including the spectral bandpass and resolution, effective area, background characteristics, and pointing stability; and Section 4 describes how instrument performance details drive the mission's observing strategies.

Near-ultraviolet (NUV) transit spectroscopy is a powerful tool for characterizing the upper atmospheric layers of highly-irradiated exoplanets. Planets with short orbital periods of just a few days are bathed in high-energy photons and stellar winds from their host stars, swelling their atmospheres to several planetary radii and potentially past the planet's gravitational boundary (Vidal-Madjar et al. (2003); Lammer et al. (2003); Yelle (2004); Munoz (2007); Murray-Clay et al. (2009); Ehrenreich, D. & Desert, J.-M. (2011)). Signatures of atmospheric inflation and escape are evidenced in spectroscopic transit light curves created using several strong atomic and ionic absorption features at NUV wavelengths, the depth of which corresponds to the ion's altitude and relative abundance within the atmosphere. For example, Cosmic Origins Spectrograph (COS) NUV observations of the hot Jupiter WASP-12b revealed a pseudo-continuum of metal features throughout the bandpass, with deeper transit depths in Mg ii (2796/2803 A) and at several Fe ii wavelengths (Fossati et al. (2010); Haswell et al. (2012); Nichols et al. (2015)). WASP-121b displayed extended absorption at Mg ii, Fe i at 2484 A, and in several Fe ii lines, including 2381 A and 2600 A (Sing et al. (2019)). HD 209458b has displayed Fe ii at 2370 A absorbing beyond the planet's Roche lobe (Cubillos et al. (2020)). The shape of a planet's NUV transmission spectrum can provide constraints on the presence of high-altitude clouds or hazes (Lothringer et al. (2022); Wakeford et al. (2020); Cubillos et al. (2020)). Models using these light curves seek to identify the main drivers of atmospheric outflows and provide estimates for the atmosphere's mass-loss rates. The \(CUTE\) mission was designed to observe these NUV absorption features to explore exoplanetary composition and the drivers of atmospheric mass loss for several known exoplanets: it is observing 6 - 10 transits of each target to constrain the atmospheric composition properties, identify variability among transits, and provide mass-loss rates for approximately 10 targets over the mission's lifetime.

### \(CUTE\) Mission Overview

The \(CUTE\) CubeSat, shown in Figure 1, was developed, assembled, and tested at LASP from Summer 2017 to Summer 2021. Long lead-time items were ordered in the first year, component-level calibration and testing occurred in years two and three (including a 10-month delay due to the COVID-19 pandemic), and the majority of assembly and spacecraft testing took place in the year before launch. The spacecraft was delivered to the Vandenberg Space Force Base on July 21st, 2021 and launched on September
The \(CUTE\) spacecraft was designed to observe the atmospheres of short-period exoplanets transiting bright stars with NUV transit spectroscopy. \(CUTE\) launched on September 27th, 2021 into an orbit with an average altitude of 560 km, a 97.6\({}^{\circ}\) inclination, a 10 am local ascending node, and an approximately 95-minute orbital period. Data is downlinked with an S-band radio to the LASP ground station. Spacecraft and payload commissioning took place shortly after launch through February 2022. We used two main stars to conduct initial payload characterization: \(\zeta\) Puppis (HD 66811; O4 star, V = 2.25 mag) and Castor (\(\alpha\) Gem, HD 60178; A1 star, V = 1.58 mag). These two stars have International Ultraviolet Explorer (\(IUE\)) data against which we can calculate \(CUTE\)'s effective area; they were chosen for their brightness and expected high signal-to-noise ratios, which were calculated using \(CUTE\) pre-flight effective area estimates and CCD background rates obtained from commissioning exposures. \(CUTE\)'s on-orbit effective area and a representative flux-calibrated spectrum are presented in Section 3.1.
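Pre-flight signal-to-noise estimates of the kind referenced above reduce to simple photon accounting. A minimal sketch follows; the function names, the placeholder NUV flux density, and the background rate are our own illustrative assumptions, not \(CUTE\) pipeline code or measured values (only the \(\sim\)28 cm\({}^{2}\) effective-area scale and the 300 s exposure come from this paper).

```python
import numpy as np

H = 6.626e-27   # Planck constant [erg s]
C = 2.998e10    # speed of light [cm s^-1]

def photon_rate(flux_lambda, wavelength_A, a_eff_cm2, bin_width_A):
    """Detected photon rate in one spectral bin.

    flux_lambda : stellar flux density [erg s^-1 cm^-2 A^-1]
    wavelength_A: bin center [Angstrom]
    a_eff_cm2   : effective area at this wavelength [cm^2]
    bin_width_A : bin width [Angstrom]
    """
    e_photon = H * C / (wavelength_A * 1e-8)   # erg per photon
    return flux_lambda * bin_width_A * a_eff_cm2 / e_photon

def snr(source_rate, background_rate, t_exp):
    """Poisson signal-to-noise for a source on top of a background."""
    s, b = source_rate * t_exp, background_rate * t_exp
    return s / np.sqrt(s + b)

# Placeholder NUV flux density and background rate, not CUTE values:
rate = photon_rate(1e-11, 2800.0, 28.0, 0.4)
print(f"{rate:.1f} photons/s per bin; S/N = {snr(rate, 0.05, 300.0):.0f}")
```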
\(CUTE\) science and dark exposures are typically 300s, while readout takes an additional 32s; we further command two CCD erasures, or two full transfers using vertical clocking only, before each science, dark, or bias exposure. There are opportunities for up to five 300s exposures per \(CUTE\) orbit, though constraints due to solar and lunar keep-out angles, telescope elevation lower limits, and avoidance windows around the north/south poles and South Atlantic Anomaly occasionally reduce that number (France et al. - this issue). Considering the full range of keep-out angles, between 20% and 30% of an orbit is used to obtain science and calibration frames. Additional mission operation details will be presented in a forthcoming paper (A. Suresh et al. 2023 - in prep). Full frame images with no processing, called NOPROCs, have 2200 \(\times\) 515 pixels and are occasionally downlinked to assess the full CCD health. However, due to constrained downlink capacity, the typical science data product is a 2200 \(\times\) 100 pixel sub-image that is centered on the spectral trace, called a TRIM2D.

\(CUTE\) science operations were adjusted post-launch to accommodate damage to the thermoelectric cooler (TEC) and the electronics board that controls both the TEC and the shutter. The TEC was damaged during thermal vacuum testing. As a secondary payload on a rideshare, it was not possible to replace the TEC and re-test before the delivery date. The CCD now relies on passive cooling (see Figure 10). The damaged TEC likely deposited a small contamination layer on the CCD and nearby optics, potentially degrading the instrument's effective area (Section 3.1). The TEC and shutter share the same electronics board. While the TEC was damaged pre-launch, the shutter operated nominally and showed no indication of damage. However, during on-orbit payload commissioning, the 12V rail on the TEC/shutter electronics board exhibited current spikes within a few hours of being powered; these current spikes would trigger the spacecraft's fault protection and reset the spacecraft and payload. The cause of the damage is unclear. The TEC/shutter electronics board contains a capacitor that will close the shutter whenever the 12V rail loses power, meaning that each spacecraft/payload reset closed the shutter. To prevent the electronics board from continuing to interrupt spacecraft operations, we used a 10-minute pass over the LASP ground station to power the TEC/shutter board, open the shutter, and slowly remove power from the board's 12V line to drain the shutter capacitor to a low enough charge that it could not close the shutter. The shutter is now permanently open. Details about how an open shutter affects \(CUTE\) science operations and data reduction are outlined in Sections 3.3 and 4.

## 2 Instrument Description

The \(CUTE\) instrument, shown in Figure 2, is a rectangular Cassegrain telescope provided by Nu-Tek Precision Optical Corporation with an NUV spectrograph and passively cooled CCD.
The rectangular primary mirror provides 3\(\times\) the surface area of a standard circular mirror fitting into the same volume and was designed to maximize the instrument's light-collecting area in an otherwise small payload volume. The four non-diffractive mirrors are coated in Al + MgF\({}_{2}\) while the grating is coated in bare Al. The primary mirror serves as the mounting structure for both the secondary mirror and the spectrograph. Light from the secondary mirror passes through a ridge-baffled central spire and reflects off of a 45\({}^{\circ}\) fold mirror before reaching a slit at the Cassegrain focus. The slit, manufactured by OSH Stencils, is 18\({}^{\prime}\) long in the spatial dimension with three separate sky-projected widths, 30\({}^{\prime\prime}\), 60\({}^{\prime\prime}\), and 120\({}^{\prime\prime}\), that were chosen to accommodate differently crowded target fields. We have placed the star in the middle of the 60\({}^{\prime\prime}\) section for all observations. A spectral projection of the slit on the CCD is shown in Figure 3. After passing through the slit, light is diffracted off of the bare Al coated, holographically ruled, aberration-correcting grating from Horiba-JY and is additionally focused with a cylindrical fold mirror before the spectrum is recorded on a NUV-enhanced back-illuminated e2v CCD42-10 with a 2048 \(\times\) 515 active area (CCD details are in Section 3.2 and Nell et al. (2021)). The ridge baffling inside the central spire and additional baffles inside of the spectrograph (Figure 2) mitigate strong scattered light paths, but there are additional signatures of scattered light we identified once in orbit (see Section 3.3). Optical design details are provided in Table 1.

\begin{table} \begin{tabular}{l r} \hline \hline \multicolumn{1}{c}{ Instrument Metric} & \multicolumn{1}{c}{Value} \\ \hline Primary Dimensions & 206 \(\times\) 84 mm \\ Primary Radius & 300 mm \\ Secondary Dimensions & 68 \(\times\) 26 mm \\ Secondary Radius & -129.6 mm \\ Telescope Focal Ratio & f/2.6 \\ Telescope PSF FWHM\({}^{a}\) & 6\({}^{\prime\prime}\) \\ Instrument Focal Ratio & f/5.5 in cross-dispersion \\ Slit Dimensions & 18\({}^{\prime}\) \(\times\) 120\({}^{\prime\prime}\), 60\({}^{\prime\prime}\), or 30\({}^{\prime\prime}\) \\ Grating Dimensions & 31 \(\times\) 31 mm \\ Grating Radius & 86.1 mm \\ Grating groove density & 1713.4 gr mm\({}^{-1}\) \\ Mirror Coating & Al + MgF\({}_{2}\) \\ Grating Coating & Bare Al \\ CCD Pixel Size & 13.5 \(\mu\)m \\ CCD Format & 515 \(\times\) 2048 active area \\ CCD Readout Time & 32 s \\ \hline \end{tabular} \end{table} Table 1: CUTE Optical Summary

The CCD is passively cooled; a copper heatsink and thermal strap made of several silver-coated copper wires connect the CCD to a radiator on the side of the spacecraft chassis (Egan et al. (2022)). The payload is housed in a Blue Canyon Technologies (BCT) XB1: a 6U spacecraft with 4U housing the payload, 0.5U for instrument electronics, and 1.5U for avionics including the attitude determination and control system (ADCS), batteries, and radios. Four 2U \(\times\) 3U solar panels provide power to the spacecraft bus.

## 3 Instrument Performance

The \(CUTE\) instrument was assembled and tested in LASP vacuum chamber facilities (France et al. (2016), Egan et al. (2020)). Table 2 details performance parameters between laboratory and in-flight measurements.
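The 3\(\times\) collecting-area figure quoted in Section 2 is easy to verify; a quick check, under our assumption that the comparison circular mirror is limited by the 84 mm short dimension of the same volume:

```python
import math

# Rectangular primary (206 x 84 mm, Table 1) vs. the largest circular
# mirror that fits the same 84 mm short dimension (our assumption for
# "fitting into the same volume").
rect_area = 206.0 * 84.0                     # mm^2
circ_area = math.pi * (84.0 / 2.0) ** 2      # mm^2
print(f"rectangular / circular = {rect_area / circ_area:.2f}")   # ~3.1
```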
Bandpass, spectral and spatial resolution, and effective area were all measured with two calibration stars, Castor and \(\zeta\) Puppis, and compared against \(IUE\) data early in \(CUTE\)'s commissioning phase. These stars were chosen for their brightness and visibility. Background rates, limiting flux, thermal cycles, and pointing stability were measured in-flight using dark, bias, and science frames from \(CUTE\)'s first science target, WASP-189b. As \(CUTE\) operates without a shutter, CCD pixels remain exposed to light during the 32s readout, and the CCD region above the stellar spectrum contains both background counts and residual spectral counts. An example of this is shown in Figure 4. The spectrum is seen in the center of the image, and the residual readout exposure is evident above the spectrum. The normalized one-dimensional cross-section plotted on the right of Figure 4 illustrates the residual readout exposure. In this example, the increase in counts above the spectrum due to the residual readout is on the order of 2.2% of the total CCD counts from an observation. For a typical \(CUTE\) observation with a 300s exposure time and an average of 150 counts per pixel, the readout time introduces an additional 0.031 counts to each pixel. This is about 0.90% of the read noise as measured from the blank CCD pixels, and thus the error from the exposed readout is well below the error from other background and noise sources.

Figure 2: CAD renderings of the \(CUTE\) telescope and spectrograph. **Top**: front view of the Cassegrain telescope. **Middle**: view of the spectrograph internals. **Bottom**: view of the fully closed-out spectrograph, including the detector and the thermal strap attached to the spacecraft radiator.

### Spectrograph

In this section, we characterize \(CUTE\)'s in-flight spectrograph and present the measured effective area, the bandpass, and the spectral and spatial resolutions. We used \(IUE\) observations of Castor to measure the spectral and spatial resolution; \(CUTE\)'s two-dimensional spectrum of Castor and one-dimensional dispersion profiles are shown in Figure 5. The tilt of the spectral trace, the asymmetric out-of-focus two-dimensional spectrum, the bandpass, and the dispersion have different values in-flight than measured in the laboratory, indicating shifts in the optical system which likely occurred during launch. The tilt of the spectral trace is largely a result of the fold focusing mirror's position that was set during the focusing process, though the CCD placement also affects the footprint of the spectrum on the detector. The fold focusing mirror (Figure 2, middle panel) has three adjustment screws on three of the four corners that were used to focus the spectrograph (see Egan et al. (2020) for more details). The spectrograph's best focus was found with a spectral trace tilt of 1.08\({}^{\circ}\); in-flight, the trace's tilt measures 0.85\({}^{\circ}\). The spectrograph defocused during launch and now has a double-lobe feature at the blue end that merges into a single lobe at the red end. Despite the change in profile across the detector, the total spectral extraction region required to fully capture the extent of the cross-dispersion spectrum remains at a constant value of about 25 pixels across the bandpass for observations with jitter less than 6\({}^{\prime\prime}\) RMS, compared to a 13-pixel extraction region measured in the laboratory. For each additional row added to the extraction region, the total noise increases by about 4%.

Figure 4: **Left**: 2048 \(\times\) 515 CCD image of Castor's spectra with the background subtracted. The spectrum is the bright green line in the image's center. Below the spectrum is residual background subtraction noise, and above the spectrum is residual spectral light as the CCD is read out while the shutter remains open. CCD register clocking moves charges to the right and vertical clocking moves the charges down. **Right**: One-dimensional cross-section of the frame to illustrate the readout streak. The vertical dashed line marks the median value of the readout residual exposure at 2.2% of the normalized counts cross-section.

Figure 3: A 2048 \(\times\) 515 \(CUTE\) CCD image with the slit fully illuminated with a diffuse, uncollimated Hg light source, obtained during thermal vacuum testing. The projected image of the slit is shown at several mercury lines, including 2536 Å, 2967 Å, and a doublet at 3125 and 3131 Å. The exposure time for this image was set to sample the fainter spectral features of the light source; as a result, many pixels in the 2536 Å line reached saturation levels and some side effects of this saturation can be seen in the image.

We used \(IUE\) observations of \(\zeta\) Puppis to measure the in-flight bandpass; the vertical tick marks in Figure 7 show the stellar features used to that end. The measured in-flight bandpass is 2480 - 3306 Å, a change from the laboratory-measured bandpass of 2479 - 3322 Å. The spectrograph's dispersion was also affected by launch; in the laboratory, the bandpass-averaged dispersion was measured between 0.47 Å/pixel and 0.39 Å/pixel, and in-flight it is measured to be 0.45 Å/pixel to 0.35 Å/pixel. Altogether, the change in trace tilt, the shifted bandpass, the change in dispersion, and the spectrograph's defocus indicate shifts occurring in the optical chain after the Cassegrain focus.

The end-to-end effective area of the optical system was not directly measured prior to launch, but rather calculated using a combination of measured and provided quantities: the telescope's geometric collecting area, optical reflectivities from coating witness samples provided by Nu-Tek for the four non-diffractive reflecting optics, the measured detector quantum efficiency (QE) from Nell et al. (2021), and laboratory-measured efficiency of the bare Al-coated diffraction grating (Egan et al. (2020)). The in-flight effective area was measured using a combination of \(IUE\) flux-calibrated spectra of \(\zeta\) Puppis for \(\lambda<3100\) Å and a linear model for \(\lambda>3100\) Å. \(IUE\) sensitivity falls off significantly after 3100 Å and therefore our initial calibration program could not directly measure \(CUTE\)'s effective area beyond 3100 Å. We instead made the assumption that the slope of the stellar flux from \(\lambda\approx 2500\) - 3100 Å remained the same for \(\lambda\approx 3100\) - 3306 Å and used that slope to approximate the stellar flux redward of 3100 Å. We combined the \(IUE\) flux and the model to calculate \(CUTE\)'s effective area. The \(IUE\) \(\zeta\) Puppis spectra were obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute and can be accessed via 10.17909/a8wa-vc91.
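The effective-area determination described here, and spelled out step by step in the next paragraph, amounts to a per-wavelength division of the observed count rate, converted to an energy rate, by the \(IUE\) flux. A minimal sketch under that reading follows; the function name and the illustrative numbers are ours, not \(CUTE\) pipeline values:

```python
import numpy as np

HC = 1.986e-8  # h*c in erg * Angstrom

def effective_area_cm2(count_rate_per_A, wavelength_A, f_iue):
    """Empirical effective area from a standard-star observation.

    count_rate_per_A : background-subtracted rate [counts s^-1 A^-1]
    wavelength_A     : wavelength grid [Angstrom]
    f_iue            : IUE flux on the same grid [erg s^-1 cm^-2 A^-1]
    The count rate is converted to an energy rate (erg s^-1 A^-1) and
    divided by the incident flux, leaving cm^2.
    """
    e_photon = HC / np.asarray(wavelength_A, dtype=float)  # erg/photon
    return count_rate_per_A * e_photon / f_iue

# Illustrative only: 1 count/s/A at 2500 A against an IUE flux of
# 3e-13 erg/s/cm^2/A would imply A_eff ~ 26 cm^2.
area = effective_area_cm2(1.0, 2500.0, 3.0e-13)
print(f"{float(area):.1f} cm^2")
```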
To arrive at the \(CUTE\) effective area in units of cm\({}^{2}\), the \(CUTE\) two-dimensional \(\zeta\) Puppis spectrum image was background subtracted, the spectral trace was extracted and summed into a one-dimensional counts spectrum, converted into erg s\({}^{-1}\) Å\({}^{-1}\), and finally divided by the \(IUE\) spectra in units of erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\). The resulting flux-calibrated \(CUTE\) spectrum of \(\zeta\) Puppis is shown in Figure 7. The effective area ranges from 19.0 - 27.5 cm\({}^{2}\) across the bandpass and a smoothed effective area curve is shown in Figure 6. The in-flight measured effective area is lower than the laboratory-calculated values by a median of 12% across the bandpass. We attribute this change to two possible causes: a contamination layer deposited onto the CCD and optics during the TEC failure (Section 1.2, Egan et al. (2022)) and/or atmospheric contamination of the optics during the two-month period when \(CUTE\) sat in the launch dispenser. However, \(CUTE\) science measurements are relative in nature and an individual light curve is made from observations taken within \(\approx\)24 hours of each other; changes in the instrument's sensitivity between delivery and science operations do not affect \(CUTE\) light curve creation, other than the raised noise floor from the TEC failure and launch-induced defocus.

\begin{table} \begin{tabular}{l c c} \hline \hline \multicolumn{1}{c}{ Metric} & Laboratory Value & In-flight Value \\ \hline Bandpass & 2480 – 3322 Å & 2480 – 3306 Å \\ Spectral Tilt & 1.05\({}^{\circ}\) & 0.85\({}^{\circ}\) \\ Intrinsic Spectral Resolution\({}^{a}\) & 2.1 Å & 2.9 Å \\ Cross-Dispersion Resolution\({}^{a}\) & 12.5\({}^{\prime\prime}\) & 30\({}^{\prime\prime}\) \\ A\({}_{eff}\) at 2500 Å & 28.8 cm\({}^{2}\) & 27.5 cm\({}^{2}\) \\ Background Limiting Flux\({}^{b}\) & N/A & 5 \(\times\) 10\({}^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\) \\ \hline \end{tabular} \({}^{a}\) Average resolution over the bandpass. In-flight value has ADCS jitter effects removed. \({}^{b}\) In 300s, measured on orbit only, evaluated at 3000 Å. \end{table} Table 2: \(CUTE\) laboratory and measured in-flight performance

Figure 5: **Top**: A 2048 \(\times\) 515 \(CUTE\) spectral image of Castor with background effects removed to show the spectrum. **Middle left**: Zoom of a blue portion of the \(CUTE\) spectrum to highlight the out-of-focus lobes. **Middle Right**: Zoom of a red portion of the \(CUTE\) spectrum to highlight the less-out-of-focus lobes. **Bottom**: One-dimensional cross-sections of the middle panels.

The spectral and spatial resolution were measured using \(IUE\) observations of Castor. We define the spatial resolution as the full width at half-maximum (FWHM) of the spectrum in the cross-dispersion direction. This varies by a few pixels across the bandpass (as shown in Figure 5) with an average of 30\({}^{\prime\prime}\) at 3000 Å for observations with jitter less than 6\({}^{\prime\prime}\) RMS. \(CUTE\)'s spectral resolution as measured from its one-dimensional spectrum comes from the combination of the spectrograph's intrinsic resolution and spectral smearing from spacecraft jitter. Assuming a Gaussian profile for both the spectral line shape and the jitter distribution, the FWHMs of each add in quadrature to arrive at the spectral resolution measured directly from \(CUTE\) one-dimensional spectra. We convolved \(IUE\) spectra of Castor until they matched \(CUTE\)'s spectra.
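Inverting the quadrature relation is straightforward; a minimal sketch, where the plate scale used to map jitter onto the dispersion axis is an assumed placeholder (only the 6\({}^{\prime\prime}\) jitter and the \(\sim\)0.4 Å/pixel dispersion are taken from this paper):

```python
import numpy as np

FWHM_PER_RMS = 2.3548  # Gaussian FWHM = 2.3548 * RMS

def intrinsic_fwhm_A(measured_fwhm_A, jitter_rms_arcsec,
                     plate_scale_arcsec_pix, dispersion_A_pix):
    """Remove Gaussian jitter smearing from a measured line width.

    The jitter RMS (arcsec) is projected onto the dispersion axis with
    an assumed plate scale, converted to a FWHM in Angstroms, and
    removed in quadrature: FWHM_int^2 = FWHM_meas^2 - FWHM_jitter^2.
    """
    jitter_fwhm_A = (FWHM_PER_RMS * jitter_rms_arcsec
                     / plate_scale_arcsec_pix * dispersion_A_pix)
    return np.sqrt(measured_fwhm_A**2 - jitter_fwhm_A**2)

# The 12"/pixel plate scale and the 3.2 A measured width are
# placeholders for illustration only.
print(f"{intrinsic_fwhm_A(3.2, 6.0, 12.0, 0.4):.2f} A")
```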
Using the observation's jitter of 6\({}^{\prime\prime}\) RMS, we arrive at a 2.9 Å intrinsic spectral resolution element.

Figure 6: Effective area of the \(CUTE\) instrument as proposed (blue), estimated with laboratory measurements (orange), and calculated in-flight and smoothed (black). The in-flight effective area beyond \(\lambda\approx\) 3100 Å is estimated based on a model fit to the \(IUE\) \(\zeta\) Puppis one-dimensional spectrum.

Figure 7: Flux-calibrated spectrum of \(\zeta\) Pup. A \(CUTE\) spectrum is in black, plotted over the blue \(IUE\) spectrum. Black vertical tick marks note the spectral features used to calculate \(CUTE\)'s wavelength solution. \(IUE\) sensitivity falls off beyond 3100 Å, so a model was fit to the \(IUE\) data in order to calculate \(CUTE\)'s effective area and flux spectrum.

### CCD Characterization

The detector has a 2200 \(\times\) 515 pixel format with the following layout for each row: 51 true overscan pixels, 50 blank pixels, 2048 active pixels, 50 blank pixels, and 1 null element (a readout artifact used in testing to simplify waveform memory implementation in the FPGA), as can be seen in the full frame images in Figure 8. The 100 blank pixels are not able to sample dark current. We use the horizontal blank and overscan pixels to measure the read noise and bias levels and implement a 5-pixel buffer to avoid charge transfer effects at the active/blank pixel boundary. For example, a full 2200 \(\times\) 515 CCD image will have on one side a 50 \(\times\) 515 blank region, and a 40 \(\times\) 505 sub-region is used to calculate bias and read noise levels. The same 5-pixel buffer enacted on the 102 \(\times\) 515 blank + overscan side produces a 92 \(\times\) 505 region. The height of these regions is reduced to 90 pixels for a TRIM2D frame. The read noise is considered to be the RMS of these sub-regions and the bias level is calculated from the median. The CCD has a single output channel; register clocking moves charges from the blue end to the red end of the detector, and vertical clocking moves charges from the top to the bottom of the images shown in Figures 3, 4, and 5.

Much of the CCD characterization took place in the laboratory and is detailed in Nell et al. (2021). Quantities including gain, photon-response non-uniformity (PRNU), and non-linearity were not able to be measured in flight due to the time and budget constraints of a suborbital-class mission. In-flight flat fields cannot be obtained as there is no onboard calibration lamp and true flats cannot be obtained with a dispersing element. \(CUTE\) science and calibration targets fill the pixel wells to about 5% of full capacity, levels where non-linearity is not a concern. PRNU, gain, and any dust effects are accounted for by (1) the flux calibration and (2) the nature of light curves being fundamentally a relative measurement.

### Background

A single dark frame has contributions from dark noise, read noise, detector bias, and the sky due to the unshuttered spectrograph. The background exhibits additional variation dependent on (1) the CCD's temperature-dependent dark rate and (2) a scattered light feature with a brightness that is correlated with the telescope's elevation angle; an extreme example of this scattered light feature is shown in Figure 8. Both dark and bias frames contain some level of contribution from all of the above phenomena. Figure 10 shows the detector's periodic temperature cycle over several orbits, cycling between \(\approx-11^{\circ}\)C and \(-6^{\circ}\)C.
These temperature swings are evident in the average background frame count rates, shown in Figure 9. The background count rate increases as temperature increases, which is the expected dark rate behavior of the detector. However, the background count rate additionally varies at similar temperatures due to scattered light entering the spectrograph; the angles between the telescope and the Earth's limb, the Sun, and the Moon all have an effect on the level of scattered light present in background frames. An example is shown in Figure 9 where the Sun's elevation angle is correlated with increased background counts. In CCD regions that are more prone to scattered light (e.g. the left side of the CCD as shown in the top panel of Figure 8), background count rates differ on the order of about 0.15 photons s\({}^{-1}\) pixel\({}^{-1}\) when telescope elevation is maintained above \(10^{\circ}\).

Background subtraction for a single science frame must consider the orbital and thermal environment of the exposure. The temperature- and pointing-dependent nature of the background (e.g. Figure 9) means that each science frame must have tailored calibration frames. Efforts are underway to model each pixel's value as a function of both its intrinsic behavior and its environment in order to create individual background frames for a given science exposure, based on the frame's exposure conditions; these efforts will be described in detail in Egan et al. 2023 - in prep, and we currently use a combination of median-combined dark and bias frames with similar telescope pointings and CCD temperatures to approximate the background in each science frame.

Figure 8: Two 2200 \(\times\) 515 \(CUTE\) frames of different telescope elevation angles, \(3^{\circ}\) and \(12^{\circ}\), to demonstrate the scattered light feature present at low telescope elevation angles. The elevation of each exposure is marked in the left-side overscan region. The top and middle frames compare the \(3^{\circ}\) and \(12^{\circ}\) elevation angles with the same colorbar. The bottom frame shows the \(12^{\circ}\) frame with a colorbar scaled to the frame. Typical frame features are evident on the \(12^{\circ}\) frame: an increase in counts from the bottom to the top of the frame and hot pixel streaks due to the 32s CCD readout time. The horizontal dark bars in the top image are missing data packets that were unsuccessfully downlinked.

Figure 9: \(CUTE\) CCD dark frame background rates for the detector, plotted against detector temperature and colored by the elevation of the Sun with respect to the telescope boresight. In general, higher CCD temperatures produce higher background count rates. Background rates are additionally influenced by the position of astronomical bodies like the Sun as exemplified here, as well as the Moon and Earth. At the time of writing, we are still exploring sources of contribution to background rates.

Finally, we present \(CUTE\)'s background flux limit. After a science frame is background-subtracted, we define a background region below the spectrum that has the same size as the frame's spectral extraction region and calculate the background flux limit, or minimum flux below which a source cannot be detected above the background. The counts in the background region are converted to flux and then averaged for 67 \(\times\) 300s WASP-189 science frames. The background flux limit is \(\approx\) 5 \(\times\) 10\({}^{-14}\) erg s\({}^{-1}\) cm\({}^{-2}\) Å\({}^{-1}\).
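The counts-to-flux conversion behind this limit can be sketched as follows; the \(\sim\)22 cm\({}^{2}\) effective area near 3000 Å is our read of Figure 6, not a tabulated value, and the function is illustrative rather than the mission pipeline:

```python
HC = 1.986e-8  # h*c in erg * Angstrom

def counts_to_flux(count_rate_per_A, wavelength_A, a_eff_cm2):
    """Convert a count rate per Angstrom to erg s^-1 cm^-2 A^-1."""
    return count_rate_per_A * (HC / wavelength_A) / a_eff_cm2

# Round trip at 3000 A with an assumed A_eff ~ 22 cm^2: the quoted
# ~5e-14 erg/s/cm^2/A limit corresponds to ~0.17 counts/s/A of
# residual background.
rate = 5e-14 * 22.0 / (HC / 3000.0)
print(f"{rate:.2f} counts/s/A -> {counts_to_flux(rate, 3000.0, 22.0):.1e}")
```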
Background subtraction and handling of hot pixels are discussed in more detail in Sreejith et al. (2022).

### Pointing Stability

The quality of \(CUTE\) spectra is strongly influenced by the spacecraft pointing jitter. Stability about the commanded coordinates can vary between 3\({}^{\prime\prime}\) and more than 20\({}^{\prime\prime}\) and thereby smear the spectrum across a greater CCD area, increase the spectral extraction region, and reduce the signal-to-noise of the one-dimensional spectrum. Significant jitter can cause vignetting of the telescope point-spread-function by the slit mask and render those data unusable for transit spectroscopy. We currently use a 6\({}^{\prime\prime}\) jitter cutoff for science frames to undergo data reduction, and observations with higher jitter are not used (Sreejith et al. 2022). Figure 11 shows a histogram of ADCS jitter in all of the WASP-189 science frames which include complete jitter telemetry; jitter is less than 6\({}^{\prime\prime}\) for about 56% of \(CUTE\) observations. In our preliminary data reduction and light curve creation, we remove frames with higher jitter to eliminate low signal-to-noise observations; this cutoff will be honed as we continue to refine our data reduction pipeline (Sreejith et al. (2022)).

Figure 11: Jitter histogram for 254 \(\times\) 300s science exposures from the WASP-189b campaign. The pointing sample rate is 5s for each 300s observation. Histogram bin widths are 0.5\({}^{\prime\prime}\) for jitter \(<\) 10\({}^{\prime\prime}\) and 1\({}^{\prime\prime}\) for jitter \(>\) 10\({}^{\prime\prime}\). A vertical dashed line indicates the cutoff used to eliminate low signal-to-noise observations from light curve analysis.

Figure 10: A typical CCD temperature profile over several \(CUTE\) orbits.

## 4 Mission Operations

\(CUTE\)'s standard mission operation sequence involves observing a given science target for 6\(-\)10 transits. Each visit's duration is centered on the planet's mid-transit time and lasts for a span of time equal to five times the transit duration in order to establish an out-of-transit stellar baseline pre- and post-transit. \(CUTE\) target transits last between 3 and 6 hours, or about 2 to 4 \(CUTE\) orbits, while the duration of each visit is typically less than 24 hours. As discussed in Section 3, \(CUTE\) data quality has pointing and temperature dependencies: (1) CCD exposures are captured without a shutter, meaning that calibration frames are not truly "dark" and include sky background; (2) CCD dark rates scale with the thermal changes present due to the passive cooling method; (3) there is a pointing-dependent scattered light feature present in all frames. We create observation plans that take these into account, as detailed below. The roll angle for a given science target observing campaign is set with two considerations. First, we prioritize keeping the CCD as cool as possible by orienting the radiator panel into space and away from the Earth and Sun. Once that angle is set, we additionally check the target field to make sure no bright stars other than the target are within the slit; this has been the case for all science observations obtained since launch. The roll angle is maintained for all calibration frames taken adjacent to a given set of science exposures.
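The per-frame jitter statistic behind the 6\({}^{\prime\prime}\) cutoff can be computed directly from the 5 s ADCS samples. A minimal sketch with synthetic telemetry (the array layout and the toy excursion values are our assumptions):

```python
import numpy as np

def frame_jitter_rms(ra_off, dec_off):
    """RMS pointing excursion [arcsec] about the commanded coordinates,
    computed from the per-frame ADCS samples (5 s cadence -> 60 samples
    per 300 s exposure)."""
    r = np.hypot(np.asarray(ra_off), np.asarray(dec_off))
    return float(np.sqrt(np.mean(r**2)))

def passes_cutoff(jitter_rms, cutoff_arcsec=6.0):
    """Apply the jitter cutoff used to screen frames for reduction."""
    return np.asarray(jitter_rms) < cutoff_arcsec

# Toy telemetry: two well-pointed frames and one badly smeared frame.
rng = np.random.default_rng(0)
frames = [frame_jitter_rms(rng.normal(0.0, s, 60), rng.normal(0.0, s, 60))
          for s in (2.0, 3.5, 15.0)]
print([round(j, 1) for j in frames], passes_cutoff(frames))
```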
To capture scattered light features from the Sun, Moon, or Earth limb that may appear in science frames (Figure 9), dark and bias calibration frames are planned to occur at approximately the same orbital position as science frames with a pointing offset from the target star of 0.75\({}^{\circ}\); this has an additional benefit of obtaining a calibration frame in a similar thermal state. Fully characterizing the background is a continued effort. Background trends related to scattered light and telescope pointing did not become apparent until a sufficient number of science and dark frames from the WASP-189 observing campaign were analyzed. Efforts are underway to model each pixel's behavior in its environment to better produce calibration frames for background subtraction (Egan et al. 2023 - in prep.).

## 5 Conclusion

\(CUTE\) is currently obtaining NUV transit spectroscopy of short-period exoplanets around bright stars. We have detailed the NUV payload's spectroscopic performance as calculated within the first six months of calibration and science observations, including the spectral and spatial resolution, the effective area, and the limiting flux level. We have demonstrated stable pointing for about 56% of our observations. Science operations will continue through June 2023, during which we will conduct additional calibration campaigns to measure time-dependent sensitivity, as well as continue to hone our understanding of the background. \(CUTE\) is expected to reenter the atmosphere within 3 years from launch.

**Acknowledgments:** \(CUTE\) was developed and operated with the support of two NASA/APRA awards to the Laboratory for Atmospheric and Space Physics at the University of Colorado Boulder, NNX17AI84G and 80NSSC21K1667s. A. G. S. was supported by a Schrödinger Fellowship through the Austrian Science Fund (FWF) [J 4596-N] and additionally acknowledges financial support from the Austrian Forschungsförderungsgesellschaft FFG projects 859718 and 865968. The \(CUTE\) team acknowledges the numerous invaluable discussions with colleagues excited about ultraviolet transit science and the potential to do science with small satellites. The \(CUTE\) team wishes to specifically recognize the amateur radio operator community for hosting numerous telemetry tracking tools that have improved the mission's ability to recover from faults and understand long-term spacecraft trends much more efficiently than would have been otherwise possible.
2308.15679
The porous envelope and circumstellar wind matter of the closest carbon star, CW Leonis
Recent abrupt changes of CW Leonis may indicate that we are witnessing the moment that the central carbon star is evolving off the Asymptotic Giant Branch (AGB) and entering into the pre-planetary nebula (PPN) phase. The recent appearance of a red compact peak at the predicted stellar position is possibly an unveiling event of the star, and the radial beams emerging from the stellar position resemble the feature of the PPN Egg Nebula. The increase of the light curve over two decades is also extraordinary, and it is possibly related to the phase transition. Decadal-period variations are further found in the residuals of light curves, in the relative brightness of radial beams, and in the extended halo brightness distribution. Further monitoring of the recent dramatic and decadal-scale changes of this most well-known carbon star CW Leonis at the tip of the AGB is still essential, and will help us gain a more concrete understanding of the conditions for transition between the late stellar evolutionary phases.
Hyosun Kim, Ho-Gyu Lee, Youichi Ohyama, Ji Hoon Kim, Peter Scicluna, You-Hua Chu, Nicolas Mauron, Toshiya Ueta
2023-08-30T00:31:16Z
http://arxiv.org/abs/2308.15679v1
# The porous envelope and circumstellar wind matter of the closest carbon star, CW Leonis

Hyosun Kim\({}^{1}\), Ho-Gyu Lee\({}^{1}\), Youichi Ohyama\({}^{2}\), Ji Hoon Kim\({}^{3}\), Peter Scicluna\({}^{4}\), You-Hua Chu\({}^{2}\), Nicolas Mauron\({}^{5}\) and Toshiya Ueta\({}^{6}\)

Winds of Stars and Exoplanets, IAU Symposium No. 370 (2022), A. A. Vidotto, L. Fossati & J. Vink, eds.

###### Abstract

Recent abrupt changes of CW Leonis may indicate that we are witnessing the moment that the central carbon star is evolving off the Asymptotic Giant Branch (AGB) and entering into the pre-planetary nebula (PPN) phase. The recent appearance of a red compact peak at the predicted stellar position is possibly an unveiling event of the star, and the radial beams emerging from the stellar position resemble the feature of the PPN Egg Nebula. The increase of the light curve over two decades is also extraordinary, and it is possibly related to the phase transition. Decadal-period variations are further found in the residuals of light curves, in the relative brightness of radial beams, and in the extended halo brightness distribution. Further monitoring of the recent dramatic and decadal-scale changes of this most well-known carbon star CW Leonis at the tip of the AGB is still essential, and will help us gain a more concrete understanding of the conditions for transition between the late stellar evolutionary phases.

stars: AGB and post-AGB, (stars:) binaries: general, stars: carbon, (stars:) circumstellar matter, stars: evolution, stars: individual (CW Leonis), stars: late-type, stars: mass loss, stars: winds, outflows

## 1 Introduction

Many pre-planetary nebulae (PPN) consist of newly-formed inner bipolar/multipolar lobes and outer spirals/rings/arcs that are the fossil records of stellar wind matter accumulated during the asymptotic giant branch (AGB) phase. The coexistence of two such morphologically distinct circumstellar structures is a mystery; however, it is widely believed that binaries play a key role. The most direct clue to resolving the mystery of the shape transition along stellar phase evolution may be offered by catching the moment when an AGB star is evolving off the current phase toward the PPN phase. Recent dramatic changes of CW Leonis likely indicate that we are witnessing the moment of transition between these late stellar evolutionary phases.

## 2 Previous Views on CW Leonis

CW Leonis is the closest (distance of about 123 pc; Groenewegen et al. 2012) and the most well-studied carbon-rich AGB star (or carbon star). Multi-wavelength observations suggest that CW Leonis is likely a binary system (e.g., Jeffers et al. 2014; Decin et al. 2015). Its non-concentric ring-like pattern over 200 arcsec is remarkable (Mauron & Huggins 2000), and can be modeled by a spiral-shell structure introduced by an eccentric-orbit binary at the center (e.g., Cernicharo et al. 2015). However, neither the carbon star nor the companion has been identified because of obscuration by the dense circumstellar matter ejected from this extreme carbon star at the tip of the AGB. Several near-infrared observations were executed with adaptive optics and speckle interferometry in 1995-2003 (Tuthill et al. 2000; Osterbart et al. 2000; Weigelt et al. 2002; Murakawa et al.
2005), achieving high resolution (\(<0.1\) arcsec) but losing stellar positional information at the cost of field size; several clumps were revealed, but their relationship with the central star was unclear. The vigorous debate about which clump corresponds to the carbon star ended in vain when a monitoring study over 2000-2008 showed that the clumps faded out around 2005 (Stewart et al. 2016). In the optical, before 2011, the core region exhibited an extended bipolar nebula without any distinct point source (Haniff & Buscher 1998; Skinner et al. 1998; Leão et al. 2006), from which this object was thought to have an invisible star residing in a dusty disk lying perpendicular to the bipolar structure. The bipolar-like structure, however, disappeared in the latest Hubble Space Telescope images taken in 2011 and 2016 (Kim et al. 2015, 2021), suggesting a completely different view for CW Leonis.

## 3 Recent Dramatic Changes and Porous Envelope Scenario

Surprisingly, the latest optical images of CW Leonis, taken using the Hubble Space Telescope in 2011 and 2016, revealed dramatic changes in the core region of the circumstellar envelope from those taken about 10 years earlier in 1998 and 2001 (Figure 1).

Figure 1: Temporal change of brightness in the central 1.5 arcsec region of CW Leonis. The star symbol indicates the proper-motion-corrected position of the star at each epoch, denoted at the top of each panel. The bipolar-like structure (black line) before 2011, which had likely given a misleading impression of the origin and evolutionary phase of CW Leonis, disappeared in the 2011 and 2016 epochs. From left to right, the Hubble Space Telescope images are taken with the F606W filter (\(\sim 0.6\,\mu\)m) at the epochs of 1998-03-30 (Prop. ID: 6856, PI: J. Trauger), 2001-01-07 (Prop. ID: 8601, PI: P. Seitzer), 2011-06-04 (Prop. ID: 12205, PI: T. Ueta), and 2016-05-17 (Prop. ID: 14501, PI: H. Kim).

Besides the disappearance of the long-believed bipolar-like nebula, several important features are identified and become evidence for a porous envelope of the central star (Kim et al. 2021). In contrast to its absence at the previous epochs, a local brightness peak appears exactly at the expected stellar position (Figure 2, top middle) and is identified as the reddest spot in the color map (Figure 2, top left). It is compact; its full width at half maximum above the adjacent diffuse emission is slightly larger than the standard point-spread function. This red compact spot at a local peak is interpreted as the direct starlight, escaping through one of the gaps in the clumpy envelopes shrouding the star. This radial beam may precess around the line of sight with a small angle and be coincidentally aligned with the line of sight at the observed epoch in 2016. The outer part of the observed image exhibits eight straight lines of brightness that are radially stretched from the central star (Figure 2, top right). In the context of the porous envelope scenario, these searchlight beams indicate the trajectories of starlight penetrating the holes in the inner envelopes, along which adjacent dust particles in the circumstellar envelopes are illuminated. Any other interpretation for their origin is precluded because of the straightness of these beams regardless of the considerably fast stellar proper motion.
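The epoch-by-epoch proper-motion correction mentioned in the Figure 1 caption is a small linear extrapolation. A sketch below illustrates the bookkeeping; the proper-motion values are placeholders, not the measured motion of CW Leonis:

```python
def pm_corrected_offset(mu_ra_masyr, mu_dec_masyr, epoch, ref_epoch):
    """Predicted stellar offset [arcsec] from its reference-epoch
    position for a given proper motion in mas/yr (linear motion only)."""
    dt_yr = epoch - ref_epoch
    return (mu_ra_masyr * dt_yr / 1000.0, mu_dec_masyr * dt_yr / 1000.0)

# Placeholder proper motion, not the measured value for CW Leonis:
# ~35 mas/yr of total motion accumulates to ~0.6" between 1998 and 2016,
# i.e. several HST resolution elements, hence the per-epoch correction.
print(pm_corrected_offset(25.0, -25.0, 2016.4, 1998.2))
```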
The extended halo brightness distribution becomes fairly symmetric about the central star in the 2016 image, compared to the elongated distribution to the northwest in the 2011 image (Figure 2, bottom panels). This change is again beyond the scope of the dynamics of matter, which would require a very long time to move the whole halo given its extremely large extent. It is speculated that one of the radial beams, which is related to the new emergence of the central red compact brightness peak, alters its angle toward the line of sight in 2016 from a slightly misaligned angle pointing toward the northwestern direction in 2011, explaining the redistribution of the halo brightness.

Figure 2: The Hubble Space Telescope image of CW Leonis taken with the F814W filter (\(\sim 0.8\,\mu\)m) in 2016 (bottom right), compared to the 2011 epoch image (bottom left). The radial beams and multiple rings appearing prominent in the central 5 arcsec region are the intriguing features (top right). The color map for the central 1 arcsec region shows that the reddest spot coincides with the stellar position, marked by black contours of the F814W brightness (top left), the same as the image in the top middle panel. The color bars show the F814W brightness on logarithmic scales, except for the color map in the top left panel, which is on a linear scale of the magnitude difference between the F606W and F814W images.

The radial beams appearing in the plane of the sky do not seem to shift their positions much with time (Figure 3). Their position angles with respect to the predicted stellar positions at the individual epochs are almost fixed. Therefore, it is natural that we assume the precession angle for the radial beam relevant to the extended brightness distribution is small. In order to verify this scenario, another epoch of imaging observations with the same setup is anticipated. In contrast, the relative brightnesses of the radial beams significantly change, and the period seems to be about 10 years (Figure 3). The brightest beams are toward north in the 2001 and 2011 epochs (downward in the figure) while toward south in the 2016 epoch (upward). Indeed, besides the stellar pulsation of 640-day period, a decadal variation has been suggested based on near-infrared and optical photometric data (Dyck et al., 1991; Kim et al., 2021). In particular, the increases of \(K\)-band flux and the point source contribution in it during 1980-1990 are quite similar to the event found in 2016. To assess whether the variations are indeed periodic, more frequent longer-term monitoring observations are desired.

## 4 Anisotropic Wind Expansion induced by an Eccentric Binary

The multiple shell pattern wrapping around the star is one of the most intriguing features of CW Leonis. Our analysis using the differential proper motion of the pattern indicates expansion of shells of material ejected from the star. The derived speeds of the expanding shells depend on the direction (Figure 4, left). The expansion speeds not only vary across different position angles within the Hubble Space Telescope image (about \(7\,\mathrm{km\,s^{-1}}\) faster to the south), but their average speed in the plane of the sky is also about \(2\,\mathrm{km\,s^{-1}}\) slower than the wind speed along the line of sight that is derived from molecular line observations at radio wavelengths. This variation of measured speeds indicates an overall nonspherical geometry of the wind matter.
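The conversion from a measured ring displacement to a plane-of-sky speed is a one-liner; a sketch below, where the \(\sim\)0.12 arcsec shift is our back-of-the-envelope illustration for a canonical \(\sim\)14.5 km s\(^{-1}\) wind at the 123 pc distance quoted above, not a value from the measurement itself:

```python
def expansion_speed_kms(dtheta_arcsec, dt_yr, distance_pc):
    """Plane-of-sky shell speed from its differential proper motion:
    mu [arcsec/yr] * d [pc] is the speed in AU/yr; 1 AU/yr = 4.74 km/s."""
    return 4.74 * (dtheta_arcsec / dt_yr) * distance_pc

# A ~14.5 km/s wind at 123 pc shifts a ring by only ~0.12" over the
# 5 years between the 2011 and 2016 epochs -- comparable to the HST
# point-spread function, which is why the measurement is delicate.
print(f"{expansion_speed_kms(0.124, 5.0, 123.0):.1f} km/s")
```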
We further find that these observations (Figure 4, left panel) are compatible with a binary model having an eccentric orbit (see the right panel of Figure 4, from Kim, 2022). We note, however, that the velocity measurement was somewhat uncertain. The 2016 image was not as deep as the 2011 image, reducing the number of rings used in the analysis. Another obstacle was the relatively small expansion length (the average positional difference of individual rings between the 2011 and 2016 epochs) due to the short 5-year interval, which was only slightly larger than the size of the point-spread function. For these reasons, high-resolution, high-sensitivity imaging monitoring is further needed.

Figure 3: Searchlight beams of CW Leonis are fixed in their relative positions with respect to the proper-motion-corrected stellar position, while their relative brightnesses vary with time. The snapshots are taken from an animation in the Research Gallery of http://hubblesite.org. Image credit: ESA/Hubble, NASA, Toshiya Ueta (Univ. of Denver), Hyosun Kim (KASI).

## 5 Conclusion

The recent drastic changes in optical images suggest that the previously-seen bipolar-like structure could not be a concrete structure but could possibly be parts of searchlight beams with relative brightnesses varying in time. These radial beams reveal the pathways of starlight illuminating dusty material after escaping through the gaps in the clumpy envelope enshrouding the star. The appearance of a distinct brightness peak exactly at the predicted stellar position and the abnormal shift of the large halo distribution can both be explained by a hypothesized radial beam toward us that is slowly precessing with a small angle and aligned with the line of sight at the latest observation epoch. Although the complexity of CW Leonis has been well known since its discovery, the three-dimensional morphology of its central core and evolving beams is uniquely revealed with recent Hubble Space Telescope optical monitoring of the core images. It also allows us to trace the expansion velocity of shells and clumps as seen in the evolving light of the central star. Furthermore, ongoing efforts to fit the data with three-dimensional hydrodynamic models suggest a small inclination of the orbit (close to face-on) with the pericenter of the mass-losing star at north. Further systematic monitoring of this canonical high mass-loss carbon star is mandatory to strengthen our interpretation, coupled with our modelling. The Hubble Space Telescope allows us to be very near to establishing a robust understanding of the mass loss of this strategic but mysterious AGB (or soon post-AGB) star.

## Acknowledgements

HK acknowledges support by the National Research Foundation of Korea (NRF) grant (No. 2021R1A2C1008928) and Korea Astronomy and Space Science Institute (KASI) grant (Project No. 2022-1-840-05), both funded by the Korea Government (MSIT).
2305.07281
The Spatially Resolved Properties of the GW170817 Host Galaxy
GW170817 is the unique gravitational-wave (GW) event that is associated with the electromagnetic (EM) counterpart GRB 170817A. NGC 4993 is identified as the host galaxy of GW170817/GRB 170817A. In this paper, we particularly focus on the spatially resolved properties of NGC 4993. We present the photometric results from the comprehensive data analysis of the high spatial-resolution images in the different optical bands. The morphological analysis reveals that NGC 4993 is a typical early-type galaxy without significant remnants of a major galaxy merger. The spatially resolved stellar population properties of NGC 4993 suggest that the galaxy center has passive evolution with the outskirt formed by gas accretion. We derive the merging rate of the compact object per galaxy by a co-evolution scenario of a supermassive black hole and its host galaxy. If the galaxy formation is at redshift 1.0, the merging rate per galaxy is $3.2\times 10^{-4}$ to $7.7\times 10^{-5}$ within the merging decay time from 1.0 to 5.0 Gyr. The results provide vital information for the ongoing GW EM counterpart detections. The HST data analysis presented in this paper can also be applied to Chinese Space Station Telescope (CSST) research in the future.
Yubin Li, Jirong Mao, Jianbo Qin, Xianzhong Zheng, Fengshan Liu, Yinghe Zhao, Xiao-Hong Zhao
2023-05-12T06:55:30Z
http://arxiv.org/abs/2305.07281v1
# The Spatially Resolved Properties of the GW170817 Host Galaxy

###### Abstract

GW170817 is the unique gravitational-wave (GW) event that is associated with the electromagnetic (EM) counterpart GRB 170817A. NGC 4993 is identified as the host galaxy of GW170817/GRB 170817A. In this paper, we particularly focus on the spatially resolved properties of NGC 4993. We present the photometric results from the comprehensive data analysis of the high spatial-resolution images in the different optical bands. The morphological analysis reveals that NGC 4993 is a typical early-type galaxy without significant remnants of a major galaxy merger. The spatially resolved stellar population properties of NGC 4993 suggest that the galaxy center has passive evolution with the outskirt formed by gas accretion. We derive the merging rate of the compact object per galaxy by a co-evolution scenario of a supermassive black hole and its host galaxy. If the galaxy formation is at redshift 1.0, the merging rate per galaxy is \(3.2\times 10^{-4}\) to \(7.7\times 10^{-5}\) within the merging decay time from 1.0 to 5.0 Gyr. The results provide vital information for the ongoing GW EM counterpart detections. The HST data analysis presented in this paper can also be applied to Chinese Space Station Telescope (CSST) research in the future.

gravitational waves -- (stars:) binaries: general -- galaxies: evolution
## 1 Introduction

High-frequency gravitational waves (GWs) originate from compact object mergers. Electromagnetic radiation accompanies gravitational wave release (Nakar, 2020). Short-duration gamma-ray bursts (GRBs) are produced by compact object mergers (Berger, 2014). Thus, GRBs are usually considered as GW sources. GW170817 is the unique gravitational-wave source that has a confirmed electromagnetic (EM) counterpart, and GRB 170817A, which accompanied GW170817, has multi-wavelength observations. It is confirmed from both GW and multi-wavelength EM data that GW170817/GRB 170817A arises from a neutron star merger (Abbott et al., 2017). It is also important to note that the environment of the compact object merger plays a vital role in both the occurrence and the evolution of the GW source with the EM counterpart. Fortunately, GW170817 occurred at only 41 Mpc from the Earth. The host galaxy of the source, NGC 4993, at such a short distance, was clearly recognized (Hjorth et al., 2017). It has been found that many host galaxies of short GRBs usually follow the sequence of star-forming galaxies (D'Avanzo et al., 2009; Leibler & Berger, 2010; Fong et al., 2013). However, NGC 4993 is an early-type galaxy, and its almost symmetric shape can be labelled as S0 style (Palmese et al., 2017). The galaxy has a stellar mass of log(\(M/M_{\odot}\))=10.49, and its star formation rate is as low as 0.003 \(M_{\odot}\) per year (Pan et al., 2017). This is a very unusual case in the catalog of short GRB host galaxies (Nugent et al., 2022). Moreover, the compact merger system provides a kind of nucleosynthesis process that is called the rapid neutron-capture process (\(r\)-process). By this process, elements heavier than iron can be produced during the merger. The \(r\)-process elements were successfully examined at the merging site in NGC 4993 (Pian et al., 2017). However, the \(r\)-process source may have a delay timescale of larger than 4 Gyr, while the binary merger may not be the only way to produce the \(r\)-process in NGC 4993 (Skúladóttir & Salvadori, 2020). It is necessary to comprehensively perform photometric analysis and obtain the global properties of the GW170817 host galaxy. In order to further investigate the physical properties in the neighborhood of the merging source compared to those of the whole galaxy, spatially resolved photometric measurements of the GW170817 host galaxy are also required. Furthermore, for the binary merger case in NGC 4993, spectral analysis has been well performed both at the merger region and in the galactic center (Pian et al., 2017; Blanchard et al., 2017; Levan et al., 2017). However, in principle, the physical information for each location inside NGC 4993 should be provided. Although an integral field unit can be adopted to obtain a two-dimensional spectrum for a galaxy, the study of faint GW host galaxies detected in the future still relies on photometric measurements. One may perform a direct GW EM counterpart search in the high-energy band, but the GW localization has a huge error circle. The potential host galaxy selection seems more suitable in the optical band for the GW EM counterpart search.
The ranked host candidates can be selected by semi-analytic methods or full simulations from galaxy evolution models (Perna et al., 2022; Mandhai et al., 2022). GRB 170817A as the EM counterpart of GW170817 is the only GW EM counterpart that has been identified so far, and the detailed investigations of the photometric analysis for NGC 4993 can provide important information for GW EM follow-up observations in the future. For example, some strategies to select the GW EM counterpart among many celestial candidates were established by accurate photometric observations to obtain galaxy properties (Ducoin et al., 2020). In general, the study of the compact merger system can be put in the framework of galaxy formation and evolution in the universe (Gehrels et al., 2016; Toffano et al., 2019; Adhikari et al., 2020). Some host galaxy properties of merging objects, such as star formation rate and stellar mass, are related to the merging rate per galaxy (Artale et al., 2019). The delay time from binary formation to binary merger is dependent on the host galaxy properties (Mapelli et al., 2018; Safarzadeh and Berger, 2019; McCarthy et al., 2020). The star formation history of the host galaxies can be adopted to estimate the merger formation rate (Rose et al., 2021). Thus, a compact merger system is naturally linked to the formation and evolution of its host galaxy. For the host galaxy of GW170817, a shell-like structure has been identified in NGC 4993, and this indicates that NGC 4993 was formed by a galactic merger about 400 Myr ago (Ebrová et al., 2020). In the meanwhile, we note that the accretion activity in the center of NGC 4993 shows some typical features of a low-luminosity active galactic nucleus (Contini, 2018; Wu et al., 2018). External gas accretion may supply an S0 galaxy (Raimundo, 2021). Moreover, many globular clusters have been identified in NGC 4993 (Lee et al., 2018). As globular clusters have long lifetimes, we may expect that the bulge of NGC 4993 was exempt from major galactic mergers in its past evolution. If NGC 4993 had a monolithic evolution over a long cosmic time, compared to the galaxy merger evolution, the monolithic evolution mode may have different effects on the occurrence of the compact merger system. In this paper, we use the data from the Hubble Space Telescope (HST) observation to comprehensively analyze the properties of NGC 4993. Although some works on the global properties of NGC 4993 have been performed with the HST data (Palmese et al., 2017; Ebrová et al., 2020; Kilpatrick et al., 2022), we further emphasize the spatially-identified images of NGC 4993 in this paper. The stellar properties can be effectively derived from the multi-band images. Levan et al. (2017) already obtained the properties of NGC 4993 from integral field spectroscopy observation. However, we note that space/ground-based spectroscopy observation is only suitable for bright sources. When we plan to identify faint sources during the campaign of GW EM counterpart searching in the future, photometric observation is almost the only way to investigate the host galaxy properties of the GW EM counterpart candidates, as spectroscopy observation is hard to perform for faint targets. In this paper, we independently perform the image analysis of the Hubble observation. The method can be applied for future GW EM counterpart searching.
In the meanwhile, we expect that the research on the HST data presented in this paper is also helpful for the observations of the China Space Station Telescope (CSST) in the future. The data from Pan-STARRS are also considered as a comparison to the data from the Hubble Space Telescope. The near-infrared (IR) data are also provided in this paper. To avoid duplicating some works based on the VLT and Gemini observations, we collect the photometric data from the 2MASS survey. Although the 2MASS survey is relatively shallow, it is sufficient to provide the IR information for NGC 4993. The spatially resolved properties, such as the color diagrams and the stellar populations, are presented in Section 2. The binary merging rate is estimated in Section 3. We draw a simple conclusion in Section 4. The details of the spatially resolved data analysis are presented in the Appendix.

## 2 Data Analysis

### Profile Fitting

To quantitatively characterize the photometric structure of NGC 4993, the Sersic profile of the galaxy can be measured by GALFIT (Peng et al., 2002). The results can be obtained in the different bands covering from the optical to the near-infrared wavelength. We take the images of the g, r, i, z, and y bands from the Pan-STARRS survey. The F606W image of the Hubble legacy observation is also selected as a reference. The fitted Sersic index of NGC 4993 is about 4, which confirms that the galaxy is an early-type galaxy dominated by the bulge component. We also see that the Sersic index increases with the observational wavelength. The detailed results are listed in Table 1. The results of our measurements are consistent with those of other works (Palmese et al., 2017). We further identify that the effective radius is about 15\(\aas@@fstack{\prime\prime}\)0, corresponding to a linear distance of 3 kpc. According to the stellar mass of \(log(M/M_{\odot})\)=10.49 (Pan et al., 2017), NGC 4993 is slightly above but consistent with the mass-size relation of local early-type galaxies. The fitting model can then be subtracted from the image of NGC 4993. As an example, Figure 1 shows both the original image and the residual image after the fitting in the \(r\) band. Some substructures, such as dust lanes and shells, are shown in the residual image. Here, we suggest that the bulge had early star formation followed by passive evolution and that the outskirt had late star formation due to external gas accretion. Although the substructures indicate some possible merger activities in the past, we may also consider the substructures as evidence of gas accretion.

\begin{table} \begin{tabular}{c c c c c} \hline Filter & mag & \(R_{e}\) & \(n\) & \(b/a\) \\ \hline F606W & 12.163 & 17.75 & 4.12 & 0.85 \\ F814W & 11.373 & 19.45 & 4.73 & 0.85 \\ F110W & 10.746 & 25.70 & 6.30 & 0.83 \\ F140W & 10.507 & 27.22 & 6.71 & 0.83 \\ F160W & 10.453 & 24.00 & 6.34 & 0.83 \\ g & 12.926 & 14.54 & 3.28 & 0.86 \\ r & 12.104 & 15.18 & 3.70 & 0.86 \\ i & 11.706 & 14.51 & 3.81 & 0.85 \\ z & 11.385 & 15.05 & 4.14 & 0.85 \\ y & 11.115 & 15.83 & 4.21 & 0.85 \\ \hline \end{tabular} \end{table} Table 1: The magnitudes of NGC 4993 given by the GALFIT fitting. Here, \(R_{e}\) has the unit of arcsecond, \(n\) is the Sersic index, and \(b/a\) indicates the ratio of the short axis and the long axis in the galaxy image.

Figure 1: The image of NGC 4993 in the \(r\) band (left panel) and the residual image of NGC 4993 after the image subtraction by GALFIT (right panel). The symbol “\(+\)” represents the center of the galaxy, and the symbol “\(\times\)” represents the position of GW170817.
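To make the profile fitting concrete, below is a minimal sketch of the Sersic profile that GALFIT parameterizes; it is not GALFIT itself, and the \(b_{n}\approx 1.9992n-0.3271\) approximation is a standard analytic shortcut assumed here for illustration. The example values are the \(r\)-band fit from Table 1.

```python
import numpy as np

def sersic_profile(r, i_e, r_e, n):
    """Sersic surface-brightness profile I(R).

    i_e : intensity at the effective radius r_e
    n   : Sersic index (n ~ 4 corresponds to a bulge-dominated profile)
    """
    b_n = 1.9992 * n - 0.3271          # standard approximation, 0.5 < n < 10
    return i_e * np.exp(-b_n * ((r / r_e) ** (1.0 / n) - 1.0))

# Example: the r-band fit of NGC 4993 from Table 1 (R_e = 15.18", n = 3.70)
radii = np.logspace(0.0, 1.6, 20)      # radii in arcsec, ~1" to ~40"
profile = sersic_profile(radii, 1.0, 15.18, 3.70)
```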
### Color Diagrams

Detailed investigations by photometric analysis are crucial to understanding the stellar population properties in a galaxy. Here, we use the Pan-STARRS, 2MASS, and HST data to obtain the two-dimensional color distribution of NGC 4993. The Galactic attenuation for the colormaps is corrected (Schlafly & Finkbeiner 2011). The results are shown in Figure 2 and Figure 3. It is clearly seen that NGC 4993 has a red center and a blue outskirt. Furthermore, the color at the position of GW170817 is very similar to that in the adjacent regions. It means that the stellar population underlying the position where GW170817 occurred is identical to the stellar population of the GW170817 neighborhood. The local environment of GW170817 represents the common properties of the outskirt of NGC 4993.

Figure 2: The two-dimensional distribution of the optical and near-infrared colors obtained by the Pan-STARRS and 2MASS surveys in NGC 4993. The symbol “\(+\)” represents the center of the galaxy, and the symbol “\(\times\)” represents the position of GW170817.

We can further derive the one-dimensional color profiles of NGC 4993. We utilize elliptical annulus photometry to obtain the surface brightness profile in each band. The geometry parameters, such as the center, the ellipticity, and the radius of each annulus, depend on the structure and the depth of the images. When producing the color profile, the definition of both the center and the ellipticity of the annuli should be clarified. The center and the related ellipticity of the galaxy are not completely identical in different bands. Thus, when producing the color profile, the center and the ellipticity (0.166) are all fixed to the GALFIT measurement result of the F160W band. The radius along the major axis has the range of \(1\aas@@fstack{\prime\prime}02\)\(-\)\(37\aas@@fstack{\prime\prime}90\) in the HST images and of \(1\aas@@fstack{\prime\prime}55\)\(-\)\(24\aas@@fstack{\prime\prime}86\) in the Pan-STARRS images. When producing the one-dimensional surface brightness profile, the radii of the annuli are increased logarithmically instead of linearly. As the radius increases, although the S/N of a single pixel decreases, more pixels are integrated, so the S/N remains similar from the inner regions to the outskirts. Thus, the signal-to-noise ratio of the photometry is similar for each annulus at different radii. After the elliptical annulus photometry is performed in each band, we obtain the surface brightness profile and the one-dimensional color profile.
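As an illustration of the elliptical annulus photometry described above, the following is a minimal numpy sketch. It assumes, for simplicity, that the major axis is aligned with the image x axis and works in pixel units; the function and variable names are ours, not those of a specific photometry package.

```python
import numpy as np

def annulus_photometry(image, x0, y0, ellipticity, r_in, r_out):
    """Sum the flux in one elliptical annulus (major axis along x)."""
    q = 1.0 - ellipticity                        # axis ratio b/a
    yy, xx = np.indices(image.shape)
    r_ell = np.hypot(xx - x0, (yy - y0) / q)     # elliptical (major-axis) radius
    mask = (r_ell >= r_in) & (r_ell < r_out)
    return image[mask].sum(), mask.sum()         # flux and pixel count

# Logarithmically spaced annuli, as used for the 1-d profiles
image = np.random.rand(401, 401)                 # placeholder frame
edges = np.logspace(np.log10(4.0), np.log10(150.0), 25)  # radii in pixels
sb = []
for r_in, r_out in zip(edges[:-1], edges[1:]):
    flux, npix = annulus_photometry(image, 200.0, 200.0, 0.166, r_in, r_out)
    sb.append(flux / npix)                       # mean surface brightness
```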
The results are shown in Table 2, Figure 4, and Figure 5. It is seen that NGC 4993 has negative color gradients in general, and it is confirmed that NGC 4993 has a red center and a blue outskirt. Moreover, the color gradients become shallower as the radius increases. It is indicated that the formation process of the galactic core is different from that of the galactic outskirt. Here, we identify the inner region of the galaxy as within 0.5\(R_{e}\) and the outskirt region of the galaxy as beyond 0.5\(R_{e}\), where \(R_{e}\) is the effective radius of the galaxy.

Figure 4: The one-dimensional optical and near-infrared color profiles of NGC 4993 derived by the images of the Pan-STARRS and 2MASS surveys. The black line represents the color profile of NGC 4993, and the red point represents the color at the position of GW170817.

Figure 5: The one-dimensional optical and near-infrared color profiles of NGC 4993 derived by the images obtained by HST. The black line represents the color profile of NGC 4993, and the red point represents the color at the position of GW170817.

### Stellar Population

Usually HST has a better spatial resolution than ground-based optical telescopes. We obtain the Spectral Energy Distribution (SED) composed of fluxes in the HST F606W, F814W, F110W, F140W, and F160W bands in each pixel of NGC 4993. We fit these SEDs using the Code Investigating GALaxy Emission (CIGALE; Noll et al., 2009; Boquien et al., 2019) to obtain the galaxy properties on resolved scales in NGC 4993. CIGALE combines a library of single stellar populations (SSP) and variable attenuation curves with SFH models to generate a large number of grid SED models to fit the observed data. To build the stellar composition, we use the BC03 stellar population synthesis model (Bruzual & Charlot, 2003) with the Chabrier initial mass function (Chabrier, 2003). The metallicity ranges from 0.2 to 2.5 \(Z_{\odot}\) (for \(Z_{\odot}=0.02\)). A delayed star formation history \(SFR(t)\propto t/\tau\times\exp(-t/\tau)\) is adopted, where t is the stellar age (varying from 1 to 13 Gyr) and \(\tau\) is the e-folding time (varying from 0.1 to 11 Gyr). For the dust attenuation, we adopt a fixed Calzetti attenuation curve (Calzetti et al., 2000) with E(B\(-\)V) varying from 0.0 to 0.3 mag. The nebular emission is also included in our SED fitting. All the modules and parameters are summarized in Table 3. CIGALE makes use of flat priors. The best-fitting parameters and the corresponding uncertainties are the likelihood-weighted mean and the standard deviation of all models, based on the probability distribution functions generated with MCMC sampling. We finally obtain the distributions of the age, the SFR, the sSFR, the extinction, and the metallicity of NGC 4993.

\begin{table} \begin{tabular}{l l l} \hline \hline Module & Parameter & Value \\ \hline sfhdelay & age (Myr) & 1000, 3000, 5000, 7000, 9000, 11000, 13000 \\ & \(\tau\) (Myr) & 100, 300, 500, 1000, 3000, 5000, 7000, 9000, 11000 \\ \hline bc03 & imf & 1 (Chabrier) \\ & metallicity & 0.004, 0.008, 0.02, 0.05 \\ \hline nebular & logU & \(-2.0\) \\ & f\_esc & 0.0 \\ & f\_dust & 0.0 \\ & lines\_width (km s\({}^{-1}\)) & 300 \\ \hline dustatt\_modified & E\_BV\_lines(mag) & 0.001, 0.002, 0.005, 0.01, 0.02, 0.05, \\ \_starburst & & 0.10, 0.20, 0.30 \\ & E\_BV\_factor & 0.44 \\ & powerlaw\_slope & 0 \\ \hline \hline \end{tabular} \end{table} Table 3: Modules and input parameters with CIGALE for generating our model SEDs.

Figure 6: The two-dimensional distributions of age, SFR, mass, sSFR, \(A_{v}\), and metallicity in NGC 4993. The asymmetrical distribution of metallicity is shown in panel (f). In our work, we have derived a metallicity error of about 0.01. This error is as large as the metallicity difference between the center and the outskirt, and it takes effect on the metallicity distribution.

Figure 7: The one-dimensional profiles of age, SFR, mass, sSFR, \(A_{v}\), and metallicity in NGC 4993.

The two-dimensional distributions and the one-dimensional profiles are shown in Figure 6 and Figure 7, respectively. Finally, the spatially resolved properties of NGC 4993 are clearly shown.
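For concreteness, the delayed star formation history adopted in the CIGALE fits can be sketched as follows; the age and \(\tau\) grids mirror Table 3, while the absolute normalization (which CIGALE sets by scaling to the observed fluxes) is omitted.

```python
import numpy as np

def delayed_sfh(t, tau):
    """Delayed-tau star formation history, SFR(t) ~ t/tau * exp(-t/tau)."""
    return (t / tau) * np.exp(-t / tau)

# Grids matching the CIGALE setup of Table 3 (ages and tau in Myr)
ages = np.array([1000, 3000, 5000, 7000, 9000, 11000, 13000])
taus = np.array([100, 300, 500, 1000, 3000, 5000, 7000, 9000, 11000])
sfh_grid = np.array([[delayed_sfh(t, tau) for tau in taus] for t in ages])
```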
For comparison, we select galaxies from the SDSS spectroscopic sample at \(z<0.02\), excluding sources with insecure redshift measurements as well as the galaxies spectroscopically classified as quasi-stellar objects. There are 8 629 sources left after applying these selection criteria. Figure 8 shows the g\(-\)r color as a function of the stellar mass. NGC 4993 follows the normal properties of the red and early-type galaxies in the figure. Providing ranked GW host galaxies through such galaxy property investigations appears to be an important aspect of the GW EM counterpart detection.

Figure 8: Background gray contour shows the g\(-\)r color as a function of the stellar mass for the galaxies in the SDSS sample with \(z<0.02\). NGC 4993 is labelled as the blue star. The dashed line separates the galaxy sample into the red dataset and the blue dataset.

## 3 Modeling Estimation

The merging rate of compact objects in a galaxy is the result of the convolution of the star formation rate in the galaxy with a delay time distribution. The delay time is defined as the duration from binary formation to merger occurrence. The merging rate of compact objects per galaxy is calculated by

\[R=\lambda\int_{0}^{t}\frac{dp}{dt}(t-t^{\prime})\phi(t^{\prime})dt^{\prime}, \tag{1}\]

where \(\phi(t)\) is the star formation rate, \(dp/dt\) is the delay time distribution, and \(t_{d}\) is the minimum delay time. Usually, we have \(dp/dt(t-t_{d})\propto(t-t_{d})^{-\alpha}\) (Mapelli et al., 2018; Safarzadeh & Berger, 2019; Adhikari et al., 2020). Here, we assume that the delay time distribution can be a \(\delta\)-function as \(dp/dt=\delta(t-t_{d})\), and we obtain \(R=\lambda\phi(t_{d})\). The case of \(dp/dt=\delta(t-t_{d})\) indicates that binaries merge instantly after the delay time \(t_{d}\) (Adhikari et al., 2020). Here, we use \(\lambda=1.0\times 10^{-5}M_{\odot}^{-1}\) as given by Safarzadeh & Berger (2019). Some globular clusters in NGC 4993 have been identified by the deep photometric measurements (Lee et al., 2018), although GW170817 did not have a globular cluster origin (Fong et al., 2019). It is indicated that the environment of GW170817 was not strongly affected by galactic mergers during the galaxy evolution time. Therefore, we may consider NGC 4993 as a giant S0 galaxy with a passive evolution. The calculation is in the framework of the co-evolution between the central BH and its host galaxy. We utilize the model of the co-evolution between the central BH and its host galaxy provided by Granato et al. (2004). The semi-analytic model can provide the star formation processes of a galaxy in a given dark matter halo at a certain redshift. The feedback from both supernovae and active galactic nuclei is involved in the model. This model has wide applications. For example, a careful investigation of high-redshift star formation and absorption was performed with it (Mao et al. 2007). In particular, Mao et al. (2010) utilized the model to investigate the physical properties of the long-duration GRB host galaxies. The star formation process of a galaxy in a certain dark matter halo at a certain redshift is affected by the feedback of AGN and supernovae. Here, we can apply this model to further constrain the merger rate in NGC 4993. The star formation rate in a galaxy is

\[\phi(t_{d})=\frac{m(0)}{t_{cond}(\gamma-1/s)}[\exp(-t_{d}/t_{cond})-\exp(-s\gamma t_{d}/t_{cond})], \tag{2}\]

where \(m(0)=18\%M_{H}\), \(M_{H}\) is the dark matter halo mass, and we set the parameter \(s=5\). The condensation timescale for the gas to convert into stars in a dark matter halo at a given redshift can be presented as

\[t_{cond}=4.0\times 10^{8}(\frac{1+z}{7})^{-1.5}(\frac{M_{H}}{1.0\times 10^{12}M_{\odot}})^{0.2}. \tag{3}\]

We have \(\gamma=1-f+\beta_{SN}\), where the coefficient of the supernova feedback is simply taken to be

\[\beta_{SN}=0.35(\frac{1+z}{7})^{-1.0}(\frac{M_{H}}{1.0\times 10^{12}M_{\odot}})^{-2/3}(\frac{E_{SN}}{10^{51}\rm{ergs}}), \tag{4}\]

where \(E_{SN}\) is the energy released by a supernova, and we take the parameter \(f=0.3\). We assume that the merging occurs immediately after the delay time \(t_{d}\). If the host galaxy was formed at redshift 1.0, according to the mass-weighted stellar age of about 8.0 Gyr, and it has a passive evolution, we obtain a merging rate per galaxy of \(3.2\times 10^{-4}\) when we take the delay time \(t_{d}=1.0\) Gyr. The merging rate per galaxy can be about \(7.7\times 10^{-5}\) when we take the delay time \(t_{d}\) to be 5.0 Gyr. It is confirmed that compact object merging in a galaxy is a kind of rare event in the universe. The constraint provides valuable references for the future GW EM counterpart identification.
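A minimal sketch of Equations (2)-(4) together with the \(\delta\)-function rate \(R=\lambda\phi(t_{d})\) is given below. The halo mass is an assumed illustrative value, since the text does not quote one for NGC 4993; with these inputs the sketch only indicates the order of magnitude of the quoted \(3.2\times 10^{-4}\) and \(7.7\times 10^{-5}\).

```python
import numpy as np

LAM  = 1.0e-5      # lambda [M_sun^-1] from Safarzadeh & Berger (2019)
F, S = 0.3, 5.0    # feedback fraction f and parameter s from the text

def merger_rate(t_d, z=1.0, m_halo=1.0e12, e_sn=1.0e51):
    """R = lambda * phi(t_d) for a delta-function delay time (Eqs. 2-4).

    t_d is in years; m_halo [M_sun] is an assumed illustrative value.
    """
    t_cond = 4.0e8 * ((1 + z) / 7) ** -1.5 * (m_halo / 1.0e12) ** 0.2
    beta_sn = 0.35 * ((1 + z) / 7) ** -1.0 * (m_halo / 1.0e12) ** (-2 / 3) * (e_sn / 1.0e51)
    gamma = 1.0 - F + beta_sn
    m0 = 0.18 * m_halo
    phi = m0 / (t_cond * (gamma - 1.0 / S)) * (
        np.exp(-t_d / t_cond) - np.exp(-S * gamma * t_d / t_cond))
    return LAM * phi

for t_d in (1.0e9, 5.0e9):     # delay times of 1 and 5 Gyr
    print(f"t_d = {t_d/1e9:.0f} Gyr -> R ~ {merger_rate(t_d):.1e} per galaxy")
```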
## 4 Conclusion

We comprehensively perform the photometric analysis of NGC 4993. The spatially resolved properties of the galaxy are clearly presented. Although the shell of NGC 4993 was identified as evidence of a galaxy merger, the mass of the shell seems too small to be the product of a major galaxy merger (Kilpatrick et al. 2022). We suggest that the galaxy center has a passive evolution and the outskirt is formed by gas accretion. We estimate the compact binary merging rate per galaxy to be \(3.2\times 10^{-4}\) to \(7.7\times 10^{-5}\) for delay times from 1.0 to 5.0 Gyr.
The methods of the spatially resolved data analysis and the physical constraints on binary merging in a galaxy are very useful for GW EM counterpart detections. The HST data analysis presented in this paper can also be applied to the Chinese Space Station Telescope (CSST) research in the future. **Acknowledgements** All the _HST_ and Pan-STARRS data used in this paper can be found in MAST: dataset [10.17909/T97P46], [10.17909/55e7-5x63], and [10.17909/s0zg-jx37]. This work is supported by the National Science Foundation of China (NSFC 11673062), China Manned Space Project (CMS-CSST2021-A06), and Yunnan Revitalization Talent Support Program (YunLing Scholar Award). J.Q. acknowledges the support from the Jiangsu Funding Program for Excellent Postdoctoral Talent (NO. 2022ZB473). X.Z.Z. thanks the support from the NSFC (11773076 and 12073078), the National Key R&D Program of China (2017YFA0402703), and the science research grants from the China Manned Space Project with NO. CMS-CSST-2021-A02, CMS-CSST-2021-A04, and CMS-CSST-2021-A07. F.L. thanks the support from the NSFC (11733006 and 12273052). Y.Z. thanks the support from the NSFC (12173079). X.Z thanks the support from the NSFC (U1831135).

## Appendix A Image Selection and Data Reduction

We use both optical and near-infrared images to get the spatially resolved properties of NGC 4993. The images are obtained from the Pan-STARRS survey (Chambers et al., 2016), the 2MASS survey (Skrutskie et al., 2006), and the HST legacy observations (Alexander et al., 2018; Lyman et al., 2018; Margutti et al., 2018; Lamb et al., 2019; Piro et al., 2019). The reduction of these ground-based and space-based data is summarized as follows. The images obtained by the Pan-STARRS survey have 5 bands (\(g\), \(r\), \(i\), \(z\), and \(y\)) and cover the wavelength range from 0.45\(\mu m\) to 1\(\mu m\). The pixel size is 0\(\aas@@fstack{\prime\prime}\)258 and the spatial resolution ranges from 1\(\aas@@fstack{\prime\prime}\)1 to 1\(\aas@@fstack{\prime\prime}\)3, indicating that 250 pc scale substructures of NGC 4993 can be identified if we accept 41 Mpc as the distance of NGC 4993 (Cantiello et al., 2018). We also obtained the 2MASS images of NGC 4993 in the three near-infrared (NIR) bands (J, H, and K). The pixel size is 1\(\aas@@fstack{\prime\prime}\)0 and the spatial resolution is 3\(\aas@@fstack{\prime\prime}\)1, corresponding to 600 pc scale substructures for NGC 4993. The morphology of NGC 4993 is more representative of the stellar mass distribution in the NIR bands than in the optical bands. Furthermore, with the NIR observations, the spectral energy distribution (SED) can be extended towards longer wavelengths, and this can give us more information on the distribution of the stellar population. The images obtained by HST have ultra-high spatial resolution due to the absence of air turbulence. The pixel size ranges from 0\(\aas@@fstack{\prime\prime}\)040 for the optical filter WFC3/F814W to 0\(\aas@@fstack{\prime\prime}\)128 for the NIR filters (WFC3/F110W, WFC3/F140W, and WFC3/F160W). The spatial resolution ranges from 0\(\aas@@fstack{\prime\prime}\)15 to 0\(\aas@@fstack{\prime\prime}\)19 for the optical bands and from 0\(\aas@@fstack{\prime\prime}\)27 to 0\(\aas@@fstack{\prime\prime}\)42 for the NIR bands, corresponding to 30\(-\)80 pc scale substructures for NGC 4993.
The detailed morphology and the distribution of the stellar population of NGC 4993 can be recovered by the analysis of the HST images, benefiting from the unprecedented high spatial resolution.

\begin{table} \begin{tabular}{c c c c c c} \hline Observation time (UT) & Instrument & Filter & Exptime & Program ID & PI \\ \hline 2017-04-28 03:40:37 & ACS/WFC1 & F606W & 696.000 & 14840 & Andrea Bellini \\ 2018-01-01 13:24:13 & ACS/WFC & F606W & 2120.000 & 15329 & Edo Berger \\ 2018-03-23 21:07:37 & ACS/WFC & F606W & 2120.000 & 15329 & Edo Berger \\ 2018-07-20 08:12:50 & ACS/WFC & F606W & 2120.000 & 15329 & Edo Berger \\ 2019-03-21 17:38:21 & ACS/WFC1 & F606W & 6728.000 & 15606 & Raffaella Margutti \\ 2019-03-27 10:18:09 & ACS/WFC1 & F606W & 6728.000 & 15606 & Raffaella Margutti \\ 2017-12-06 03:20:46 & WFC3/UVIS2 & F814W & 2400.000 & 14771 & Nial Tanvir \\ 2018-02-05 15:46:33 & WFC3/UVIS2 & F814W & 2400.000 & 14771 & Nial Tanvir \\ 2017-12-08 20:33:09 & WFC3/IR & F110W & 2411.749 & 15329 & Edo Berger \\ 2017-12-08 22:03:58 & WFC3/IR & F110W & 2611.751 & 15329 & Edo Berger \\ 2017-12-08 23:39:18 & WFC3/IR & F110W & 2611.751 & 15329 & Edo Berger \\ 2017-12-06 04:56:34 & WFC3/IR & F140W & 2396.929 & 14771 & Nial Tanvir \\ 2017-12-06 14:32:06 & WFC3/IR & F140W & 2396.929 & 14771 & Nial Tanvir \\ 2017-12-06 01:45:51 & WFC3/IR & F160W & 2396.929 & 14270 & Andrew Levan \\ 2017-12-06 17:23:18 & WFC3/IR & F160W & 2411.737 & 15346 & Mansi Kasliwal \\ \hline \end{tabular} \end{table} Table 1: The summary of the images obtained by HST

To study the underlying stellar populations of NGC 4993, especially the stellar populations at the position of GW170817, the images of NGC 4993 obtained before the GW170817 occurrence are preferred, in order to avoid contamination from the afterglow of GRB 170817A. However, before the GW170817 occurrence, NGC 4993 was only observed with ACS/F606W as part of the Schedule Gap Pilot program (PI: Andrea Bellini, ID: 14840) on 2017 April 28, with an exposure time of 696 seconds. We can hardly get the distribution of the stellar population within the galaxy from a single-band image. After the GW170817 occurrence, NGC 4993 was monitored by HST in different bands, providing multi-band images for the purpose of stellar population synthesis. We select the images obtained at least 50 days after the GW170817 trigger to ensure that the light of NGC 4993 is dominated by the stellar populations. At this stage, the images of NGC 4993 are only contaminated by the afterglow of the structured jet in GRB 170817A, which is fainter than 26 mag, 4 magnitudes dimmer than the kilonova (Fong et al. 2019). Therefore, the properties of the stellar population at the position of GW170817 can be precisely inferred. Table 1 lists the information of the images of NGC 4993 observed by HST that we use in this work, including observing time, exposure time, instruments, filters, and proposal ID. Briefly, the images obtained by HST cover five optical and NIR bands (F606W/F814W/F110W/F140W/F160W), enabling us to derive the distribution of the stellar population of NGC 4993 with ultra-high resolution. We then stack the images obtained in the same band to enhance the signal-to-noise ratio. We illustrate the stacking process with the following procedure: 1) Remove cosmic rays and bad pixels in the images, which have been bias-subtracted and flatfield-divided. 2) Subtract the background from the images.
3) Match the astrometry of the images, using the astrometry information listed in the header file, to the same reference image; this reference image was obtained on 2017-12-08 20:33:09 in the F110W filter. 4) Extract the empirical point spread function (PSF) for all the images by stacking the stars in each corresponding image. 5) Match all the PSFs to the worst/largest PSF, which was obtained from the image observed on 2017-12-06 01:45:51 in the F160W filter with a FWHM (Full Width at Half Maximum) of 0\(\aas@@fstack{\prime\prime}\)42. This ensures that the value of each corresponding pixel grid at the same position is a weighted average of the neighboring pixels produced in the same way. Thus, the same pixel grid represents the same region in the galaxy, such that we can compare and operate on the same region in the different images. 6) Combine the astrometry- and PSF-matched images obtained in the same band using the exposure time as the weight. The value at each pixel is calculated by

\[DN_{stack}(x,y)=\sum_{i}(DN_{i}(x,y)\times t_{exp,i})/\sum_{i}t_{exp,i}, \tag{A1}\]

where \(t_{exp,i}\) is the exposure time, (x,y) is the coordinate in the image, and DN is the digital number at a pixel. Since the DN is in the unit of electrons s\({}^{-1}\) in the calibrated images obtained by HST, the final combined image is still in the unit of electrons s\({}^{-1}\), and the photometric zeropoint of the combined image is unchanged. Finally, we get the stacked images of NGC 4993 in the five bands (F606W, F814W, F110W, F140W, and F160W), respectively.
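The exposure-time-weighted stack of Equation (A1) amounts to a few lines of numpy; the sketch below assumes the frames are already astrometry- and PSF-matched, as in steps 3)-5), and uses placeholder arrays with the F606W exposure times of Table 1.

```python
import numpy as np

def stack_images(images, exptimes):
    """Exposure-time-weighted stack, Eq. (A1); inputs in electrons/s."""
    imgs = np.asarray(images, dtype=float)          # shape (n, ny, nx)
    w = np.asarray(exptimes, dtype=float)
    return (w[:, None, None] * imgs).sum(axis=0) / w.sum()

# e.g. three F606W frames with their exposure times from Table 1
frames = [np.random.rand(64, 64) for _ in range(3)]
stacked = stack_images(frames, [696.0, 2120.0, 2120.0])
```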
After the stacked multi-band images are recovered, we use the Sersic profile to describe the morphology of the galaxy in the different bands, and we also derive the colormaps and the color profiles of the galaxy from the images in the different bands. We further stress the above HST data reduction. The HST images are already drizzled products that can be downloaded from the STScI website. We then shift the different images to a fixed reference coordinate and stack the images in the same band using the exposure time as the weight to increase the S/N. Before stacking the images in the same band, the PSFs were also matched to ensure that the images to be added have an identical PSF.

## Appendix B The Extraction and Matching of the PSF

We normally use the point spread function (PSF) to describe the response of an optical system to point sources. In principle, an image of a celestial source is the convolution of the PSF with the intrinsic intensity distribution. Thus, to obtain the intrinsic surface brightness profile of a galaxy, we must obtain the PSF in the image. The existence of the PSF also means that the value at each pixel is the weighted average of the pixels around it (including the pixel itself, which usually has the maximal weight). Different images usually have different PSFs due to the variance of the air turbulence and the variance of the filters with the different observing strategies. The air turbulence is absent if we take images from space-based telescopes. Even if the astrometry is matched between different images, the pixel at the same position draws on different regions, as the adjacent pixels contribute different weights to the same pixel. In order to do photometry at the same pixel in different images in a direct way, we should match both the astrometry and the PSF of the images before stacking the images and producing colormaps or color profiles. To match the PSF, we first extract the PSF in each image from the point sources in the image.

When extracting the PSF, the empirical method is used: we select unsaturated point sources without contamination from neighboring sources and stack the images of the point sources weighted by the flux of each source. For the images obtained by the Pan-STARRS survey, we use the following criteria to select the point sources: 1) We select the sources whose photometric difference between PSFMag and KronMag is less than 0.05 mag in the i band. Here, PSFMag is the magnitude obtained from the PSF profile, and KronMag is the magnitude obtained from the Kron radius. This selection condition was also mentioned as a way to distinguish point sources from extended sources (Farrow et al. 2014), because point sources like stars can be well described by a PSF profile, so the PSFMag value is close to the KronMag value, whereas for extended sources like galaxies the surface brightness profile cannot be described by a PSF profile, and the magnitude obtained by PSFMag is dimmer than that obtained by KronMag. 2) We plot all the detected sources on the magnitude\(-\)half-light radius diagram as shown in Figure B.1. The point sources have a similar half-light radius, which is the half-light radius of the PSF profile, while the extended sources have a larger half-light radius. Thus, the point sources and the extended sources lie in different regions on the magnitude\(-\)half-light radius diagram. This is the algorithm used by PSFExtractor (Bertin 2013). We select the sources with half-light radius less than 5 pixels (corresponding to 1\(\aas@@fstack{\prime\prime}\)3) and with magnitude brighter than 20 mag in the Pan-STARRS image. 3) We also use the parameter CLASS_STAR derived by SExtractor to select the point sources (Bertin & Arnouts 1996), requiring \(CLASS\_STAR\geq 0.9\). Finally, we select the point sources that satisfy all three criteria above to extract the PSF for the images in the different bands. Both the sources contaminated by neighboring ones and the saturated sources are removed.
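A minimal sketch of the three-fold point-source selection is given below; the catalog and its column names are hypothetical stand-ins for the corresponding Pan-STARRS fields.

```python
import numpy as np

# Hypothetical catalog columns standing in for the Pan-STARRS fields
catalog = np.rec.fromarrays(
    [np.array([17.20, 19.50, 21.00]),   # psf_mag_i
     np.array([17.22, 19.10, 20.90]),   # kron_mag_i
     np.array([3.8, 9.5, 4.0]),         # half_light_radius_px
     np.array([17.2, 19.1, 21.0]),      # mag
     np.array([0.97, 0.10, 0.95])],     # class_star
    names="psf_mag_i,kron_mag_i,half_light_radius_px,mag,class_star")

is_point = ((np.abs(catalog.psf_mag_i - catalog.kron_mag_i) < 0.05) &  # 1)
            (catalog.half_light_radius_px < 5) & (catalog.mag < 20) &  # 2)
            (catalog.class_star >= 0.9))                               # 3)
point_sources = catalog[is_point]
```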
For the images obtained by the HST observations, since the field of view (140\(\aas@@fstack{\prime\prime}\)0\(\times\)120\(\aas@@fstack{\prime\prime}\)0) is not large enough to derive the statistical properties of the sources in the images, the point sources are visually selected. These selected sources are shown in Figure B.2. After the sources are selected, we use the DAOPHOT package in IRAF to get the average PSFs in the different bands. Once the PSF is obtained, the best parameterized Sersic profile convolved with the PSF can be achieved for the image in each band. Moreover, when stacking the images in each band and producing the colormaps and the color profiles among the different bands, the PSFs should be matched to ensure that the corresponding pixels in different images can be compared. We use the PSFMATCH package in IRAF to match all the PSFs to the worst PSF. The algorithm is described as

\[image1=inten\otimes PSF1\] (B.1)

\[image2=inten\otimes PSF2\] (B.2)

where \(inten\) represents the intrinsic intensity distribution without the effect of the PSF. If PSF2 is worse than PSF1, we will find a kernel satisfying the following equation

\[PSF2=PSF1\otimes kernel.\] (B.3)

After the kernel is found, image1 will be convolved with this kernel as

\[image1\otimes kernel=inten\otimes PSF1\otimes kernel=inten\otimes PSF2.\] (B.4)

After the convolution with this kernel, the PSF is identical for both image1 and image2. Following Equation B.3, the method to find the kernel is presented as

\[kernel=\mathcal{F}^{-1}\frac{\mathcal{F}(PSF2)}{\mathcal{F}(PSF1)},\] (B.5)

where \(\mathcal{F}\) is the Fourier transformation, and \(\mathcal{F}^{-1}\) is the inverse Fourier transformation. The noise in the empirical PSF obtained by stacking the images can influence this PSF-matching procedure severely. We should reduce the effect of the noise in the matching procedure. After the Fourier transformation is performed, the random noise signals appear in the high-frequency range of the spatial-frequency domain. We test several different methods to reduce the high-frequency noise. For the Pan-STARRS images, we fit the low-frequency and high signal-to-noise ratio components of the matching function with a Gaussian model and apply this Gaussian model to replace the entire PSF function, following the algorithm \(replace\) with the parameter \(filter\) of the PSFMATCH package in IRAF. For the HST images, a cosine bell function is applied to the PSF-matching function in spatial-frequency space, which reduces the weight of the high-frequency component, following the algorithm \(cosbell\) with the parameter \(filter\) of the PSFMATCH package in IRAF. This algorithm can also match the direction of the asterism, and it is suitable for the HST image reduction. In practice, the high-frequency component mentioned above contributes to the center of the PSF. When we adopt the procedure mentioned above to reduce the noise, the matched PSF is slightly larger than the original PSF. To solve this problem, we match the worst PSF to the original PSF by the same algorithm. After this additional process, the PSFs in all images are almost self-consistent. The FWHM values before and after the PSF-matching procedure are listed in Table 1. As an example, Figure 3 shows the PSFs of both the F606W and F160W images before and after the PSF-matching procedure. We can see that not only the FWHMs of the PSFs but also the shapes (including the asterism) of the PSFs are almost identical after the PSF-matching procedure. Figure 4 and Figure 5 show the PSF curves before and after the PSF-matching procedure. These panels indicate that the PSF-matching procedure in this work is reliable.

Figure B.1: The sources on the magnitude\(-\)half-light radius diagram. The red dots represent the sources with CLASS_STAR \(\geq\) 0.9, and the blue circles represent the sources with CLASS_STAR \(<\) 0.9. We can see that the point sources and the extended sources can be clearly distinguished in the diagram.
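The Fourier-ratio kernel of Equations B.3-B.5 can be sketched in numpy as follows; the small `eps` regularization is our stand-in for the Gaussian and cosine-bell filtering that PSFMATCH applies to suppress the high-frequency noise discussed above.

```python
import numpy as np

def matching_kernel(psf_hi, psf_lo, eps=1e-8):
    """Kernel k with psf_lo ~ psf_hi * k (Eqs. B.3 and B.5).

    psf_hi : sharper PSF (e.g. F606W); psf_lo : broader target PSF (F160W).
    Both PSFs are centered arrays of the same shape.
    """
    f_hi = np.fft.fft2(np.fft.ifftshift(psf_hi))
    f_lo = np.fft.fft2(np.fft.ifftshift(psf_lo))
    kernel = np.fft.fftshift(np.real(np.fft.ifft2(f_lo / (f_hi + eps))))
    return kernel / kernel.sum()

def match_psf(image, kernel):
    """Convolve an image with the kernel in Fourier space (Eq. B.4)."""
    f_img = np.fft.fft2(image)
    f_ker = np.fft.fft2(np.fft.ifftshift(kernel), s=image.shape)
    return np.real(np.fft.ifft2(f_img * f_ker))
```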
2303.04406
Enhanced Sliding Window Superposition Coding for Industrial Automation
The introduction of 5G has changed the wireless communication industry. Whereas previous generations of cellular technology are mainly based on communication for people, the wireless industry is discovering that 5G may be an era of communications that is mainly focused on machine-to-machine communication. The application of Ultra Reliable Low Latency Communication in factory automation is an area of great interest as it unlocks potential applications that traditional wired communications did not allow. In particular, the decrease in the inter-device distance has led to the discussion of coding schemes for these interference-filled channels. To meet the latency and accuracy requirements of URLLC, Non-orthogonal multiple access has been proposed but it comes with associated challenges. In order to combat the issue of interference, an enhanced version of Sliding window superposition coding has been proposed as a method of coding that yields performance gains in scenarios with high interference. This paper examines the abilities of this coding scheme in a broadcast network in 5G to evaluate its robustness in situations where interference is treated as noise in a factory automation setting. This work shows improvements of enhanced sliding window superposition coding over benchmark protocols in the high-reliability requirement regions of block error rates $\approx 10^{-6}$.
Bohang Zhang, Zhaoujun Nan, Sheng Zhou, Zhisheng Niu
2023-03-08T06:53:43Z
http://arxiv.org/abs/2303.04406v1
# Enhanced Sliding Window Superposition Coding for Industrial Automation ###### Abstract The introduction of 5G has changed the wireless communication industry. Whereas previous generations of cellular technology are mainly based on communication for people, the wireless industry is discovering that 5G may be an era of communications that is mainly focused on machine-to-machine communication. The application of Ultra Reliable Low Latency Communication in factory automation is an area of great interest as it unlocks potential applications that traditional wired communications did not allow. In particular, the decrease in the inter-device distance has led to the discussion of coding schemes for these interference-filled channels. To meet the latency and accuracy requirements of URLLC, Non-orthogonal multiple access has been proposed but it comes with associated challenges. In order to combat the issue of interference, an enhanced version of Sliding window superposition coding has been proposed as a method of coding that yields performance gains in scenarios with high interference. This paper examines the abilities of this coding scheme in a broadcast network in 5G to evaluate its robustness in situations where interference is treated as noise in a factory automation setting. This work shows improvements of enhanced sliding window superposition coding over benchmark protocols in the high-reliability requirement regions of block error rates \(\approx 10^{-6}\). ## I Introduction With the release of 5G and the subsequent rollout of this network, the wireless industry is beginning to realize that 5G may not be as much an era for interpersonal communication but rather for communication between machines. The applications of Ultra-Reliable Low Latency Communications (URLLC) are of vital importance, as URLLC aims to strike a balance between reliability and latency. The requirements of URLLC are usually a \(10^{-4}\) to \(10^{-6}\) Block Error Rate (BLER) at 1-10 ms latency. Traditional methods of improving communications usually come at the cost of reliability or latency; e.g., HARQ can improve the reliability of the communication link at the cost of latency [1], while simply shortening the error-correcting code will decrease latency at the cost of error rate [2]. With this proliferation of URLLC applications, the factory automation scenario appears to be an area of great interest as it combines URLLC with the rise of machine-based communication. In particular, coding schemes are important for the attainment of URLLC as coding can determine the fundamental limits of the system. Research by others has identified LDPC as a good benchmark when dealing with the factory automation setting [4]. Most of the previous works mainly deal with channels where a system under Orthogonal Multiple Access (OMA) is considered. This may not be a suitable scheme for URLLC, as one major source of delay in communication systems is the multiple access scheduling. As such, work by others has considered the applications of Non-Orthogonal Multiple Access (NOMA) for URLLC [5][6]. This application of NOMA also extends into the factory automation setting [7]. A major challenge of NOMA is the coding scheme used to deal with the interference from other users, and work has been done on evaluating associated coding schemes [8]. On the issue of interference, Sliding Window Superposition Coding (SWSC) has been proposed by others for use inside the interference channel [9][10].
Whereas previous works on SWSC are mainly focused on the achievable rate at a Block Error Rate of 0.1, this work will focus on regions of high-reliability requirement where message error rates are \(\approx 10^{-4}\) to \(10^{-6}\). Another challenge of URLLC and of using LDPC is the issue of the error floor [11] at short block length [12]. This work will take this into account when constructing the associated simulations to offer a complete comparison. The main contributions of this work are the following: 1. Propose a new decoding method for SWSC called Enhanced SWSC (ESWSC) that is able to address the issue of error propagation within SWSC to increase performance. This is done while maintaining the same coding efficiency as SWSC. 2. Evaluate the performance of SWSC and ESWSC through simulations and observe the impacts of block length, number of blocks, and block ratio \(\alpha\) (for ESWSC) on the overall performance of the codes. This work will first propose an improved decoding method for SWSC called Enhanced SWSC (ESWSC) in Section II. The benchmark protocols will be LDPC and a modified version of LDPC (mLDPC). These protocols will also be outlined in Section II. The simulation parameters are discussed in Section III, where the results are also analyzed. The conclusion in Section IV will summarize the results obtained in the previous sections.

## II System Model

An indoor factory automation scenario is used where the information for \(N\) receivers is sent in a broadcast manner for the downlink. The downlink message is sent through the standard 5G specifications on the data plane [13]. The broadcast message contains the information intended for all receivers, and each round of broadcast is considered successful if all of the \(N\) receivers are able to decode their respective messages. The proposed algorithm of ESWSC will be compared against benchmark SWSC and versions of LDPC.

### _SWSC_

For \(N\) total information packets that need to be sent, \(a_{i}\) is an individual information packet, while \(a=(a_{1},...,a_{N})\) denotes all the information bits that are sent. \(b_{i}\) is the coded bits after \(a_{i}\) has gone through error correcting code encoding, with \(b=(b_{1},...,b_{N})\) denoting all the bits after error correcting code. \(c_{i1}\) and \(c_{i2}\) are the first and second halves of the bits in \(b_{i}\) respectively, so \(b_{i}=c_{i1}\parallel c_{i2}\), with \(c_{1}=(c_{11},...,c_{N1})\) and \(c_{2}=(c_{12},...,c_{N2})\) denoting the first and second halves of \(b\) respectively. For simplicity's sake, the example in Fig. 1 shows SWSC with 2 layers, with \(d_{i1}\) and \(d_{i2}\) denoting the location within each layer of the superposition coding, but this can work for any number of layers. The bits in \(d_{11}\) will be a known sequence called \(c_{\text{clean}}\). For each superposition location pair \(d_{i1}\) and \(d_{i2}\), the layers can be modulated together to form \(e_{i}\), with \(e=[e_{1},...,e_{N}]\) being the message that is sent through the channel.

```
1: Assume \(N\) original information packets are to be sent with each packet being \(a_{i}\).
2: \(a_{i}\) becomes \(b_{i}\) from error correcting code.
3: Split \(b_{i}\) into \(c_{i1}\) and \(c_{i2}\)
4:
5: for \(i\in 1\to N\) do
6:   if \(i=1\) then
7:     Place the known sequence \(c_{\text{clean}}\) in the \(d_{11}\) location
8:     Place \(c_{11}\) and \(c_{12}\) into \(d_{12}\) and \(d_{21}\) respectively
9:   else
10:    Place \(c_{i1}\) and \(c_{i2}\) into \(d_{i2}\) and \(d_{(i+1)1}\) respectively
11:   endif
12:   The bits in \(d_{i1}\) and \(d_{i2}\) are modulated to form \(e_{i}\).
13: endfor
```
**Algorithm 1** SWSC Encoding Scheme

The decoding of SWSC is the reverse of all the steps in the encoding process. To start, \(\hat{e_{1}}\) will decode into \(\hat{d_{11}}\) and \(\hat{d_{12}}\), where \(\hat{e_{i}}\) is the received message after \(e_{i}\) goes through the wireless channel. \(\hat{c_{11}}\) will have a higher chance of being right due to the \(c_{\text{clean}}\) in \(\hat{d_{11}}\). The existence of a known message will make decoding much easier to complete. Next, \(\hat{e_{2}}\) will decode into \(\hat{c_{12}}\) and \(\hat{c_{21}}\) respectively. \(\hat{c_{11}}\) and \(\hat{c_{12}}\) will combine to form \(\hat{b_{1}}\), from which \(\hat{a_{1}}\) can be obtained. \(\hat{a_{1}}\) can form \(\tilde{b}_{1}\) and therefore \(\tilde{c}_{12}\), which can serve as the new \(\hat{c}_{\text{clean}}\) sequence that is used for further sliding window decoding. This sequence is repeated in the manner outlined in Algorithm 2 and shown in Fig. 2.

```
1: \(\hat{e_{i}}\) is the received message after \(e_{i}\) goes through the channel
2: for \(i\in 1\to N\) do
3:   if \(i=1\) then
4:     \(\hat{e_{1}}\) gets back the bits in \(\hat{d_{11}}\) and \(\hat{d_{12}}\) through demodulation
5:     The bits in \(\hat{d_{11}}\) are the known sequence \(c_{\text{clean}}\). This will make the bits in \(\hat{d_{12}}\) easier to decode.
6:     Demodulate \(\hat{e}_{2}\) to get the bits in \(\hat{d_{21}}\) and \(\hat{d_{22}}\)
7:     Use \(\hat{c}_{11}\) from \(\hat{d}_{12}\) and \(\hat{c}_{12}\) from \(\hat{d_{21}}\) to construct \(\hat{b}_{1}\)
8:     Run \(\hat{b}_{1}\) through error correcting code to obtain information packet \(\hat{a}_{1}\)
9:     \(\hat{a}_{1}\) becomes \(\tilde{b}_{1}\) through error correcting code
10:    \(\tilde{b}_{1}\) splits into \(\tilde{c}_{11}\) and \(\tilde{c}_{12}\)
11:    \(\tilde{c}_{12}\) is placed in \(\tilde{d}_{21}\), which serves as the new clean sequence \(\hat{c}_{\text{clean}}\)
12:  else
13:    \(\hat{e}_{i}\) gets back the bits in \(\hat{d}_{i1}\) and \(\hat{d}_{i2}\)
14:    For any \(\hat{e}_{i}\), the sequence in the \(\hat{d}_{i1}\) location is assumed to be \(\hat{c}_{\text{clean}}\) from the previous decoding step. This will make the bits in \(\hat{d}_{i2}\) easier to decode
15:    Demodulate \(\hat{e}_{(i+1)}\) to get the bits placed in \(\hat{d}_{(i+1)1}\) and \(\hat{d}_{(i+1)2}\)
16:    Use \(\hat{c}_{i1}\) from \(\hat{d}_{i2}\) and \(\hat{c}_{i2}\) from \(\hat{d}_{(i+1)1}\) to construct \(\hat{b}_{i}\)
17:    Run \(\hat{b}_{i}\) through error correcting code to obtain information packet \(\hat{a}_{i}\)
18:    \(\hat{a}_{i}\) becomes \(\tilde{b}_{i}\) through error correcting code
19:    \(\tilde{b}_{i}\) forms \(\tilde{c}_{i1}\) and \(\tilde{c}_{i2}\)
20:    \(\tilde{c}_{i2}\) is placed in \(\tilde{d}_{(i+1)1}\) to be used as the new clean sequence \(\hat{c}_{\text{clean}}\) for the next sliding window
21:  endif
22: endfor
```
**Algorithm 2** SWSC Decoding Scheme

The major advantage that SWSC offers over current protocols is the ability to take advantage of the mutual information between the pieces of the codeword in the sliding window blocks. This _non-i.i.d_ nature of adjacent blocks allows for better decoding, as the coding chain is constantly able to create a “clean” message that can aid in the decoding of future blocks.

Fig. 1: SWSC Encoding

Fig. 2: SWSC Decoding
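To make the staggered layout of Algorithm 1 explicit, the following Python sketch arranges the codeword halves into the two-layer chain; the modulation and the error correcting code itself are omitted, and the function name and return structure are ours.

```python
def swsc_layout(codewords, c_clean):
    """Arrange codeword halves into the two-layer SWSC chain (Algorithm 1).

    codewords : list of N coded packets b_1..b_N, each split in half
    c_clean   : known sequence placed at position d_11
    Returns the per-block layer pairs (d_i1, d_i2) that would be modulated
    together into e_i.
    """
    halves = [(b[: len(b) // 2], b[len(b) // 2:]) for b in codewords]  # (c_i1, c_i2)
    blocks = []
    upper = c_clean                  # layer-1 content of the current block
    for c_i1, c_i2 in halves:
        blocks.append((upper, c_i1)) # d_i1 carries c_(i-1)2 (or c_clean), d_i2 carries c_i1
        upper = c_i2                 # c_i2 slides into d_(i+1)1
    blocks.append((upper, None))     # trailing half block carrying c_N2
    return blocks

packets = [format(i, "08b") for i in range(4)]   # toy "codewords" b_1..b_4
for i, (d1, d2) in enumerate(swsc_layout(packets, "0000"), start=1):
    print(f"block {i}: d_{i}1={d1} d_{i}2={d2}")
```

The trailing \((c_{N2},\text{None})\) pair is the extra half block noted later when comparing coding efficiency with the LDPC baselines.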
### _ESWSC_

Both the strength and the weakness of SWSC lie in the _non-i.i.d_ nature of the adjacent information packets during decoding. The successful decoding of the current message is dependent on the previous message being decoded successfully. This creates an issue of error propagation, as the unsuccessful decoding of a single packet results in the rest of the code being decoded incorrectly. In order to combat this, ESWSC is proposed as an alternative method to decode the SWSC coding chain. The first portion of the message is decoded in the same manner, while a ratio of blocks \(0\leq\alpha\leq 0.5\), starting from the back of the SWSC chain, is decoded in reverse. The \(c_{N2}\) sequence of bits will be decoded first, aided by the clean message located at \(d_{(N+1)2}\), before being combined with \(c_{N1}\) to go through error correcting code. This is shown in Fig. 3. When \(\alpha=0\), the code will be decoded in the same manner as SWSC, while \(\alpha=0.5\) is the ideal setting to maximize decoding success. The reason for \(\alpha\) to have a range of values is that some of the information packets may be latency limited within the frame. This requires a set ordering of the SWSC, and \(\alpha\) can be adjusted accordingly. In general, most situations are not overall latency constrained, but this work takes the effect of \(\alpha\) into account in its simulations.

Fig. 3: ESWSC Decoding

### _Message Error Rate_

The Message Error Rate (MER) can be calculated by finding \(P(a\neq\hat{a})\). In order to evaluate the performance of ESWSC and SWSC under this definition, a number of probabilities will be used. The first is the probability that the first information packet is successfully decoded during SWSC or ESWSC decoding, \(P_{\text{clean}}=P(a_{1}=\hat{a_{1}})\). For any other information packet \(a_{i}\), the probability of successfully decoding it given that the previous message is decoded successfully is \(P_{\text{assumed}}=P(a_{i}=\hat{a_{i}}|a_{(i-1)}=\hat{a}_{(i-1)})\). As a function of BLER, the MER looks like:

\[\text{MER}=1-(1-\text{BLER})^{N} \tag{1}\]

The BLER equivalents for some MER values are shown in Table I. For a total number of packets \(N\), the probability of all information packets being correct for SWSC is given by:

\[P_{\text{SWSC}}=P_{\text{clean}}\times(P_{\text{assumed}})^{(N-1)} \tag{2}\]

At the same time, the expression for the performance of ESWSC is given by:

\[\begin{split} P_{\text{ESWSC}}&=(P_{\text{clean}}\times(P_{\text{assumed}})^{(N/2-1)})^{2}\\ &=(P_{\text{clean}})^{2}\times(P_{\text{assumed}})^{(N-2)}\end{split} \tag{3}\]

Since it is known that

\[P_{\text{clean}}\geq P_{\text{assumed}} \tag{4}\]

it follows that

\[P_{\text{ESWSC}}\geq P_{\text{SWSC}} \tag{5}\]

and in terms of MER,

\[\text{MER}_{\text{ESWSC}}\leq\text{MER}_{\text{SWSC}} \tag{6}\]
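Equations (1)-(3) translate directly into code; a minimal sketch with illustrative probability values is:

```python
def mer(bler, n):
    """Message error rate for n independent blocks, Eq. (1)."""
    return 1.0 - (1.0 - bler) ** n

def mer_swsc(p_clean, p_assumed, n):
    """1 - P_SWSC from Eq. (2): one clean-aided block, n-1 chained ones."""
    return 1.0 - p_clean * p_assumed ** (n - 1)

def mer_eswsc(p_clean, p_assumed, n):
    """1 - P_ESWSC from Eq. (3): two half-chains decoded from both ends."""
    return 1.0 - p_clean ** 2 * p_assumed ** (n - 2)

# Since p_clean >= p_assumed, ESWSC never does worse, matching Eq. (6)
assert mer_eswsc(0.999, 0.99, 100) <= mer_swsc(0.999, 0.99, 100)
```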
When comparing with ESWSC, extra caution needs to be placed on the fact that \(\alpha\) will lead to a difference in the total latency constraint. This means that the portion of code within the \(\alpha\) constraint can be combined to form a longer block length LDPC code in order to take better advantage of Shannon's theorems. This mLDPC allows for a fair comparison with ESWSC, as the latency constraints between the methods will be identical for information packets within the message. This is a modification of the works of others [14], who demonstrated that concatenating blocks in the short block length region yields an improvement in error performance while keeping the overall latency the same.

Fig. 2: SWSC Decoding

Fig. 3: ESWSC Decoding

## III Simulations

A simple simulation is set up where a number of receivers are scattered around a centralized controller. The channel parameters are calculated based on NYUSIM [15]. The number of required channels was randomly generated using the default parameters of an indoor simulation scenario, with the exact simulator values shown in Tables II and III. A total of \(N=100\) blocks are generated, and an appropriate number of sensors are selected uniformly at random for \(N\leq 100\) as required. For the coding parameters themselves, a standard 5G downlink model is used, with the default values listed in Table IV; these defaults are used except where changes are needed for specific results. As the downlink 5G data channel requires the use of LDPC, it is the coding scheme selected, and the appropriate CRC attachments all follow the associated 5G standard [13]. To ensure the same coding efficiency, the LDPC-based coding schemes use a coding rate lower than that of the SWSC-based ones, adjusted to the exact situation so that the total number of bits sent by the LDPC-based schemes is at least that of the SWSC-based ones. This is due to the additional half-block length needed by the SWSC-based schemes. For the \(4\times 4\) MIMO channel, the optimal way to achieve channel precoding and decoding is through Singular Value Decomposition (SVD) [16]. The simulations are completed 250 times with the default parameters shown in Table IV, and the result can be observed in Fig. 4. This serves as the baseline for comparisons when changing coding parameters.

Fig. 4: Default Simulation Results

### _The Impact of Code Block Length_

When the block length is changed from 200 to 100 bits, the error rate of the various codes changes, as seen in Fig. 5. There is an increase in the success rate of the code at SNR = 14 dB, where the requirements of URLLC are not as stringent and where the MER is actually lower for the shorter block length code. This is similar to the results obtained by others, who observed that shorter block lengths may lead to improvements for LDPC in certain length regions [12]. But when attention is focused on the very low MER regions required for URLLC, the long block length code is able to achieve a lower MER at the same SNR.

Fig. 5: Code Block Length=100

### _The Impact of Number of Blocks \(N\)_

The number of coding blocks has the greatest impact on error performance, as the message is only considered successful if all of its packets are decoded correctly. Reducing the number of blocks therefore leads to a noticeable decrease in the error rate.
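The effect of \(N\) follows directly from Eqs. (1)-(3), as the short sketch below illustrates; the probability values are again arbitrary assumptions rather than simulated quantities.

```python
# How the number of blocks N drives the message error rate (Eqs. (1)-(3));
# P_clean, P_assumed, and bler are illustrative assumed values.
P_clean, P_assumed, bler = 0.999, 0.995, 2e-3
for N in (20, 50, 100):
    mer_swsc = 1 - P_clean * P_assumed ** (N - 1)
    mer_eswsc = 1 - P_clean ** 2 * P_assumed ** (N - 2)
    mer_ldpc = 1 - (1 - bler) ** N
    print(f"N={N:3d}  SWSC {mer_swsc:.3f}  ESWSC {mer_eswsc:.3f}  LDPC {mer_ldpc:.3f}")
```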
The overhead of half a block that exists for SWSC-based methods did not lead to as much of a performance change as one might expect. This matches the assumptions made in the original SWSC paper [10], which did not find it necessary to take into account the additional half-block length of code. The very small number of blocks shown in Fig. 6 shows a slight improvement over the medium number of blocks shown in Fig. 7, which in turn performs slightly better than the benchmark with a large number of devices in Fig. 4.

Fig. 6: N=20

Fig. 7: N=50

### _The Impact of Block Ratio \(\alpha\)_

The \(\alpha\) value is an important factor that can impact the results of the code. In order to ensure the requirements of URLLC are met for all information packets within the message, \(\alpha\) is an adjustable value that can be modified for mLDPC and ESWSC. Lowering this value leads to a decrease in error performance, as observed in Fig. 8. The overall MER difference between \(\alpha=0.5\) and \(\alpha=0.2\) is about 4-7% in the regions of low-reliability requirements, as shown in Fig. 8. The drop is more evident in the high-reliability region on the log graph. It can be observed that in the low-reliability region, the mLDPC outperforms ESWSC, while the reverse is true for the high-reliability region. This suggests that ESWSC is a good choice when there are high-reliability requirements, such as URLLC industrial automation.

Fig. 8: Values of \(\alpha\) at SNR=14 dB for low reliability, SNR=16 dB for high reliability

## IV Conclusion

In conclusion, it can be observed that SWSC and ESWSC may not always yield the best results in all success rate regions and will usually struggle against their benchmark LDPC counterparts. In the low-MER regions that qualify for URLLC, however, ESWSC and SWSC are usually shown to have better results than their LDPC and mLDPC counterparts. When the number of blocks decreases, all the coding schemes tend towards better performance, as expected. In cases of decreased block length, the SWSC-based coding schemes were able to maintain a slight performance improvement in the high-reliability requirement regions while losing out in the low-reliability requirement region. This trend continues in the comparisons with a varying \(\alpha\) value as well, where ESWSC outperforms mLDPC in the high-reliability region of BLER \(\approx 10^{-6}\) but falls short in the low-reliability requirement regions of BLER \(\approx 10^{-3}\).
2308.09250
Capacity Bounds for Hyperbolic Neural Network Representations of Latent Tree Structures
We study the representation capacity of deep hyperbolic neural networks (HNNs) with a ReLU activation function. We establish the first proof that HNNs can $\varepsilon$-isometrically embed any finite weighted tree into a hyperbolic space of dimension $d$ at least equal to $2$ with prescribed sectional curvature $\kappa<0$, for any $\varepsilon> 1$ (where $\varepsilon=1$ being optimal). We establish rigorous upper bounds for the network complexity on an HNN implementing the embedding. We find that the network complexity of HNN implementing the graph representation is independent of the representation fidelity/distortion. We contrast this result against our lower bounds on distortion which any ReLU multi-layer perceptron (MLP) must exert when embedding a tree with $L>2^d$ leaves into a $d$-dimensional Euclidean space, which we show at least $\Omega(L^{1/d})$; independently of the depth, width, and (possibly discontinuous) activation function defining the MLP.
Anastasis Kratsios, Ruiyang Hong, Haitz Sáez de Ocáriz Borde
2023-08-18T02:24:32Z
http://arxiv.org/abs/2308.09250v1
# Capacity Bounds for Hyperbolic Neural Network Representations of Latent Tree Structures

###### Abstract

We study the representation capacity of deep hyperbolic neural networks (HNNs) with a ReLU activation function. We establish the first proof that HNNs can \(\varepsilon\)-isometrically embed any finite weighted tree into a hyperbolic space of dimension \(d\) at least equal to \(2\) with prescribed sectional curvature \(\kappa<0\), for any \(\varepsilon>1\) (with \(\varepsilon=1\) being optimal). We establish rigorous upper bounds for the network complexity of an HNN implementing the embedding. We find that the network complexity of the HNN implementing the graph representation is independent of the representation fidelity/distortion. We contrast this result against our lower bounds on the distortion which any ReLU multi-layer perceptron (MLP) must exert when embedding a tree with \(L>2^{d}\) leaves into a \(d\)-dimensional Euclidean space, which we show to be at least \(\Omega(L^{1/d})\), independently of the depth, width, and (possibly discontinuous) activation function defining the MLP.

Keywords: Generalization Bounds, Graph Neural Networks, Digital Hardware, Discrete Geometry, Metric Embeddings, Discrete Optimal Transport, Concentration of Measure.

MSC codes: 68T07, 30L05, 68R12, 05C05.

## 1 Introduction

Trees are one of the most important hierarchical data structures in computer science, whose structure can be exploited to yield highly efficient algorithms; for example, one can leverage the tree's hierarchical structure to maximize parameter search efficiency, as in branch-and-bound algorithms Land and Doig (2010) or depth-first searches Korf (1985). Consequently, algorithms designed for trees, and algorithms which map more general structures into trees, e.g. Fuchs et al. (1980), have become a cornerstone of computer science and its related areas. Nevertheless, it is known that the flat Euclidean geometry of \(\mathbb{R}^{d}\) is fundamentally different from the expansive geometry of trees, which makes trees difficult to embed into low-dimensional Euclidean space with low distortion Bourgain (1986); Matousek (1999); Gupta (2000). This high distortion can be problematic for downstream tasks relying on Euclidean representations of such trees. This fundamental impasse in representing large trees with Euclidean space has sparked the search for non-Euclidean representation spaces whose geometry is tree-like, thus allowing for low-dimensional representations of arbitrary trees. One such family of representation spaces are the hyperbolic spaces \(\mathbb{H}^{d}\), \(d\geq 2\), which have recently gained traction in machine learning. Leveraging hyperbolic representations of data with a (latent) tree-like structure has proven significantly more effective than using their traditional Euclidean counterparts. Machine learning examples include learning linguistic hierarchies Nikiel (1989), natural language processing Ganea et al. (2018); Zhu et al. (2020), recommender systems Vinh Tran et al. (2020); Skopek et al. (2020), low-dimensional representations of large tree-like graphs Ganea et al. (2018); Law and Stam (2020); Kochurov et al. (2020); Zhu et al. (2020); Bachmann et al. (2020); Sonthalia and Gilbert (2020), knowledge graph representations Chami et al. (2020), network science Papadopoulos et al. (2014); Keller-Ressel and Nargang (2020), communication Kleinberg (2007), deep reinforcement learning Cetin et al.
(2023), and numerous other recent applications. These results have motivated deep learning on hyperbolic spaces, of which the hyperbolic neural networks (HNNs) Ganea et al. (2018), and their several variants Gulcehre et al. (2018); Chami et al. (2019); Shimizu et al. (2021); Zhang et al. (2021), have assumed the role of the flagship deep learning model. This has led HNNs to become integral to several deep-learning-powered hyperbolic learning algorithms and has also fuelled applications ranging from natural language processing Ganea et al. (2018); Dhingra et al. (2018); Tay et al. (2018); Liu et al. (2019); Zhu et al. (2020) to latent graph inference for downstream graph neural network (GNN) optimization Kazi et al. (2022); de Ocariz Borde et al. (2022). Furthermore, the simple structure of HNNs makes them amenable to mathematical analysis, similarly to multi-layer perceptrons (MLPs) in classical deep learning. This has led to the establishment of their approximation capabilities in Kratsios and Bilokopytov (2020); Kratsios and Papon (2022).

Figure 1.1: Minimal length curves in the hyperbolic space\({}^{1}\) (right) expand outwards exponentially quickly, just as the number of nodes doubles exponentially rapidly in a tree (left) as one travels away from the origin/root.

The central motivation behind HNNs is the belief that they are better suited to representing data with a latent tree-like, or hierarchical, structure than their classical \(\mathbb{R}^{n}\)-valued counterparts, e.g. MLPs, CNNs, GNNs, or Transformers, since the geometry of hyperbolic space \(\mathbb{H}^{d}\), \(d\geq 2\), is more similar to the geometry of trees than classical Euclidean space is; see Figure 1.1. These intuitions are often fueled by classical embedding results in computer science Sarkar (2011), metric embedding theory Bonk and Schramm (2000), and the undeniable success of countless algorithms leveraging hyperbolic geometry for representation learning Papadopoulos et al. (2012, 2014); Nickel and Kiela (2017); Balazevic et al. (2019); Sonthalia and Gilbert (2020); Keller-Ressel and Nargang (2020). Nevertheless, the representation potential of HNNs for data with latent hierarchies currently rests only on strong experimental evidence and expert intuition rooted in deep results from hyperbolic geometry Gromov (1981, 1987); Bonk and Schramm (2000).

In this paper, we examine the problem of Euclidean-vs-hyperbolic representation learning when a latent hierarchy structures the data. We justify this common belief by first showing that HNNs can \(\varepsilon\)-isometrically embed any pointcloud with a latent weighted tree structure, for any \(\varepsilon>0\). In contrast, such an embedding cannot exist in any Euclidean space. We show that the HNNs implementing these \(\varepsilon\)-embeddings are relatively small by deriving upper bounds on their depth, width, and number of trainable parameters sufficient for achieving any desired representation capacity. We find that HNNs only require \(\widetilde{\mathcal{O}}(N^{2})\) trainable parameters to embed any \(N\)-point pointcloud in \(n\)-dimensional Euclidean space with a latent tree structure into the 2-dimensional hyperbolic plane. We then return to the problem of Euclidean-vs-hyperbolic representation under a latent hierarchical structure by proving that no MLP can faithfully embed such pointclouds into a low-dimensional Euclidean space, thus proving that HNNs are superior to MLPs for representing tree-like structures.
We do so by showing that any MLP, regardless of its depth, width, or number of trainable parameters, cannot embed a pointcloud with a latent tree structure, with \(L>2^{d}\) leaves, into the \(d\)-dimensional Euclidean space with distortion less than \(\Omega(L^{1/d})\). We consider the distortion of an embedding as in the classical computer science literature, e.g. Linial et al. (1995); Bartal (1996); Gupta (2000); a formal definition will be given in the main text.

Outline. The rest of this paper is organized as follows. Section 2 introduces the necessary terminologies for hyperbolic neural networks, such as the geometry of hyperbolic spaces and the formal structure of HNNs. Section 3 formalizes latent tree structures and then numerically formalizes the representation learning problem. Our main graph representation learning results are given in Section 4. In Section 4.1, we first derive lower bounds for the best possible distortion achievable by an MLP representation of a latent tree. We show that MLPs cannot embed any large tree in a small Euclidean space, irrespective of how many wide hidden layers the network uses and irrespective of which, possibly discontinuous, non-linearity is used to define the MLP. In Section 4.2, we show that HNNs can represent any pointcloud with a latent tree structure to arbitrary precision. Furthermore, the depth, width, and number of trainable parameters defining the HNN are independent of the representation fidelity/distortion. Our theory is validated experimentally in Section 4.3. The analysis and proofs of our main results are contained in Section 5. We draw our conclusions in Section 6.

## 2 The Hyperbolic Neural Network Model

Throughout this paper, we consider hyperbolic neural networks (HNNs) with the \(\operatorname{ReLU}(t)\stackrel{\text{def}}{=}\max\{0,t\}\) (Rectified Linear Unit) activation function/non-linearity, mapping into hyperbolic representation spaces \(\mathbb{H}_{\kappa}^{d}\). This section rigorously introduces the HNN architecture. This requires a brief overview of hyperbolic spaces, which we do now.

### The Geometry of the Hyperbolic Spaces \(\mathbb{H}_{\kappa}^{d}\)

Fix a positive integer \(d\) and a _(sectional) curvature parameter_ \(\kappa<0\). The (hyperboloid model for the real) hyperbolic \(d\)-space of constant sectional curvature \(\kappa\), denoted by \(\mathbb{H}_{\kappa}^{d}\), consists of all points \(x\in\mathbb{R}^{1+d}\) satisfying

\[1+\sum_{i=1}^{d}\,x_{i}^{2}=x_{d+1}^{2}\text{ and }x_{d+1}>0\]

where the distance between any pair of points \(x,y\in\mathbb{H}^{d}\) is given by

\[d_{\kappa}(x,y)\stackrel{\text{def.}}{=}\frac{1}{\sqrt{|\kappa|}}\,\cosh^{-1}\Big(x_{d+1}y_{d+1}-\sum_{i=1}^{d}\,x_{i}y_{i}\Big).\]

It can be shown that \(\mathbb{H}_{\kappa}^{d}\) is a simply connected\({}^{2}\) smooth manifold (Bridson and Haefliger, 1999, pages 92-93) and that the metric \(d_{\kappa}\) on any such \(\mathbb{H}_{\kappa}^{d}\) measures the length of the shortest curve joining any two points on \(\mathbb{H}_{\kappa}^{d}\), where length is quantified in the infinitesimal Riemannian sense\({}^{3}\) (Bridson and Haefliger, 1999, Propositions 6.17 (1) and 6.18). This means that, by the Cartan-Hadamard Theorem (Jost, 2017, Corollary 6.9.1), for every \(x\in\mathbb{H}_{\kappa}^{d}\) there is a map \(\exp_{x}:T_{x}(\mathbb{H}_{\kappa}^{d})\cong\mathbb{R}^{d}\to\mathbb{H}_{\kappa}^{d}\) which puts \(\mathbb{R}^{d}\) into bijection with \(\mathbb{H}^{d}\) in a smooth manner with smooth inverse.
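For concreteness, this metric is straightforward to evaluate; the following Python sketch assumes the hyperboloid model above with the default \(\kappa=-1\), and the helper `lift` is our own convenience map for producing points of \(\mathbb{H}^{d}\) from \(\mathbb{R}^{d}\).

```python
import numpy as np

def hyp_distance(x, y, kappa=-1.0):
    """Distance on the hyperboloid model; the last coordinate is the 'time-like' one."""
    inner = x[-1] * y[-1] - np.dot(x[:-1], y[:-1])   # argument of cosh^{-1}
    return np.arccosh(np.clip(inner, 1.0, None)) / np.sqrt(abs(kappa))

def lift(v):
    """Lift v in R^d onto the hyperboloid: x = (v, sqrt(1 + |v|^2))."""
    return np.append(v, np.sqrt(1.0 + np.dot(v, v)))

x, y = lift(np.array([0.3, -0.1])), lift(np.array([-2.0, 1.5]))
print(hyp_distance(x, y))   # a positive geodesic distance
print(hyp_distance(x, x))   # 0.0
```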
Footnote 2: See (Jost, 2017, Section 6.4).

**Remark 1**: _Often the metric \(d_{\kappa}\) is not relevant for a given statement and only the manifold structure of \(\mathbb{H}_{\kappa}^{d}\) matters, or the choice of \(\kappa\) is clear from the context. In these instances, we write \(\mathbb{H}^{d}\) in place of \(\mathbb{H}_{\kappa}^{d}\) to keep our notation light._

For any \(x\in\mathbb{H}^{d}\), the tangent space \(T_{x}(\mathbb{H}^{d})\) is identified with the \(d\)-dimensional affine subspace of \(\mathbb{R}^{d+1}\) lying tangent to \(\mathbb{H}^{d}\) at \(x\), consisting of all \(y\in\mathbb{R}^{d+1}\) satisfying

\[y_{d+1}x_{d+1}=\sum_{i=1}^{d}\,y_{i}x_{i}.\]

The tangent space at \(\mathbf{1}_{n}\) plays an especially important role since its elements can be conveniently identified with \(\mathbb{R}^{n}\). This is because \(x\in T_{\mathbf{1}_{n}}(\mathbb{H}^{n})\) only if it is of the form \(x=(x_{1},\ldots,x_{n},1)\), for some \(x_{1},\ldots,x_{n}\in\mathbb{R}\). Thus,

\[(x_{1},\ldots,x_{n},1)\stackrel{\pi_{n}}{\to}(x_{1},\ldots,x_{n})\text{ and }(x_{1},\ldots,x_{n})\stackrel{\iota_{n}}{\to}(x_{1},\ldots,x_{n},1) \tag{1}\]

identify \(T_{\mathbf{1}_{n}}(\mathbb{H}^{n})\) with \(\mathbb{R}^{n}\). The map \(\exp_{x}\) can be explicitly described as the map which sends any "initial velocity vector" \(v\in\mathbb{R}^{d}\) lying tangent to \(x\in\mathbb{H}^{d}\) to the unique point in \(\mathbb{H}^{d}\) which one would arrive at by travelling optimally thereon. Here, optimally means along the unique minimal length curve in \(\mathbb{H}^{d}_{-1}\), illustrated in Figure 1.1 (right)\({}^{4}\). The (affine) tangent spaces \(T_{x}(\mathbb{H}^{d})\) and \(T_{y}(\mathbb{H}^{d})\) about any two points \(x\) and \(y\) in \(\mathbb{H}^{d}\) are identified by "sliding" \(T_{x}(\mathbb{H}^{d})\) towards \(T_{y}(\mathbb{H}^{d})\) in parallel across the unique minimal length curve joining \(x\) to \(y\) in \(\mathbb{H}^{d}\). This "sliding" operation, called _parallel transport_, is formalized by the linear isomorphism\({}^{5}\) \(P_{x\mapsto b}:T_{x}(\mathbb{H}^{n})\to T_{b}(\mathbb{H}^{n})\) given for any \(u\in T_{x}(\mathbb{H}^{n})\) by\({}^{6}\)

Footnote 4: In the Euclidean space \(\mathbb{R}^{d}\) these are simply straight lines.

Footnote 5: In general, parallel transport is path dependent. However, since we only consider minimal length (geodesic) curves joining points on \(\mathbb{H}^{d}_{\kappa}\), and there is only one such choice by the Cartan-Hadamard theorem, there is no ambiguity in the notation/terminology in our case.

Footnote 6: In Kratsios and Papon (2022), the authors note that the identifications of \(T_{c}(\mathbb{H}^{m})\) with \(\mathbb{R}^{m}\) are all made implicitly. However, here, we underscore each component of the HNN pipeline by making each identification processed by any computer completely explicit.

\[P_{x\mapsto b}:\,u\mapsto u-\frac{\langle\log_{x}(b),u\rangle_{x}}{d_{-1}^{2}(x,b)}\,\big(\log_{x}(b)+\log_{b}(x)\big), \tag{2}\]

where the map \(\log_{x}:\mathbb{H}^{d}\to T_{x}(\mathbb{H}^{d})\) is defined for any \(y\in\mathbb{H}^{d}\) by

\[\log_{x}:y\mapsto\frac{d_{-1}(x,y)\,\big(y+\langle x|y\rangle_{M}\,x\big)}{\sqrt{\langle x|y\rangle_{M}^{2}-1}},\]

where \(\langle\cdot|\cdot\rangle_{M}\) denotes the Minkowski product defined below. Typically parallel transport must be approximated numerically, e.g. Guigui and Pennec (2022), but this is not so for \(\mathbb{H}^{d}_{-1}\).
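The map \(\log_{x}\) above can be sketched numerically as follows; this is a minimal illustration assuming \(\kappa=-1\) and the linear tangent-space convention, i.e. vectors \(u\) with \(\langle x|u\rangle_{M}=0\) (the affine picture in the text shifts these by the base point), with function names of our own choosing.

```python
import numpy as np

def mink(u, v):
    """Minkowski pairing <u|v>_M = -u_{d+1} v_{d+1} + sum_i u_i v_i."""
    return -u[-1] * v[-1] + np.dot(u[:-1], v[:-1])

def log_map(x, y):
    """log_x(y): tangent vector at x pointing toward y, with length d_{-1}(x, y)."""
    a = mink(x, y)                        # equals -cosh(d(x, y))
    d = np.arccosh(np.clip(-a, 1.0, None))
    if d == 0.0:
        return np.zeros_like(x)
    return d * (y + a * x) / np.sqrt(a * a - 1.0)

def lift(v):
    return np.append(v, np.sqrt(1.0 + np.dot(v, v)))

x, y = lift(np.array([0.0, 0.0])), lift(np.array([1.0, -0.5]))
u = log_map(x, y)
print(np.isclose(mink(x, u), 0.0))                               # u is tangent at x
print(np.isclose(np.sqrt(mink(u, u)), np.arccosh(-mink(x, y))))  # |u|_M = d(x, y)
```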
The hyperbolic space is particularly convenient, amongst negatively curved Riemannian manifolds, since the map \(\exp_{x}\) is available in closed form. For any \(x\in\mathbb{H}^{d}_{\kappa}\) and \(v\in T_{x}(\mathbb{H}^{d})\), \(\exp_{x}(v)\) is given by: \(\exp_{x}(v)=x\) if \(v=0\), otherwise

\[\exp_{x}:v\mapsto\cosh\big(\sqrt{\langle v|v\rangle_{M}}\big)\,x+\sinh\big(\sqrt{\langle v|v\rangle_{M}}\big)\,\frac{v}{\sqrt{\langle v|v\rangle_{M}}}\]

where \(\langle u|v\rangle_{M}\stackrel{\text{def.}}{=}-u_{d+1}v_{d+1}+\sum_{i=1}^{d}u_{i}v_{i}\) for any \(u,v\in\mathbb{R}^{d+1}\); see (Bridson and Haefliger, 1999, page 94). Furthermore, the inverse of \(\exp_{x}\) is \(\log_{x}\). We note that these operations are implemented in most standard geometric machine learning software, e.g. Boumal et al. (2014); Townsend et al. (2016); Miolane et al. (2020). Also, \(\mathbb{H}^{d}_{\kappa}\) and \(\mathbb{H}^{d}_{-1}\) are diffeomorphic by the Cartan-Hadamard Theorem (Jost, 2017, Corollary 6.9.1). Therefore, for all \(\kappa<0\), it suffices to consider the "standard" exponential map \(\exp_{x}\) in the particular case where \(\kappa=-1\) to encode data in \(\mathbb{R}^{d}\) into \(\mathbb{H}^{d}\), and vice versa via \(\log_{x}\).

### The Hyperbolic Neural Network Model

We now overview the hyperbolic neural network model studied in this paper. The considered HNN model contains, as sub-models, the hyperbolic neural networks of Ganea et al. (2018) as well as those from the deep approximation theory Kratsios and Papon (2022) and latent graph inference de Ocariz Borde et al. (2022) literatures. The workflow of a hyperbolic neural network layer, by which it processes data on the hyperbolic space \(\mathbb{H}^{n}\), is summarized in Figure 2.2. HNNs function similarly to standard MLPs, which generate predictions from any input by sequentially applying affine maps interspersed with non-affine non-linearities, typically via component-wise activation functions. Instead of leveraging affine maps, which are suited to the vectorial geometry of \(\mathbb{R}^{d}\), HNNs are built using analogues of these maps suited to the geometry of \(\mathbb{H}^{d}\). The analogues of the linear layers with component-wise \(\mathrm{ReLU}(t)\stackrel{\text{def}}{=}\max\{0,t\}\) activation function, mapping \(\mathbb{H}^{n}\) to \(\mathbb{H}^{m}\) for \(n,m\in\mathbb{N}_{+}\), are thus given by

\[x\mapsto\exp_{\mathbf{1}_{m}}\big(\mathrm{ReLU}\bullet(A\,\log_{\mathbf{1}_{n}}(x))\big) \tag{3}\]

where \(\bullet\) denotes component-wise composition, \(A\) is an \(m\times n\) matrix, and\({}^{7}\) \(\mathbf{1}_{n}\in\mathbb{H}^{n}\). Thus, as discussed in (Ganea et al., 2018, Theorem 4 and Lemma 6), without the "hyperbolic bias" term, the elementary layers making up HNNs can be viewed as elementary MLP layers conjugated by the maps \(\log_{\mathbf{1}_{n}}\) and \(\exp_{\mathbf{1}_{m}}\). In this case, \(\log_{\mathbf{1}_{n}}\) serves simply to decode the hyperbolic features into vectorial data, which can be processed by standard software, and \(\exp_{\mathbf{1}_{m}}\) to encode the result back into hyperbolic space.

Footnote 7: Here we identify the distinguished "origin" point 0 in the Poincaré disc model for \(\mathbb{H}^{m}_{\kappa}\) used in (Ganea et al., 2018, Definition 3.2) with its corresponding point in the hyperboloid model for \(\mathbb{H}^{m}_{\kappa}\), using the isometry between these two spaces given on (Bridson and Haefliger, 1999, page 86).
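A corresponding sketch of the closed-form exponential map, under the same assumptions as the previous snippet, verifies that \(\exp_{x}\) lands back on the hyperboloid and that \(d_{-1}(x,\exp_{x}(v))\) equals the Minkowski length of \(v\).

```python
import numpy as np

def mink(u, v):
    return -u[-1] * v[-1] + np.dot(u[:-1], v[:-1])

def exp_map(x, v):
    """exp_x(v) for a tangent vector v at x on the hyperboloid (kappa = -1)."""
    n = np.sqrt(max(mink(v, v), 0.0))    # Minkowski length of v
    if n == 0.0:
        return x.copy()
    return np.cosh(n) * x + np.sinh(n) * v / n

x = np.array([0.0, 0.0, 1.0])            # the base point 1_2
v = np.array([0.7, -0.2, 0.0])           # a tangent vector at x (last coordinate 0)
y = exp_map(x, v)
print(np.isclose(mink(y, y), -1.0))      # True: y lies back on the hyperboloid
print(np.isclose(np.arccosh(-mink(x, y)), np.sqrt(mink(v, v))))  # d(x, y) = |v|_M
```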
The analogues of the addition of a bias term were initially formalized in Ganea et al. (2018) using the so-called gyro-vector addition and multiplication operators; see Vermeer (2005). Roughly, a bias \(b\in\mathbb{H}^{n}\) can be added to any \(x\in\mathbb{H}^{n}\) by "shifting by" \(b\) along minimal length curves in \(\mathbb{H}^{n}\) using \(\exp_{b}\). Informally,

\[y\mapsto\exp_{b}(P_{\mathbf{1}_{n}\mapsto b}\circ\log_{\mathbf{1}_{n}}(y)) \tag{4}\]

where \(P_{\mathbf{1}_{n}\mapsto b}:T_{\mathbf{1}_{n}}(\mathbb{H}^{n})\to T_{b}(\mathbb{H}^{n})\) linearly identifies the tangent space at \(\mathbf{1}_{n}\stackrel{\text{def.}}{=}(0,\ldots,0,1)\) with that at \(b\) by travelling along the unique minimal length curve in \(\mathbb{H}^{n}_{-1}\), defined by \(d_{-1}\), connecting \(\mathbf{1}_{n}\) to \(b\).

Figure 2.2: **Workflow of an HNN layer:** First, inputs in a hyperbolic space \(\mathbb{H}^{n}\) are mapped to a vector in \(\mathbb{R}^{n}\) in the "encoding phase". Next, they are transformed into "deep features" in \(\mathbb{R}^{m}\) by a standard MLP layer in the "transformation phase". Finally, the "hyperbolic decoding phase" applies a hyperbolic bias, at which point the "deep features with hyperbolic bias" are decoded, producing an output in the hyperbolic space \(\mathbb{H}^{m}\).

The interpretation of (4) as the addition of a bias term dates back at least to early developments in geometric machine learning in Pennec (2006); Meyer et al. (2011); Fletcher (2013). The basic idea is that, in the Euclidean space, the analogues of \(\exp_{x}\) and \(\log_{x}\) are simply addition and subtraction, respectively. Here, we consider a generalization of the elementary HNN layers of Ganea et al. (2018), used in (Kratsios and Papon, 2022, Corollaries 23 and 24) to construct universal deep learning models capable of approximating continuous functions\({}^{8}\) between any two hyperbolic spaces. The key difference is that they, and we, also allow for a "Euclidean bias" to be added together with the hyperbolic bias computed by (4). Similar HNN layers are also used in the contemporary latent graph inference literature de Ocariz Borde et al. (2022); Kazi et al. (2022). Incorporating this Euclidean bias with the elementary layers (3), the hyperbolic biases (4), and the formal identifications (1), we obtain our elementary _hyperbolic layers_ \(\mathcal{L}_{a,b,c,A}:\mathbb{H}^{n}\rightarrow\mathbb{H}^{m}\) given by

Footnote 8: Not interpolators.

\[\mathcal{L}_{a,b,c,A}:x\mapsto\exp_{c}\bigg(\overline{\oplus}^{c}\Big(\operatorname{ReLU}\bullet\big(A\,\underline{\oplus}_{a}\circ\log_{a}(x)+b\big)\Big)\bigg) \tag{5}\]

where the _weight matrix_ \(A\) is an \(m\times n\) matrix, the _Euclidean bias_ \(b\) belongs to \(\mathbb{R}^{m}\), and the _hyperbolic biases_ \(a\in\mathbb{H}^{n}\) and \(c\in\mathbb{H}^{m}\) enter through the maps defined by\({}^{9}\)

Footnote 9: The notation \(\overline{\oplus}^{c}\) and \(\underline{\oplus}_{a}\) is intentionally similar to the gyrovector-based "hyperbolic bias translation" operation in (Ganea et al., 2018, Equation (28)) to emphasize the similarity between these operations.
\[\overline{\oplus}^{c}\stackrel{\text{def.}}{=}P_{\mathbf{1}_{m}\mapsto c}\circ\iota_{m}:\mathbb{R}^{m}\to T_{c}(\mathbb{H}^{m})\]
\[\underline{\oplus}_{a}\stackrel{\text{def.}}{=}\pi_{n}\circ P_{a\mapsto\mathbf{1}_{n}}:T_{a}(\mathbb{H}^{n})\rightarrow\mathbb{R}^{n}.\]

The number of _trainable parameters_ defining any hyperbolic layer \(\mathcal{L}_{a,b,c,A}\) is

\[\operatorname{Par}(\mathcal{L}_{a,b,c,A})\stackrel{\text{def.}}{=}\|A\|_{0}+\|a\|_{0}+\|b\|_{0}+\|c\|_{0} \tag{6}\]

where \(\|\cdot\|_{0}\) counts the number of non-zero entries in a matrix or vector. We work with data represented as vectors in \(\mathbb{R}^{n}\) with a latent tree structure. Similarly to de Ocariz Borde et al. (2022); Kazi et al. (2022), these can be _encoded_ as hyperbolic features, making them compatible with standard HNN pipelines, using \(\exp_{\mathbf{1}_{n}}\) as a _feature map_. Any such feature map is regular, in that it preserves the approximation capabilities of any downstream deep learning model; see (Kratsios and Bilokopytov, 2020, Corollary 3.16) for details.

**Definition 2** (Hyperbolic Neural Networks): _Let \(n,d\in\mathbb{N}_{+}\). A function \(f:\mathbb{R}^{n}\to\mathbb{H}^{d}\) is called a hyperbolic neural network (HNN) if it admits the iterative representation: for any \(x\in\mathbb{R}^{n}\)_

\[f(x)=\exp_{c^{(I+1)}}\Big(\overline{\oplus}^{c^{(I+1)}}\big(A^{(I+1)}\,(\underline{\oplus}_{c^{(I)}}\circ\log_{c^{(I)}}(x^{(I)}))+b^{(I+1)}\big)\Big)\]
\[x^{(i)}=\mathcal{L}_{c^{(i-1)},b^{(i)},c^{(i)},A^{(i)}}(x^{(i-1)})\qquad\text{for }i=1,\ldots,I\]
\[x^{(0)}=\exp_{c^{(0)}}(\overline{\oplus}^{c^{(0)}}\,x)\]

_where \(I\in\mathbb{N}_{+}\), \(n=d_{0},\ldots,d_{I+2}=d\in\mathbb{N}_{+}\), and for \(i=1,\ldots,I+1\), \(A^{(i)}\) is a \(d_{i+1}\times d_{i}\) matrix, \(b^{(i)}\in\mathbb{R}^{d_{i}}\), \(c^{(i)}\in\mathbb{H}^{d_{i+1}}\subset\mathbb{R}^{d_{i+1}+1}\), and \(c^{(0)}\in\mathbb{H}^{n}\subset\mathbb{R}^{n+1}\)._

In the notation of Definition 2, the integer \(I+1\) is called the _depth_ of the HNN \(f\), and the _width_ of \(f\) is \(\max_{i=0,\ldots,I+1}\,d_{i}\). Similarly to (6), the total number of _trainable parameters_ defining the HNN \(f\), denoted by \(\operatorname{Par}(f)\), is tallied by

\[\operatorname{Par}(f)\stackrel{\text{def.}}{=}\|c^{(0)}\|_{0}+\sum_{i=1}^{I+1}\,\|A^{(i)}\|_{0}+\|b^{(i)}\|_{0}+\|c^{(i)}\|_{0}.\]

Note that the hyperbolic bias \(c^{(i)}\) is shared between any two subsequent layers (in the notation of (6), \(a=c^{(i-1)}\) and \(c=c^{(i)}\) for any \(i=1,\ldots,I+1\)), so we do not _double count_ these parameters.
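To fix ideas, the bias-free elementary layer (3) can be sketched in a few lines; this minimal illustration again assumes the hyperboloid model with \(\kappa=-1\) and tangent vectors at the pole with vanishing last coordinate, and the helper names are ours rather than the paper's.

```python
import numpy as np

def mink(u, v):
    return -u[-1] * v[-1] + np.dot(u[:-1], v[:-1])

def exp_pole(u):
    """exp at 1_m applied to the tangent vector (u, 0): encode R^m into H^m."""
    n = np.linalg.norm(u)
    if n == 0.0:
        return np.append(np.zeros_like(u), 1.0)
    return np.append(np.sinh(n) * u / n, np.cosh(n))

def log_pole(x):
    """Inverse of exp_pole: decode a point of H^n back to R^n."""
    d = np.arccosh(max(x[-1], 1.0))
    if d == 0.0:
        return np.zeros(len(x) - 1)
    return d * x[:-1] / np.linalg.norm(x[:-1])

def hnn_layer(x, A):
    """The bias-free elementary layer of Eq. (3): H^n -> H^m."""
    return exp_pole(np.maximum(A @ log_pole(x), 0.0))   # ReLU applied entrywise

x = exp_pole(np.array([0.4, -1.2]))       # a point of H^2
A = np.random.default_rng(1).normal(size=(3, 2))
y = hnn_layer(x, A)
print(np.isclose(mink(y, y), -1.0))       # True: the output lies on H^3
```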
## 3 Representing Learning with Latent Tree Structures

We now formally define latent tree structures. These capture the actual hierarchical structure between points in a pointcloud. We then formalize what it means to represent those latent tree structures in an ideally low-dimensional representation space with little distortion. During this formalization process we recall some key terminologies pertaining to trees.

### Latent Tree Structure

We now formalize the notion of a _latent tree structure_ between members of a pointcloud in a vector space \(\mathbb{R}^{n}\). We draw from ideas in clustering, where the relationship between pairs of points is not reflected by their configuration in Euclidean space but rather through some unobservable latent distance/structure. Standard examples from the clustering literature include the Mahalanobis distance Xiang et al. (2008) and the Minkowski or \(\ell^{\infty}\) distances Singh et al. (2013), which are implemented in standard software Achtert et al. (2008); De Smedt and Daelemans (2012), and many others; e.g. Ye et al. (2017); Huang et al. (2023); Grande and Schaub (2023). In the graph neural network literature, the relationship between pairs of points is quantified graphically. In the case of latent trees, the relationships between points are induced by a weighted tree graph describing a simple relational structure present in the dataset. The presence of an edge between any two points indicates a direct relationship between the two nodes, with the weight of any such edge quantifying the strength of the relationship between the two connected, and thus related, nodes. This relational structure can be interpreted as a latent hierarchy upon specifying a root node in the latent tree.

Let \(n\) be a positive integer and \(V\) be a non-empty finite subset of \(\mathbb{R}^{n}\), called a _pointcloud_. Illustrated by Figure 3.3, a _latent tree structure_ on \(V\) is a triple \((V,\mathcal{E},\mathcal{W})\) consisting of a collection \(\mathcal{E}\) of pairs \(\{u,v\}\) of \(u,v\in V\), called _edges_, and an edge-weight map \(\mathcal{W}:\mathcal{E}\to(0,\infty)\) satisfying the following property: for every distinct pair \(u,v\in V\) there exists a unique sequence \(u=u_{0},\ldots,u_{i}=v\) of distinct _nodes_ in \(V\) such that the edges \(\{u_{0},u_{1}\},\ldots,\{u_{i-1},u_{i}\}\) belong to \(\mathcal{E}\); called a path from \(u\) to \(v\). Thus, \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\) is a _finite weighted tree_ with positive edge weights. Any latent tree structure \(\mathcal{T}\) on \(V\) induces a distance function, or metric, between the points of \(V\). This distance function, denoted by \(d_{\mathcal{T}}\), measures the _length_ of the shortest path between pairs of points \(u,v\in V\) and is defined by

\[d_{\mathcal{T}}(u,v)\stackrel{\text{def}}{=}\inf\,\sum_{j=0}^{i-1}\,\mathcal{W}\big(\{u_{j},u_{j+1}\}\big)\]

where the infimum is taken over all paths \(\{u_{0},u_{1}\},\ldots,\{u_{i-1},u_{i}\}\) from \(u=u_{0}\) to \(v=u_{i}\). If the weight function satisfies \(\mathcal{W}(\{u,v\})=1\) for every edge \(\{u,v\}\in\mathcal{E}\), then \(\mathcal{T}\) is called a _combinatorial tree_. In this case, the distance between any two nodes \(u,v\in V\) simplifies to the usual shortest path distance on an unweighted graph

\[d_{\mathcal{T}}(u,v)=\inf\big\{i\,:\,\exists\,\{v,v_{1}\},\ldots\{v_{i-1},u\}\in\mathcal{E}\big\}. \tag{7}\]

The _degree_ of any point, or _node/vertex_, \(v\in V\) is the number of edges emanating from \(v\); i.e. the cardinality of \(\{\{u,w\}\in\mathcal{E}:\,v\in\{u,w\}\}\). A node \(v\in V\) is called a _leaf_ of the tree \(\mathcal{T}\) if it has degree 1; e.g. in Figure 1.1a, all peripheral green points are leaves of the binary tree.

Figure 3.3: Figures 3.3(a) and 3.3(b) illustrate pointclouds in \(\mathbb{R}^{2}\) with the same latent tree structure. Both of these trees seem different when comparing their structure using the _Euclidean distances_; considering their latent tree structure instead reveals that they are identical as graphs. This illustrates how Euclidean geometry often fails to detect the true latent (relational) geometry describing the hierarchical structure between points in a pointcloud.
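Concretely, the tree metric \(d_{\mathcal{T}}\) can be computed with any weighted shortest path routine; the following sketch uses NetworkX with illustrative edge weights of our own choosing.

```python
import networkx as nx

# A latent weighted tree on a 5-point pointcloud (edge weights illustrative).
T = nx.Graph()
T.add_weighted_edges_from([(0, 1, 2.0), (1, 2, 0.5), (1, 3, 1.0), (3, 4, 3.0)])
assert nx.is_tree(T)

# d_T(u, v): the length of the unique path between u and v.
d_T = dict(nx.all_pairs_dijkstra_path_length(T))
print(d_T[0][4])   # 2.0 + 1.0 + 3.0 = 6.0

# Leaves are the degree-1 nodes.
print([v for v in T.nodes if T.degree(v) == 1])   # [0, 2, 4]
```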
### Representations as Embeddings

As in Kratsios et al. (2023), a representation, or encoding, of a latent tree structure on \(V\) is simply a function \(f:V\to\mathcal{R}\) into a space \(\mathcal{R}\) equipped with a distance function \(d_{\mathcal{R}}\); the pair \((\mathcal{R},d_{\mathcal{R}})\) is called a _representation space_. As in Giovanni et al. (2022), a representation \(f\) is considered "good" if it accurately preserves the geometry of the latent tree structure \(\mathcal{T}\) on \(V\). Following the classical computer science literature, Linial et al. (1995); Bartal (1996); Rabinovich and Raz (1998); Arora et al. (2009); Magen (2002); STO (2005), this means that \(f\) is injective, or 1-1, and that it neither shrinks nor stretches the distances between pairs of nodes \(u,v\in V\), as measured by \(d_{\mathcal{R}}\), by more than constant factors. For each \(u,v\in V\) the following holds

\[\alpha\,d_{\mathcal{T}}(u,v)\leq d_{\mathcal{R}}\big(f(u),f(v)\big)\leq\beta\,d_{\mathcal{T}}(u,v) \tag{8}\]

where the constants \(0<\alpha\leq\beta<\infty\) are defined by

\[\beta\stackrel{\text{def.}}{=}\max_{\begin{subarray}{c}u,v\in V\\ u\neq v\end{subarray}}\,\frac{d_{\mathcal{R}}\big(f(u),f(v)\big)}{d_{\mathcal{T}}(u,v)},\text{ and }\alpha\stackrel{\text{def.}}{=}\min_{\begin{subarray}{c}u,v\in V\\ u\neq v\end{subarray}}\,\frac{d_{\mathcal{R}}\big(f(u),f(v)\big)}{d_{\mathcal{T}}(u,v)}.\]

These constants quantify the maximal shrinking (\(\alpha\)) and stretching (\(\beta\)) which \(f\) exerts on the geometry induced by the latent tree structure \(\mathcal{T}\) on \(V\). Note that, since \(V\) is finite, \(0<\alpha\leq\beta<\infty\) whenever \(f\) is injective. The total _distortion_ with which \(f\) perturbs the tree structure \(\mathcal{T}\) on \(V\) is denoted by \(\operatorname{dist}(f)\) and defined by

\[\operatorname{dist}(f)\stackrel{\text{def.}}{=}\begin{cases}\frac{\beta}{\alpha}&\text{: if $f$ is injective}\\ \infty&\text{: otherwise}\end{cases} \tag{9}\]

We say that a tree \(\mathcal{T}\) can be _asymptotically isometrically represented_ in \(\mathcal{R}\) if there is a sequence \((f_{n})_{n=1}^{\infty}\) of maps from \(V\) to \(\mathcal{R}\) whose distortion is asymptotically optimal; i.e. \(\lim\limits_{n\to\infty}\,\operatorname{dist}(f_{n})=1\). We note that a sequence of embeddings \(f_{n}\) need not have a limiting function mapping \(V\) to \(\mathcal{R}\) even if its distortion converges to 1; in particular, \((f_{n})_{n\in\mathbb{N}}\) need not converge to an isometry.
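The distortion (9) of any candidate representation can be computed directly from these definitions; in the minimal sketch below, the three-point example metric and the embedding values are assumptions for illustration only.

```python
import itertools
import numpy as np

def distortion(points, d_T, embed, d_R):
    """dist(f) = beta / alpha over all pairs, per (8)-(9); inf if f is not injective."""
    ratios = []
    for u, v in itertools.combinations(points, 2):
        emb = d_R(embed[u], embed[v])
        if emb == 0.0:
            return float("inf")            # f is not injective
        ratios.append(emb / d_T(u, v))
    return max(ratios) / min(ratios)       # beta / alpha

# Toy example: a 3-point star metric embedded in the plane (assumed values).
d_tree = lambda u, v: 1.0 if ("c" in (u, v)) else 2.0   # center 'c', two leaves
embed = {"c": np.zeros(2), "x": np.array([1.0, 0.0]), "y": np.array([0.0, 1.0])}
euclid = lambda p, q: float(np.linalg.norm(p - q))
print(distortion(["c", "x", "y"], d_tree, embed, euclid))  # beta/alpha = sqrt(2)
```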
## 4 Graph Representation Learning Results

This section contains our main results, which establish the main motivation behind HNNs; namely, the belief that they can represent trees to arbitrary precision in a two-dimensional hyperbolic space.

### Lower-Bounds on Distortion for MLP Embeddings of Latent Trees

The power of HNNs is best appreciated when juxtaposed against the lower bounds on the minimal distortion achievable by any MLP embedding a large tree into a low-dimensional Euclidean space. In particular, there cannot exist any MLP model, regardless of its depth, width, or (possibly discontinuous) choice of activation function, which can outperform the embedding of a sufficiently overparameterized HNN using only a two-dimensional representation space.

**Theorem 4.1** (Lower-Bounds on the Distortion of Trees Embedded by MLPs): _Let \(L,n,d\in\mathbb{N}_{+}\), and fix an activation function \(\sigma:\mathbb{R}\to\mathbb{R}\). For any finite \(V\subset\mathbb{R}^{n}\) with a latent combinatorial tree structure \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\) having \(L>2^{d}\) leaves, if \(f:\mathbb{R}^{n}\to\mathbb{R}^{d}\) is an MLP with activation function \(\sigma\) satisfying_

\[\alpha\,d_{\mathcal{T}}(u,v)\leq\|f(u)-f(v)\|\leq\beta\,d_{\mathcal{T}}(u,v)\]

_for all \(u,v\in V\) and some \(0<\alpha\leq\beta\) independent of \(u\) and \(v\), then \(f\) incurs a distortion of at least_

\[\operatorname{dist}(f)\geq\Omega(L^{1/d}).\]

_The constant suppressed by \(\Omega\) is independent of the depth, width, number of trainable parameters, and the activation function \(\sigma\) defining the MLP._

Theorem 4.1 implies that if \(V\subseteq\mathbb{R}^{n}\) is large enough and has a latent tree structure with \(\Omega(4^{d^{2}})\) leaves, then any MLP \(f:\mathbb{R}^{n}\to\mathbb{R}^{d}\) cannot represent \((V,d_{\mathcal{T}})\) with a distortion of less than \(\Omega(4^{d})\). Therefore, if \(d\), \(\#V\), and \(L\) are large enough, any MLP must represent the latent tree structure on \(V\) arbitrarily poorly. We point out that the MLP's structure alone is not the cause of this limitation, since we have not imposed any structural constraints on its depth, width, number of trainable parameters, or its activation function; instead, the cause is the incompatibility between the geometry of a tree and that of a Euclidean space, which no MLP can resolve.

### Upper-Bounds on the Complexity of HNN Embeddings of Latent Trees

Our main positive result shows that the HNN model of Definition 2 can represent any pointcloud with a latent tree structure in the hyperbolic space \(\mathbb{H}_{\kappa}^{d}\) with an arbitrarily small distortion by a low-capacity HNN.

**Theorem 4.2** (HNNs Can Asymptotically Isometrically Represent Latent Trees): _Fix \(n,d,N\in\mathbb{N}_{+}\) with \(d\geq 2\) and fix \(\lambda>1\). For any \(N\)-point subset \(V\) of \(\mathbb{R}^{n}\) and any latent tree structure \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\) on \(V\) of degree at least \(2\), there exists a curvature parameter \(\kappa<0\) and an HNN \(f:\mathbb{R}^{n}\to\mathbb{H}_{\kappa}^{d}\) such that_

\[\frac{1}{\lambda}\,d_{\mathcal{T}}(u,v)\leq d_{\kappa}(f(u),f(v))\leq\lambda\,d_{\mathcal{T}}(u,v)\]

_holds for each pair of \(u,v\in V\). Moreover, the depth, width, and number of trainable parameters defining \(f\) are independent of \(\lambda\), and are recorded in Table 1._

Theorem 4.2 considers HNNs with the typical ReLU activation function. However, using an argument as in (Yarotsky, 2017, Proposition 1), the result can likely be extended to any other continuous piecewise linear activation function with at least one piece/break, e.g. PReLU. Just as in (Yarotsky, 2017, Proposition 1), such modifications should only scale the network depth, width, and the number of its trainable parameters up by a constant factor depending only on the number of pieces of the chosen piecewise linear activation function. Since the size of the tree in Theorem 4.2 did not constrain the embedding quality of an HNN, we immediately deduce the following corollary, which we juxtapose against Theorem 4.1.
**Corollary 4.3** (HNNs Can Asymptotically Embed Large Trees): _Let \(L,n,d\in\mathbb{N}_{+}\) with \(d\geq 2\). For any finite \(V\subset\mathbb{R}^{n}\) with a latent combinatorial tree structure \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\) with \(L>2^{d}\) leaves, and any \(r>0\), there exists a curvature parameter \(\kappa<0\) and an HNN \(f:\mathbb{R}^{n}\to\mathbb{H}_{\kappa}^{d}\) satisfying_

\[\mathrm{dist}(f)\leq 1+\frac{1}{L^{r}}.\]

### Experimental Illustrations

To gauge the validity of our theoretical results, we conduct a performance analysis for tree embedding. We compare the performance of HNNs with that of MLPs through a sequence of synthetic graph embedding experiments. Our primary focus lies on binary, ternary, and random trees. For the sake of an equitable comparison, we contrast MLPs and HNNs employing an equal number of parameters. Specifically, all models incorporate 10 blocks of linear layers, accompanied by batch normalization and ReLU activations, featuring 100 nodes in each hidden layer. The training process spans 10 epochs for all models, employing a batch size of 100,000 and a learning rate of \(10^{-2}\). The \(x\) and \(y\) coordinates of the graph nodes in \(\mathbb{R}^{2}\) are fed into both the MLP and HNN networks, which are tasked to map them to a new embedding space. An algorithm is used to generate input coordinates, simulating a force-directed layout of the tree. In this simulation, edges are treated as springs, pulling nodes together, while nodes are treated as objects with repelling forces akin to an anti-gravity effect. This simulation iterates until the positions reach a state of equilibrium. The algorithm can be reproduced using the NetworkX library and the spring layout for the graph. Counting the neighbourhood hops as in (7) defines the distance between nodes, resulting in a scalar value. The networks must discover a suitable representation to estimate this distance. We update the networks based on the MSE loss comparing the actual distance between nodes, \(d_{true}\), to the predicted distance based on the network mappings, \(d_{pred}\):

\[Loss=MSE(d_{true},d_{pred}). \tag{10}\]

In the case of the MLP, the predicted distance is computed using:

\[d_{pred}=\|MLP(x_{1},y_{1})-MLP(x_{2},y_{2})\|_{2}, \tag{11}\]

where \((x_{1},y_{1})\) and \((x_{2},y_{2})\) are the coordinates in \(\mathbb{R}^{2}\) of nodes of a synthetically generated latent tree (which may be binary, ternary, or random). For the HNN, the predicted distance is

\[d_{pred}=d_{-1}(HNN(x_{1},y_{1}),HNN(x_{2},y_{2})). \tag{12}\]

In the case of the HNN, we use the hyperboloid model, with an exponential map at the pole, to map the representations to hyperbolic space. In particular, we do not even require any hyperbolic biases to observe the gap in performance between the MLP and HNN models, which are trained to embed the latent trees by optimizing the loss (10) with the predicted distances (11) and (12), respectively. We conduct embedding experiments on graphs ranging from 1,000 to 4,000 nodes, and we assess the impact of employing various dimensions for the tree embedding spaces. Specifically, we explore dimensionalities in multiples of 2, ranging from 2 to 8. In Figure 4.4, we can observe that HNNs consistently outperform MLPs at embedding trees, achieving a lower MSE error in all configurations.

Figure 4.4: Tree embedding error surfaces for different trees using MLPs and HNNs.
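A minimal PyTorch sketch of this training objective is given below. The layer sizes, the synthetic coordinate and hop-distance data, and the helper names are illustrative assumptions and do not reproduce the exact experimental configuration described above.

```python
import torch

torch.manual_seed(0)

mlp = torch.nn.Sequential(torch.nn.Linear(2, 100), torch.nn.ReLU(),
                          torch.nn.Linear(100, 2))

def lorentz_exp0(v):
    """Map the R^2 output onto the hyperboloid via the exponential map at the pole."""
    n = v.norm(dim=-1, keepdim=True).clamp_min(1e-9)
    return torch.cat([torch.sinh(n) * v / n, torch.cosh(n)], dim=-1)

def d_hyp(x, y):
    """Hyperboloid distance with kappa = -1, as in Eq. (12)."""
    inner = x[..., -1] * y[..., -1] - (x[..., :-1] * y[..., :-1]).sum(-1)
    return torch.acosh(inner.clamp_min(1.0 + 1e-7))

p1, p2 = torch.randn(256, 2), torch.randn(256, 2)   # node coordinate pairs (assumed)
d_true = torch.randint(1, 10, (256,)).float()       # hop distances (assumed)

opt = torch.optim.Adam(mlp.parameters(), lr=1e-2)
for _ in range(100):
    d_pred = d_hyp(lorentz_exp0(mlp(p1)), lorentz_exp0(mlp(p2)))  # Eq. (12)
    loss = torch.nn.functional.mse_loss(d_pred, d_true)           # Eq. (10)
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```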
We now overview the derivation of our upper bounds on the embedding capabilities of HNNs with ReLU activation function, and of our lower bounds on MLPs with any activation function, for pointclouds with latent tree structure.

## 5 Theoretical analysis

We first prove Theorem 4.1, which follows relatively quickly from the classical results of Matousek (1999) and Gupta (2000) in the metric embedding theory and computer science literature. We then outline the proof of Theorem 4.2, which is much more involved, both technically and geometrically; the details are relegated to Section 5.3. Corollary 4.3 is directly deduced from Theorem 4.2.

### Proof of the Lower-Bound - Theorem 4.1

We begin by discussing the proof of the lower bound.

**Proof** [Proof of Theorem 4.1] Any MLP \(f:\mathbb{R}^{n}\to\mathbb{R}^{d}\) with any activation function \(\sigma:\mathbb{R}\to\mathbb{R}\) for which there exist constants \(0<\alpha\leq\beta\) satisfying

\[\alpha\,d_{\mathcal{T}}(u,v)\leq\|f(u)-f(v)\|\leq\beta\,d_{\mathcal{T}}(u,v)\]

defines a bi-Lipschitz embedding of the tree-metric space \((V,d_{\mathcal{T}})\) into the \(d\)-dimensional Euclidean space. Since \(L>2^{d}\), we may apply (Gupta, 2000, Proposition 5.1), which is a version, valid for general trees, of the negative results of Bourgain (1986) and of Matousek (1999) on the non-embeddability of large trees in Euclidean space. The result, namely (Gupta, 2000, Proposition 5.1), implies that any bi-Lipschitz embedding of \((V,d_{\mathcal{T}})\) into \((\mathbb{R}^{d},\|\cdot\|_{2})\) must incur a distortion no less than \(\Omega(L^{1/d})\); in particular, this is the case for \(f\). Therefore, \(\frac{\beta}{\alpha}\geq\Omega(L^{1/d})\). \(\blacksquare\)

The proof of the lower bound shows that there cannot be _any_ deep learning model which injectively maps \((V,d_{\mathcal{T}})\) into a \(d\)-dimensional Euclidean space with a distortion of less than \(\Omega(L^{1/d})\).

### Proof of Theorem 4.2

We showcase the three critical steps in deriving Theorem 4.2. First, we show that HNNs can implement, i.e. memorize, arbitrary functions from any \(\mathbb{R}^{n}\) to any \(\mathbb{H}^{d}\), for arbitrary integer dimensions \(n\) and \(d\). Second, we construct a sequence of embeddings, whose distortion asymptotically tends to \(1\), into a sequence of hyperbolic spaces \((\mathbb{H}^{d}_{\kappa},d_{\kappa})\) of increasingly negative sectional curvature \(\kappa\). We then apply our memorization result to deduce, quantitatively, that these "asymptotically isometric" embeddings can be implemented by HNNs. The proofs of both main lemmata are relegated to Section 5.3.

**Step 1 - Exhibiting a Memorizing HNN and Estimating its Capacity**

We first need the following quantitative memorization guarantee for HNNs.

**Lemma 5.1** (Upper-Bound on the Memory Capacity of an HNN): _Fix \(N,n,d\in\mathbb{N}_{+}\). For any \(N\)-point subset \(V\subset\mathbb{R}^{n}\) and any function \(f^{\star}:\mathbb{R}^{n}\to\mathbb{H}^{d}\), there exists an HNN \(f:\mathbb{R}^{n}\to\mathbb{H}^{d}\) satisfying_

\[f(v)=f^{\star}(v)\]

_for each \(v\in V\). Moreover, the depth, width, and number of trainable parameters defining \(f\) are bounded above in Table 1._

The capacity estimates for the HNNs constructed in Theorem 4.2 and its supporting Lemma 5.1 depend on the configuration of the pointcloud \(V\) in \(\mathbb{R}^{n}\) with respect to the Euclidean geometry of \(\mathbb{R}^{n}\). The configuration is quantified by the ratio of the largest distance between distinct points over the smallest distance between distinct points, called the _aspect ratio_ in (Kratsios et al., 2023, page 9) and also called the separation condition in the MLP memorization literature; e.g. (Park et al., 2021, Definition 1).
\[\operatorname{aspect}(V)\stackrel{\text{def.}}{=}\frac{\max_{x,\tilde{x}\in V}\|x-\tilde{x}\|_{2}}{\min_{x,\tilde{x}\in V;x\neq\tilde{x}}\|x-\tilde{x}\|_{2}}.\]

Variants of the aspect ratio have also appeared in computer science, e.g. Goemans et al. (2001); Newman and Rabinovich (2023), and in the related metric embedding literature, e.g. Krauthgamer et al. (2005).

#### Step 2 - Constructing an Asymptotically Optimal Embedding Into \(\mathbb{H}^{2}_{\kappa}\)

**Lemma 5.2** (HNNs Universally Embed Trees Into Hyperbolic Spaces): _Fix \(n,d,N\in\mathbb{N}_{+}\) with \(d\geq 2\) and fix \(\lambda>1\). For any \(N\)-point subset \(V\) of \(\mathbb{R}^{n}\) and any latent tree structure \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\) on \(V\) of degree at least \(2\), there exists a map \(f^{\star}:\mathbb{R}^{n}\to\mathbb{H}^{d}\) and a sectional curvature \(\kappa<0\) satisfying_

\[\frac{1}{\lambda}\,d_{\mathcal{T}}(u,v)<d_{\kappa}(f^{\star}(u),f^{\star}(v))<\lambda\,d_{\mathcal{T}}(u,v) \tag{13}\]

_for each \(u,v\in V\). Furthermore, \(\kappa\) tends to \(-\infty\) as \(\lambda\) tends to \(1\)._

#### Step 3 - Memorizing the Embedding Into \(\mathbb{H}^{d}_{\kappa}\) with an HNN

**Proof** [Proof of Theorem 4.2] Fix \(n,d,N\in\mathbb{N}_{+}\) with \(d\geq 2\) and \(\lambda>1\). Let \(V\) be an \(N\)-point subset of \(\mathbb{R}^{n}\) and \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\) be a latent tree structure on \(V\) of degree at least \(2\). By Lemma 5.2, there exist a \(\kappa<0\) and a \(\lambda\)-isometric embedding \(f^{\star}:(V,d_{\mathcal{T}})\to(\mathbb{H}^{d}_{\kappa},d_{\kappa})\); i.e. (13) holds. Since \(V\) is a non-empty finite subset of \(\mathbb{R}^{n}\), we may apply Lemma 5.1 to infer that there exists an HNN \(f:\mathbb{R}^{n}\to\mathbb{H}^{d}\) satisfying \(f(v)=f^{\star}(v)\) for each \(v\in V\). Furthermore, its depth, width, and the number of its trainable parameters are recorded in Table 1. This concludes our proof. \(\blacksquare\)

**Proof** [Proof of Corollary 4.3] The result follows upon taking \(\lambda=(1+L^{-r})^{1/2}\) in Theorem 4.2. \(\blacksquare\)

### Details on the Proof of the Upper-Bound in Theorem 4.2

We now provide the explicit derivations of all the above lemmata used to prove Theorem 4.2.

**Proof** [Proof of Lemma 5.1] **Overview:** _The proof of this lemma can be broken down into \(4\) steps. First, we linearize the function \(f^{\star}\) to be memorized by associating it to a function between Euclidean spaces. Next, we memorize the transformed function in Euclidean space using an MLP with \(\operatorname{ReLU}\) activation function, which we then transform into an \(\mathbb{H}^{d}\)-valued function memorizing \(f^{\star}\). We then show that this transformed MLP can be implemented by an HNN. Finally, we tally the parameters of this HNN representation of the transformed MLP._

_Step 1 - Standardizing Inputs and Outputs of the Function to be Memorized_

Since \(\mathbb{H}^{d}_{\kappa}\) is a simply connected Riemannian manifold of non-positive curvature, the Cartan-Hadamard Theorem, as formulated in (Jost, 2017, Corollary 6.9.1), implies that, for every \(x\in\mathbb{H}^{d}_{\kappa}\), the map \(\exp_{x}:T_{x}(\mathbb{H}^{d}_{\kappa})\to\mathbb{H}^{d}_{\kappa}\) is a global diffeomorphism. Therefore, the map \(\log_{x}:\mathbb{H}^{d}_{\kappa}\to T_{x}(\mathbb{H}^{d}_{\kappa})\) is well-defined and a bijection. In particular, this is the case for \(x=\mathbf{1}_{d}\).
Therefore, the map \(\pi_{d}\circ\log_{\mathbf{1}_{d}}:\mathbb{H}^{d}\to\mathbb{R}^{d}\) is a bijection. Consider the map \(\bar{f}:\mathbb{R}^{n}\to\mathbb{R}^{d}\) defined by

\[\bar{f}\stackrel{\text{def.}}{=}\pi_{d}\circ\log_{\mathbf{1}_{d}}\circ f^{\star}.\]

Note that, since \(\pi_{d}:T_{\mathbf{1}_{d}}(\mathbb{H}^{d})\to\mathbb{R}^{d}\) is a linear isomorphism, it is a bijection and \(\iota_{d}\) is its two-sided inverse. Therefore, the definition of \(\bar{f}\) implies that

\[(\exp_{\mathbf{1}_{d}}\circ\iota_{d})\circ\bar{f}=f^{\star}. \tag{14}\]

_Step 2 - Memorizing the Standardized Function_

Since \(V\subseteq\mathbb{R}^{n}\) and \(\bar{f}:\mathbb{R}^{n}\to\mathbb{R}^{d}\), we may apply (Kratsios et al., 2023, Lemma 20) to deduce that there is an MLP (feedforward neural network) with ReLU activation function \(\tilde{f}:\mathbb{R}^{n}\to\mathbb{R}^{d}\) that interpolates \(\bar{f}\); i.e. there are positive integers \(I\), \(n=d_{0},\ldots,d_{I+2}=d\in\mathbb{N}_{+}\) such that, for each \(i=1,\ldots,I+1\), there is a \(d_{i+1}\times d_{i}\) matrix \(A^{(i)}\) and a vector \(b^{(i)}\in\mathbb{R}^{d_{i+1}}\) implementing the representation

\[\begin{split}&\tilde{f}(u)=A^{(I+1)}\,u^{(I)}+b^{(I+1)}\\ &u^{(i)}=\operatorname{ReLU}\bullet(A^{(i)}u^{(i-1)}+b^{(i)})\\ &u^{(0)}=u\end{split} \tag{15}\]

for each \(u\in\mathbb{R}^{n}\), and satisfying the interpolation/memorization condition

\[\tilde{f}(v)=\bar{f}(v) \tag{16}\]

for each \(v\in V\). Furthermore, its depth, width, and number of non-zero/trainable parameters are:

1. The _width_ of \(\tilde{f}\) is \(n(N-1)+\max\{d,12\}\),
2. The _depth_ \((I)\) of \(\tilde{f}\) is
\[\mathcal{O}\left(N\left\{1+\sqrt{N\log N}\left[1+\frac{\log(2)}{\log(n)}\left(C_{n}+\frac{\log\big(N^{2}\operatorname{aspect}(V,\|\cdot\|_{2})\big)}{\log(2)}\right)_{+}\right]\right\}\right),\]
3. The _number of (non-zero) trainable parameters_ of \(\tilde{f}\) is
\[\mathcal{O}\left(N\left(\frac{11}{4}\max\{n,d\}N^{2}-1\right)\left\{d+\sqrt{N\log N}\left[1+\frac{\log(2)}{\log(n)}\left(C_{n}+\frac{\log\big(N^{2}\operatorname{aspect}(V,\|\cdot\|_{2})\big)}{\log(2)}\right)_{+}\right]\max\{d,12\}(1+\max\{d,12\})\right\}\right).\]

**Comment:** _In the proof of the main result, the aspect ratio \(\operatorname{aspect}(V)\) is not considered with respect to the shortest path distance \(d_{\mathcal{T}}\) on \(V\) given by its latent tree structure, but rather with respect to the Euclidean distance \(\|\cdot\|_{2}\) on \(\mathbb{R}^{n}\). This is because the only role of the MLP \(\tilde{f}\) is to interpolate pairs of points of the function \(\bar{f}\) defined between Euclidean spaces. The ability of an MLP to do so depends on how close those points are to one another in the Euclidean sense._

Combining (14) with (16), together with the fact that \(\exp_{\mathbf{1}_{d}}\circ\iota_{d}\) is a bijection, implies that

\[\begin{split}f^{\star}(v)=&(\exp_{\mathbf{1}_{d}}\circ\iota_{d})\circ\bar{f}(v)\\ =&(\exp_{\mathbf{1}_{d}}\circ\iota_{d})\circ\tilde{f}(v)\\ =&(\exp_{\mathbf{1}_{d}}\circ\iota_{d})\circ(\tilde{f}\circ\log_{\mathbf{1}_{n}})\circ\exp_{\mathbf{1}_{n}}(v)\end{split} \tag{17}\]

holds for every \(v\in V\). It remains to be shown that the function on the right-hand side of (17) can be implemented by an HNN.

_Step 3 - Representing \((\exp_{\mathbf{1}_{d}}\circ\iota_{d})\circ\tilde{f}\) as an HNN_

For \(i=0,\ldots,I+1\), set \(c^{(i)}\stackrel{\text{def}}{=}\mathbf{1}_{d_{i}}\).
Observe that, for each \(i=1,\ldots,I\), \(\exp_{c^{(i)}}\circ\log_{c^{(i)}}=1_{\mathbb{H}^{d_{i}}}\) and that the following holds

\[\begin{split}\underline{\oplus}_{c^{(i)}}\circ\overline{\oplus}^{c^{(i)}}=&\pi_{d_{i}}\circ P_{c^{(i)}\mapsto\mathbf{1}_{d_{i}}}\circ P_{\mathbf{1}_{d_{i}}\mapsto c^{(i)}}\circ\iota_{d_{i}}\\ =&\pi_{d_{i}}\circ P_{\mathbf{1}_{d_{i}}\mapsto\mathbf{1}_{d_{i}}}\circ P_{\mathbf{1}_{d_{i}}\mapsto\mathbf{1}_{d_{i}}}\circ\iota_{d_{i}}\\ =&\pi_{d_{i}}\circ 1_{T_{\mathbf{1}_{d_{i}}}(\mathbb{H}^{d_{i}})}\circ 1_{T_{\mathbf{1}_{d_{i}}}(\mathbb{H}^{d_{i}})}\circ\iota_{d_{i}}\\ =&\pi_{d_{i}}\circ\iota_{d_{i}}\\ =&1_{\mathbb{R}^{d_{i}}},\end{split}\]

where the second equality follows from our definition of \(c^{(i)}\), and the third follows since parallel transport from \(T_{\mathbf{1}_{d_{i}}}(\mathbb{H}^{d_{i}})\) to itself, along the unique distance minimizing curve (geodesic) \(\gamma:[0,1]\rightarrow\mathbb{H}^{d_{i}}\) emanating from and terminating at \(\mathbf{1}_{d_{i}}\) (namely, \(\gamma(t)=\mathbf{1}_{d_{i}}\) for all \(0\leq t\leq 1\)), is the identity map. Therefore, any HNN with the representation of Definition 2 and these specifications of \(c^{(0)},\ldots,c^{(I+1)}\) can be represented as

\[(\exp_{\mathbf{1}_{d}}\circ\iota_{d})\circ(g\circ\log_{\mathbf{1}_{n}})\circ\exp_{\mathbf{1}_{n}}\]

where the map \(g:\mathbb{R}^{n}\rightarrow\mathbb{R}^{d}\) is an MLP with ReLU activation function; i.e. it can be represented as

\[\begin{split}&g(u)=\tilde{A}^{(\tilde{I}+1)}\,u^{(\tilde{I})}+\tilde{b}^{(\tilde{I}+1)}\\ &u^{(i)}=\operatorname{ReLU}\bullet\big(\tilde{A}^{(i)}u^{(i-1)}+\tilde{b}^{(i)}\big)\\ &u^{(0)}=u\end{split} \tag{20}\]

for some integers \(\tilde{I}\), \(n=\tilde{d}_{0},\ldots,\tilde{d}_{\tilde{I}+2}=d\in\mathbb{N}_{+}\), \(\tilde{d}_{i+1}\times\tilde{d}_{i}\) matrices \(\tilde{A}^{(i)}\), and vectors \(\tilde{b}^{(i)}\in\mathbb{R}^{\tilde{d}_{i+1}}\). Setting \(g\stackrel{\text{def}}{=}\tilde{f}\) implies that the map \((\exp_{\mathbf{1}_{d}}\circ\iota_{d})\circ(\tilde{f}\circ\log_{\mathbf{1}_{n}})\circ\exp_{\mathbf{1}_{n}}\) in (17) defines an HNN.

_Step 4 - Tallying Trainable Parameters_

By construction, the depth and width of \(f\) are respectively equal to the depth and width of \(\tilde{f}\). The number of parameters defining \(f\) is equal to the number of parameters defining \(\tilde{f}\) plus \(I+1\), since \(\|c^{(i)}\|_{0}=1\) for each \(i=0,\ldots,I+1\). \(\blacksquare\)

Our proof of Lemma 5.2 relies on some concepts from metric geometry, which we now gather here before deriving the result. A geometric realization of a (positively) weighted graph \(G\) can be seen as the metric space obtained by gluing together real intervals, of lengths equal to the corresponding weights, at corresponding endpoints according to the pattern of \(G\), equipped with the shortest path distance. Following (Das et al., 2017, Definition 3.1.1), this is formalized as the following metric space.
Our proof of Lemma 5 relies on some concepts from metric geometry, which we now gather here before deriving the result. A geometric realization of a (positively) weighted graph \(G\) can be seen as the metric space obtained by gluing together real intervals of lengths equal to the corresponding weights at corresponding endpoints, according to the pattern of \(G\), with the shortest path distance. Following (Das et al., 2017, Definition 3.1.1), this is formalized as the following metric space.

(Geometric Realization Of A Weighted Tree): _A geometric realization of a weighted tree \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\) is the metric space \((X_{\mathcal{T}},d_{X_{\mathcal{T}}})\) whose point set is \(X_{\mathcal{T}}=V\cup\left(\bigcup_{\{u,v\}\in\mathcal{E}}\left(\{(u,v)\}\times[0,W(u,v)]\right)\right)/\sim\), where \(\sim\) denotes the quotient defined by the following identifications_ \[v\sim((v,u),0)\quad\text{ for all }\{u,v\}\in\mathcal{E}\] \[((v,u),t)\sim((u,v),W(\{u,v\})-t)\quad\text{ for all }\{u,v\}\in\mathcal{E}\text{ and all }t\in[0,W(\{u,v\})]\] _and whose metric \(d_{X_{\mathcal{T}}}\) maps any pair of (equivalence classes) \(((u_{0},u_{1}),t)\) and \(((v_{0},v_{1}),s)\) in \(X_{\mathcal{T}}\) to the non-negative number_ \[\min_{i,j\in\{0,1\}}|t-iW(u_{0},u_{1})|+d_{\mathcal{T}}(u_{i},v_{j})+|s-jW(v_{0},v_{1})|.\]

We call a metric space a _simplicial tree_ if it is essentially the same as a tree whose edges are finite closed real intervals, with the shortest path distance. Simplicial trees are a special case of the following broader class of well-studied metric spaces, which we introduce to synchronize with the metric geometry literature, since many of its results are formulated using this broader class.

(\(\mathbb{R}\)-Tree): _A metric space \((X,d)\) is called an \(\mathbb{R}\)-tree if \(X\) is connected and for all \(x,y,z,w\in X\),_ \[(x,y)_{w}\geq\min\{(x,z)_{w},(z,y)_{w}\},\] _where \((x,y)_{w}\) denotes the Gromov product_ \[(x,y)_{w}=\frac{1}{2}[d(x,w)+d(w,y)-d(x,y)].\]

[Valency]: _The valency of a metric space \(X\) at a point \(x\) is defined as the cardinality of the set of connected components of \(X\backslash\{x\}\)._

(Proof of Lemma 5): _The proof of this lemma can be broken down into the following steps. First, we isometrically embed the tree into an \(\mathbb{R}\)-tree, thus representing our discrete space as a more tractable connected (uniquely) geodesic metric space. This \(\mathbb{R}\)-tree is then isometrically embedded into a canonical \(\mathbb{R}\)-tree whose structure is regular and for which embeddings are exhibited more easily. Next, we "asymptotically embed" this regular \(\mathbb{R}\)-tree into the boundary of the hyperbolic space, upon perturbing the embedding and adjusting the curvature of the hyperbolic space. We deduce the lemma upon composing all three embeddings._

_Step 1 - Isometric Embedding of \((V,d_{\mathcal{T}})\) Into an \(\mathbb{R}\)-Tree_

If \(V\) has only one point, then the result is trivial. Therefore, we will always assume that \(V\) has at least two points. For each vertex \(v\in V\), pick a different \(w^{v}\in V\) such that \(\{v,w^{v}\}\in\mathcal{E}\) and \(W(v,w^{v})\geq W(v,u)\) for all \(\{u,v\}\in\mathcal{E}\); i.e. \(w^{v}\) is adjacent to \(v\) in the weighted graph \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\). Consider the map \(\varphi_{1}:V\to X_{\mathcal{T}}\) defined for any vertex \(v\in V\) by \[\varphi_{1}:v\mapsto((v,w^{v}),0).\] By definition \((X_{\mathcal{T}},d_{X_{\mathcal{T}}})\) is a simplicial tree and therefore, by (Das et al., 2017, Corollary 3.1.13), it is an \(\mathbb{R}\)-tree. In every \(\mathbb{R}\)-tree there is a unique shortest path (geodesic) connecting any pair of points.
This is because, by (Das et al., 2017, Observation 3.2.6), all \(\mathbb{R}\)-trees satisfy the CAT\((-1)\) condition, as defined in (Bridson and Haefliger, 1999, Definition II.1.1), and in any metric space satisfying the CAT\((-1)\) condition there is exactly one shortest path connecting every pair of points, by (Bridson and Haefliger, 1999, Chapter II.1 - Proposition 1.4 (1)). Moreover, (Chiswell, 2001, Chapter 3 - Lemma 1.4) implies that if \(x=x_{0},\ldots,x_{N}=y\) are (distinct) points in an \(\mathbb{R}\)-tree such as \((X_{\mathcal{T}},d_{X_{\mathcal{T}}})\), for some \(N\in\mathbb{N}\), lying on _the_ geodesic (minimal length path) joining \(x\) to \(y\), then \[d_{X_{\mathcal{T}}}(x,y)=\sum_{i=0}^{N-1}\,d_{X_{\mathcal{T}}}(x_{i},x_{i+1}). \tag{21}\]

Since \(\mathcal{T}=(V,\mathcal{E},\mathcal{W})\) is a weighted tree, there is exactly one path of distinct points joining any two nodes (a so-called reduced path in \(\mathcal{T}\)), independently of the weighting function \(\mathcal{W}\); see e.g. (Chiswell, 2001, Chapter 2 - Lemma 1.4). Therefore, for any \(u,v\in V\) there exists exactly one such finite sequence \(u=u_{0},\ldots,u_{N}=v\) of distinct points (whenever \(u\neq v\), the case \(u=v\) being trivial). By definition of \(\varphi_{1}\) and the above remarks on \((X_{\mathcal{T}},d_{X_{\mathcal{T}}})\) being uniquely geodesic, there exists exactly one constant-speed geodesic (minimal length curve) \(\gamma:[0,1]\to X_{\mathcal{T}}\) satisfying \[\gamma(t_{i})=\varphi_{1}(u_{i})\] for some distinct "times" \(0=t_{0}<\cdots<t_{N}=1\). Therefore, (21) implies that \[d_{X_{\mathcal{T}}}(\varphi_{1}(u),\varphi_{1}(v))=\sum_{i=0}^{N-1}\,d_{X_{\mathcal{T}}}(\varphi_{1}(u_{i}),\varphi_{1}(u_{i+1})). \tag{22}\]

Since, for \(i=0,\ldots,N-1\), \(u_{i}\) and \(u_{i+1}\) are adjacent in \(\mathcal{T}\), meaning that \(\{u_{i},u_{i+1}\}\in\mathcal{E}\), the distance \(d_{X_{\mathcal{T}}}(\varphi_{1}(u_{i}),\varphi_{1}(u_{i+1}))\) reduces to \[d_{X_{\mathcal{T}}}(\varphi_{1}(u_{i}),\varphi_{1}(u_{i+1}))=\min_{k=0,1,\,j=0,1}\big{|}0-kW(u_{i},w^{u_{i}})\big{|}+d_{\mathcal{T}}(w_{k},v_{j})+\big{|}0-jW(u_{i+1},w^{u_{i+1}})\big{|} \tag{23}\] \[=\big{|}0-0\cdot W(u_{i},w^{u_{i}})\big{|}+W(u_{i},u_{i+1})+\big{|}0-0\cdot W(u_{i+1},w^{u_{i+1}})\big{|} \tag{24}\] \[=W(u_{i},u_{i+1}), \tag{25}\] where \(w_{0}=u_{i}\), \(w_{1}=w^{u_{i}}\), \(v_{0}=u_{i+1}\), \(v_{1}=w^{u_{i+1}}\), and (24) holds by definition of \(w^{u_{i}}\) and \(w^{u_{i+1}}\) together with the fact that \(\{u_{i},u_{i+1}\}\in\mathcal{E}\), which implies that \(\{u_{i},u_{i+1}\}\) is a geodesic in \((V,d_{\mathcal{T}})\). Combining the computation in (23)-(25) with (22) yields \[d_{X_{\mathcal{T}}}(\varphi_{1}(u),\varphi_{1}(v))=\sum_{i=0}^{N-1}\,W(u_{i},u_{i+1})=d_{\mathcal{T}}(u,v), \tag{26}\] where the right-hand side of (26) holds since \((\{u_{i},u_{i+1}\})_{i=0}^{N-1}\) is the unique path of distinct points in \(\mathcal{T}\) from \(u\) to \(v\), and by definition of the shortest path distance in a graph. Consequently, (26) shows that \(\varphi_{1}\) is an isometric embedding of \((V,d_{\mathcal{T}})\) into \((X_{\mathcal{T}},d_{X_{\mathcal{T}}})\).
_Step 2 - Embedding \((X_{\mathcal{T}},d_{X_{\mathcal{T}}})\) Into A Universal \(\mathbb{R}\)-Tree_

Since \((X_{\mathcal{T}},d_{X_{\mathcal{T}}})\) has valency at most \(1\leq\mu=\deg(\mathcal{T})<\#\mathbb{N}<2^{\aleph_{0}}\), (Dyubina and Polterovich, 2001, Theorem 1.2.3 (i)) implies that there exists an \(\mathbb{R}\)-tree \((A_{\mu},d_{A_{\mu}})\) of valency at most \(\mu\) and an isometric embedding \(\varphi_{2}:(X_{\mathcal{T}},d_{X_{\mathcal{T}}})\to(A_{\mu},d_{A_{\mu}})\).

Footnote 10: The metric space \((A_{\mu},d_{A_{\mu}})\) is constructed explicitly in (Dyubina and Polterovich, 2001, Definition 1.1.1), but its existence dates back earlier to Nikiel (1989); Mayer et al. (1992).

_Step 3 - The Universal \(\mathbb{R}\)-Tree At \(\infty\) In the Hyperbolic Space \((\mathbb{H}_{-1}^{d},d_{-1})\)._

By (Bridson and Haefliger, 1999, Proposition 6.17) the hyperbolic space \((\mathbb{H}_{-1}^{d},d_{-1})\) has the structure of a simply connected and geodesically complete Riemannian manifold, and by the computations in (Jost, 2017, pages 276-277) it has constant negative sectional curvature equal to \(-1\). Since \(\mu<2^{\aleph_{0}}\), these properties of \((\mathbb{H}_{-1}^{d},d_{-1})\) guarantee that (Dyubina and Polterovich, 2001, Theorem 1.2.3 (i)) applies. This theorem, together with (Dyubina and Polterovich, 2001, Definition 1.2.1), implies that the following holds: there is a diverging sequence \((\lambda_{n})_{n=0}^{\infty}\) of positive real numbers such that for every \(x\in A_{\mu}\) there is a sequence \((x^{n})_{n=0}^{\infty}\) in \(\mathbb{H}^{d}\) such that for every \(\varepsilon>0\) there is an \(n_{\varepsilon}\in\mathbb{N}_{+}\) such that for every integer \(n\geq n_{\varepsilon}\) we have \[\sup_{x,y\in A_{\mu}}\Big{|}\frac{d_{-1}(x^{n},y^{n})}{\lambda_{n}}-d_{A_{\mu}}(x,y)\Big{|}<\varepsilon/2. \tag{27}\]

In particular, (27) holds for all \(x,y\in\varphi_{2}\circ\varphi_{1}(V)\subseteq A_{\mu}\). Since \(V\) is finite, so is \(\varphi_{2}\circ\varphi_{1}(V)\), and since \(\mathbb{H}^{d}\) is simply connected, for every \(x\in\varphi_{2}\circ\varphi_{1}(V)\) there exists a point \(\tilde{x}^{n_{\varepsilon}}\) for which \(d_{-1}(x^{n_{\varepsilon}},\tilde{x}^{n_{\varepsilon}})<\varepsilon/4\) and such that \(\{\tilde{x}^{n_{\varepsilon}}\}_{x\in\varphi_{2}\circ\varphi_{1}(V)}\) and \(\varphi_{2}\circ\varphi_{1}(V)\) have equal numbers of points. Since \(\varphi_{2}\) and \(\varphi_{1}\) are isometric embeddings, they are injective; whence \(\{\tilde{x}^{n_{\varepsilon}}\}_{x\in\varphi_{2}\circ\varphi_{1}(V)}\) and \(V\) have equal numbers of points. Define \(\varphi_{3}:\varphi_{2}\circ\varphi_{1}(V)\to\mathbb{H}^{d}\) by \(x\mapsto\tilde{x}^{n_{\varepsilon}}\), for each \(x\in\varphi_{2}\circ\varphi_{1}(V)\). The map \(\varphi_{3}:\varphi_{2}\circ\varphi_{1}(V)\to\mathbb{H}^{d}\) is then injective and, by (27), it satisfies \[\max_{x,y\in\varphi_{2}\circ\varphi_{1}(V)}\left|\frac{d_{-1}(\varphi_{3}(x),\varphi_{3}(y))}{\lambda_{n_{\varepsilon}}}-d_{A_{\mu}}(x,y)\right|<\varepsilon.\] Define the map \(f^{\star}:\mathbb{R}^{n}\to\mathbb{H}^{d}\) as _any_ extension of the map \(\varphi_{3}\circ\varphi_{2}\circ\varphi_{1}:V\to\mathbb{H}^{d}\).
Thus, \(f^{\star}|_{V}=(\varphi_{3}\circ\varphi_{2}\circ\varphi_{1})|_{V}\) and therefore \(f^{\star}\) satisfies the following \[d_{\mathcal{T}}(u,v)-\varepsilon=d_{A_{\mu}}(\varphi_{2}\circ\varphi_{1}(u),\varphi_{2}\circ\varphi_{1}(v))-\varepsilon<\frac{d_{-1}(f^{\star}(u),f^{\star}(v))}{\lambda_{n_{\varepsilon}}} \tag{28}\] \[\frac{d_{-1}(f^{\star}(u),f^{\star}(v))}{\lambda_{n_{\varepsilon}}}<d_{A_{\mu}}(\varphi_{2}\circ\varphi_{1}(u),\varphi_{2}\circ\varphi_{1}(v))+\varepsilon=d_{\mathcal{T}}(u,v)+\varepsilon \tag{29}\] for each \(u,v\in V\), where the equalities in (28) and (29) hold by virtue of \(\varphi_{1}\) and \(\varphi_{2}\) being isometries and since the composition of isometries is itself an isometry.

_Step 4 - Selecting The Correct Curvature on \(\mathbb{H}^{d}_{\kappa}\) by Re-scaling The Metric \(d_{-1}\)._

Set \(\kappa_{\varepsilon}\stackrel{{\text{\tiny def.}}}{{=}}-\lambda_{n_{\varepsilon}}^{2}\). By the definition of \(d_{\kappa_{\varepsilon}}\), see (Bridson and Haefliger, 1999, Definition 2.10), the chain of inequalities (28)-(29) implies that \[d_{\mathcal{T}}(u,v)-\varepsilon<d_{\kappa_{\varepsilon}}(f^{\star}(u),f^{\star}(v))<d_{\mathcal{T}}(u,v)+\varepsilon. \tag{30}\] Set \(\delta\stackrel{{\text{\tiny def.}}}{{=}}\min_{x,\tilde{x}\in V;x\neq\tilde{x}}d_{\mathcal{T}}(x,\tilde{x})\), which is positive since \(V\) is finite. Note that for any \(0<\varepsilon<\delta\), the distortion of \(f^{\star}\) is at most \[\max_{\begin{subarray}{c}x,\tilde{x}\in V\\ x\neq\tilde{x}\end{subarray}}\frac{d_{\mathcal{T}}(x,\tilde{x})+\varepsilon}{d_{\mathcal{T}}(x,\tilde{x})-\varepsilon}=\frac{\delta+\varepsilon}{\delta-\varepsilon} \tag{31}\] and that the right-hand side of (31) tends to \(1\) as \(\varepsilon\to 0\). Thus, we may choose \(\varepsilon>0\) small enough to ensure the desired distortion, relabelling \(\kappa\stackrel{{\text{\tiny def.}}}{{=}}\kappa_{\varepsilon}\) accordingly.

## 6 Conclusion

We have established lower bounds on the smallest achievable distortion by any Multi-Layer Perceptron (MLP) embedding a large latent metric tree into a Euclidean space, as proven in Theorem 4.1. Our lower bound holds true independently of the depth, width, number of trainable parameters, and even the (possibly discontinuous) activation function used to define the MLP. In contrast to this lower bound, we have demonstrated that Hyperbolic Neural Networks (HNNs) can effectively represent any latent tree in a 2-dimensional hyperbolic space, with a trainable constant curvature parameter. Furthermore, we have derived upper bounds on the capacity of the HNNs implementing such an embedding and have shown that it depends at worst polynomially on the number of nodes in the graph. To the best of the authors' knowledge, this constitutes the first proof that HNNs are well-suited for representing graph structures, while also being the first evidence that MLPs are not. Thus, our results provide mathematical support for the notion that HNNs possess a superior inductive bias for representation learning in data with latent hierarchies, thereby reinforcing a widespread belief in the field of geometric deep learning.

## 7 Acknowledgment and Funding

AK acknowledges financial support from the NSERC Discovery Grant No. RGPIN-2023-04482 and their McMaster Startup Funds. RH was funded by the James Steward Research Award and by AK's McMaster Startup Funds. HSOB acknowledges financial support from the Oxford-Man Institute of Quantitative Finance for computing support. The authors would also like to thank Paul McNicholas and A. Martina Neuman for their helpful discussions.
2301.05161
Electronic Kapitza conductance and related kinetic coefficients at an interface between n-type semiconductors
We calculate the Kapitza conductance, which is the proportionality coefficient between heat flux and temperature jump at the interface, for the case of two conducting solids separated by the interface. We show that for conducting solids in a non-equilibrium state, there should also arise the electrochemical potential jump at the interface. Hence to describe linear transport at the interface we need three kinetic coefficients: interfacial analogs of electric and heat conductances and interfacial analog of the Seebeck coefficient. We calculate these coefficients for the case of an interface between n-type semiconductors. We perform calculations in the framework of Boltzmann transport theory. We have found out that the interfacial analog of the Seebeck coefficient for some range of parameters of the considered semiconductors, has a high value of about $10^{-3}$ V/K. Thus this effect has the potential to be used for the synthesis of effective thermoelectric materials.
A. P. Meilakhs
2023-01-12T17:28:42Z
http://arxiv.org/abs/2301.05161v2
Electronic Kapitza resistance and related kinetic coefficients at an interface between n-type semiconductors

###### Abstract

We calculate the Kapitza conductance, which is the proportionality coefficient between the heat flux and the temperature jump at the interface, for two electronic subsystems separated by the interface. We observe that for electronic subsystems in a non-equilibrium state, there should also arise an electrochemical potential jump at the interface. Hence, to describe linear transport at the interface we need three kinetic coefficients: interfacial analogs of the electric and heat conductances and an interfacial analog of the Seebeck coefficient. We calculate these coefficients for the interface between n-type semiconductors. We have found that the interfacial analog of the Seebeck coefficient, for some range of parameters of the considered semiconductors, has a high value of about \(10^{-3}\) V/K. Thus this effect has the potential to be used for the synthesis of effective thermoelectric materials.

## I Introduction

When heat flows through an interface between materials, a temperature jump occurs at the interface. The proportionality coefficient between the heat flux and the temperature jump is called the Kapitza conductance [1]. After the discovery of the phenomenon, it was very soon realized that the temperature jump is due to the reflection of phonons at the interface [2]. To calculate the value of the Kapitza conductance, two models were developed that describe the behavior of phonons at the interface differently. The Acoustic Mismatch Model (AMM) assumes that transmission and reflection of phonons at the interface can be described with elasticity theory [2; 3]. The Diffuse Mismatch Model (DMM) assumes that interfacial scattering is so strong that a phonon incident on the interface "forgets" its initial direction and is scattered uniformly in all directions [4; 5].

Nowadays the study of Kapitza conductance is developing in many directions. Some papers are concerned with improving the understanding of the dynamics of the crystal lattice at the interface with computer simulations [6; 7; 8; 9; 10; 11] or analytically [12; 13; 14]. Others concern phonon kinetics at the interface [15; 16; 17; 18]. Often the nonequilibrium Green's function method is used for calculations [19; 20; 21]. Not only the theory is developed, but new experiments are also perpetually conducted [22; 23; 24; 25]. The reason for such development is not only the intrinsic interest of any discontinuous phenomenon in physics but also the importance of applications [26; 27; 28].

Nowhere in Kapitza conductance research are any very specific properties of phonons used. The property that gives rise to the temperature jump at the interface is the reflection of phonons at the interface, which is just a consequence of phonons being waves. Since electrons are waves too, they also reflect at the interface, and that should give rise to a temperature jump between two electronic subsystems separated by an interface.

Papers concerning electron transport phenomena in a local region mostly originate from the famous Landauer paper [29] describing one-dimensional transport in disordered media. Later his ideas were generalized to three dimensions [30]. Further developments included taking into account an external magnetic field [31], electron-electron interaction in the interfacial region [32; 33], and better numerical methods of calculation [34; 35]. These approaches are thoroughly reviewed in [36; 37].
However, all these papers only investigate electrical current and not heat flux. They address transport in a region of finite length and do not use the notion of a sharp jump, which is very natural in the context of interfacial kinetics. Also, almost all papers about electronic transport at the interface use Green's function formalism and the Kubo formula for calculations. Here we propose a formalism based on the Boltzmann kinetic equation, which was developed for phonon heat transfer through an interface [38; 39].

For electrons, not only the energy density and energy flux but also the electric charge and electric current are conserved. More conservation laws cause two jumps at the interface instead of one: the temperature jump and the electrochemical potential jump. This point will be formulated more precisely in Section 3 of the present paper. So there are four coefficients that relate currents through the interface to jumps at the interface. By the Onsager relations, two of them are equal. So we have three kinetic coefficients: interfacial analogs of the electric and heat conductances and a thermoelectric coefficient. The latter is of great practical importance, since during the last decade thermoelectricity has turned [40] into one of the most important subjects of applied physics [41; 42]. The goal is to produce a thermoelectric generator [43] with the largest possible figure of merit \(ZT\), and some new principles have been invented to enhance the value of this parameter [44]. Recent advances and more literature can be found in the review [45]. In this paper, we will show how the interfacial thermoelectric effect can be used for the production of a material with very high values of the thermoelectric coefficient.

Since we suggest some new ideas, for the investigation we have chosen the simplest case to illustrate them. We calculate the coefficients characterizing transport through the interface by electrons for the interface between n-type semiconductors. For the interface between metals, we would have to consider many-particle effects. For intrinsic semiconductors, we would have to take into account electrons and holes simultaneously. For p-type semiconductors, we would have to deal with the complicated structure of the valence band and consider both light and heavy holes. So naturally, we choose n-type semiconductors with a simple band structure, particularly Ga\({}_{x}\)In\({}_{1-x}\)As with different values of \(x\) on both sides of the interface. We also choose the electron analog of the DMM to describe electron scattering at the interface. We will formulate this model more specifically in Section 3. The DMM is not a very accurate model for phonons, but it simplifies the calculations greatly and usually can predict an order of magnitude correctly [46]. We think this model, modified accordingly, can be useful for electron transport calculations as well.

## II General theory

In the kinetic theory of homogeneous media we have this type of relation between gradients and flows:

\[\begin{split} q&=L_{TT}\nabla T+L_{TE}\nabla U^{*}\\ j&=L_{ET}\nabla T+L_{EE}\nabla U^{*}\end{split} \tag{1}\]

Here \(U^{*}\) is the effective electric potential, that is, the electrochemical potential divided by the electron charge: \(U^{*}=\mu/e+U=\zeta/e\). With all four \(L\)-s we can express all measurable kinetic coefficients, such as conductivity, thermal conductance, and the Seebeck and Peltier coefficients [47]. For the interfacial case, we can write down analogous equations.
Now we have jumps instead of gradients, and we write inverse proportionality coefficients, since those are the ones that we can calculate naturally.

\[\begin{split}\Delta T&=M_{TT}\,q+M_{TE}\,j\\ \Delta U^{*}&=M_{ET}\,q+M_{EE}\,j\end{split} \tag{2}\]

Because of the Onsager reciprocal relations, which our calculations obey with great, about 1 %, accuracy, \(M_{TE}=TM_{ET}\). Here we note one more difference between the homogeneous case and the interfacial case. In homogeneous media, in the stationary case, \(\nabla U^{*}\) is actually just an ordinary electric field \(E\), since in the stationary case the current flows through an electro-neutral medium, and charge cannot be stored anywhere. In the case of an ideally sharp interface, \(\Delta U^{*}\) is actually \(\Delta\mu/e\), a jump of the chemical potential, since a finite jump of the electric potential over zero distance would mean an infinite electric field. Of course, for non-ideal interfaces of finite length, there can be contributions of both \(\Delta\mu/e\) and \(\Delta U\). We will not take into account the difference between \(\mu/e\) and \(U\), since from the point of view of kinetics they are indistinguishable [48].

Also, we should note here that we only develop linear response theory. The interface between semiconductors is famous for being able to generate nonlinear current-voltage characteristics [49]. The typical expression is \(I\sim\exp(eV/kT)-1\), where \(I\) is the current and \(V\) is the voltage applied to the interfacial layer. Here we only describe the region of parameters \(kT\gg eV\), so \(I\sim V\) and the current-voltage characteristic is linear. In our notation this means

\[\begin{split}\Delta T&\ll T\\ \Delta U^{*}&\ll kT/e.\end{split} \tag{3}\]

In the next section, we will find out how to calculate those \(M\)-s in formulae (2). Now we want to introduce proper kinetic coefficients and express them with the \(M\)-s. In analogy with the homogeneous case, we want to introduce three measurable parameters. The thermal Kapitza conductance \(K_{T}\) is analogous to heat conductance and is measured under the condition \(j=0\). The electrical Kapitza conductance \(K_{E}\) is analogous to electric conductivity and is measured under the condition \(\Delta T=0\). Finally, there is the analog of the Seebeck coefficient, \(K_{S}\), which is the proportionality between \(\Delta T\) and \(\Delta U^{*}\), under the assumption that \(j=0\). Substituting the mentioned conditions into (2) yields

\[\begin{split} K_{T}&=1/M_{TT}\\ K_{E}&=\left(M_{EE}-M_{ET}M_{TE}/M_{TT}\right)^{-1}\\ K_{S}&=M_{ET}/M_{TT}.\end{split} \tag{4}\]

These are the desired measurable coefficients. Let us consider the third one, since its use has some interesting issues. Thermoelectricity is often referred to as a contact phenomenon. In the case of the Peltier effect, heat fluxes are generated by electric fields inside the media, but it is the contact between the media that is being heated. The Seebeck coefficient can only be measured with respect to some other material, because the material of the measuring device experiences just the same temperature difference between its ends as the material that is being measured [48].

Figure 1: Shown is material 1, one of whose ends is a heat source and the other a heat drain. The source is kept at temperature \(T_{H}\), the drain at temperature \(T_{C}\). Heat flux \(q_{1}\) goes through material 1. Also shown is material 2, with heat flux \(q_{2}\). Because of temperature jumps at the interface, the ends of material 2 have temperatures \(T_{H}-\Delta T\) and \(T_{C}+\Delta T\).
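As a compact summary of (4), the following Python sketch converts the computed \(M\)-s of (2) into the measurable interfacial coefficients (the numerical values in the example are placeholders, not results of this paper):

```python
def interfacial_coefficients(M_TT, M_TE, M_ET, M_EE):
    """Measurable interfacial coefficients (4) from the M's of (2)."""
    K_T = 1.0 / M_TT                          # thermal Kapitza conductance (at j = 0)
    K_E = 1.0 / (M_EE - M_ET * M_TE / M_TT)   # electrical Kapitza conductance (at dT = 0)
    K_S = M_ET / M_TT                         # interfacial Seebeck analog (at j = 0)
    return K_T, K_E, K_S

# Hypothetical M's; by the Onsager relation, M_TE = T * M_ET.
T, M_TT, M_ET, M_EE = 300.0, 1e-3, 1e-8, 1e-8
print(interfacial_coefficients(M_TT, T * M_ET, M_ET, M_EE))
```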
We want to distinguish the phenomena of temperature and electrochemical potential jumps at the interface from those aspects of the usual thermoelectric phenomena. To do so, we consider the classic thermocouple experiment and take the interfacial jumps into account. Consider the material that is being measured. One end is kept at temperature \(T_{H}\), the other at temperature \(T_{C}\). The piece of material is isolated, so there is no electric current in it: \(j=0\). Since the system is static but not in equilibrium, there is a heat current \(q_{1}\). Now we attach the measuring device made of another material (Fig. 1). The heat flux through this material is going to be \(q_{2}\). Since the heat source and heat sink are placed in material 1, the heat flux through the interfaces will also be the same as through material 2, that is, \(q_{2}\). Considering the temperature and electrochemical potential jumps at the interface, the measured voltage is

\[V=2M_{ET}q_{2}-\int\limits_{T_{C}+\Delta T}^{T_{H}-\Delta T}\alpha_{2}\,dT+\int\limits_{T_{C}}^{T_{H}}\alpha_{1}\,dT \tag{5}\]

We neglect the temperature dependence of \(\alpha_{1},\alpha_{2}\), so we can rewrite the previous equation as

\[V=2M_{ET}q_{2}+2\Delta T\alpha_{2}+\int\limits_{T_{C}}^{T_{H}}(\alpha_{1}-\alpha_{2})\,dT. \tag{6}\]

Since \(\Delta T\) is proportional to \(q\), we can further rewrite it as

\[V=2(M_{ET}+M_{TT}\alpha_{2})q_{2}+(T_{H}-T_{C})(\alpha_{1}-\alpha_{2}). \tag{7}\]

The second term on the right side is the well-known Seebeck effect. The first term is the additional contribution from the interfaces. Now we want to express the heat flux in terms of temperatures. So we write

\[T_{H}-T_{C}=2M_{TT}q_{2}+l_{2}\kappa_{2}^{-1}q_{2}. \tag{8}\]

Here we find \(q_{2}\) and substitute it into Eq. (7), and finally we arrive at

\[V=\left(\frac{K_{S}+\alpha_{2}}{1+\frac{l_{2}K_{T}}{2\kappa_{2}}}+\alpha_{1}-\alpha_{2}\right)(T_{H}-T_{C}). \tag{9}\]

Here we have used expressions (4). We can see that the experimentally measurable quantity is indeed expressed with the \(K\)-s. The large fraction in expression (9) represents the contribution of the boundaries to the total measured voltage. Looking at its denominator we can clearly see that if the heat resistivity of the boundaries is much smaller than the resistivity of the inner part of the second material, this contribution is negligible. In the opposite case, the contribution to the thermoelectric effect of the second material is replaced by the contribution of the boundaries:

\[V=(K_{S}+\alpha_{1})\,(T_{H}-T_{C}). \tag{10}\]

If the heat source and heat drain are placed on the other sides of the interfaces, inside material 2, we will have a different expression. However, we can obtain it from (9) just by changing the indexes and signs for every kinetic coefficient. We can have a more interesting result if the heat source is placed in material 1 and the heat drain is placed in material 2 (or vice versa). For this case we can rewrite expression (5):

\[V=M_{ET}q_{2}-M_{ET}q_{1}-\int\limits_{T_{C}}^{T_{H}-\Delta T}\alpha_{2}\,dT+\int\limits_{T_{C}+\Delta T}^{T_{H}}\alpha_{1}\,dT. \tag{11}\]

With the same operations as previously, we obtain

\[V=\left(\frac{K_{S}+\alpha_{2}}{1+\frac{l_{2}K_{T}}{\kappa_{2}}}-\frac{K_{S}+\alpha_{1}}{1+\frac{l_{1}K_{T}}{\kappa_{1}}}+\alpha_{1}-\alpha_{2}\right)(T_{H}-T_{C}). \tag{12}\]

Again, if the lengths of the materials are sufficiently large, we will have just the classic formula for thermocouples. However, if the lengths are small, we will get:

\[V=(K_{S}-K_{S})(T_{H}-T_{C})=0. \tag{13}\]
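A short numerical sketch of (9) and (12), with the denominators written as above (all parameter values below are hypothetical, chosen only for illustration), confirms the cancellation (13) in the short-length limit:

```python
def V_same_side(K_T, K_S, a1, a2, l2, kappa2, TH, TC):
    """Measured voltage (9): heat source and drain both inside material 1."""
    return ((K_S + a2) / (1.0 + l2 * K_T / (2.0 * kappa2)) + a1 - a2) * (TH - TC)

def V_opposite_sides(K_T, K_S, a1, a2, l1, kappa1, l2, kappa2, TH, TC):
    """Measured voltage (12): source in material 1, drain in material 2."""
    t2 = (K_S + a2) / (1.0 + l2 * K_T / kappa2)
    t1 = (K_S + a1) / (1.0 + l1 * K_T / kappa1)
    return (t2 - t1 + a1 - a2) * (TH - TC)

# Short-length limit (13): the interfacial terms cancel and V -> 0.
print(V_opposite_sides(K_T=1e3, K_S=1e-3, a1=2e-4, a2=-1e-4,
                       l1=1e-9, kappa1=5.0, l2=1e-9, kappa2=5.0,
                       TH=310.0, TC=300.0))
```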
Sometimes the quantities \(\kappa_{1,2}/K_{T}\), which have the dimension of length, are referred to as Kapitza lengths \(l_{K\,1,2}\). Typical values of \(l_{K}\) are of the order of 100 nm [50; 51]. If \(l_{1,2}\gg l_{K\,1,2}\), the classical model of the Seebeck effect is applicable. Otherwise, the interfaces between the media become more important than the media themselves. Since Kapitza lengths are about 100 nm, to make a macroscopic material where interfaces are important, we should consider a composite material with many interfaces, at distances comparable to the Kapitza lengths from each other.

Let us consider the periodic structure of alternating layers of material 1 with width \(l_{1}\) and of material 2 with width \(l_{2}\) (Fig. 2). The period is \(l=l_{1}+l_{2}\). The period should be sufficiently large, so that the width of the layers is greater than the electron mean free paths in both materials. Also, no interference caused by the periodic structure should occur, so the width should be greater than the typical electron wavelengths in both materials. Assuming both conditions are fulfilled, let us calculate the kinetic coefficients for such a material, in the direction perpendicular to the interfaces.

Figure 2: Shown is the periodic material, which consists of material 1 and material 2 with lengths \(l_{1},l_{2}\), respectively. The material consists of \(N\) repeating blocks. Hot and cold ends are denoted as \(T_{H}\), \(T_{C}\). Heat flux and electric current are shown, denoted as \(q,j\).

First, we calculate the heat conductivity of the described material. We set \(j=0\). If there are \(N\) periods, the full length of the material is

\[L=(l_{1}+l_{2})N. \tag{14}\]

The temperature difference between the hot and cold ends is

\[T_{H}-T_{C}=[(\nabla T)_{1}l_{1}+(\nabla T)_{2}l_{2}+2\Delta T]\ N. \tag{15}\]

We express the gradients and jumps in terms of the heat flux and obtain

\[T_{H}-T_{C}=[l_{1}/\kappa_{1}+l_{2}/\kappa_{2}+2M_{TT}]\ qN. \tag{16}\]

Now we substitute relation (14) and we have

\[T_{H}-T_{C}=\left[\frac{l_{1}/\kappa_{1}+l_{2}/\kappa_{2}+2/K_{T}}{l_{1}+l_{2}}\right]\ qL. \tag{17}\]

We can think of \((T_{H}-T_{C})/L\) as the average temperature gradient in the structure. So the expression in square brackets is the inverse heat conductivity of the material.

Now we want to calculate the electric conductivity. We set \(\nabla T=0\). The voltage between the contacts is

\[V=[E_{1}l_{1}+E_{2}l_{2}+2\Delta U^{*}]\ N. \tag{18}\]

We express the gradients and jumps of the effective potential in terms of \(j\) with Eqs. (1, 2) and obtain

\[V=\left[\frac{l_{1}}{\sigma_{1}}+\frac{l_{2}}{\sigma_{2}}+2\left(M_{EE}-\frac{M_{ET}M_{TE}}{M_{TT}}\right)\right]\ jN. \tag{19}\]

We use expressions (4, 14) to simplify this, which yields

\[V/L=\left[\frac{l_{1}/\sigma_{1}+l_{2}/\sigma_{2}+2/K_{E}}{l_{1}+l_{2}}\right]\ j. \tag{20}\]

We can think of \(V/L\) as an average field, so the expression in square brackets is the resistivity of the material. Again we can see that this experimentally measurable quantity is expressed with one of the \(K\)-s we introduced in (4).

Finally, we want to calculate the Seebeck coefficient. We again have Eq. (18), but now \(j=0\) and the temperature gradients are not zero. So we have

\[V=[\alpha_{1}(\nabla T)_{1}l_{1}+\alpha_{2}(\nabla T)_{2}l_{2}+2M_{ET}q]\ N. \tag{21}\]

We express the temperature gradients in terms of the heat flux and obtain

\[V/L=\left[\frac{\alpha_{1}l_{1}/\kappa_{1}+\alpha_{2}l_{2}/\kappa_{2}+2M_{ET}}{l_{1}+l_{2}}\right]\ q. \tag{22}\]
We divide this effective mean electric field by the effective mean temperature gradient (17) and get

\[S=\left[\frac{\alpha_{1}l_{1}/\kappa_{1}+\alpha_{2}l_{2}/\kappa_{2}+2K_{S}/K_{T}}{l_{1}/\kappa_{1}+l_{2}/\kappa_{2}+2/K_{T}}\right]. \tag{23}\]

Superlattices are one of the promising candidates for the fabrication of thermoelectric materials with high figures of merit [52]. However, interfaces in superlattices have commonly been treated as scattering sources that affect the regular kinetic coefficients, as in Eqs. (1). We suggest that interfaces should be more accurately described by their own special kinetic coefficients, as in Eqs. (2). Their effects on the overall properties of superlattices are described by equations (17, 20, 23).

## III Calculations

To calculate the coefficients in formulae (2), we want to calculate the jumps of temperature and electrochemical potential at the interface, \(\Delta T\) and \(\Delta\zeta\), given the heat flux and electric current through the interface. There are jumps directly associated with the reflection of electrons at the interface, \(\Delta T^{B}\) and \(\Delta\zeta^{B}\). Since the distribution function of electrons in the vicinity of the interface is perturbed by the interface, there arise additional contributions to the effective jumps, \(\Delta T^{L,R}\) and \(\Delta\zeta^{L,R}\), which should be calculated too. Indexes \(L,R\) denote the left or the right side of the interface.

Figure 3: Shown is a wave with a unit amplitude, the black arrow, that is incident on the interface, the bold vertical line, from the left. Colored arrows represent all the reflected and transmitted waves. \(A^{L}\)-s are amplitudes of waves reflected on the left side, \(B^{R}\)-s are amplitudes of waves transmitted to the right. Those amplitudes are functions of \(\theta^{\prime}\) - the angle of incidence and \(\theta\) - the angles of departure, which are also shown. Angles are measured from the \(x\) axis, which is perpendicular to the interface, shown as a gray arrow. Colored circles illustrate our model: for a given \(\theta^{\prime}\) all amplitudes on one side have the same magnitude (Eq. 30).

To do so, we use the system of equations introduced in [39], adapted for electrons. The core feature of this method is the introduction of matching equations for the distribution functions at the interface. We briefly explain the idea here. We seek the solution of the wave equation (the Schrödinger equation in the case of electrons) as a superposition of the incident wave and a set of reflected and transmitted waves with amplitudes \(A\)-s and \(B\)-s that we call reflection and transmission amplitudes (Fig. 3). We use the squares of the amplitudes as coefficients for the matching equations for the distribution functions at the interface. The concept of matching equations is only applicable if electrons do not scatter inelastically in the barrier region. This gives another condition for the applicability of the method: the width of the barrier must be less than the mean free path of the electrons.
The matching equations for the distribution functions of electrons at the interface are

\[\begin{split} n^{L\leftarrow}(\theta)&=\int_{0}^{1}d\cos\theta^{\prime}\,|A^{L}_{\theta\theta^{\prime}}|^{2}\,n^{L\rightarrow}(\theta^{\prime})+\frac{p^{R}m^{R}}{p^{L}m^{L}}\int_{0}^{1}d\cos\theta^{\prime}\,|B^{L}_{\theta\theta^{\prime}}|^{2}\,n^{R\leftarrow}(\theta^{\prime})\\ n^{R\rightarrow}(\theta)&=\int_{0}^{1}d\cos\theta^{\prime}\,|A^{R}_{\theta\theta^{\prime}}|^{2}\,n^{R\leftarrow}(\theta^{\prime})+\frac{p^{L}m^{L}}{p^{R}m^{R}}\int_{0}^{1}d\cos\theta^{\prime}\,|B^{R}_{\theta\theta^{\prime}}|^{2}\,n^{L\rightarrow}(\theta^{\prime}),\end{split} \tag{24}\]

where the arrows denote the direction of propagation, \(\theta^{\prime}\) is the angle of incidence, and \(\theta\) is the angle of reflection or transmission; angles are counted from the \(x\) axis perpendicular to the interface. The derivation of the matching equations for electrons closely parallels the one for phonons [38]. This derivation will be published elsewhere; here we want to focus on kinetics only.

To describe the distribution function of electrons near the interface, we use the conventional stationary Boltzmann equations in the relaxation time approximation:

\[\frac{p^{L,R}}{m^{L,R}}\cos\theta\frac{\partial n^{L,R}}{\partial x}+eE^{*}\frac{\partial n^{L,R}}{\partial p}=-\frac{\chi^{L,R}}{\tau^{L,R}}. \tag{25}\]

Here \(\chi^{L,R}=n^{L,R}-n_{0}^{L,R}\) is the nonequilibrium part of the distribution functions, \(n_{0}\) is the equilibrium part, that is, the Fermi-Dirac distribution function, and \(E^{*}\) is the effective electric field, \(E^{*}=\nabla U^{*}=E+\nabla\mu/e\).

As in paper [39], we introduce the Chapman-Enskog conditions [47]. Since the total number of electrons is conserved, there are two such conditions, instead of one, as was the case for phonons. To define the electrochemical potential of a non-equilibrium system, we introduce the condition that the number of particles in the non-equilibrium system is equal to the number of particles in an equilibrium system with the same electrochemical potential:

\[\int\frac{d^{3}k}{(2\pi)^{3}}\chi^{L,R}=0. \tag{26}\]

The temperature of the non-equilibrium system is defined as the temperature of an equilibrium system with the same energy:

\[\int\frac{d^{3}k}{(2\pi)^{3}}\chi^{L,R}(\varepsilon-\zeta)=0. \tag{27}\]

We also need the conservation-of-flows equations. Again, for electrons, not only the energy but also the number of particles or, equivalently, the electric charge is conserved. We have

\[\begin{split} 2e\int\frac{d^{3}k}{(2\pi)^{3}}v_{x}\chi^{R}&=j\\ 2\int\frac{d^{3}k}{(2\pi)^{3}}v_{x}(\varepsilon-\zeta)\chi^{R}&=q.\end{split} \tag{28}\]

In the second equation, we only count the heat energy, not the total amount of energy, so we write \(\varepsilon-\zeta\), where \(\zeta\) is the electrochemical potential [48]. Because of flux conservation at the interface, we can write these two equations for the right side only; the analogous equations for the left side at the very interface are then fulfilled automatically.

The system of equations (24 - 28) is the general system of equations that describes transport through an interface. We want to solve it for a specific model, which is an interface between two n-type semiconductors with a simple band structure. We assume the bottom of the conduction band is higher for the semiconductor on the right. We call the energy difference between those bottoms the energy barrier \(V_{b}\), and we place the origin of the energy axis at the bottom of the conduction band of the left crystal.
Thus, the dispersion relations for the left and right semiconductors are

\[\begin{split} p^{L}&=\sqrt{2m^{L}\varepsilon}\\ p^{R}&=\sqrt{2m^{R}(\varepsilon-V_{b})}.\end{split} \tag{29}\]

For the matching equations (24) we choose a set of transmission and reflection amplitudes \(A^{L}_{\theta\theta^{\prime}},A^{R}_{\theta\theta^{\prime}},B^{L}_{\theta\theta^{\prime}},B^{R}_{\theta\theta^{\prime}}\) which would be an electron analog of the diffuse mismatch model. To find the conditions for the transmission and reflection amplitudes, we assume uniform scattering at the interface. We also assume, as in the paper [39], that the fraction of the energy flux dissipated in a certain direction does not depend on the angle of incidence. This yields

\[\begin{split}|A^{L,R}_{\theta\theta^{\prime}}|^{2}&=|A^{L,R}|^{2}\cos\theta^{\prime}\\ |B^{L,R}_{\theta\theta^{\prime}}|^{2}&=|B^{L,R}|^{2}\cos\theta^{\prime}.\end{split} \tag{30}\]

We write down the flow conservation equation for each mode (exactly one mode is presented in Fig. 3). After some simplifications, they take the form

\[\begin{split}\cos\theta^{\prime}&=\int_{0}^{1}d\cos\theta\,\cos\theta\,|A^{L}_{\theta\theta^{\prime}}|^{2}+\frac{p^{R}m^{L}}{p^{L}m^{R}}\int_{0}^{1}d\cos\theta\,\cos\theta\,|B^{R}_{\theta\theta^{\prime}}|^{2}\\ \cos\theta^{\prime}&=\int_{0}^{1}d\cos\theta\,\cos\theta\,|A^{R}_{\theta\theta^{\prime}}|^{2}+\frac{p^{L}m^{R}}{p^{R}m^{L}}\int_{0}^{1}d\cos\theta\,\cos\theta\,|B^{L}_{\theta\theta^{\prime}}|^{2}.\end{split} \tag{31}\]

We substitute the equilibrium distribution functions into equation (24). Together with (30, 31), it forms the set of equations that determines the values of the reflection and transmission amplitudes. We find

\[\begin{split}|A^{L}|^{2}&=\frac{2p^{L^{2}}}{p^{L^{2}}+p^{R^{2}}}\\ |A^{R}|^{2}&=\frac{2p^{R^{2}}}{p^{L^{2}}+p^{R^{2}}}\\ |B^{L}|^{2}&=\frac{2m^{L}p^{L}p^{R}}{m^{R}(p^{L^{2}}+p^{R^{2}})}\\ |B^{R}|^{2}&=\frac{2m^{R}p^{L}p^{R}}{m^{L}(p^{L^{2}}+p^{R^{2}})}.\end{split} \tag{32}\]

Now we solve the system of equations (24 - 28) with the dispersion relations (29) and the set of amplitudes (30, 32). The method is analogous to the one presented in [39]. The Boltzmann equation (25) describes the evolution of the electron distribution function near the interface. We divide the solution of the Boltzmann equation into complementary and particular parts, \(\chi^{R}=\chi^{R}_{p}+\chi^{R}_{c}\). The complementary solution is

\[\chi^{R}_{c}=\chi^{R}_{0}\exp{(-x/v^{R}_{x}\tau^{R})}, \tag{33}\]

where \(\chi^{R}_{0}=\chi^{R}(x=0)\). Similarly, for the left crystal, \(\chi^{L}_{c}=\chi^{L}_{0}\exp{(x/v^{L}_{x}\tau^{L})}\). We observe that for incident electrons a solution of this type increases without bound. The complementary part of the solution does not satisfy the boundedness condition, which means that the distribution function of incident electrons is determined only by the particular solution. The particular solution at the interface is

\[\chi^{L,R}_{p}=-\tau^{L,R}v^{L,R}_{x}\frac{\varepsilon-\zeta}{kT^{2}}\left(\frac{dT}{dx}\right)^{L,R}_{0}-\tau^{L,R}v^{L,R}_{x}\frac{1}{kT}\left(\frac{d\zeta}{dx}\right)^{L,R}_{0}. \tag{34}\]

In the proximity of the interface, \(\left(\frac{dT}{dx}\right)^{L,R}\) and \(\left(\frac{d\zeta}{dx}\right)^{L,R}\) are unknown functions of the coordinate, since, due to the perturbation of the electron distribution functions by the interface, the temperature and electrochemical potential gradients near the interface differ from the gradients in a homogeneous medium. So we have six unknown parameters that characterize the distribution function of electrons at the interface: the temperature and electrochemical potential jumps at the interface, \(\Delta T^{B},\Delta\zeta^{B}\), and the four gradients at the very interface on both sides of it.
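For reference, the DMM-like squared amplitudes (32), together with the dispersions (29), can be written as the following minimal Python sketch (arbitrary units with \(\hbar=1\); the parameter values are purely illustrative). The assertions check the flux-conservation identity that (30) and (31) impose.

```python
import numpy as np

def dmm_amplitudes(eps, m_L, m_R, V_b):
    """Squared DMM-like amplitudes (32) for an electron of energy eps > V_b,
    with dispersions (29): p_L = sqrt(2 m_L eps), p_R = sqrt(2 m_R (eps - V_b))."""
    p_L = np.sqrt(2.0 * m_L * eps)
    p_R = np.sqrt(2.0 * m_R * (eps - V_b))
    s = p_L**2 + p_R**2
    A_L2 = 2.0 * p_L**2 / s
    A_R2 = 2.0 * p_R**2 / s
    B_L2 = 2.0 * m_L * p_L * p_R / (m_R * s)
    B_R2 = 2.0 * m_R * p_L * p_R / (m_L * s)
    # With the ansatz (30), flux conservation (31) reduces to
    # |A_L|^2 + (p_R m_L / (p_L m_R)) |B_R|^2 = 2, and symmetrically on the right.
    assert np.isclose(A_L2 + (p_R * m_L / (p_L * m_R)) * B_R2, 2.0)
    assert np.isclose(A_R2 + (p_L * m_R / (p_R * m_L)) * B_L2, 2.0)
    return A_L2, A_R2, B_L2, B_R2

print(dmm_amplitudes(eps=1.0, m_L=1.0, m_R=1.2, V_b=0.3))
```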
We substitute the expression (34) into the matching equations (24) and obtain the distribution function for the receding electrons. Now we know the full distribution function of electrons at the very interface, expressed with six unknown parameters. We substitute it into equations (26, 27, 28). Now we have a system of six equations for six unknowns. Here we can see the necessity of the two jumps, since otherwise we would not have enough unknown parameters for the six equations. We solve equations (26, 27, 28) and find the jumps and the gradients at the very interface. They are expressed with the two externally given parameters \(q,j\) that are introduced in equations (28).

Now that we know the gradients at the interface, we can find the effective jumps associated with them. Let us find the solution of the Boltzmann equation for the right crystal, since the solution for the left one is completely analogous. From here on, we omit the index \(R\) denoting the crystal. We divide the gradients into two parts:

\[\begin{split}\left(\frac{dT}{dx}\right)&=\left(\frac{dT}{dx}\right)_{p}+\left(\frac{dT}{dx}\right)_{\infty}\\ \left(\frac{d\zeta}{dx}\right)&=\left(\frac{d\zeta}{dx}\right)_{p}+\left(\frac{d\zeta}{dx}\right)_{\infty},\end{split} \tag{35}\]

where the index \(\infty\) denotes the gradient at an infinite distance from the interface and \(p\) stands for perturbed, which is the difference between the gradient at a given point and at infinity. Now for the particular part of the solution for the receding electrons we write

\[\chi_{p}=-\tau v_{x}\frac{\varepsilon-\zeta}{kT^{2}}\left[\left(\frac{dT}{dx}\right)_{p}+\left(\frac{dT}{dx}\right)_{\infty}\right]-\tau v_{x}\frac{1}{kT}\left[\left(\frac{d\zeta}{dx}\right)_{p}+\left(\frac{d\zeta}{dx}\right)_{\infty}\right]. \tag{36}\]

We substitute this expression (36) and the expression for the complementary part of the nonequilibrium function (33) into the expressions for the heat flux and electric current (28). For the heat flux, we obtain

\[2\int\frac{d^{3}k}{(2\pi)^{3}}v_{x}(\varepsilon-\zeta)\left(\tau v_{x}\frac{\varepsilon-\zeta}{kT^{2}}\left[\left(\frac{dT}{dx}\right)_{p}+\left(\frac{dT}{dx}\right)_{\infty}\right]+\tau v_{x}\frac{1}{kT}\left[\left(\frac{d\zeta}{dx}\right)_{p}+\left(\frac{d\zeta}{dx}\right)_{\infty}\right]+\chi_{c}\right)=q. \tag{37}\]

Now we observe that the integral expressions multiplying \(\left(\frac{dT}{dx}\right)\) and \(\left(\frac{d\zeta}{dx}\right)\) are the coefficients \(L_{TT}\) and \(L_{TE}\) of (1) in the relaxation time approximation. Since at infinity the heat flux is the same as in the homogeneous medium,

\[q=L_{TT}\left(\frac{dT}{dx}\right)_{\infty}+L_{TE}\left(\frac{d\zeta}{dx}\right)_{\infty}, \tag{38}\]

we can subtract this from both sides. Now we obtain

\[2\int\frac{d^{3}k}{(2\pi)^{3}}v_{x}(\varepsilon-\zeta)\left(\tau v_{x}\frac{d\chi_{p}}{dx}+\chi_{c}\right)=L_{TT}\left(\frac{dT}{dx}\right)_{p}+L_{TE}\left(\frac{d\zeta}{dx}\right)_{p}. \tag{39}\]

We integrate over \(x\) from zero to infinity. Since this integration of the perturbed part of the gradients gives, by definition, the effective jumps, we obtain

\[2\int_{0}^{\infty}dx\int\frac{d^{3}k}{(2\pi)^{3}}v_{x}(\varepsilon-\zeta)\chi_{c}=L_{TT}\Delta T^{R}+L_{TE}\Delta\zeta^{R}. \tag{40}\]
We perform analogous manipulations with the expression for the electric current in (28) and obtain

\[2e\int_{0}^{\infty}dx\int\frac{d^{3}k}{(2\pi)^{3}}v_{x}\chi_{c}=L_{ET}\Delta T^{R}+L_{EE}\Delta\zeta^{R}. \tag{41}\]

We substitute the expression for the complementary part of the distribution function (33) into expressions (40, 41) and perform the integration on the left-hand side. Now we have a system of two equations for \(\Delta T^{R},\Delta\zeta^{R}\), so we find them. We can also find \(\Delta T^{L},\Delta\zeta^{L}\) in the same manner, but the calculation is a bit longer, since we should also treat electrons below the energy barrier and substitute their distribution function into the expressions for the heat flux and electric current. Now we can sum and obtain the full temperature jump \(\Delta T=\Delta T^{L}+\Delta T^{B}+\Delta T^{R}\) and the full electrochemical potential jump \(\Delta\zeta=\Delta\zeta^{L}+\Delta\zeta^{B}+\Delta\zeta^{R}\). The proportionality coefficients between these jumps and the fluxes \(q,j\) are the coefficients from the formulae (2). We have thus computed all the coefficients describing linear transport phenomena at the interface between n-type semiconductors.

Let us also note a nonlinear effect. The second equation from (28) holds on both sides only in the linear approximation. Since a jump of \(\zeta\) at the interface occurs, the heat fluxes differ on the two sides of the interface. The difference between the fluxes,

\[q^{L}-q^{R}=2\int\frac{d^{3}k}{(2\pi)^{3}}v_{x}^{R}\chi^{R}\,\Delta\zeta, \tag{42}\]

is released at the interface, resulting in heating of the interface. Since \(\chi\) has components that are proportional to \(\Delta T\) and \(\Delta\zeta\), we have two components of heat release. The one proportional to \(\Delta\zeta^{2}\) is the interfacial analog of the Joule effect, and the one proportional to \(\Delta T\Delta\zeta\) is the analog of the Thomson effect.

## IV Results and discussion

In the previous section, we described how to perform the calculations for the presented model. Now we want to discuss the results of such calculations. We calculate for the interface between two samples of Ga\({}_{x}\)In\({}_{1-x}\)As with different values of \(x\): \(x^{L}\) and \(x^{R}\). The energy barrier \(V_{b}\) is given by the difference between the affinities plus the difference between the Fermi levels. The affinity is given by the formula \(4.9-0.83x\) eV [53]. Materials with different values of \(x\) also have different effective masses, whose value is given by \((0.023+0.037x+0.003x^{2})\,m_{0}\)[54]. The Fermi level is defined by the concentration of donors \(n_{D}\). The compact formula for the Fermi level is different for different temperatures, different donor concentrations, and different energies of donor ionization \(\varepsilon_{D}\). We will assume here that the donors are shallow, such as Sn, Ge, Si. Typical values of \(\varepsilon_{D}\) for shallow donors are about 5 meV [55]. Since we are interested in applications, we are mostly concerned with temperatures around room temperature and higher. Thus the temperatures of interest satisfy the condition \(kT\approx\varepsilon_{D}\) or even \(kT>\varepsilon_{D}\). Under such conditions the following equation [49] holds:

\[\zeta=kT\ln\frac{4\pi^{3}\hbar^{3}n_{D}}{(2\pi mkT)^{3/2}}. \tag{43}\]
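A minimal numerical sketch of (43) (using the \(4\pi^{3}\) prefactor as written above; the input values are illustrative only) shows that \(\zeta\) indeed lies well below the conduction-band bottom in the nondegenerate regime:

```python
import numpy as np

k_B  = 1.380649e-23      # J/K
hbar = 1.054571817e-34   # J*s
m_e  = 9.1093837015e-31  # kg

def zeta(n_D, m_eff, T):
    """Electrochemical potential (43) for a nondegenerate n-type semiconductor,
    measured from the conduction-band bottom (negative in this regime).
    n_D in m^-3, m_eff in units of the free-electron mass, T in K."""
    m = m_eff * m_e
    return k_B * T * np.log(4.0 * np.pi**3 * hbar**3 * n_D
                            / (2.0 * np.pi * m * k_B * T)**1.5)

# Ga_0.47 In_0.53 As-like parameters, with m_eff from the quadratic fit above:
x = 0.47
m_eff = 0.023 + 0.037 * x + 0.003 * x**2
print(zeta(1e21, m_eff, 300.0) / 1.602176634e-19, "eV")  # roughly -0.14 eV
```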
Figure 4: Shown are the dependencies of the kinetic coefficients \(K_{T},K_{S}\) (4) on the concentration of donors and on the electrochemical potential \(\zeta\), respectively, at different temperatures and for different power laws of \(\tau(\varepsilon)\): \(\tau\sim\varepsilon^{\alpha}\). It is clearly seen that \(K_{T}\) depends linearly on the concentration, while \(K_{S}\) depends linearly on \(\zeta\). The power law of \(\tau(\varepsilon)\) switches from \(\alpha=-1/2\) to \(\alpha=3/2\) with the growth of concentration, so real data should be approximated correctly by the \(-1/2\) law at low concentrations and by the \(3/2\) law at high concentrations.

Under such conditions, all donors are also ionized, which means the concentration of electrons is equal to the concentration of donors. It also implies that scattering on donors is scattering on charged impurities, not neutral ones. The difference in the Fermi levels on the two sides causes band bending and the occurrence of a space charge region at the interface. The region is small if the Fermi-level difference is small. If the space charge region is small compared to the electron mean free path, the theory is applicable. The occurrence of the space charge should in principle enhance scattering at the interface, but in our model we assume scattering is already maximal, in some sense, so for our calculation it changes nothing. If the space charge region is thick enough, the theory should be substantially modified. That is the case for the well-known p-n junction theory, where transport in the space charge region is considered to be diffusive [49].

Finally, before presenting the calculation results, we want to address the relaxation times that we use for our calculations. The absolute value of the relaxation time does not affect the values of the computed temperature and electrochemical potential jumps. A higher relaxation time leads to a higher gradient at the interface but also to faster relaxation of the disturbance; the two effects cancel out. This is especially obvious for the much simpler model treated in Ref. [39]. The independence of \(\Delta T^{L,R}\) and \(\Delta\zeta^{L,R}\) from the intrinsic properties of the materials is one reason to treat them as contributions to the interfacial jumps. However, the dependence of the electron relaxation time on energy does affect the result, since it affects the form of the distribution function at the interface. The relaxation time dependence on energy, \(\tau(\varepsilon)\), depends on the main scattering source. If it is phonons, the dependence is \(\tau\sim\varepsilon^{-1/2}\); if it is charged impurities, the dependence is \(\tau\sim\varepsilon^{3/2}\)[49]. For low concentrations of impurities, phonons are the main source of scattering, while for a high concentration of impurities, impurity scattering dominates. We change \(\tau(\varepsilon)\) accordingly.

First, we want to present the dependencies on the concentration of donors, since those are the most clearly understandable. For simplicity, here we assume that the concentration of donors in both materials is chosen such that the Fermi levels in both materials are the same. We vary the concentration in the left semiconductor, \(n_{D}^{L}\), in the range from \(10^{14}\) cm\({}^{-3}\) to \(10^{17}\) cm\({}^{-3}\), and adjust the concentration in the right semiconductor accordingly. Lower values of concentration do not even produce n-type semiconductors.
With higher values, the Fermi level comes too close to the conduction band, and we would have to include Fermi statistics together with possible many-body effects. All the theoretical coefficients (2) depend linearly on the concentration of electrons, which is equal to the concentration of donors under the conditions concerned. That means both \(K_{T}\) and \(K_{E}\) depend linearly on the concentration. However, \(K_{S}\) is a quotient of \(M_{ET}\) and \(M_{TT}\), and their dependence on concentration cancels out. \(K_{S}\) is proportional to the mean heat energy carried by an electron, which is approximately \(kT-\zeta\). As is shown by the formula (43), \(\zeta\) depends logarithmically on concentration. Both of these dependencies are presented in Figure 4. The graph of \(K_{E}\) is not presented since it is just the same as that of \(K_{T}\).

We calculate \(K_{T},K_{E}\), and \(K_{S}\), but to characterize the efficiency of the interface for application as a thermoelectric generator, we also want to calculate the thermoelectric figure of merit. For a thermoelectric material \(ZT=S^{2}\sigma T/\kappa\), where \(S\) is the Seebeck coefficient and \(\sigma\) and \(\kappa\) are the electrical and thermal conductivity, respectively. \(ZT\) is the most important parameter of a thermoelectric material [45]. In complete analogy with the homogeneous case, for the interface we can write

\[ZT=\frac{K_{S}^{2}K_{E}}{K_{T}}T. \tag{44}\]

We can also express it in terms of the theoretically computable parameters (2):

\[ZT=\frac{M_{ET}M_{TE}}{M_{TT}M_{EE}-M_{ET}M_{TE}}. \tag{45}\]

It is important to note here that in this paper we only calculate the electronic part of the heat conduction. We will get values of \(ZT\) an order of magnitude higher than those ever found in experiment. That is because heat conductance by phonons for the material that we investigate is at least an order of magnitude higher than the electronic part. We only present our calculations of \(ZT\) here to demonstrate its dependence on different parameters of the interface, not to pretend that we currently know how to make a thermoelectric material an order of magnitude more efficient than all previously known ones.

Figure 5: Shown are the dependencies of the kinetic coefficients \(K_{T},K_{E}\) (4) on the energy barrier height on the upper graph, and the kinetic coefficient \(K_{S}\) together with the electronic thermoelectric figure of merit \(ZT\) versus the energy barrier on the lower graph. We can clearly see that both \(K_{T}\) and \(K_{E}\) drop similarly as the energy barrier grows, while \(K_{S}\) increases. This leads to the overall growth of \(ZT\). It is important to note that \(ZT\) here only considers the electronic transport properties and does not take into account phonon heat transfer. Consideration of the latter would make the values of \(ZT\) substantially lower.

In Figure 5 we can see \(K_{T},K_{E},K_{S}\), and \(ZT\) as functions of the height of the barrier. The graphs are plotted for \(T=300\) K, \(n_{D}^{L}=10^{15}\) cm\({}^{-3}\), \(x^{L}=0.47\). \(x^{R}\) is varied from 0.48 up to 0.6, thus creating different heights of the energy barrier. Again we adjust the concentration of donors in the right semiconductor so that the Fermi levels in both materials are the same. We can see that by varying the height of the barrier, we can get different values of the parameters \(K_{T}\) and \(K_{E}\).
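The equivalence of (44) and (45) is a direct consequence of the Onsager relation \(M_{TE}=TM_{ET}\); a small sketch (with hypothetical \(M\) values) makes the check explicit:

```python
def ZT_from_K(K_T, K_E, K_S, T):
    """Electronic interfacial figure of merit (44)."""
    return K_S**2 * K_E / K_T * T

def ZT_from_M(M_TT, M_TE, M_ET, M_EE):
    """Equivalent expression (45) in terms of the M's of (2)."""
    return M_ET * M_TE / (M_TT * M_EE - M_ET * M_TE)

# Consistency check using the Onsager relation M_TE = T * M_ET (hypothetical values):
T, M_TT, M_ET, M_EE = 300.0, 1e-3, 1e-8, 1e-8
M_TE = T * M_ET
K_T, K_E, K_S = 1/M_TT, 1/(M_EE - M_ET*M_TE/M_TT), M_ET/M_TT
print(ZT_from_K(K_T, K_E, K_S, T), ZT_from_M(M_TT, M_TE, M_ET, M_EE))  # equal
```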
We think that for small barriers we underestimate the values of \(K_{T}\) and \(K_{E}\), since for very similar materials reflection at the interface is very small and in the limit of identical materials should vanish, which leads to vanishing values of all the \(M\)-s from formulae (2) and hence to infinite values of \(K_{T}\) and \(K_{E}\) in such a limit. The finite values of \(K_{T}\) and \(K_{E}\) at \(V_{b}=0\) are artifacts of the chosen DMM-like model. Away from zero, the DMM-like model works fine and correctly predicts the lowering of the values of \(K_{T}\) and \(K_{E}\) with the growth of \(V_{b}\). It is caused by increased reflection with increased \(V_{b}\). Also, both \(K_{T}\) and \(K_{E}\) have very similar dependences on the barrier height. On the other hand, from the lower graph of Figure 5, we see that \(K_{S}\) grows with the growth of the barrier. And because \(K_{S}\) grows while \(K_{T}\) and \(K_{E}\) decrease, the electronic part of \(ZT\) grows quite fast with the growth of \(V_{b}\). This means higher barriers are better suited for thermoelectric purposes.

Finally, we want to present the temperature dependencies. Here we cannot just assume the equality of the Fermi levels on both sides of the interface, since the Fermi level and temperature are related by equation (43). If the Fermi levels are equal at one temperature, they will not be equal at all other temperatures. The second thing to note is that, as formula (43) shows, the height of the potential barrier \(V_{b}\) vanishes in the zero temperature limit. Indeed, at zero temperature the Fermi level tends to the bottom of the conduction band; once \(\zeta\) on both sides equalizes, the bottoms of the conduction bands are on the same level. There would still remain scattering, because of the space charge and the mismatch of the effective masses on the two sides. However, as we have seen on the previous graph (Fig. 5), all conductances increase significantly with the lowering of \(V_{b}\). But changes in temperature have another effect besides changing \(V_{b}\). Heat conductivity also increases with the mean heat energy of the particles, which is about \(kT-\zeta\) and grows linearly with temperature. For the interfacial heat conductance \(K_{T}\), this effect turns out to be stronger than the drop caused by the growth of \(V_{b}\). So \(K_{E}\) decreases with temperature while \(K_{T}\) grows with temperature. This result can be seen in Figure 6. We can also see that \(K_{S}\) and \(ZT\) grow with temperature, which is highly unobvious from qualitative reasoning.

We calculated these coefficients for a range of different values of the donor concentration, heights of the energy barriers between the semiconductors, and different temperatures. For \(K_{T}\) we have found typical values to be hundreds of Wm\({}^{-2}\)K\({}^{-1}\). That is a very small number compared to typical values of Kapitza conductances known for phonon transport. Even for materials with a high mismatch of acoustic properties, like diamond and copper, values of the Kapitza conductance are about \(10^{6}\) Wm\({}^{-2}\)K\({}^{-1}\). That shows that we have to take phonon transport into account when calculating heat transport properties. For \(K_{E}\) typical values are about \(10^{8}\) \(\Omega^{-1}\)m\({}^{-2}\). For \(K_{S}\) they are \(10^{-4}-10^{-3}\) V/K. That is a very high value, since the highest known values are about \(2*10^{-3}\) V/K, for Pb\({}_{15}\)Ge\({}_{37}\)Se [56].

Figure 6: Shown are the dependencies of the kinetic coefficients \(K_{T},K_{E}\) (4) on the temperature on the upper graph.
Moreover, while for the calculations of \(K_{T}\) and \(K_{E}\) we can only hope to find the correct order of magnitude, the calculation of \(K_{S}\) is probably more precise. That is because the simple model of uniform scattering at the interface inevitably produces incorrect prefactors in the theoretical coefficients \(M\) (2), but these factors should be roughly equal for all the \(M\)-s. Since \(K_{S}\) is a quotient of two \(M\)-s, the factors cancel out and the result can be quite accurate. ## V Conclusions In this paper, we have presented a description of linear kinetic processes at the interface between n-type semiconductors in terms of three kinetic coefficients. These are the interfacial analogs of the electrical and thermal conductances, \(K_{E}\) and \(K_{T}\), and the interfacial analog of the Seebeck coefficient, \(K_{S}\). These coefficients are important for the description of nanostructured materials, where the lengths between interfaces are small compared to the so-called Kapitza length: the quotient of the related kinetic coefficient of the homogeneous medium to the kinetic coefficient of the interface. We presented the description of superlattices in terms of interfacial kinetic coefficients in Section 2 of this manuscript. In Section 3 we presented a method for the calculation of the interfacial kinetic coefficients. In Section 4 we presented the results of our calculations in the form of dependencies of the kinetic coefficients on donor concentration, height of the energy barrier between the semiconductors, and temperature. We found that the interfacial analog of the Seebeck coefficient, \(K_{S}\), has, for some range of parameters, a high value of about \(10^{-3}\) V/K. There is a reason why \(K_{S}\) can have a very high value in general, not just for the case considered in this manuscript (the interface between two samples of Ga\({}_{x}\)In\({}_{1-x}\)As with different values of \(x\)). For n-type semiconductors, \(K_{S}\) is proportional to the absolute value of the quasi-Fermi level \(\zeta\), counted from the bottom of the conduction band (see Fig. 4). This suggests increasing the absolute value of \(\zeta\) to increase \(K_{S}\); however, if the Fermi level is placed deep inside the energy gap, a large number of holes appears. Holes produce a thermoelectric current in the direction opposite to that of the electrons and thus cancel the overall thermoelectric effect. However, we can choose two semiconductors such that the interface between them has a low barrier for electrons and a very high barrier for holes, thus blocking hole transport. This would allow a superstructure consisting of such semiconductors to have a Fermi level very deep inside the band gap while carrying almost no hole current, which in turn makes a material with a very high value of the Seebeck coefficient. We can also take highly acoustically mismatched materials to lower the phonon heat conductance, thus increasing the thermoelectric figure of merit. All of this opens a good opportunity to produce superlattices with record-high thermoelectric parameters.
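To make the Kapitza length concrete, here is a rough Python estimate using the typical interfacial conductance found above; the bulk thermal conductivity is an assumed order-of-magnitude value for a Ga\({}_{x}\)In\({}_{1-x}\)As alloy, not a result of this paper.

```python
# A rough illustration of the Kapitza length defined above:
# l_K = kappa / K_T, the bulk thermal conductivity divided by the
# interfacial thermal conductance.

kappa = 5.0   # W m^-1 K^-1; assumed order of magnitude for a GaInAs alloy
K_T = 5.0e2   # W m^-2 K^-1; the typical value found in this work

l_K = kappa / K_T
print(f"Kapitza length ~ {l_K * 100:.1f} cm")
# ~1 cm: superlattices with periods far below this length are
# dominated by the interfacial kinetic coefficients.
```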
**ACKNOWLEDGMENTS** The author is grateful to A. Ya. Vul and C. Pastorino for their attention to the presented investigation.
2307.01089
Synthesizing Control Laws from Data using Sum-of-Squares Optimization
The control Lyapunov function (CLF) approach to nonlinear control design is well established. Moreover, when the plant is control affine and polynomial, sum-of-squares (SOS) optimization can be used to find a polynomial controller as a solution to a semidefinite program. This letter considers the use of data-driven methods to design a polynomial controller by leveraging Koopman operator theory, CLFs, and SOS optimization. First, Extended Dynamic Mode Decomposition (EDMD) is used to approximate the Lie derivative of a given CLF candidate with polynomial lifting functions. Then, the polynomial Koopman model of the Lie derivative is used to synthesize a polynomial controller via SOS optimization. The result is a flexible data-driven method that skips the intermediary process of system identification and can be applied widely to control problems. The proposed approach is used to successfully synthesize a controller to stabilize an inverted pendulum on a cart.
Jason J. Bramburger, Steven Dahdah, James Richard Forbes
2023-07-03T15:09:02Z
http://arxiv.org/abs/2307.01089v1
# Synthesizing Control Laws from Data using Sum-of-Squares Optimization ###### Abstract The control Lyapunov function (CLF) approach to nonlinear control design is well established. Moreover, when the plant is control affine and polynomial, sum-of-squares (SOS) optimization can be used to find a polynomial controller as a solution to a semidefinite program. This letter considers the use of data-driven methods to design a polynomial controller by leveraging Koopman operator theory, CLFs, and SOS optimization. First, Extended Dynamic Mode Decomposition (EDMD) is used to approximate the Lie derivative of a given CLF candidate with polynomial lifting functions. Then, the polynomial Koopman model of the Lie derivative is used to synthesize a polynomial controller via SOS optimization. The result is a flexible data-driven method that skips the intermediary process of system identification and can be applied widely to control problems. The proposed approach is used to successfully synthesize a controller to stabilize an inverted pendulum on a cart. Control Lyapunov function, Koopman operator, Extended Dynamic Mode Decomposition, lifting functions, sum-of-squares optimization ## I Introduction Nonlinear systems can be found throughout engineering, science, economics, and other domains. Often nonlinear systems must be controlled to realize useful behavior. There is a plethora of nonlinear control design methods to choose from, such as gain-scheduling, feedback linearization, integrator backstepping, sliding-mode control, and others [1]. A popular nonlinear control design method is the _control Lyapunov function (CLF)_ approach [2]. The gist of the CLF approach to control design is that, given a control affine system, a controller is sought such that a Lyapunov function candidate associated with the closed-loop system satisfies \(V(x_{*})=0\), \(V(x)>0\) \(\forall x\in\mathcal{D}\setminus\{x_{*}\}\), and \(\dot{V}(x)<0\) \(\forall x\in\mathcal{D}\setminus\{x_{*}\}\), where \(x_{*}\) is an equilibrium point and \(\mathcal{D}\subseteq\mathbb{R}^{d}\) is some domain. See §II-A for a review of the CLF approach to controller design. One means of designing a controller via the CLF approach is to employ techniques from _sum-of-squares (SOS) optimization_. Roughly speaking, by exploiting the polynomial form of the plant and the Lyapunov function candidate, a sufficient condition for the existence of a polynomial controller is the existence of a solution to a linear matrix inequality (LMI) feasibility problem [3, 4]. The SOS optimization approach to control design via a CLF is attractive because semidefinite programming can be leveraged to solve LMI feasibility problems in a simple and efficient manner. There have been numerous SOS optimization approaches to CLF-based control design. For instance, [5, 6] consider controller design using a CLF approach to ensure that the region of attraction of the closed-loop system about the equilibrium point \(x_{*}\) is as large as possible. In [7], an approximate solution to the Hamilton-Jacobi-Bellman (HJB) equation is found using SOS optimization, yielding a suboptimal controller. State- and output-feedback control design in an SOS optimization framework for parabolic PDE systems is considered in [8]. To use the CLF approach in concert with SOS optimization for control design, a model of the nonlinear system is needed. When a model is not available, but a plethora of data is available, a data-driven approach to modelling and control design is natural.
The _Koopman operator_ approach to data-driven modelling of nonlinear systems has garnered significant attention recently [9, 10, 11]. The basic idea behind the Koopman operator is that a finite-dimensional nonlinear system can be expressed as an infinite-dimensional linear system using _lifting functions_ [12]. The linearity of the Koopman operator is attractive because standard linear systems tools, such as the eigenspectrum [13], can be used to analyze nonlinear systems. A finite-dimensional approximation of the Koopman operator can be readily identified from data [14]. This approximate Koopman operator can then be used as the basis for control design. For instance, [15] considers model predictive control (MPC), [16] considers an active learning approach in a Koopman framework, and [17] considers discounted optimal control using the Perron-Frobenius operator, the dual to the Koopman operator. This paper proposes a data-driven approach to CLF-based control design using the Koopman operator. The CLF approach relies on the Lie derivative (the generator of the Koopman operator) of the closed-loop Lyapunov function candidate being strictly negative. When the lifting functions associated with the Koopman operator are polynomials, the Lie derivative of the closed-loop Lyapunov function candidate is _almost_ polynomial. Forcing the controller to be polynomial enables the use of SOS optimization to find a suitable controller that renders the Lie derivative of the closed-loop Lyapunov function candidate negative definite. As a consequence, the equilibrium point \(x_{*}\) of the closed-loop system is made asymptotically stable. The novel contribution of this letter is nonlinear controller synthesis via the following two-step method. First, the Lie derivative of a given CLF candidate is approximated using the Koopman operator with polynomial lifting functions. Next, this Koopman representation of the Lie derivative is incorporated into an SOS optimization problem that parameterizes the controller as a polynomial of the state variables. Convex SOS optimization routines are then leveraged to find a control law that renders the equilibrium point \(x_{*}\) of the closed-loop system asymptotically stable. This letter is organized as follows. CLFs, SOS optimization, and Koopman operator theory are reviewed in §II. The main theoretical results are presented in §III. A numerical example involving the control of an inverted pendulum on a cart is provided in §IV. The paper is drawn to a close in §V. ## II Preliminaries In this section we provide the necessary preliminary information to present our method of synthesizing control laws from data. Throughout we will have \(\langle u,v\rangle\) denote the inner product of vectors \(u,v\in\mathbb{R}^{d}\). The temporal argument of the state \(x(t)\), the control \(u(t)\), etc., will be suppressed unless required for clarity. ### _Control Lyapunov functions_ Consider a control affine system \[\dot{x}=f(x)+\sum_{i=1}^{m}g_{i}(x)u_{i},\quad x\in\mathbb{R}^{d},\ u_{i}\in \mathbb{R}, \tag{1}\] for which we wish to specify a control input \(u=[u_{1},\dots,u_{m}]^{T}\) that forces all initial conditions belonging to some domain \(\mathcal{D}\subseteq\mathbb{R}^{d}\) into an equilibrium point \(x_{*}\) as \(t\to\infty\) under the flow of (1).
To do so, one may specify a _control Lyapunov function_ (CLF) [2] \(V:\mathcal{D}\to\mathbb{R}\) satisfying \(V(x)>0\,\forall x\in\mathcal{D}\setminus\{x_{*}\}\) and \(V(x_{*})=0\), and then seek a control input so that \[\left\langle\nabla V(x),f(x)+\sum_{i=1}^{m}g_{i}(x)u_{i}\right\rangle<0,\quad \forall x\in\mathcal{D}\setminus\{x_{*}\}. \tag{2}\] Indeed, (2) guarantees that \(V\) decreases monotonically along trajectories of (1), eventually reaching the global minimum at \(V(x_{*})=0\). Crucial for our work later in this letter, the system being control affine means that \(u\) enters (2) linearly. When the inequality in (2) is not strict, only Lyapunov stability can be concluded rather than asymptotic stability. Precisely, this means that \(0\leq V(x(t))\leq V(x(0))\) for all \(t\geq 0\), meaning that the motion of \(x(t)\) is constrained to the sublevel set \(\{x:V(x)\leq V(x(0))\}\). Moreover, LaSalle's invariance principle guarantees the limiting behaviour of \(x(t)\) is contained in the set of \(x\) values for which (2) is exactly zero. Such non-strict inequalities become important when imposing a tractable relaxation of the inequality (2) in the following subsection. ### _Synthesizing controllers with semidefinite programming_ Although finding a control input \(u\) to satisfy (2) is difficult in general, under certain assumptions on (1) this process can be automated using standard optimization methods. Specifically, if \(f(x)\), \(g_{i}(x)\), and \(V(x)\) are all polynomial, then we can search for a polynomial state-dependent feedback control law \(u(x)\). Fixing the degree of \(u(x)\) allows its coefficients to be optimized to satisfy the polynomial inequality constraint (2) for all \(x\in\mathcal{D}\). Although these coefficients appear linearly in (2), tuning them to satisfy such a polynomial inequality is an NP-hard task in general [18]. To make the controller synthesis problem tractable, we replace the polynomial inequalities with the sufficient conditions that the polynomials are sum-of-squares (SOS). Precisely, a polynomial \(p(x)\) is SOS if there exist polynomials \(q_{1}(x),\dots,q_{k}(x)\) so that \[p(x)=\sum_{i=1}^{k}q_{i}(x)^{2}. \tag{3}\] Identifying an SOS representation of a polynomial trivially verifies that it is nonnegative. Furthermore, verifying an SOS representation constitutes a semidefinite program, since \(p(x)\) is SOS if and only if there exists a vector of monomials \(v(x)\) and a positive semidefinite matrix \(P\) such that \(p(x)=v(x)^{T}Pv(x)\). Recall (2) and suppose that \(\mathcal{D}\) is a semialgebraic set, meaning there exist polynomials \(\{a_{j}\}_{j=1}^{J}\), \(\{b_{\ell}\}_{\ell=1}^{L}\) so that \[\mathcal{D}=\{x\in\mathbb{R}^{d}|\ a_{j}(x)\geq 0,b_{\ell}(x)=0,\forall j, \ell\}. \tag{4}\] A sufficient condition for identifying a stabilizing polynomial state-dependent controller \(u(x)=\begin{bmatrix}u_{1}(x),\dots,u_{m}(x)\end{bmatrix}^{T}\) is [3, 4] \[\begin{split}(a)&-\left\langle\nabla V(x),f(x)+\sum_{i=1}^{m}g_{i}(x)u _{i}(x)\right\rangle\\ &+\sum_{j=1}^{J}a_{j}(x)\sigma_{j}(x)+\sum_{\ell=1}^{L}b_{\ell}(x )\rho_{\ell}(x)\ \text{is SOS},\\ (b)&\sigma_{j}(x)\ \text{is SOS},\end{split} \tag{5}\] for polynomials \(\sigma_{j}\) and \(\rho_{\ell}\). Note that "is SOS" means there exists an SOS representation of the given polynomial. By fixing the degrees of \(u_{i}\), \(\sigma_{j}\), and \(\rho_{\ell}\), the SOS constraints can be translated into semidefinite programs by freely available software packages like YALMIP [19].
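To make the Gram-matrix reduction concrete, the following minimal Python sketch (using cvxpy, an assumed dependency; the letter itself uses YALMIP and MOSEK) certifies that the toy polynomial \(p(x)=x^{4}+2x^{2}+1\) is SOS by matching coefficients against \(v(x)^{T}Pv(x)\) with \(v(x)=[1,x,x^{2}]^{T}\) and \(P\succeq 0\).

```python
import cvxpy as cp

# Gram matrix for the monomial vector v(x) = [1, x, x^2]^T.
P = cp.Variable((3, 3), PSD=True)

# Coefficient matching for p(x) = x^4 + 2x^2 + 1 against v(x)^T P v(x):
constraints = [
    P[0, 0] == 1,                # constant term
    2 * P[0, 1] == 0,            # x
    2 * P[0, 2] + P[1, 1] == 2,  # x^2
    2 * P[1, 2] == 0,            # x^3
    P[2, 2] == 1,                # x^4
]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)  # 'optimal' => a PSD Gram (i.e., SOS) representation exists
```

Here one feasible certificate is \(P\) with the \((1,x^{2})\) block equal to the all-ones matrix, recovering \(p(x)=(x^{2}+1)^{2}\).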
Solvers such as MOSEK [20] can efficiently solve these semidefinite programs to determine the polynomial coefficients. ### _The Koopman operator_ Let \(\Phi(t;x):\mathbb{R}_{+}\times X\to X\) represent the flow of a dynamical system at time \(t\geq 0\) with the initial condition \(\Phi(0;x)=x\). The _Koopman operator_ is a linear transformation \(\mathcal{K}_{t}\) that, for each \(t\), maps a lifting function \(\varphi:X\to\mathbb{R}\) to \[\mathcal{K}_{t}\varphi=\varphi(\Phi(t;x)),\quad\forall x\in X. \tag{6}\] Koopman lifting functions are also often called _observables_, but are unrelated to the concept of observability. The family \(\{\mathcal{K}_{t}\mid t\in\mathbb{R}_{+}\}\) is a one-parameter semigroup whose generator is referred to as the _Lie derivative_, acting on differentiable lifting functions via \[\mathcal{L}\varphi=\lim_{t\to 0^{+}}\frac{\mathcal{K}_{t}\varphi-\varphi}{t}. \tag{7}\] Intuitively, the Lie derivative represents a derivative of the lifting function \(\varphi\) in the direction of the flow of the system. Lyapunov functions are examples of lifting functions whose values decrease along trajectories. Moreover, given a candidate Lyapunov function \(V:\mathcal{D}\rightarrow\mathbb{R}\) for (1), the Lie derivative of \(V\) is evaluated using the chain rule to be exactly \[\mathcal{L}V=\bigg{\langle}\nabla V(x),f(x)+\sum_{i=1}^{m}g_{i}(x)u_{i}\bigg{\rangle}, \tag{8}\] which demonstrates a connection between the Koopman operator and Lyapunov functions. ## III Main Result Our method of synthesizing control laws from data comes as a two-step process. First, we estimate the Lie derivative from data using well-developed techniques for approximating the action of the Koopman operator on finite _dictionaries_ of lifting functions [14]. Second, we integrate our estimated Lie derivative into the SOS framework of §II-B and provide an SOS optimization problem that determines control laws directly from data. ### _Estimating Lie derivatives from data_ Begin by supposing that snapshots of a control system are given in the form of triples \(\{(x_{k},u_{k},y_{k})\}_{k=1}^{n}\subset\mathbb{R}^{d}\times\mathbb{R}^{m} \times\mathbb{R}^{d}\). Here, \(y_{k}\) is the state of the system exactly \(\tau>0\) time units after having state and control values \((x_{k},u_{k})\). We then consider two dictionaries of polynomial lifting functions of the state, \(\phi_{1},\dots,\phi_{p}\) and \(\psi_{1},\dots,\psi_{q}\). Denote \[\phi=\begin{bmatrix}\phi_{1}&\cdots&\phi_{p}\end{bmatrix}^{T},\quad\psi= \begin{bmatrix}\psi_{1}&\cdots&\psi_{q}\end{bmatrix}^{T}. \tag{9}\] Lyapunov functions are assumed to belong to \(\mathrm{span}\{\phi\}\), while their image under the Lie derivative is projected into \(\mathrm{span}\{\psi\}\). Two dictionaries are required because the Lie derivative associated with a polynomial system is expected to be of a higher degree than the original lifting function it is being applied to. To see why this is the case, let \(F\) denote the right-hand side of our (assumed polynomial) control-affine system (1). Note that the degree of \(\mathcal{L}V=\langle\nabla V,F\rangle\), the Lie derivative (8), is \[\deg(\nabla V)+\deg(F)=\deg(V)-1+\deg(F)\geq\deg(V), \tag{10}\] with a strict inequality when \(F\) is nonlinear, that is, \(\deg(F)>1\). In practice one should aim to have \(\mathrm{span}\{\phi\}\subseteq\mathrm{span}\{\psi\}\) for the approximation of the Lie derivative later in this section.
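The degree bound (10) is easy to verify symbolically. The sketch below, for a hypothetical single-input polynomial system (not one appearing in this letter), computes the Lie derivative (8) with sympy and confirms the resulting degree.

```python
import sympy as sp

# A toy check of eq. (8) and the degree bound (10): the Lie derivative
# of a candidate V along a polynomial control-affine vector field.
# The system below is a hypothetical example, not the paper's.
x1, x2, u = sp.symbols("x1 x2 u")
f = sp.Matrix([x2, -x1 + x1**3])   # drift, deg(f) = 3
g = sp.Matrix([0, 1])              # single input channel (m = 1)
V = x1**2 + x2**2                  # candidate CLF, deg(V) = 2

gradV = sp.Matrix([sp.diff(V, x1), sp.diff(V, x2)])
LV = sp.expand((gradV.T * (f + g * u))[0])
print(LV)                                  # 2*u*x2 + 2*x1**3*x2 (the x1*x2 terms cancel)
print(sp.Poly(LV, x1, x2).total_degree())  # 4 = deg(V) - 1 + deg(f)
```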
Following the method of EDMD [14], define the \(p\times n\) matrix \[\Phi=\begin{bmatrix}\phi(y_{1})&\phi(y_{2})&\cdots&\phi(y_{n})\end{bmatrix}, \tag{11}\] and the \(q(m+1)\times n\) matrix \[\Psi=\begin{bmatrix}\psi(x_{1})&\psi(x_{2})&\cdots&\psi(x_{n})\\ \psi(x_{1})u_{1,1}&\psi(x_{2})u_{1,2}&\cdots&\psi(x_{n})u_{1,n}\\ \vdots&\vdots&\ddots&\vdots\\ \psi(x_{1})u_{m,1}&\psi(x_{2})u_{m,2}&\cdots&\psi(x_{n})u_{m,n}\end{bmatrix}, \tag{12}\] where we assume that the dynamics are control affine, leading to the specific form of \(\Psi\) [21]. With these matrices we can approximate the Koopman operator by first obtaining the matrix \(K\in\mathbb{R}^{p\times q(m+1)}\) as the solution to the minimization problem \[K=\Phi\Psi^{\dagger}=\operatorname*{arg\,min}_{K}\|\Phi-K\Psi\|_{F}, \tag{13}\] where \((\cdot)^{\dagger}\) denotes the Moore-Penrose pseudoinverse and \(\|\cdot\|_{F}\) denotes the Frobenius norm. The matrix \(K\) can be broken up into \(m+1\) size \(p\times q\) matrices \[K=\begin{bmatrix}A&B_{1}&\cdots&B_{m}\end{bmatrix}, \tag{14}\] representing the lifted components corresponding to \(f\) and \(g_{1},\dots,g_{m}\) in a control affine system (1). Then, for any given control input \(u=\begin{bmatrix}u_{1},\dots,u_{m}\end{bmatrix}^{T}\), the matrix \(K\) leads to an approximation of the Koopman operator \(\tilde{\mathcal{K}}\) acting on lifting functions \(\varphi=\langle c,\phi\rangle\in\mathrm{span}\{\phi\}\) by \[\tilde{\mathcal{K}}(\varphi):=\langle c,A\psi\rangle+\sum_{i=1}^{m}\langle c,B _{i}\psi\rangle u_{i}, \tag{15}\] which one can verify belongs to \(\mathrm{span}\{\psi\}\) for each \(u\in\mathbb{R}^{m}\). With the approximate Koopman operator \(\tilde{\mathcal{K}}\) we can further estimate the Lie derivative as a finite-difference approximation of (7). Precisely, \(\tilde{\mathcal{L}}\) acts on \(\varphi=\langle c,\phi\rangle\in\mathrm{span}\{\phi\}\) by \[\tilde{\mathcal{L}}(\varphi)=\frac{\tilde{\mathcal{K}}(\varphi)-\varphi}{\tau}. \tag{16}\] Then, using (15), it follows that \[\tilde{\mathcal{L}}(\varphi)=\tau^{-1}\langle c,A\psi-\phi\rangle+\tau^{-1} \sum_{i=1}^{m}\langle c,B_{i}\psi\rangle u_{i}, \tag{17}\] for a given control input \(u\in\mathbb{R}^{m}\). Having \(\mathrm{span}\{\phi\}\subseteq\mathrm{span}\{\psi\}\) guarantees that \(\tilde{\mathcal{L}}:\mathrm{span}\{\phi\}\rightarrow\mathrm{span}\{\psi\}\), just like \(\tilde{\mathcal{K}}\). We refer the reader to [22] for convergence proofs regarding the Lie derivative approximation \(\tilde{\mathcal{L}}\) in the limits of infinite data (\(n\rightarrow\infty\)), sampling rates (\(\tau\to 0^{+}\)), and dictionaries (\(q\rightarrow\infty\)). ### _Synthesizing control laws from data_ The work in the previous subsection allows one to estimate the Lie derivative directly from data. We now incorporate this estimate into an SOS-driven method for synthesizing control laws. Let us begin by assuming that \(V\in\mathrm{span}\{\phi\}\), that is, there is some \(c\in\mathbb{R}^{p}\) so that \(V=\langle c,\phi\rangle\) is our candidate control Lyapunov function. The Lie derivative condition (2) is then replaced with the data-driven Lie derivative condition, \[\tilde{\mathcal{L}}(V)=\tau^{-1}\langle c,A\psi-\phi\rangle+\tau^{-1}\sum_{i=1 }^{m}\langle c,B_{i}\psi\rangle u_{i}<0. \tag{18}\] Since \(c\) is fixed because \(V\) is given, (18) is an affine constraint for the control input \(u\).
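To make the regression (11)-(17) concrete, here is a minimal numpy sketch for the single-input case (\(m=1\)); the dictionary callables `phi` and `psi` and the data arrays are placeholders supplied by the user, not objects defined in this letter.

```python
import numpy as np

# A minimal numpy sketch of the EDMD regression (11)-(17) for a single-
# input system (m = 1). phi and psi map a state x in R^d to R^p and R^q.

def edmd_lie(X, U, Y, phi, psi, tau):
    """Return A, B and a function approximating the Lie derivative (17)."""
    Phi = np.column_stack([phi(y) for y in Y])                  # p x n, eq. (11)
    Psi = np.column_stack([np.concatenate([psi(x), psi(x) * u])
                           for x, u in zip(X, U)])              # 2q x n, eq. (12)
    K = Phi @ np.linalg.pinv(Psi)                               # eq. (13)
    q = Psi.shape[0] // 2
    A, B = K[:, :q], K[:, q:]                                   # eq. (14)

    def lie_V(c, x, u):
        # Estimate of L~(V)(x) for V = <c, phi>, eq. (17)
        return (c @ (A @ psi(x) - phi(x)) + u * (c @ (B @ psi(x)))) / tau

    return A, B, lie_V
```

With `c` fixed by the chosen CLF candidate, `lie_V` is affine in the input `u`, exactly as noted for (18).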
Moreover, since \(\phi\) and \(\psi\) are dictionaries of polynomial lifting functions, it follows that all terms in \(\tilde{\mathcal{L}}(V)\) are polynomials in the state variable \(x\). Thus, we now follow a similar procedure to §II-B by considering \(u\) as a polynomial function of the state variable \(x\) and relaxing the inequalities to SOS conditions. In detail, we consider a third dictionary of polynomials in \(x\), denoted \[\chi=\begin{bmatrix}\chi_{1},\ldots,\chi_{r}\end{bmatrix}^{T}, \tag{19}\] and consider \(u=C\chi\) for some coefficient matrix \(C\in\mathbb{R}^{m\times r}\). Hence, the data-driven SOS relaxation for synthesizing control laws on the semialgebraic set \(\mathcal{D}\) as in (4) is given by \[\begin{split}(a)&-\tau^{-1}\langle c,A\psi-\phi\rangle- \tau^{-1}\sum_{i=1}^{m}\langle c,B_{i}\psi\rangle[C\chi]_{i}\\ &+\sum_{j=1}^{J}a_{j}(x)\sigma_{j}(x)+\sum_{\ell=1}^{L}b_{\ell}(x) \rho_{\ell}(x)\text{ is SOS},\\ (b)&\sigma_{j}(x)\text{ is SOS},\end{split} \tag{20}\] which comes from replacing the exact Lie derivative, \(\mathcal{L}V\) in (5), with that approximated from data, \(\tilde{\mathcal{L}}(V)\), in (18). The goal is then to determine the coefficients of the matrix \(C\) appropriately to derive a control law from data. Since there could be many choices for \(C\), we propose the following convex optimization problem: \[\min_{C\in\mathbb{R}^{m\times r}}\ \{h(C)\ :\ \text{(20) holds}\}, \tag{21}\] where \(h\) is a convex objective used to select among feasible controllers; since the constraints (20) are affine in \(C\), (21) is a semidefinite program. One convenient choice, used in the numerical example below, is \(h(C)=\|C\|_{1}\), which promotes sparsity in the synthesized controller.
## IV Numerical Example To demonstrate the method, we consider the stabilization of an inverted pendulum on a cart. The pendulum state \((\theta,\dot{\theta})\), with \(\theta=0\) the upright position, enters the SOS program through the lifted coordinates \((x_{1},x_{2},x_{3})=(\cos\theta,\sin\theta,\dot{\theta})\). To implement the synthesis of a control law \(u(x_{1},x_{2},x_{3})\) as an optimization problem, we use the state space \[\mathcal{D}=\{(x_{1},x_{2},x_{3})\in\mathbb{R}^{3}|\;\eta^{2}-x_{2}^{2}\geq 0,\;1 -x_{1}^{2}-x_{2}^{2}=0\} \tag{27}\] for some parameter \(\eta\in(0,1)\), which must be built into our SOS programs as in (5) and (20). The equality condition \(1-x_{1}^{2}-x_{2}^{2}=0\) encodes the trigonometric identity satisfied by the lifted coordinates, while the inequality condition \(\eta^{2}-x_{2}^{2}\geq 0\) excludes from the domain a strip on the cylinder where \(x_{1}=0\), corresponding to \(\theta=\frac{\pi}{2},\frac{3\pi}{2}\). The reason this strip is excluded is that the control input to the system, given by \(\cos(\theta)u=x_{1}u\), vanishes there, and the Lie derivative is not necessarily negative for all \(x_{2}=\pm 1\) and \(x_{3}\in\mathbb{R}\). In practice, we take \(\eta\) as close to 1 as possible to encapsulate as much of the full state space as we can. Numerical results presented in this demonstration use \(\eta^{2}=0.95\), and we promote sparsity in the controller using the optimization objective \(h(C)=\|C\|_{1}\), as presented previously. Experiments reveal that increasing \(\alpha\) causes a proportional increase in the exponential rate of convergence of the pendulum arm to the upright position \((\theta,\dot{\theta})=(0,0)\). As an example, Figure 1 presents controlled solutions with \(\alpha=100\). The initial condition is taken to be close to the hanging-down position, so that the movement of the cart is forced to swing the pendulum up into the upright position.
The synthesized state-dependent control law is given by \[u_{*}(x_{1},x_{2},x_{3})=212.5755x_{1}x_{2}+54.1296x_{1}x_{3}, \tag{28}\] which, in terms of the original state variables \((\theta,\dot{\theta})\), is given by \[u_{*}(\theta,\dot{\theta})=212.5755\cos(\theta)\sin(\theta)+54.1296\cos( \theta)\dot{\theta}. \tag{29}\] We see in Figure 1 that this control law leads to a quick jerk of the cart to the left and then to the right to swing the pendulum up, after which the cart ceases to move. Notice further that the controller (29) vanishes at both equilibria \((\theta,\dot{\theta})=(0,0)\) and \((\theta,\dot{\theta})=(\pi,0)\), meaning that the control is inactive at the hanging-down (\(\theta=\pi\)) position. To circumvent this issue, a discontinuous control would be needed [25, 26, 27], which is not explored herein. Nonetheless, this is of little issue from a practical point of view, as one may apply a small random initial disturbance to the cart to move the system away from the hanging-down state and then activate the control law to stabilize the system. ## V Conclusion In this letter we have presented a simple, flexible, and efficient method for synthesizing control laws directly from data. We emphasize that our method does not require the intermediary step of identifying the nonlinear system dynamics or performing parameter estimation. Instead, we use the EDMD framework to produce an approximation of the Koopman operator on a dictionary of polynomial lifting functions. The result is a linear description of the dynamics in the lifted coordinates which can then be integrated with well-established CLF methods and convex SOS optimization routines. We therefore provide a two-step data-driven method that can be applied broadly to problems in control. Although we have explored the application to continuous-time data in this letter, the results can equivalently be applied to discrete-time processes by simply fixing \(\tau=1\). Moreover, the EDMD framework will equally approximate the Koopman operator for general stochastic processes with no modifications to the method and similar convergence guarantees [22]. This means that our results could also be applied to data generated by stochastic systems, with the only minor difference being that the Lie derivative condition \(\mathcal{L}V\leq 0\) is now considered in expectation [28]. The ability to apply these methods to stochastic systems could be promising for overcoming the inevitable noise that is produced when gathering real-world data. A report on the applicability and noise-robustness of the method on laboratory data will be left to a follow-up investigation. Fig. 1: Controlled solutions of the inverted pendulum on a cart model (22). Top: The controlled arm angle \(\theta\) and its derivative \(\dot{\theta}\), forced to converge to the unstable upright position at \((\theta,\dot{\theta})=(0,0)\). Bottom: The monotonic decrease of the Lyapunov function (23) (scaled by \(1/10\) for interpretability) under the controller \(u(\theta,\dot{\theta})\) given by (29).
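As a closing sanity check, one can simulate the law (29) in closed loop. Since the cart-pendulum model (22) is not reproduced here, the sketch below assumes the normalized pendulum dynamics \(\ddot{\theta}=\sin\theta-\cos(\theta)\,u\) (unit mass, length, and gravity), which shares the \(\cos(\theta)\) input channel discussed above but is otherwise an assumption, not the model used in the letter.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A hedged closed-loop check of the synthesized law (29) under ASSUMED
# normalized dynamics: theta'' = sin(theta) - cos(theta) * u.

def u_star(theta, theta_dot):
    return (212.5755 * np.cos(theta) * np.sin(theta)
            + 54.1296 * np.cos(theta) * theta_dot)   # eq. (29)

def rhs(t, s):
    theta, theta_dot = s
    return [theta_dot,
            np.sin(theta) - np.cos(theta) * u_star(theta, theta_dot)]

# Start close to the hanging-down position, as in Figure 1.
sol = solve_ivp(rhs, (0.0, 10.0), [np.pi - 0.1, 0.0], rtol=1e-8)
print(sol.y[0, -1])  # should approach 0 (upright) under the assumed model
```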
2306.07909
Observational Signatures of Circumbinary Discs I: Kinematics
We present five morphological and kinematic criteria to aid in asserting the binary nature of a protoplanetary disc, based on 3D hydrodynamical simulations of circumbinary discs post-processed with Monte Carlo radiative transfer. We find that circumbinary discs may be identified by i) a central cavity, ii) spiral arms both in and outside of their central cavities, iii) non-localised perturbations in their iso-velocity curves, iv) asymmetry between the lines of maximum speed of the blue and red-shifted wings and v) asymmetry between the area of the blue and red-shifted wings. We provide quantitative metrics for the last two criteria that can be used, in conjunction with the morphological criteria, to signal whether a protoplanetary disc is likely to be a circumbinary disc.
Josh Calcino, Daniel J. Price, Christophe Pinte, Himanshi Garg, Brodie J. Norfolk, Valentin Christiaens, Hui Li, Richard Teague
2023-06-13T17:01:58Z
http://arxiv.org/abs/2306.07909v1
# Observational Signatures of Circumbinary Discs -- I: Kinematics ###### Abstract We present five morphological and kinematic criteria to aid in asserting the binary nature of a protoplanetary disc, based on 3D hydrodynamical simulations of circumbinary discs post-processed with Monte Carlo radiative transfer. We find that circumbinary discs may be identified by i) a central cavity, ii) spiral arms both in and outside of their central cavities, iii) non-localised perturbations in their iso-velocity curves, iv) asymmetry between the lines of maximum speed of the blue and red-shifted wings and v) asymmetry between the area of the blue and red-shifted wings. We provide quantitative metrics for the last two criteria that can be used, in conjunction with the morphological criteria, to signal whether a protoplanetary disc is likely to be a circumbinary disc. keywords: protoplanetary discs -- circumstellar matter -- methods: numerical -- hydrodynamics ## 1 Introduction Recent observations of protoplanetary discs, from optical and near-infrared to centimetre wavelengths, have revealed an abundance of substructures such as spiral arms, rings, gaps, and cavities (e.g. Dong et al., 2018; Long et al., 2018; Andrews et al., 2018; Norfolk et al., 2021; van der Marel et al., 2021). Discerning a single origin for these structures has remained a challenge. More often than not they have been attributed to the interaction of companions, of planetary or stellar mass, with the gas and dust content in the disc (e.g. see Dong et al., 2015; Calcino et al., 2019; Baruteau et al., 2019; Calcino et al., 2020; Veronesi et al., 2020). Point-like features seen with direct imaging provide the most compelling evidence of companions. In all but a few cases (e.g. PDS 70 Keppler et al. (2018), HD 142527 Biller et al. (2012); Lacour et al. (2016), and HD 100453 Benisty et al. (2017); Rosotti et al. (2020); Gonzalez et al. (2020)) convincing evidence is lacking. Emission from the central star and scattering from the protoplanetary disc make such detections difficult. One may try to infer the existence of perturbing bodies in protoplanetary discs by matching scattered light and/or continuum observations of these discs (e.g. Dipierro et al., 2014; Dong et al., 2015; Dipierro et al., 2018; Calcino et al., 2019; Baruteau et al., 2019; Calcino et al., 2020; Veronesi et al., 2020), but models are degenerate. For example, Calcino et al. (2019) explained the substructures of IRS 48 using a stellar mass companion, while van der Marel et al. (2013) and Zhu & Stone (2014) argued for a planet. Hence competing models cannot always be ruled out. A more robust way is to use kinematics (e.g. Pinte et al., 2018; Teague et al., 2018; Pinte et al., 2019, 2020; Calcino et al., 2022). The idea is to detect planets influencing the surrounding disc material by observing rotational line transitions from species such as carbon monoxide (Perez et al., 2015; Perez et al., 2018; Teague et al., 2018). However, these methods to date have focused on inferring planetary mass companions, and not much can be said about more massive bodies. Companions of stellar mass produce large perturbations on the disc and open wide, deep cavities which can introduce fast radial flows as the outer disc material accretes onto the binary (Casassus et al., 2013; Rosenfeld et al., 2014). If inclined with respect to the outer disc, they can also produce warps and disc tearing (Facchini et al., 2013), which will leave peculiar signatures in the kinematics.
However, these features can also be produced by planetary mass companions (e.g. see Nealon et al., 2018; Zhu, 2019), so they are not necessarily a sign-post for circumbinary discs. Price et al. (2018) and Calcino et al. (2019) computed circumbinary disc signatures in HD 142527 and IRS 48, respectively, and showed that large perturbations are introduced, particularly inside the cavity. Price et al. (2018) showed that the fast radial flows seen in HD 142527 (Casassus et al., 2015) naturally occur due to the observed stellar companion. Calcino et al. (2019) showed that asymmetries in the velocity map, as well as non-localised deviations in isovelocity curves of individual channel maps, can hint at the circumbinary nature of a disc. In this paper we expand on these findings by exploring observational signatures of circumbinary discs around intermediate mass ratio binary stars. Our aim is to derive kinematic criteria that signify the circumbinary nature of a disc in a quantitative fashion. We leave the application of these criteria to observations to the second paper in this series. The structure of this paper is as follows: We describe our modelling and synthetic observation methods in Section 2, and describe our resulting hydro simulations in Section 3. We introduce morphological and kinematic signatures robustly seen in the synthetic observations of our circumbinary discs in Sections 4 and 5. We derive and test kinematic criteria that quantify asymmetries in the velocity maps in Section 6. We discuss the applicability and caveats of our criteria in Section 7, and summarise our results in Section 8. ## 2 Methods ### SPH Simulations We simulated 11 circumbinary and circumstellar discs using the 3D smoothed particle hydrodynamics (SPH) code Phantom (Price et al., 2018). We did not include any dust component in our simulations since we are primarily concerned with the distribution and dynamics of the gas. In all simulations we used \(N_{\rm part}=5\times 10^{6}\) SPH particles to model the gas disc. The central star and companion were modelled as sink particles (Bate et al., 1995), which experience their mutual gravitational attraction, as well as that from the gas disc. Gas particles are free to accrete onto both sink particles provided they are within a specified accretion radius and are gravitationally bound. Owing to the large parameter space of companion orbital parameters, we restricted our analysis to only a few orbital configurations. We consider companions on co-planar and inclined, as well as circular and eccentric, orbits. We kept the parameters of the gas disc fixed where feasible. Specific orbital and disc parameters used in this study are listed in Table 1, along with the reference names of each simulation. The gas discs in our simulations are initialised such that the surface density \(\Sigma(R)\propto R^{-p}\) for \(R_{\rm in}<R<R_{\rm out}\), where we set \(p=1\). The temperature profile of the disc is locally isothermal with \(T(R)\propto R^{-2q_{T}}\), with \(q_{T}=0.25\). The aspect ratio of the disc is set to \(H/R_{\rm eff}\) at \(R_{\rm eff}\), with specific values listed in Table 1. The central sink particle is set to have a mass of 2 M\({}_{\odot}\), while the companion has a mass ratio of \(q=M_{\rm C}/M_{\rm P}\), where the \(q\) values for each simulation are listed in Table 1.
We use the SPH artificial viscosity \(\alpha_{AV}\) to produce a Shakura & Sunyaev (1973) alpha viscosity according to (Lodato & Price, 2010) \[\alpha_{SS}\approx\frac{\alpha_{AV}}{10}\frac{\langle h\rangle}{H}, \tag{1}\] where \(\langle h\rangle\) is the mean smoothing length around a cylindrical annulus and \(H\) is the disc scale height. This prescription means that \(\alpha_{SS}\) is a function of position since (Lodato & Pringle, 2007) \[H=\frac{c_{g}}{\Omega}\propto R^{3/2-q_{T}} \tag{2}\] and \[\langle h\rangle\propto\left(\frac{\Sigma}{H}\right)^{-1/3}\propto R^{(p-q_{T})/ 3+1/2}. \tag{3}\] Our choice of \(p\) and \(q_{T}\) implies that \(\langle h\rangle\,/H\propto R^{-1/2}\), and hence \(\alpha_{SS}\) increases with decreasing radius. Our quoted values of \(\alpha_{SS}\) are an average, which is obtained by finding the binned average of \(\langle h\rangle\,/H\) as a function of \(R\) and averaging this over all bins. We use a value of \(\alpha_{\rm SS}=5\times 10^{-3}\) for all of our simulations except for the model containing a circumbinary over-dense lump (model OD, see Table 1), where \(\alpha_{\rm SS}=1.5\times 10^{-3}\). As the discs evolve the surface density decreases with time. However, this does not result in a substantial change in \(\alpha_{SS}\) even in our longest duration simulation, No Over-density (NOD), which was evolved for 1,100 orbits of the companion. Our \(\alpha_{SS}\) diverges most significantly from the initial value inside the cavity, where \(\alpha_{SS}\) can reach \(\sim 0.1\). Despite such a large viscosity, the radial velocity induced by accretion is still much smaller than the radial velocities induced by the binary companion. Hence the high viscosity does not have a significant effect on the interpretation of the kinematics of our circumbinary discs. We discuss this further in Section 7.5. In both simulations with co-planar companions on circular orbits (models over-density, OD, and no over-density, NOD), an over-dense feature initially forms, orbiting the cavity edge at the Keplerian frequency. It was shown in Ragusa et al. (2020) that this over-density is generated during a phase of rapid growth in disc eccentricity. This eccentricity growth is thought to arise due to either the \((m,l)=(1,1)\) outer circular Lindblad resonance or the \((m,l)=(3,2)\) eccentric Lindblad resonance, which are located at \[R_{\rm L}=\left(\frac{m\pm 1}{l}\right)^{2/3}a_{\rm bin}\approx 1.59\ a_{\rm bin}, \tag{4}\] where \(a_{\rm bin}\) is the binary orbital separation. Only the outer resonances (i.e. the \(m+1\) resonances) lead to growth in eccentricity, while the inner ones (\(m-1\)) damp it. Both simulations are initialised with \(R_{\rm in}\) close to this location, and hence they both develop an over-density. The feature in model NOD persists robustly for roughly 300 orbits of the companion, while in model OD the feature is seen well beyond 800 orbits. The reason why this feature dissipates in one model much earlier than the other is not fully understood (Ragusa et al., 2020), but is likely related to a combination of the SPH resolution at the cavity edge and the viscosity. Since model OD is initialised with a disc extending to only 120 au (compared to 400 au in NOD), a higher resolution (and hence lower viscosity) is maintained. We show model OD at an earlier evolutionary time than model NOD since we are interested in how the over-dense feature changes the kinematic profile of the disc.
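As a quick check of eq. (4), the following short Python snippet evaluates the two resonance locations; both reduce to the same numerical factor.

```python
# Resonance locations from eq. (4): R_L = ((m +/- 1)/l)^(2/3) * a_bin.
# Only the outer (m + 1) resonances drive eccentricity growth.

def lindblad_radius(m, l, a_bin, outer=True):
    return ((m + 1 if outer else m - 1) / l) ** (2.0 / 3.0) * a_bin

for (m, l) in [(1, 1), (3, 2)]:
    print(f"(m,l)=({m},{l}): R_L = {lindblad_radius(m, l, a_bin=1.0):.2f} a_bin")
# Both give R_L ~ 1.59 a_bin, matching the quoted value.
```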
Of the three inclined models used in this study, two (models light inclined companion, LIC, and heavy inclined companion, HIC) are initialised with companions that are not in equilibrium with the disc. As such, the binary in these models strongly torques the disc, which results in alignment of the disc and the binary. Previous literature suggests that such misalignments can be maintained as the disc undergoes oscillations around a stable configuration, and may persist for thousands of binary orbits (Martin & Lubow, 2017; Smallwood et al., 2019; Rabago et al., 2023). Thus the inclusion of misaligned, unstable binaries is justified, as such objects are expected to exist (Bate, 2018; Wurster et al., 2019). Evolving these simulations for a similar duration as, for example, models OD and NOD would lead to the discs becoming significantly misaligned from their initial orbits. We only evolve these simulations for 20 binary orbits so that their discs remain close to their initial inclination. This is long enough for a quasi-steady state to develop for the dynamic structure in and near the cavity, but short enough that the disc inclination does not change substantially. For all of our circumbinary disc models the orbital elements of the binary change by less than 1% compared with the initial values listed in Table 1. For the planet (P) simulation, the semi-major axis decreased to 79.2 au with negligible change in eccentricity. For the multiple planets simulation, the semi-major axes were reduced to \([74,128]\) au and the eccentricities increased to \([0.026,0.094]\). The eccentric planet (EP) simulation had an increase in semi-major axis to 82 au and a decrease in eccentricity to 0.33. ### Radiative Transfer Modelling and Synthetic Observations We generated synthetic observations of our SPH simulations using the Monte Carlo radiative transfer code mcfost (Pinte et al., 2006, 2009). Since our simulations did not include the evolution of dust grains, the dust population was assumed to follow the gas in our radiative transfer calculations. The grains were set to have a power-law grain size distribution \(dn/ds\propto s^{-3.5}\) for \(0.03\,\mu\mathrm{m}\leq s\leq 1\,\mathrm{mm}\) with a gas-to-dust ratio of 100. The gas mass from the simulations is adopted. The grains are assumed to be spherical, homogeneous, and composed of astronomical silicate (Weingartner and Draine, 2001). We used \(10^{8}\) Monte Carlo photon packets to compute the temperature and specific intensities at each wavelength. Images were then produced by ray-tracing the computed source function. We arbitrarily assume an inclination of \(i=30^{\circ}\), a position angle PA \(=270^{\circ}\), and a source distance of 100 pc. When generating CO isotopologue observations we assumed that \(T_{\rm gas}=T_{\rm dust}\) and all molecules are at Local Thermodynamical Equilibrium (LTE), along with constant abundance ratios across the disc relative to the gas mass. The ratios adopted were \({}^{12}\)CO/H\({}_{2}=1\times 10^{-4}\), \({}^{13}\)CO/H\({}_{2}=2\times 10^{-6}\), and C\({}^{18}\)O/H\({}_{2}=1\times 10^{-7}\). These abundances are altered by photo-dissociation and CO freeze-out (\(T=20\) K) following Appendix B of Pinte et al. (2018). We assume that the primary star in every simulation has an effective temperature of \(T_{\rm eff}=8000\) K and radius \(R=1.8\) R\({}_{\odot}\), giving a blackbody luminosity of \(\sim 12\) L\({}_{\odot}\), typical for Herbig Ae/Be stars.
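As an illustration of the abundance prescription, here is a toy Python sketch of the freeze-out cut; the photo-dissociation correction of Pinte et al. (2018, Appendix B) is omitted for brevity, and the function and its arguments are illustrative, not part of mcfost.

```python
import numpy as np

# Toy sketch of the CO abundance cuts used in the radiative transfer:
# constant isotopologue ratios, zeroed where CO is frozen out (T < 20 K).

CO_RATIOS = {"12CO": 1e-4, "13CO": 2e-6, "C18O": 1e-7}  # relative to H2

def co_abundance(species, n_h2, T_gas, t_freeze=20.0):
    """Number density of a CO isotopologue given H2 density and gas T."""
    x = CO_RATIOS[species] * np.asarray(n_h2)
    return np.where(np.asarray(T_gas) < t_freeze, 0.0, x)
```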
The stellar properties for each companion are calculated from their final mass (almost identical to those listed in Table 1) from the stellar tracks by Siess et al. (2000), assuming an age of 3.5 Myr. For companions with planetary mass, their luminosity is adopted from Allard et al. (2001). The final images are produced with a pixel resolution of \(0.03^{\prime\prime}\). Individual channels for the CO isotopologues are created at a separation of 50 ms\({}^{-1}\). We mimic the finite spectral resolution by linearly interpolating over 5 channels to produce 101 images between the first and last channel. These images were averaged after weighting by a Hann window function, producing a width and separation of 250 ms\({}^{-1}\). The channels were then smoothed with a Gaussian beam assuming a beam size of \(0.15\times 0.15\) arcseconds.1 We choose this beam size as it is the standard beam size obtained from the MAPS survey (Öberg et al., 2021). Footnote 1: The code used to conduct these calculations, pymcfost, is available at https://github.com/cpinte/pymcfost. When adding white noise to our simulated observations, we assumed specific noise levels of \(F_{\rm noise}=[1,2.5,5,10]\) mJy. These noise levels correspond to an average peak signal-to-noise ratio of approximately SNR \(=[170,70,35,18]\) for the CO (3-2) line emission in a single channel, across all of the models. The noise levels we assume are readily achievable with a few hours of integration on source; however, the signal-to-noise ratio of the lowest noise model is quite optimistic given we assume bright and hot central stars that produce brighter CO emission than would be seen around fainter stars. The \(F_{\rm noise}=2.5\) mJy level produces a channel signal-to-noise ratio closer to what has been obtained with previous ALMA observations (e.g. the MAPS sample, Öberg et al., 2021). We leave a more detailed study of the different noise levels, and how they affect the kinematic criteria derived in Section 6.1, to the Appendix. The noise is generated using a random Gaussian with a mean of zero, which we then convolve with a Gaussian beam. The convolved noise is then rescaled such that it has a final RMS of F\({}_{\rm noise}\). The noise is then added to the convolved observations to produce the final synthetic observations. We used the code bettermoments (Teague and Foreman-Mackey, 2018) to generate moment maps of our synthetic observations and ALMA CO observations. We apply noise cuts when generating our moment maps. For the F\({}_{\rm noise}=1\) mJy noise level, we apply a 5 RMS noise cut, while for the other noise levels the cut is 7 RMS. We used the first moment to generate our velocity maps, but also discuss and test other methods in the Appendix. The source distance assumed, along with the general size of our discs in Table 1 and the adopted beam size, imply that our discs are very well resolved. We assumed this to present a best-case scenario of what kinematic signatures are, and will be, possible to observe with current generation interferometers such as ALMA. We did not take into account the image artefacts that can arise due to sparse \(uv\)-coverage. We test how changing the beam size and disc inclination affect our kinematic criteria in the Appendix.
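The noise recipe above can be summarized in a few lines of Python. We read the rescaling step as fixing the RMS of the beam-convolved noise to \(F_{\rm noise}\); the \(0.15^{\prime\prime}\) beam at \(0.03^{\prime\prime}\) per pixel corresponds to a FWHM of 5 pixels. The function below is an illustrative sketch, not the pipeline actually used.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Sketch of the noise recipe: white Gaussian noise, convolved with the
# beam, rescaled to the target RMS, then added to the channel map.

def add_channel_noise(channel, f_noise, beam_fwhm_pix=5.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.standard_normal(channel.shape)
    noise = gaussian_filter(noise, sigma=beam_fwhm_pix / 2.355)  # FWHM -> sigma
    noise *= f_noise / noise.std()        # rescale to the target noise level
    return channel + noise
```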
Since the resolution of an SPH simulation is related to the mass of the gas at a specific location, some regions inside the cavity are less resolved than others. In Appendix D we show that decreasing the SPH particle number does not significantly change our kinematic criteria. \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline Ref. & \(q\) & \(a\) (au) & \(e\) & \(i\) & \(\omega\) & \(M_{\rm disc}\) (M\({}_{\odot}\)) & \(H/R_{\rm eff}\) & \(R_{\rm eff}\) (au) & \(R_{\rm in}\) (au) & \(R_{\rm out}\) (au) & \(\alpha_{SS}\) & \(N_{\rm orb}\) \\ \hline No Companion (NC) & - & - & - & - & - & 0.020 & 0.066 & 100 & 1 & 400 & \(5\times 10^{-3}\) & 20 \\ Planet (P) & \(2.5\times 10^{-3}\) & 80 & 0.0 & 0.0 & 0.0 & 0.010 & 0.066 & 100 & 10 & 400 & \(5\times 10^{-3}\) & 60 \\ Multiple Planets (MP) & \([2.5,1.25]\times 10^{-3}\) & \([75.6,130]\) & 0.0 & 0.0 & 0.0 & 0.010 & 0.066 & 100 & 100 & 400 & \(5\times 10^{-3}\) & 60 \\ Eccentric Planet (EP) & \(2.5\times 10^{-3}\) & 80 & 0.4 & 0.0 & 0.0 & 0.010 & 0.066 & 100 & 100 & 400 & \(5\times 10^{-3}\) & 60 \\ No Over-density (NOD) & 0.25 & 40 & 0.0 & 0 & 0 & 0.010 & 0.066 & 100 & 63 & 400 & \(5\times 10^{-3}\) & 1100 \\ Over-density (OD) & 0.2 & 30 & 0.0 & 0 & 0 & 0.005 & 0.05 & 45 & 45 & 120 & \(1.5\times 10^{-3}\) & 500 \\ Eccentric Companion (EC) & 0.1 & 40 & 0.4 & 0 & 0 & 0.010 & 0.066 & 100 & 90 & 400 & \(5\times 10^{-3}\) & 80 \\ Light Inclined Companion (LIC) & 0.15 & 40 & 0.5 & 30 & 0 & 0.010 & 0.066 & 100 & 90 & 400 & \(5\times 10^{-3}\) & 20 \\ Heavy Inclined Companion (HIC) & 0.25 & 40 & 0.5 & 30 & 0 & 0.010 & 0.066 & 100 & 90 & 400 & \(5\times 10^{-3}\) & 20 \\ Polar Companion (PC) & 0.2 & 40 & 0.5 & 90 & 0 & 0.010 & 0.066 & 100 & 90 & 400 & \(5\times 10^{-3}\) & 60 \\ Gravitationally Unstable (GI) & - & - & - & - & - & 0.75 & 0.05 & 100 & 10 & 400 & - & 30 \\ \hline \end{tabular} \end{table} Table 1: A summary of the initial conditions of the models presented in this paper. Note that model OD is taken from Calcino et al. (2019), but the disc parameters have been scaled. The duration of the simulations is shown in the final column and is measured in the number of orbits of the companion. For models NC and GI the number of orbits is defined at \(R_{\rm out}\), while for the multiple planets simulation it is the number of orbits of the outer planet. The main reason for this is that the less resolved portions of the disc do not produce a significant amount of CO flux compared with the higher density and better resolved portions. Furthermore, since we include the effects of photo-dissociation in our radiative transfer calculations, the ratio of CO in the low density regions is much lower than the prescribed ratios listed above, further reducing the observed CO flux. The addition of artificial noise to our simulated channel maps also ensures that no measurable level of flux is coming from the unresolved portions of the disc. This is evident in the velocity maps of Figure 10, where there is a lack of signal inside the cavity of most simulations. ## 3 Results ### Hydrodynamical Models Figure 1 shows the surface density and velocity components for models OD, NOD, and EC, while Figure 3 shows the same for models LIC, HIC, and PC. The velocity components are the velocity in the radial direction, \(v_{r}\), the deviation from Keplerian rotation assuming a single point mass at the binary centre of mass, \(\Delta V_{K}\), and the velocity in the vertical direction, \(v_{z}\). All velocity components are measured from a thin slice about the mid-plane of the disc model.
Keplerian velocity is computed using \[v_{K}=\left(\frac{G(M_{P}+M_{C})}{r_{\rm CM}}\right)^{1/2}, \tag{5}\] where \(r_{\rm CM}\) is the radial location of the gas parcel with respect to the binary centre of mass. #### 3.1.1 Co-planar models Starting with the co-planar models in Figure 1, we observe the presence of an over-dense feature in model OD, while one is lacking in model NOD. Neglecting the presence of the over-dense feature for a moment, the morphology is similar between both models. Both discs are eccentric at the cavity edge, a feature which is seen in other studies of low eccentricity, co-planar binaries (Papaloizou et al., 2001; Ragusa et al., 2017; Hirsh et al., 2020; Ragusa et al., 2020). Model EC (eccentric companion) contains a co-planar companion with a modest eccentricity (\(e=0.4\)) that is lower in mass than the companions in the other two models. Non-circular motion of the gas is apparent in the radial and azimuthal velocity components. These velocity profiles are consistent with our expectations of gas particles on an eccentric orbit; their azimuthal velocity component reaches a maximum at the pericentre of the disc, while it is at a minimum at the apocentre. The gas particles also have a large radial component to their velocity along the cavity edge due to their eccentric orbit. The presence of the over-dense feature changes both the density and velocity structure of the disc. Spiral arms emanating off the over-dense feature maintain a relatively high pitch angle as they propagate radially outwards. The over-dense feature also increases the accretion rate onto the sink particles in the cavity, as shown in previous studies (Farris et al., 2014; Miranda et al., 2017). The spirals are clearly visible in the radial and azimuthal velocity components. The radial component arises owing to the outward radial propagation of the spiral density waves (Rafikov, 2002; Bollati et al., 2021). To better explore these features, in Figure 4 we show multiple timesteps of model OD. Here we can clearly see the outwards radial propagation of the spiral density waves. Another feature of interest is the radially increasing deviation in azimuthal velocity seen across the over-dense feature. A change in velocity on the order of 500 ms\({}^{-1}\) occurs between the start and end of the over-dense feature in the radial direction. Although less obvious, this is also evident in the other timesteps. At first glance one might assume this radial change in \(v_{\phi}\) across the over-density is due to the gas pressure support. The change in velocity arising due to the gas pressure support can be derived from the Navier-Stokes equation assuming \(v_{r}\ll v_{\phi}\) and that the gas is in a circular orbit (e.g. Pringle, 1981) \[v_{\phi}^{2}-v_{K}^{2}=\frac{c_{s}^{2}r}{\rho}\frac{\partial\rho}{\partial r}, \tag{6}\] where \(c_{s}\) is the sound speed. If we take a slice along \(x=0\) in the left panel of Figure 4, the change in velocity owing to the gas pressure support is not large enough to explain the observed change in \(v_{\phi}\). Since the only other force present in our simulations is gravity, the change in the velocity gradient must arise from the over-dense feature interacting with the time-varying gravitational potential. The cavity in all three circumbinary models is depleted by a factor of at least \(10^{4}\), barring the occasional accretion stream entering the cavity which feeds the primary and secondary sink particles.
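To quantify the pressure-support argument: for our \(p=1\) and \(q_{T}=0.25\), the midplane density scales as \(\rho\propto\Sigma/H\propto R^{-2.25}\), so eq. (6) with \(c_{s}=(H/R)\,v_{K}\) gives \(v_{\phi}/v_{K}=\sqrt{1-2.25\,(H/R)^{2}}\). A two-line Python estimate:

```python
import numpy as np

# Pressure-supported rotation from eq. (6) for a power-law midplane
# density rho ~ r^-d, so (c_s^2 r / rho) d(rho)/dr = -d * c_s^2,
# with c_s = (H/r) * v_K. Here d = p + 3/2 - q_T = 2.25.

def v_phi_over_v_k(h_over_r, d=2.25):
    return np.sqrt(1.0 - d * h_over_r**2)

print(1.0 - v_phi_over_v_k(h_over_r=0.066))  # ~0.005, i.e. ~0.5% of v_K
```

With \(H/R=0.066\) this is only a \(\sim 0.5\) per cent correction to \(v_{K}\), far smaller than the \(\sim 500\) ms\({}^{-1}\) change seen across the over-density, supporting the gravitational interpretation above.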
This depletion of the cavity is consistent with the density drops found in many transitional discs (van der Marel et al., 2015; Garg et al., 2021). In comparison, the co-planar planet models shown in Figure 2 mostly show smaller velocity perturbations than the co-planar circumbinary models (note the change in the scale of the colourbar). Gas depletion co-located with the planets is much lower than in the circumbinary models. The eccentric planet (EP) model shows larger perturbations than the other two planet models due to the eccentricity of the planet, which causes eccentricity in the gas. Compared with the circumbinary models, model EP has a much lower gas depletion inside of the cavity.

#### 3.1.2 Inclined models

Figure 3 shows the surface density and velocity components for our inclined models. Models LIC (light inclined companion) and HIC (heavy inclined companion) display prominent spiral structure both inside and outside of the cavity. The spirals inside the cavity are excited by a combination of Lindblad resonances and accretion streams. These two models are similar to the model presented in Poblete et al. (2020) which reproduced the spiral arms inside AB Aurigae (Tang et al., 2017). We refer the interested reader to Sections 3.1 and 3.2 of Poblete et al. (2020) for a more complete description of the time evolution of the spirals. The spiral arms outside of the cavity are caused by a low amplitude over-dense feature orbiting the cavity, similar to model OD. The velocity in the \(z\)-direction is non-zero due to the binary torque on the disc creating a warp. We do not present simulated observations of model LIC due to the similarities with model HIC. Model PC (polar companion) contains a companion on a polar orbit. This model has the smallest disc cavity radius of all the models presented, which is in line with both theoretical and numerical studies on eccentric and inclined binaries (Miranda and Lai, 2015; Hirsh et al., 2020). We can also see that this particular orbital configuration is not as efficient at clearing gas inside the cavity, particularly compared with the co-planar models.

## 4 Morphological Signatures

Figure 5 shows our simulated CO emission for most of our circumbinary disc models (excluding model LIC), while Figure 6 shows CO emission for our no companion and planet models. The columns, from left to right, show the integrated density from the simulation, and the CO (3-2), \({}^{13}\)CO (3-2), and C\({}^{18}\)O (3-2) integrated emission.

### The Cavity

The appearance of a cavity in CO isotopologues depends sensitively on the gas mass of the disc, the CO isotopologue abundances, and the temperature profile of the disc. It also depends on the nature of the binary. Inspecting the simulated CO observations in Figure 5, CO emission ranges from optically thin to optically thick. In general, the models with a co-planar companion are more efficient at clearing material in the cavity, and hence have a more optically thin cavity. Inclined models tend to allow more material into the cavity, which in the case of model HIC leads to no cavity at all in \({}^{12}\)CO. In all models the cavity is far more prominent in less abundant CO isotopologues, with C\({}^{18}\)O most faithfully tracing the gas surface density. We also note that the cavity size increases as the CO isotopologue abundance decreases. When the cavity is eccentric and the main source of illumination is offset from the centre of the ellipse (as it is in a Keplerian orbit), the cavity wall is not uniformly illuminated.
The cavity edge closest to the source of illumination is hotter than more distant regions, producing a temperature difference which can manifest as a brightness asymmetry. For example, in the over-density model the brightest region of the cavity edge is not the over-density, but the region closer to the source of illumination. In comparison with the planet models in Figure 6, the CO emission in the cavity is much lower in the circumbinary disc models. A cavity is present in the eccentric planet (EP) model; however, significantly more gas is present inside the cavity than in the circumbinary disc models, causing only the less abundant C\({}^{18}\)O emission to display a cavity. With this comparison we can confidently state that circumbinary discs will contain a cavity depleted in either CO or \({}^{13}\)CO emission.

### Spirals Inside The Cavity

Spiral structure is observed across the isotopologues inside the cavity, and is composed of spiral density waves excited by Lindblad resonances as well as accretion streams. The morphology of these spirals sensitively depends on the binary orbital parameters relative to the location of the disc.

Figure 1: The surface density (_left panel_), radial velocity (_second from left_), deviation from Keplerian rotation (_second from right_), and vertical velocity (_right panel_) for a selection of our co-planar models listed in Table 1. The white points in the left panel show the position and accretion radius of the sink particles. Both model OD and NOD have highly eccentric discs, which is seen in the velocity maps. In particular, gas motion is super-Keplerian at the pericentre of the eccentric disc, and sub-Keplerian at the apocentre. The presence of an over-dense feature in model OD leads to the generation of spiral structure in the gas surface density and velocity perturbations, while only minor spiral structure is faintly seen outside of the cavity of model NOD. Thus we can distinguish between the spirals induced directly by the binary and spirals generated by the over-dense feature. Model EC also displays prominent spiral structure around the cavity in both surface density and velocity. In all co-planar models the velocity in the \(z\)-direction is negligible, and is dominated by noise (i.e. low particle resolution) in the very inner most regions of the cavity.

Highly inclined and/or eccentric companions tend to result in two prominent inner spirals that appear as an \(m=2\) spiral mode. Our polar companion (PC) model produces a single prominent spiral inside the cavity. Our planet models in Figure 6 produce some faint spiral structure; however, their contrast ratio is much lower than in the circumbinary models.

### Spirals Outside The Cavity

Spirals outside of the cavity, if present, tend to be tightly wound and spatially co-located with the edge of the cavity. They are more evident in surface density in the models containing inclined and/or eccentric companions (i.e. EC, HIC, and PC), but are not clearly visible in \({}^{12}\)CO or \({}^{13}\)CO integrated emission. Scattered light observations appear to be a better method of observing these spirals, which has been done in the circumbinary discs of GG Tau A (Keppler et al., 2020) and HD 142527 (Fukagawa et al., 2006; Rodigas et al., 2014; Avenhaus et al., 2014, 2017). These spirals arise due to Lindblad resonances between the secondary companion and the disc, and dissipate as they propagate radially outward. The appearance of spirals in CO will be sensitive to the temperature profile of the disc.
Our hydrodynamical models are assumed to be locally isothermal, and temperature gradients due to shocks and stellar radiation on the disc surface are not taken into account. These two effects could enhance the scale height at the location of the spirals, allowing them to intercept more stellar radiation than in our radiative transfer models and enhancing their visibility in integrated emission. Additional spiral structure outside of the cavity is seen in model OD that is a result of the orbiting over-dense feature. Contrary to spirals generated by the binary, these spiral structures are less tightly wound and emanate a substantial distance from the cavity. They are clearly seen in \({}^{12}\)CO and \({}^{13}\)CO. Faint spiral-like structures can also be seen in model HIC, which contains a low amplitude over-dense feature orbiting the cavity.

Figure 2: Same as Figure 1 but for our planet models. Perturbations in the velocity field are substantially lower than in the co-planar stellar mass companion models of Figure 1 for all models. The eccentric planet (EP) shows larger perturbations than the other planet models owing to the eccentricity of the companion, which drives eccentric gas motion primarily inside of its orbit.

## 5 Kinematic Signatures

The kinematic profile of the disc is a valuable resource for determining the dynamic processes occurring inside of a protoplanetary disc. In the case of a circumbinary disc, the kinematics of the gas is heavily influenced by the interaction between the primary and companion. Therefore, searching for common kinematic signatures in numerical simulations of circumbinary discs can help shed light on the unknown dynamical processes occurring in observed protoplanetary discs.

### Channel Maps

In Figure 7 we show the channel maps for most models. Starting with model OD (third row of Figure 7), the individual channels are reduced in radial extent compared with other models owing to the smaller outer radius used in this model (see Table 1). The channels are significantly perturbed from the channels of a circular Keplerian disc. For example, the presence of wiggles or 'velocity kinks' (see Calcino et al., 2022, for a definition) across the iso-velocity curves is seen in all the channels shown. These kinks are a result of the perturbations in the velocity profile of the disc created by the over-dense feature, as shown in Figures 1 and 4. We show the robustness of the appearance of wiggles in the channel maps in Figure 8, where we rotate each of the models in the azimuthal direction for the \(v=0.0\) km/s channel. Any model that shows wiggles at a particular viewing angle tends to show wiggles across multiple viewing angles. Thus the appearance of wiggles is robust to the viewing angle but depends on the orbital properties of the companion. We now compare the channels of model OD with model NOD. We begin by noting that azimuth angles \(\phi=0^{\circ}\) and \(\phi=180^{\circ}\) are orientated in such a way that the eccentricity vector of the disc is pointing toward and away from the observer, respectively. At orientations \(\phi=90^{\circ}\) and \(\phi=270^{\circ}\) the eccentricity vector is perpendicular to the line of sight. As the disc is rotated from \(\phi=0^{\circ}\) to \(\phi=90^{\circ}\), the portion of the isovelocity curve on the cavity edge stops pointing toward the projected centre of the disc. We have annotated the perturbation in the iso-velocity curves for model OD in Figure 8.
Given our prescribed position angle when conducting the radiative transfer calculations (see Section 2.2), the \(v=0\) km/s iso-velocity curves for a Keplerian disc would point north to south. However, we can see that in model OD this is not true for some orientations. This effect was described in Calcino et al. (2019) and attributed to the eccentricity of the disc. We can better understand this phenomenon by comparing these channels to the velocity components in Figure 1. When \(\phi=0^{\circ}\), the northern iso-velocity curve is tracing the CO along the surface of the disc most distant from the observer, which in this case traces the \(x>0\) side of the models in Figure 1. The \(v_{r}\) component on this side of the disc is strongly negative: the gas is moving towards the centre of mass, which is in the direction of the observer. Thus the emission close to the edge of the cavity appears spatially located where we would expect to see emission from blue-shifted material, and not material with no motion with respect to the line-of-sight. There is a strong gradient in \(v_{r}\) close to the cavity which is seen in the isovelocity curves as the tilt shown in Figure 8. This is also seen at \(\phi=270^{\circ}\). Perturbations are also seen in models EC, HIC, and PC. Close to the cavity, these perturbations are a result of a combination of spiral arms and radial inflows into the cavity. The spiral arms outside of the cavity also appear in the channel maps, particularly in model HIC. With the exception of model OD, the spirals tend to be spatially located close to the cavity. In Figure 9 we show the \(v_{\rm los}=0.0\) km s\({}^{-1}\) channel for three models (MP, HIC, GI) and each CO isotopologue. Here it is seen that although all models contain kinks, the circumbinary model contains a large kink in proximity to the cavity. Although this could also be observed in a planet hosting model, it is not likely in a gravitationally unstable disc, since the gas density is so high that the CO should remain optically thick.

Figure 3: Same as Figure 1 but for our inclined models. Models LIC and HIC display abundant spiral structure inside and outside of the cavity. The spirals inside the cavity, and close to the cavity edge, arise due to the binary. In both models there is a non-negligible velocity component in the \(z\)-direction due to a slight warp of the disc. Model PC also contains spiral structure, though not as striking as in the non-polar cases. Gas flowing inside the cavity is dragged away from the disc mid-plane by the companion, resulting in a large \(v_{z}\). Substantial radial flows inside the cavity are also present in all models.

### Velocity Maps

We present our velocity maps of most models in Figure 10. In some models (e.g. model no over-density) there is a lack of signal in CO emission which results in the white regions inside the cavity. Comparing to model NC, significant deviations from Keplerian rotation are seen in all circumbinary models, particularly in the regions close to and within the cavity (which has a projected radius of \(\sim 1"\)). Inspecting Figures 1 and 3 (which have the models at the same azimuthal angle as the velocity maps), the majority of the deviation is likely due to the \(\sim\pm 1\) km/s radial velocities inside the cavity. Asymmetries in the maximum velocity on each wing of the velocity map are quite common.
For a disc with Keplerian rotation we expect \(v_{\rm max}\approx-v_{\rm min}\); however, in our circumbinary disc models the difference between the maximum and minimum velocity can be as great as a factor of 2. In models OD and NOD the differences are most noticeable. In both of these cases the high velocity material in the redshifted (right) wing of the velocity map corresponds with the accretion streams being sent to the apastron of the cavity by the binary in Figure 1. We test the robustness of these deviations to viewing angle in Figure 11. When the azimuth angle is \(\phi=90^{\circ}\) or \(\phi=270^{\circ}\), models OD, NOD, EC, and HIC are orientated such that the eccentricity vector of their circumbinary discs is tangential to the observer-disc line of sight. In these orientations the major deviations in the velocity map arise from the azimuthal velocity component of each disc, and are largely the result of each disc being either modestly (OD and NOD) or slightly (EC and HIC) eccentric. In orientations \(\phi=0^{\circ}\) and \(\phi=180^{\circ}\) most of the deviations are due to the radial velocity component.

Figure 4: The time evolution of model OD with the eccentricity, \(e\), of the gas particles included. The over-dense feature orbits once roughly every 7 binary orbits (\(T_{B}\)). As the over-dense feature orbits, outer density spirals trail it and propagate radially. These spirals perturb the velocity profile of the disc, and the separation of successive spirals is roughly determined by the orbital frequency of the over-dense feature. The azimuthal velocity is strongly perturbed in the radial direction through the over-density, where the velocity becomes increasingly sub-Keplerian with increasing radius. This is apparent in every snapshot presented, but appears stronger at apastron. This change in azimuthal velocity is mostly created by the change in eccentricity of the gas particles across the over-density.

## 6 Kinematic Criteria

Our models suggest that it is common for the velocity maps of circumbinary discs to display large deviations away from Keplerian rotation. One way to study these deviations is to subtract a best fit Keplerian disc, warped disc, or flared disc model (e.g. Teague et al., 2018, 2019; Casassus & Perez, 2019; Hall et al., 2020). However, this then makes the deviations model dependent, and several artefacts
The transition from an optically thick to optically thin cavity depends on the orbital parameters of the companion, our assumed initial gas mass, and the temperature profile of the disc. Spiral structure is observed both within and outside of the cavity. ### Formulation Our first criteria is used to quantify significant deviations from axi-symmetric, circular Keplerian rotation as a function of position close to the semi-major axis of the disc \[V_{\rm Ratio}(s)=\frac{v_{\rm max}(s_{+})-|v_{\rm min}(s_{-})|}{v_{\rm max}(s_{ +})+|v_{\rm min}(s_{-})|}, \tag{7}\] where \(v_{\rm max}(s_{+})\) and \(v_{\rm min}(s_{-})\) are the maximum and minimum velocities in \(V(x,y)\) after subtraction of the systemic velocity of the system, and \(s_{+}\) and \(s_{-}\) are paths along the line \(s\) in the positive and negative wings of the velocity map, respectively. Note that \(v_{\rm min}\) is a negative quantity so we take the absolute magnitude and by definition \(V_{\rm Ratio}(s)\) is sought for an unperturbed disc. The path \(s\) is defined as the radial positions close to the semi-major axis where the absolute velocity is the highest (see Figure 12). This makes \(V_{\rm ratio}(s)\) more general than simply obtaining velocities along the semi-major axis, since disc flaring and warps can shift the maximum and minimum velocity a significant amount away from the semi-major axis (for example, as seen in HD 163296 Qi et al., 2015; Isella et al., 2018). The path \(s\) is limited by \(0\leq|s|\leq\min({\rm max}|s_{+}|,{\rm max}|s_{-}|)\) where \(s_{+}\) and \(s_{-}\) are the paths on the blue and redshifted sides of the velocity map, respectively, and the maximum of these quantities is their greatest radial extent from their starting position close to the centre of \(V(x,y)\). We also remove any points in \(s_{+}\) and \(s_{-}\) that are within one semi-major axis of the beam to reducing artificially inflated values of \(V_{\rm Ratio}(s)\) due to beam smearing. Thus we compare the velocity along the red and blueshifted sides of the disc. Small values of \(s\) corresponds to regions close to the centre of the velocity map, while large values corresponds to the edge of the disc. There is no assumption on the centre of the disc or the binary centre of mass in the path \(s\), however a central point can be defined by taking the point in the middle of the last points in \(s_{+}\) and \(s_{-}\). We describe our procedure for obtaining the path \(s\) in Appendix A. We verified that a more complicated technique of measuring the velocity difference of every pixel on Figure 6: As in Figure 5 but for the no companion and planet models. Aside from the eccentric planet model, no models show a cavity in any CO isotoplogue (the central small holes are artificial and created by the central sinks). Figure 7: The isovelocity curves for several of the models in Table 1 for the F\({}_{\rm noise}=1\) mJy noise level. In almost every case kinks or wiggles are seen across all of the channels in the circumbinary models. This is in contrary to planet-hosting discs, where the isovelocity curves are only strongly perturbed in the neighbourhood of the perturbing body (Fintte et al., 2018, 2019). Models OD contains a much larger number of perturbations than model NOD (where very few are present), which are a direct result of the orbiting over-dense feature. each side of the velocity wing does not provide a better metric than our method explained above. 
Our second criterion is related to the area of emission over a specific velocity threshold. In some of our models we find that although \(V_{\rm Ratio}\) may be small, the area enclosed by a specific absolute velocity can vary. To quantify this type of asymmetry we define the ratio \[V_{\rm Area}(v)=\frac{n}{n+N_{\rm beam}}\,\frac{\sum_{ij}\Gamma(\mathrm{V}(x_{i},y_{j}),v)-\sum_{ij}\Gamma(-\mathrm{V}(x_{i},y_{j}),v)}{\sum_{ij}\Gamma(\mathrm{V}(x_{i},y_{j}),v)+\sum_{ij}\Gamma(-\mathrm{V}(x_{i},y_{j}),v)}, \tag{8}\] where \(n\) is the number of pixels satisfying \(|V(x_{i},y_{j})|\geq v\), \(N_{\rm beam}\) is the number of pixels in the beam, and \[\Gamma(\mathrm{V}(x_{i},y_{j}),v)=\begin{cases}1,&\mathrm{if}\ \mathrm{V}(x_{i},y_{j})\geq v\\ 0,&\mathrm{otherwise}.\end{cases} \tag{9}\] Note that the denominator of equation (8) is actually just \(n\), so \(V_{\rm Area}(v)\) can be simplified as \[V_{\rm Area}(v)=\frac{1}{n+N_{\rm beam}}\left[\sum_{ij}\Gamma(\mathrm{V}(x_{i},y_{j}),v)-\sum_{ij}\Gamma(-\mathrm{V}(x_{i},y_{j}),v)\right]. \tag{10}\]

Figure 8: The \(v_{\rm los}=0.0\) km s\({}^{-1}\) channels for the circumbinary disc models at various viewing angles \(\phi\), using the F\({}_{\rm noise}=1\) mJy noise level. The position angle of the disc is not changed between each column and is set such that for a Keplerian disc the isovelocity curves should point straight north and south. The direction of the isovelocity curves for our circumbinary discs is shifted compared to the expected orientation. This is particularly noticeable in the co-planar disc models where the disc eccentricity is high. We have annotated the perturbed iso-velocity curve for model OD with white dashed lines. For a Keplerian disc these lines should point along the North/South direction given the position angle of the disc. Since model OD has an eccentric disc, the iso-velocity curves for \(v_{\rm los}=0.0\) km s\({}^{-1}\) point away from this expected direction, and the columns show this is robust to the viewing angle. They also show that the velocity kinks in the vicinity of the cavity are robustly seen in most models regardless of the viewing angle.

Figure 9: The \(v_{\rm los}=0.0\) km s\({}^{-1}\) channels for the Multiple Planets, Heavy Inclined Companion, and Gravitational Instability models (columns) for each CO isotopologue (rows). Although all models show some degree of kinks, only the circumbinary model shows a kink in proximity of the cavity in CO; however, a depleted inner region does appear in the Multiple Planets model.

Figure 10: The Moment 1 velocity maps for all models (excluding LIC) using the channel maps presented in Figure 7 and the no companion model. The white sections in the centre arise due to a lack of signal inside the cavity of some of our models, caused by the large depletion of gas inside the respective circumbinary disc model. We see that in all circumbinary disc models there are substantial deviations away from Keplerian rotation (top left panel).

Figure 11: The moment 1 velocity maps for a selection of models across different orientations. In every circumbinary disc model a significant deviation from Keplerian rotation is seen in the central regions of the disc compared to model NC in Figure 10.

The range of the velocity is limited to \(0\leq v\leq\max(v_{\max}-\Delta v,-v_{\min}+\Delta v)\), and we sample \(v\) in steps \(\Delta v\) equal to the spectral resolution of the observation.
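A minimal sketch of equations (8)-(10), including the velocity sampling just described, might look as follows. The helper names are ours, and `v_map` is taken to be a moment-1 map with the systemic velocity subtracted and empty pixels set to NaN.

```python
import numpy as np

def v_area(v_map, v, n_beam):
    """Equation (10): beam-weighted area asymmetry at velocity threshold v > 0."""
    red = np.count_nonzero(v_map >= v)    # pixels with V >= v
    blue = np.count_nonzero(-v_map >= v)  # pixels with V <= -v
    n = red + blue                        # pixels with |V| >= v
    return (red - blue) / (n + n_beam) if n > 0 else 0.0

def v_area_curve(v_map, n_beam, dv):
    """Sample V_Area in steps of the spectral resolution dv, over the
    velocity range 0 <= v <= max(v_max - dv, -v_min + dv)."""
    v_top = max(np.nanmax(v_map) - dv, -np.nanmin(v_map) + dv)
    v_grid = np.arange(dv, v_top, dv)
    return v_grid, np.array([v_area(v_map, v, n_beam) for v in v_grid])
```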
The addition of \(\Delta v\) in the reduction of the range of \(v\) ensures we do not sample the spectrally unresolved portions of the very inner disc, which can cause spurious values of \(V_{\rm Area}\). We weight the area ratio by \(n/(n+N_{\rm beam})\) so that unresolved portions of \(V(x,y)\) do not dominate the quantity. For a Keplerian disc \(V_{\rm Area}\) should be close to nought for any value of \(v\). This criterion also measures velocity asymmetries in the disc; however, as opposed to \(V_{\rm Ratio}(s)\), it measures these asymmetries in terms of their spatial distribution.

### Weighting and Variance

Protoplanetary discs come in a wide range of sizes, masses, and environmental conditions. For our criteria to be more robust against these varying conditions, we weight our functions when computing the variance in them. As our results in Section 3.1 show, most perturbations in the kinematics are spatially coincident with the cavity. Hence when computing our criteria, we should down-weight the regions of the disc we do not expect to contain perturbations that arise due to the binary, and up-weight those that do. By doing so, we down-weight the outer edges of the disc, which can be perturbed by outer companions, flybys, or infalling material. We find the weighted variance of the points in \(V_{\rm Ratio}\) by defining \[\sigma_{\rm Ratio}^{2}=\frac{1}{N_{s}}\sum_{i}w(s_{i})\;V_{\rm Ratio}(s_{i})^{2}, \tag{11}\] where \(N_{s}\) is the number of points in \(s\), and \(w(s_{i})\) is a weighting function. For our purposes, we choose a cosine weighting function with \[w_{B}(s_{i})=\begin{cases}1,&\text{if }s_{i}\leq 2\times r_{\rm cavity}\\ \cos^{2}\left(\frac{\pi}{2}\frac{s_{i}-2r_{\rm cavity}}{3r_{\rm cavity}}\right)&\text{if }2\times r_{\rm cavity}<s_{i}\leq 3\times r_{\rm cavity}\\ 0,&\text{if }s_{i}>3\times r_{\rm cavity},\end{cases} \tag{12}\] where \(r_{\rm cavity}\) is the radius of the cavity, which we choose as the peak of the gas surface density.2 A limit of \(3\times r_{\rm cavity}\) is chosen based on the results of Section 3.1. If line emission is not detected up to this radius, we truncate the weighting function at \(r_{\rm eff}\), which is the effective radius of the disc. We obtain \(r_{\rm eff}\) using \(f_{\nu}(r_{\rm eff})=xF_{\nu}\), where \(x=0.9\), \(f_{\nu}\) is the cumulative intensity profile, and \(F_{\nu}=f_{\nu}(\infty)\) (e.g. see Andrews et al., 2018; Long et al., 2019; Long et al., 2022). If the disc does not contain a cavity, then the weighting function \(w_{B}(s)=0\), and thus the variance is also nought. This is desired since, as stated in Section 4.1, we expect all circumbinary discs to contain a cavity.

Footnote 2: Often it is not possible to measure the peak of the gas surface density in observations. In place of this, the peak of the continuum offers a suitable replacement, since dust grains concentrate at gas pressure maxima, which coincide with the peak gas surface density (e.g. Sierra et al., 2019).

Later in this work we also use the flat weighting function \[w_{F}(s_{i})=\begin{cases}1,&\text{if }s_{i}\leq r_{\rm eff},\\ 0,&\text{otherwise},\end{cases} \tag{13}\] for demonstrating the effect of the weighting function \(w_{B}(s_{i})\). We find the weighted variance in \(V_{\rm Area}\) by defining \[\sigma_{\rm Area}^{2}=\frac{1}{n_{v}}\sum_{i}V_{\rm Area}(v_{i})^{2}, \tag{14}\] where \(n_{v}\) is the number of sampled velocities.
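Before describing the weighting applied to the area variance, here is a sketch of the weighting functions of equations (12)-(13) and the ratio variance of equation (11). Again the naming is ours, and `r_cavity` is taken from the peak of the gas surface density (or, in observations, the continuum ring).

```python
import numpy as np

def w_binary(s, r_cavity):
    """Equation (12): cosine taper that up-weights the region near the cavity."""
    s = np.asarray(s, dtype=float)
    w = np.zeros_like(s)
    w[s <= 2.0 * r_cavity] = 1.0
    mid = (s > 2.0 * r_cavity) & (s <= 3.0 * r_cavity)
    w[mid] = np.cos(0.5 * np.pi * (s[mid] - 2.0 * r_cavity) / (3.0 * r_cavity)) ** 2
    return w

def sigma2_ratio(v_ratio_s, s, r_cavity=None):
    """Equation (11): weighted variance of V_Ratio along the path s.

    With r_cavity=None a flat weight (equation 13) is applied, assuming the
    path s has already been truncated at the effective radius r_eff.
    """
    v2 = np.asarray(v_ratio_s, dtype=float) ** 2
    w = np.ones_like(v2) if r_cavity is None else w_binary(np.asarray(s), r_cavity)
    return np.mean(w * v2)
```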
We slightly adjust the way \(V_{\rm Area}(v_{i})\) is computed to apply our weighting. We replace \(\sum_{ij}\Gamma(\mathrm{V}(x_{i},y_{j}),v)-\sum_{ij}\Gamma(-\mathrm{V}(x_{i},y_{j}),v)\) in equation (10) with \(\sum_{ij}\Gamma_{w}(\mathrm{V}(x_{i},y_{j}),v)-\sum_{ij}\Gamma_{w}(-\mathrm{V}(x_{i},y_{j}),v)\), where \[\Gamma_{w}(\mathrm{V}(x_{i},y_{j}),v)=\begin{cases}w(x_{i},y_{j}),&\text{if }\mathrm{V}(x_{i},y_{j})\geq v\\ 0,&\text{otherwise},\end{cases} \tag{15}\] and \[w(x_{i},y_{j})=\begin{cases}1,&\text{if }r_{ij}\leq 2\times r_{\rm cavity}\\ \cos^{2}\left(\frac{\pi}{2}\frac{r_{ij}-2r_{\rm cavity}}{3r_{\rm cavity}}\right)&\text{if }2\times r_{\rm cavity}<r_{ij}\leq 3\times r_{\rm cavity}\\ 0,&\text{if }r_{ij}>3\times r_{\rm cavity},\end{cases} \tag{16}\] where \(r_{ij}=\sqrt{x_{i}^{2}+y_{j}^{2}}\), and \(x_{i}\) and \(y_{j}\) are the deprojected disc coordinates.

Figure 12: A graphical representation of the quantities \(V_{\rm Ratio}(s)\) (left panel) and \(V_{\rm Area}\) (right panel).

In this way asymmetries inside and close to the cavity are weighted more than asymmetries in the outer disc which do not arise due to the inner binary (see Section 7.4). Similar to our weighting procedure on \(\sigma^{2}_{\rm Ratio}\), we also compute the variance measurement \(\sigma^{2}_{\rm Area}\) with both our cosine weighting function and the uniform weighting function. To easily distinguish which weighting function has been used, we write \(w_{B}\sigma^{2}_{\rm Ratio}\) and \(w_{B}\sigma^{2}_{\rm Area}\) when the binary weighting function has been used, and simply \(\sigma^{2}_{\rm Ratio}\) and \(\sigma^{2}_{\rm Area}\) when the flat weighting function has been used.

### Testing the Kinematic Criteria

For this section we show the results for the \(F_{\rm noise}=2.5\) mJy noise level, since the resulting signal-to-noise level is readily achievable with ALMA. We plot \(V_{\rm Ratio}(s)\) in Figure 13 for our disc models, but only include the azimuth angle \(\phi=0^{\circ}\) to avoid a cluttered appearance. We have normalised \(s\) on the \(x\)-axis of the Figure. The position at nought on the \(x\)-axis is close to the centre of the velocity map. Since signal is lacking in this area for many of our models, the path \(s\) can end on the cavity edge; this is most prominent in model NOD, where \(V_{\rm Ratio}(s)\) is no longer defined inside of \(s\lesssim 0.1\). In our circumbinary disc models the cavity radius is \(\lesssim 100\) au, which is roughly one quarter of the total disc radius. In Figure 13 this is where \(V_{\rm Ratio}\) starts to deviate substantially from nought. The main exception is model OD (blue line), which has a smaller outer radius compared with the other models. In planet-hosting discs, perturbations in the velocity field are expected to be mostly sub-sonic (e.g. see Bollati et al., 2021; Calcino et al., 2022). Although planet masses approaching and exceeding the thermal mass produce super-sonic perturbations, it is reasonable to expect that these perturbations increase in amplitude with increasing companion mass (e.g. see Dong et al., 2015). Thus we should also expect that \(V_{\rm Ratio}\) will increase with increasing companion mass.
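Completing the sketch of the weighting scheme, the weighted area variance of equations (14)-(16) can be written as below, reusing the `w_binary` helper from the previous sketch. Here `v_map`, `x`, and `y` are assumed to be flattened, matching one-dimensional arrays of pixel velocities and deprojected coordinates.

```python
import numpy as np

def sigma2_area(v_map, x, y, r_cavity, n_beam, dv):
    """Equations (14)-(16): weighted variance of V_Area over sampled velocities."""
    r = np.sqrt(x**2 + y**2)
    w = w_binary(r, r_cavity)          # equation (16), same taper as equation (12)
    v_top = max(np.nanmax(v_map) - dv, -np.nanmin(v_map) + dv)
    v_grid = np.arange(dv, v_top, dv)
    vals = []
    for v in v_grid:
        red = np.sum(w[v_map >= v])    # weighted Gamma on the redshifted side
        blue = np.sum(w[-v_map >= v])  # weighted Gamma on the blueshifted side
        n = np.count_nonzero(np.abs(v_map) >= v)
        vals.append((red - blue) / (n + n_beam) if n > 0 else 0.0)
    return np.mean(np.square(vals)) if vals else 0.0
```

The thresholds quoted later in this section are then simple comparisons against the outputs of `sigma2_ratio` and `sigma2_area`.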
Assuming subsonic perturbations, the maximum \(V_{\rm Ratio}(s)\) at a particular value of \(s\) for a planet hosting disc should be \[V_{\rm Ratio,\;plan}\sim\frac{(v_{K}+c_{s})-(v_{K}-c_{s})}{(v_{K}+c_{s})+(v_{K}-c_{s})}=\frac{c_{s}}{v_{K}}=\frac{H}{r}, \tag{17}\] where \(c_{s}\) is the sound speed and \(H\) is the scale height. For a typical protoplanetary disc \(H/r\sim 0.1\); however, in Figure 13 we can see that all of our circumbinary disc models show a \(V_{\rm Ratio}\) a factor of a few higher than this (shaded region) close to and within the cavity. As expected, our no companion (NC), planet (P), and multiple planet (MP) models have a \(V_{\rm Ratio}\) much lower than the circumbinary models, and much lower than the theoretical maximum. Our eccentric planet (EP) model shows inflated values of \(V_{\rm Ratio}\) compared with the other planet models owing to the eccentric gas motion induced by the planet, as discussed in Section 3.1.1.

Figure 13: The velocity ratio \(V_{\rm Ratio}\) plotted as a function of normalised position, \(s/{\rm max}(s)\), for all of our disc models with \(F_{\rm noise}=2.5\) mJy.

In Figure 14 we display our quantity \(V_{\rm Area}\) for a single azimuthal angle of all the models in Table 1. As expected, \(V_{\rm Area}\) stays close to nought in models NC, P, and MP for all values of \(v\), with slight fluctuations owing to the pixel resolution of the velocity map. For all of our circumbinary models presented, \(V_{\rm Area}\) deviates substantially from zero.

Figure 14: The parameter \(V_{\rm Area}\) plotted for each disc model with \(F_{\rm noise}=2.5\) mJy, including the no companion and planet models. We can see that \(V_{\rm Area}\) is much lower over the majority of the disc than in the circumbinary disc models. Note that small values in velocity correspond to large spatial scales.

We plot the values of \(\sigma^{2}_{\rm Ratio}\) versus \(\sigma^{2}_{\rm Area}\) for each of our models in the left panel of Figure 15, along with their weighted counterparts in the right panel. Models NC, P, and MP all display low values of the variance quantities \(\sigma^{2}_{\rm Ratio}\) and \(\sigma^{2}_{\rm Area}\), while the other models show an elevated variance in either one or both of these quantities. This is expected, since our analysis in Section 3.1 showed that these models display much larger perturbations in their velocity fields. The variance \(\sigma^{2}_{\rm Area}\) in particular appears to be a robust indicator of perturbations, as it is more than one order of magnitude larger for our binary discs than for the planet hosting and no companion disc models. The exception to this is that our eccentric planet and gravitationally unstable models produce large values of \(\sigma^{2}_{\rm Area}\) compared with the other models. Between the flat weighting and binary weighting functions, there appears not to be much difference in the variance measurements. The gravitationally unstable and the no companion models have weighted variances equal to zero since they do not contain cavities. However, we can learn two things by comparing the effect each weighting function has. Firstly, since the planet hosting and no companion models have similar values for \(\sigma^{2}_{\rm Ratio}\) and \(\sigma^{2}_{\rm Area}\), this indicates that the planets do not have a significant effect on these quantities.
Secondly, the weighted and unweighted variances are almost identical for the circumbinary and eccentric planet models, signalling that the perturbations causing the elevated variance measurements are originating from the cavity. This naturally raises the question of whether the weighting functions are necessary at all; however, we argue they are, since our simulations are quite idealistic. We assume the discs are evolving in isolation and do not consider any disc instabilities which would cause fluctuations in the velocity field, both of which will increase our variance measurements. Since a cavity is a theoretically expected (Artymowicz and Lubow, 1994) and observationally supported (Casassus et al., 2013; Dutrey et al., 2014) outcome of binary-disc interactions, the inclusion of a weighting function specifically targeting perturbations in and near the cavity is justified. With this justification in mind, the shaded region in the right panel of Figure 15 encapsulates the area of the variance parameter space where we are more likely to see binary systems. The area is derived empirically with \(w_{B}\sigma^{2}_{\rm Ratio}>0.003\) and \(w_{B}\sigma^{2}_{\rm Area}>0.003\). It is robust to different noise levels, disc inclinations, and synthesised beams, provided the cavity region is resolved by \(\sim\)5 beams (Appendix C). Finally, there is a positive correlation between the quantities \(\sigma^{2}_{\rm Ratio}\) and \(\sigma^{2}_{\rm Area}\) which appears stronger with an increase in the disc inclination (Appendix C).

## 7 Discussion

The search for kinematic perturbations in protoplanetary discs is ramping up significantly, with several recently accepted proposals and one large program dedicated to this task. While there now exists a theoretical framework for modelling planet induced kinks (Bollati et al., 2021; Calcino et al., 2022), and some work on the expected signatures of gravitationally unstable discs (Hall et al., 2020; Terry et al., 2021), few studies have explored kinematic perturbations in circumbinary discs.

### Kinematic and Morphological Criteria for Circumbinary Discs

We now describe the three morphological features which together provide strong support for binarity. In Section 4 we outlined several morphological features seen in integrated CO isotopologue observations. The presence of a cavity in \({}^{13}\)CO and C\({}^{18}\)O integrated emission is unanimous among our models, while a cavity may or may not be present in \({}^{12}\)CO depending on the disc and companion properties. Thus our first indicator of a circumbinary disc is the presence of a cavity in either \({}^{13}\)CO or C\({}^{18}\)O integrated emission. Since a cavity should be present in essentially all circumbinary discs, our other morphological and kinematic criteria are determined in the context of a cavity hosting disc. Our second indicator from integrated CO isotopologue observations is the presence of spiral-like features in proximity to a cavity. Although not studied in the present work, spiral-like features could also be detected in scattered light observations, as they have been in the known circumbinary(-triple) discs HD 142527 and GG Tau A (Fukagawa et al., 2006; Casassus et al., 2012; Canovas et al., 2013; Avenhaus et al., 2014, 2017; Keppler et al., 2020). Our third indicator is non-localised velocity kinks in proximity of the cavity in the channel maps. We demonstrated in Section 5.1 that kinks are seen on the edge of the disc cavity robustly in most models and viewing angles.
We defined "in proximity to the cavity" to mean within \(r\leq 1.5\times r_{\rm cavity}\). Our two final criteria are obtained from velocity maps of the disc. These are the kinematic criteria \(w_{B}\,\sigma^{2}_{\rm Area}\) and \(w_{B}\,\sigma^{2}_{\rm Ratio}\). We showed in Section 5.2 that these criteria together, when above certain values, are indicators of binarity. Although more work should be done to differentiate between models, our present work allows us to summarise the following criteria that indicate binarity: 1. Gas depleted central cavity 2. Spiral arms in proximity to a cavity 3. Non-localised wiggles in the channel maps in proximity to a cavity 4. \(w_{B}\,\sigma^{2}_{\rm Ratio}>0.001\) 5. \(w_{B}\,\sigma^{2}_{\rm Area}>0.003\) Table 2 summarises which criteria are met in the models we have tested. Our kinematic criteria essentially measure asymmetries in the velocity map. There are many ways that these asymmetries could arise, as discussed in Section 7.4. Thus on their own, these criteria do not indicate binarity. However, in conjunction with the morphological criteria, our work indicates that a strong case for binarity can be made. From our short analysis in Appendix C, either \({}^{12}\)CO or \({}^{13}\)CO emission can be used to measure the kinematic criteria provided that the peak SNR in an individual channel is greater than 50. The cavity region should be resolved with at least 5 beams in order to properly attain the kinematic criteria, however the kinematic criteria can still be met with lower resolution than this. The inclination of the disc is also important to consider, with very high and low inclinations presenting challenges. Low inclinations of \(i\lesssim 5^{\circ}\) mean the projected azimuthal and radial perturbations are small, and hence \(w_{B}\,\sigma^{2}_{\rm Area}\) and \(w_{B}\,\sigma^{2}_{\rm Ratio}\) may not signify a binary even when one is present. For the higher inclinations, the main issue is that the outer disc surface can start to obscure the cavity region where most of the perturbations are expected. The CO emitting layer of the discs in this paper is somewhat lower than what is typically observed, so our criteria seem robust on our models with \(i\lesssim 85^{\circ}\). However in observations this threshold might be somewhat lower, and whether the disc surface obscures the inner disc should be determined from integrated and peak intensity maps. ### Planet Signatures versus Circumbinary Signatures Several of our criteria that can be met in planet hosting discs. The appearance of gas and dust depleted cavities is both theoretically and observationally supported in planet hosting discs (Zhu et al., 2011; Pinilla et al., 2012; Keppler et al., 2018; Long et al., 2018). As dust grains collect into a ring inside a gas pressure maximum, their morphology can look indistinguishable from the dust ring around a circumbinary disc (e.g. the case of GG Tau A, Dutrey et al., 2014). Thus dust cavities are not exclusive to circumbinary discs. However such a feature is not present in all planet hosting discs (for example, HD 163296 Qi et al., 2015; Isella et al., 2018). Gas cavities, on the other hand, may be a more reliable indicator of a stellar companion inside of a cavity since a more massive companion can more efficiently clear material. Although this morphology can be observed in planet hosting discs, we can reasonably expect that a dust and at least a partially gas depleted cavity should exist in essentially _all_ circumbinary discs. 
Our second criterion is subject to some interpretation, and it is possible that planet-hosting discs show non-localised kinks. For example, the spiral wake from an embedded planet in the disc around HD 163296 was reported by Calcino et al. (2022). These perturbations are small compared with those seen in circumbinary discs (Bollati et al., 2021). This is the reason we specify non-localised kinks in proximity to the cavity. Although a massive planet inside a cavity could produce non-localised kinks in proximity with the cavity, the large difference in mass between a planet and a stellar companion will result in differences in the velocity kinks produced. A method to derive the kink amplitude in circumbinary discs would be useful to compare with the kink amplitude generated by planets. Spiral arms are another feature which is expected to be seen in planet-hosting discs (Goldreich and Tremaine, 1979, 1980; Ogilvie and Lubow, 2002; Rafikov, 2002). The brightness of planet-induced spiral arms in scattered light has been well studied using hydrodynamical simulations and radiative transfer (Dong et al., 2015; Zhu et al., 2015; Fung and Dong, 2015; Dong and Dawson, 2016). Planetary masses from as low as \(\sim 0.5\) M\({}_{\rm J}\) could be enough to induce spiral arms that are observable in scattered light by current generation telescopes (Fung and Dong, 2015). Thus, spiral arms, at least in scattered light, are also not a robust indicator of a circumbinary disc. However, this may not be true in CO integrated emission and peak intensity. Several observational papers have found spirals in CO emission/peak intensity and have attributed them to planets (Tang et al., 2017; Boehler et al., 2018; Phuong et al., 2020). To our knowledge there are no works in the literature exploring the appearance of planet-induced spirals in CO isotopologues. Since the amplitude of companion-induced spirals correlates with the companion mass (Fung and Dong, 2015), the appearance of planetary-induced versus binary-induced spirals should differ, with the latter being more visible in CO isotopologue observations than the former (e.g. see Mentiplay et al., 2018; Poblete et al., 2020). The only observational confirmation of this are the tentative and faint spirals in HD 163296 noted in the channels by Calcino et al. (2022), but not clearly seen in a model-subtracted peak intensity map (Teague et al., 2021). Further investigation is needed to test this hypothesis. Our kinematic criteria are more robust to false positives; however, there are scenarios where we could see large values of \(w_{B}\,\sigma^{2}_{\rm Area}\) and \(w_{B}\,\sigma^{2}_{\rm Ratio}\).
We have tested two such scenarios in the present work: our eccentric planet and gravitationally unstable simulations. We found that the asymmetric flows introduced by the eccentric planet increase \(\sigma_{\rm Area}^{2}\) and \(\sigma_{\rm Ratio}^{2}\). Although eccentric planets have been proposed to explain the observed morphology of several discs in the literature (e.g. Muley et al., 2019; Calcino et al., 2020), they likely do not make up a significant portion of the massive planet population in protoplanetary discs, whereas binary stars are common in the Universe. Additional simulations covering mass and orbital eccentricity are required to further explore their effects on our kinematic criteria.

\begin{table}
\begin{tabular}{l c c c c c}
\hline\hline
Name & Cavity & Non-localised kink & CO Spirals & \(w_{B}\,\sigma^{2}_{\rm Ratio}>0.001\) & \(w_{B}\,\sigma^{2}_{\rm Area}>0.003\) \\
\hline
No Companion (NC) & ✗✗✗✗ & ✗✗✗✗ & ✗✗✗✗ & ✗✗✗✗ & ✗✗✗✗ \\
Planet (P) & ✗✗✗✗ & ✗✗✗✗ & ✗✗✗✗ & ✗✗✗✗ & ✗✗✗✗ \\
Multiple Planets (MP) & ✗✗✗✗ & ✗✗✗✗ & ✗✗✗✗ & ✗✗✗✗ & ✗✗✗✗ \\
Gravitationally Unstable (GI) & ✗✗✗✗ & ✗✗✗✗ & ✗✗✗✗ & ✗✗✗✗ & ✗✗✗✗ \\
Eccentric Planet (EP) & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ \\
No Over-density (NOD) & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ \\
Over-density (OD) & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ \\
Eccentric Companion (EC) & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ \\
Light Inclined Companion (LIC) & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ \\
Heavy Inclined Companion (HIC) & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ \\
Polar Companion (PC) & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ & ✓✓✓✓ \\
\hline\hline
\end{tabular}
\end{table}
Table 2: A summary of which morphological and kinematic criteria are met in our models for the \(F_{\rm noise}=2.5\) mJy noise level. The four marks in each column are for each viewing angle in order from \(\phi=0^{\circ}\) to \(\phi=270^{\circ}\). Although spirals can be observed in planet hosting discs (e.g. see Mentiplay et al., 2018), they are not clearly visible in our planet model. Note that our non-localised kink criterion specifically refers to the immediate area in proximity of the cavity.

Figure 15: The variance in \(V_{\rm Ratio}\), \(\sigma^{2}_{\rm Ratio}\), versus the variance in \(V_{\rm Area}\), \(\sigma^{2}_{\rm Area}\) (left panel), and the weighted variance in \(V_{\rm Ratio}\), \(w_{B}\,\sigma^{2}_{\rm Ratio}\), versus the weighted variance in \(V_{\rm Area}\), \(w_{B}\,\sigma^{2}_{\rm Area}\) (right panel), for all of our disc models and \(F_{\rm noise}=2.5\) mJy. The four points for each model indicate each of the four viewing angles of our models tested. We see a correlation between the variance measurements with both weighting methods. The shaded region is derived empirically with \(w_{B}\,\sigma^{2}_{\rm Ratio}>0.003\) and \(w_{B}\,\sigma^{2}_{\rm Area}>0.003\). All our circumbinary models satisfy the criterion in \(w_{B}\,\sigma^{2}_{\rm Area}\), while the models with substantial disc eccentricity (No Over-density and Over-density) mostly satisfy the criterion in \(w_{B}\,\sigma^{2}_{\rm Ratio}\).

### GI Wiggles versus Circumbinary Wiggles

Hall et al. (2020) showed that gravitationally unstable discs can produce significant deviations from Keplerian velocity. These perturbations can be detected in the iso-velocity curves (GI wiggles), or by subtracting the Keplerian rotation field from the velocity map of the disc. Further exploration of numerical simulations of gravitationally unstable discs by Terry et al.
(2021) found that the disc mass correlates with the wiggle amplitude in the channel maps. We found in the left panel of Figure 15 that our gravitationally unstable model produced relatively large values of \(\sigma_{\rm Ratio}^{2}\) and \(\sigma_{\rm Area}^{2}\) compared with the low perturbation models (i.e. NC, P, and MP). Although both gravitationally unstable discs and circumbinary discs display kinematic perturbations, there is a clear morphological difference between these discs in both the gas and dust distribution. Gravitationally unstable discs produce spiral arms that orbit at the Keplerian frequency and hence efficiently trap \(\sim\)mm sized dust grains (Rice et al., 2004; Hall et al., 2020), which is observable with ALMA (Dipierro et al., 2014). Thus GI discs should show evidence of dust trapping in spiral arms at mm wavelengths, while circumbinary discs are typically characterised by a cavity at mm wavelengths. Since binary induced spirals are not orbiting at the local Keplerian frequency outside the cavity, they should not trap dust. This allows our kinematic criteria \(w_{B}\sigma_{\rm Ratio}^{2}\) and \(w_{B}\sigma_{\rm Area}^{2}\) to differentiate between these two classes, as seen in the right panel of Figure 15. It is plausible that very young systems can be both gravitationally unstable and contain a binary. One particular system where this might be the case is [BHB2007]11, where \(\sim\)mm dust grains trace spiral arms and accretion streams around two young stars (Alves et al., 2019). However, for the more evolved Class II discs that display a central cavity, very few show dust associated with spiral arms (van der Marel et al., 2021), and hence GI is likely not significantly affecting these discs.

### Outside Perturbations and Other Applications

Perturbations arising due to phenomena not originating from the host disc can occur, and may be responsible for some of the morphological features we have discussed. For example, the outer disc can be strongly perturbed by stellar flybys (Cuello et al., 2019, 2020; Smallwood et al., 2023), external companions (e.g. as in HD 100453 and GG Tau A; White et al., 1999; Benisty et al., 2017; Gonzalez et al., 2020), and post formation inflows/cloudlet capture (Dullemond et al., 2019; Kuffmeier et al., 2020; Huang et al., 2020). These outside influences will contribute to the kinematic and morphological criteria we derived in predictable ways that are distinguishable from a circumbinary disc. For example, an inflow or stellar flyby will affect \(V_{\rm Ratio}(s)\) and \(V_{\rm Area}(v)\) on large spatial and low velocity scales. Hence in Figure 13 we expect to see an increase in \(V_{\rm Ratio}(s)\) for large \(s\), while in Figure 14 we expect to see an increase in \(V_{\rm Area}(v)\) for small \(v\). The same effect is also expected for bound gravitational bodies such as stellar and planetary companions. Our kinematic criteria could also prove useful in diagnosing protoplanetary discs influenced by these effects. Since our kinematic criteria are sensitive to any perturbations, care must be taken when interpreting their values in discs where there are clear and obvious outside perturbations that could also be spatially co-located with influences from an inner binary. Two examples of this are AB Aurigae and HD 100546, which were both proposed to be binary systems by Poblete et al. (2020) and Norfolk et al. (2021), respectively.
There is clear evidence of infalling material interacting with the disc around both systems (Dullemond et al., 2019; Kuffmeier et al., 2020). Although this complicates the application of our kinematic criteria, their magnitude should correlate with the strength of any induced perturbation in the disc; for example, with the degree to which infalling material is perturbing a disc: more massive and faster falling material will induce stronger perturbations in the outer disc, which will correlate with \(\sigma_{\rm Ratio}^{2}\) and \(\sigma_{\rm Area}^{2}\). The same may also be true of gravitationally unstable discs, where more unstable discs display larger perturbations (Terry et al., 2021) and hence larger values of our kinematic criteria. This is another justification for including a weighting function focused on the perturbations in and around a central cavity.

### Model Caveats

Although not a topic of the present work, changes in the disc parameters such as scale height and viscosity could have some implications for the conclusions we draw. For example, changes in the disc viscosity can result in large changes in the circumbinary disc morphology (Rabago et al., 2023). Substantial changes in the disc morphology are also seen in planet-hosting disc simulations (e.g. Ataiee et al., 2013; Zhang et al., 2018), where a lowering of the disc aspect ratio and disc viscosity can result in an eccentric disc around Jupiter mass planets. Zhang et al. (2018) produced a suite of simulations covering differing values of planet mass, disc scale height, and viscosity. They found that the velocity perturbations around the gap carved by the planet do depend on viscosity, with a lower viscosity producing larger amplitude perturbations that do not damp as quickly as in the higher viscosity simulations. However, it has been shown that the amplitude of the velocity kink induced by planetary mass objects is not sensitive to the disc viscosity (Rabago and Zhu, 2021); rather, the kink amplitude is dependent on the planet thermal mass, which depends on the disc scale height (Bollati et al., 2021). Since the gap structure and resulting velocity field depend on the planet mass, disc viscosity, and scale height (Fung et al., 2014), there could be a degeneracy in our kinematic criteria between the companion mass and the disc properties. The uncertainty in the disc properties could blur the boundary between the planet-hosting and circumbinary discs. However, we have reason to believe that even with introduced uncertainty in the disc properties, the correlation between companion mass and elevated values in our kinematic criteria will still hold. The reason is simply that larger mass bodies will produce larger perturbations in the disc, regardless of what disc profile is chosen. More massive companions lead to more shocking in the disc, more depleted cavities, and more perturbations in the disc overall. In Zhang et al. (2018), the velocity perturbations around the planet induced gap only change by a factor of a few (provided the eccentricity in the disc is not excited) across a factor of two change in disc scale height and two orders of magnitude change in viscosity. This is in contrast with the roughly order of magnitude or more change in velocity perturbations seen between the planet-hosting and circumbinary disc models of Figures 1, 2, and 3. Therefore even substantial changes in disc properties still do not produce larger perturbations than transitioning from planetary to stellar companions.
Since our kinematic criteria are based on quantifying these perturbations, and more massive bodies lead to larger perturbations, such bodies should produce a stronger signal in our criteria than less massive bodies.

## 8 Summary

In this paper we have showcased some of the morphological features associated with circumbinary discs. We found that the presence of:

1. a gas depleted cavity,
2. spiral arms inside or outside of this cavity,
3. and non-localised kinks in the channel maps,

are robust indicators of binarity. We also found that the kinematics of circumbinary discs contain peculiar features, and defined metrics to quantify these features in Section 5.2. These metrics quantify:

1. the ratio of the maximum absolute velocity along a path close to the semi-major axis in each wing,
2. and the ratio of the area of the disc enclosed by a specific absolute velocity in each wing.

These kinematic and morphological metrics together provide robust indicators of binarity and can be used to infer the existence of a binary in cases where direct imaging remains challenging.

## Acknowledgements

JC acknowledges the support of the LANL/LDRD program. HL acknowledges the support of the NASA/ATP program and the LANL/LDRD program. This research used resources provided by the Los Alamos National Laboratory Institutional Computing Program, which is supported by the U.S. Department of Energy National Nuclear Security Administration under Contract No. 89233218CNA000001. DJP and CP acknowledge funding from the Australian Research Council via FT130100034, DP180104235 and FT170100040. BJN is supported by an Australian Government Research Training Program (RTP) Scholarship. VC acknowledges funding from the Australian Research Council via DP180104235 and from the Belgian F.R.S.-FNRS for financial support through a postdoctoral researcher fellowship. We used plonk for the figures in this paper (Mentiplay 2019), which utilises visualisation routines developed for splash (Price 2007).

## Data Availability Statement

The SPH code phantom is available for use at [https://github.com/danieljprice/phantom](https://github.com/danieljprice/phantom). The simulation setup files and dump files can be obtained through request to JC. mcfost is available for use on a collaborative basis from CP. The code for computing the quantities \(V_{\text{Ratio}}\), \(V_{\text{Area}}\), \(\sigma^{2}_{\text{Area}}\), and \(\sigma^{2}_{\text{Ratio}}\) will be included in a future release of eddy (Teague et al., 2019), which is available at [https://github.com/richteague/eddy](https://github.com/richteague/eddy).
2303.05718
Tradeoff of generalization error in unsupervised learning
Finding the optimal model complexity that minimizes the generalization error (GE) is a key issue of machine learning. For the conventional supervised learning, this task typically involves the bias-variance tradeoff: lowering the bias by making the model more complex entails an increase in the variance. Meanwhile, little has been studied about whether the same tradeoff exists for unsupervised learning. In this study, we propose that unsupervised learning generally exhibits a two-component tradeoff of the GE, namely the model error and the data error -- using a more complex model reduces the model error at the cost of the data error, with the data error playing a more significant role for a smaller training dataset. This is corroborated by training the restricted Boltzmann machine to generate the configurations of the two-dimensional Ising model at a given temperature and the totally asymmetric simple exclusion process with given entry and exit rates. Our results also indicate that the optimal model tends to be more complex when the data to be learned are more complex.
Gilhan Kim, Hojun Lee, Junghyo Jo, Yongjoo Baek
2023-03-10T05:50:17Z
http://arxiv.org/abs/2303.05718v2
# Tradeoff of generalization error in unsupervised learning ###### Abstract Finding the optimal model complexity that minimizes the generalization error (GE) is a key issue of machine learning. For the conventional supervised learning, this task typically involves the bias-variance tradeoff: lowering the bias by making the model more complex entails an increase in the variance. Meanwhile, little has been studied about whether the same tradeoff exists for unsupervised learning. In this study, we propose that unsupervised learning generally exhibits a two-component tradeoff of the GE, namely the model error and the data error--using a more complex model reduces the model error at the cost of the data error, with the data error playing a more significant role for a smaller training dataset. This is corroborated by training the restricted Boltzmann machine to generate the configurations of the two-dimensional Ising model at a given temperature and the totally asymmetric simple exclusion process with given entry and exit rates. Our results also indicate that the optimal model tends to be more complex when the data to be learned are more complex. _Keywords_: Machine Learning, Classical phase transitions, Stochastic processes ## 1 Introduction Inductive reasoning, which derives general principles from specific observations, is an essential generalization process that builds up human knowledge. With the advent of big data, there is a rapidly growing need for automating the process of inductive reasoning. Lately, much development has been made in this direction thanks to various machine learning techniques based on artificial neural networks (ANNs) [1, 2, 3, 4]. However, despite their tremendous success, we still have a very limited understanding of when and how an ANN achieves a good generalization capacity. There are two major types of generalization tasks performed by ANNs. The more well-studied type is _supervised learning_, which refers to the task of guessing the correct form of a function over its entire domain by generalizing some given examples of its functional relations. In this case, the failure to properly generalize the given examples, called the generalization error (GE), can be defined in terms of the mean squared error (MSE) of the predicted function with respect to the true function. In practice, the true function is unknown, so the MSE estimated using independently drawn examples of functional relations (called the _test error_) is used as a proxy for the GE. Thanks to the mathematical structure of the MSE, this GE is readily decomposed into two parts [2]. The first part, called the bias, quantifies how the predicted function on average deviates from the true function. The second part, called the variance, quantifies how much the predicted function fluctuates from its average behavior. In many examples of supervised learning, these two components of the GE exhibit a tradeoff behavior: as the model complexity (_e.g._, the size of the ANN) increases, the bias decreases at the cost of the increasing variance. This leads to the GE showing a U-shape dependence on the model complexity, which is called the _bias-variance tradeoff_. It should be noted that the decomposition is not limited to the MSE but can also be generalized to other types of the GE (see, for example, [5]). In addition, according to recent studies, the GE of supervised learning again exhibits a monotonic decrease if the model complexity is increased even further.
This is called the _double descent phenomenon_, whose origin has been extensively discussed [6, 7, 8, 9]. Meanwhile, there is a less studied but no less important type of task, namely _unsupervised learning_, which refers to the task of finding the probability distribution that best captures the statistical properties of a dataset sampled from an unknown distribution. The GE for unsupervised learning can be defined as the Kullback-Leibler (KL) divergence of the predicted distribution from the true distribution to be found. Again, there has been a proposal about how this GE can be decomposed into the bias and the variance [10]. However, little has been studied about whether the GE of unsupervised learning also exhibits a tradeoff behavior. In this study, we address the problem by training the Restricted Boltzmann Machine (RBM) to learn the data generated from the two-dimensional (2-d) Ising model and the totally asymmetric simple exclusion process (TASEP), which are well-known models of equilibrium and nonequilibrium phase transitions, respectively. Since the distributions of the configurations of these models are exactly known, the GE can be calculated exactly. Examining how these quantities depend on the number of hidden nodes of the RBM, we observe that the GE exhibits a tradeoff behavior. We propose a two-component decomposition scheme of the GE so that the tradeoff is determined by the monotonic behaviors of the two error components, one related to the model limitations and the other to the fluctuations in the data. We also examine how the optimal model complexity, resulting from a tradeoff between these error components, depends on the complexity of the given training dataset. The rest of the paper is organized as follows. In Sec. 2, we introduce the restricted Boltzmann machine and our decomposition scheme for its generalization error. In Sec. 3, we describe how the RBM is trained using the data generated from the 2-d Ising model and the TASEP. In Sec. 4, we present our results about the tradeoff behaviors observed in unsupervised learning of the RBMs. Finally, we summarize our findings and discuss future works in Sec. 5. ## 2 Theory ### Restricted Boltzmann Machine The RBM is an energy-based generative model proposed by Smolensky [11] and popularized by Hinton [12, 13]. It has the form of a bipartite network of \(N_{V}\) nodes in the visible layer and \(N_{H}\) nodes in the hidden layer, see Fig. 1(a). For the visible-layer configuration \(\mathbf{v}\equiv\{v_{i}\}_{i=1}^{N_{V}}\) and the hidden-layer configuration \(\mathbf{h}\equiv\{h_{i}\}_{i=1}^{N_{H}}\), the corresponding energy is given by \[E(\mathbf{v},\mathbf{h})=-\mathbf{a}^{\mathrm{T}}\,\mathbf{v}-\mathbf{b}^{\mathrm{T}}\,\mathbf{h}-\mathbf{v}^{\mathrm{T}}\,\mathbb{W}\,\mathbf{h}, \tag{1}\] where \(\mathbf{a}\equiv\{a_{i}\}_{i=1}^{N_{V}}\) and \(\mathbf{b}\equiv\{b_{i}\}_{i=1}^{N_{H}}\) indicate the bias terms, and \(\mathbb{W}\) is the \(N_{V}\times N_{H}\) weight matrix coupling the two layers. The state of each node is a Boolean variable, _i.e._, \(v_{i}\in\{0,\,1\}\) and \(h_{i}\in\{0,\,1\}\). The probability of each configuration is determined by this energy function according to the Boltzmann distribution \[Q_{VH}(\mathbf{v},\mathbf{h})=\frac{1}{Z}\exp\left[-E(\mathbf{v},\mathbf{h})\right], \tag{2}\]
where \(Z\) is the normalizing factor (or the _partition function_) \[Z\equiv\sum_{\mathbf{v},\mathbf{h}}\exp\left[-E(\mathbf{v},\mathbf{h})\right]. \tag{3}\] The goal of the RBM is to find \(\mathbf{a}\), \(\mathbf{b}\), and \(\mathbb{W}\) such that the marginal distribution \(Q_{V}(\mathbf{v})\equiv\sum_{\mathbf{h}}Q_{VH}(\mathbf{v},\mathbf{h})\) is as similar as possible to some given empirical distribution \(P_{V}(\mathbf{v})\). To put it precisely, the RBM seeks to achieve \[Q_{V}^{*}\equiv\operatorname*{arg\,min}_{Q_{V}}D_{\mathrm{KL}}(P_{V}\|Q_{V}), \tag{4}\] where \[D_{\mathrm{KL}}(P_{V}\|Q_{V})\equiv\sum_{\mathbf{v}}P_{V}(\mathbf{v})\log\frac{P_{V}(\mathbf{v})}{Q_{V}(\mathbf{v})} \tag{5}\] is the Kullback-Leibler (KL) divergence. Towards this purpose, the above KL divergence is taken to be the loss function, and the RBM is updated according to the gradient descent \[a_{i}(t+1)=a_{i}(t)-\alpha\frac{\partial}{\partial a_{i}}D_{\mathrm{KL}}(P_{V}\|Q_{V}), \tag{6}\] \[b_{i}(t+1)=b_{i}(t)-\alpha\frac{\partial}{\partial b_{i}}D_{\mathrm{KL}}(P_{V}\|Q_{V}), \tag{7}\] \[W_{ij}(t+1)=W_{ij}(t)-\alpha\frac{\partial}{\partial W_{ij}}D_{\mathrm{KL}}(P_{V}\|Q_{V}). \tag{8}\] Denoting by \[Q_{H|V}(\mathbf{h}|\mathbf{v})\equiv\frac{Q_{VH}(\mathbf{v},\mathbf{h})}{Q_{V}(\mathbf{v})}=\prod_{j=1}^{N_{H}}\frac{\exp\left[b_{j}h_{j}+\sum_{i=1}^{N_{V}}v_{i}W_{ij}h_{j}\right]}{1+\exp\left[b_{j}+\sum_{i=1}^{N_{V}}v_{i}W_{ij}\right]} \tag{9}\] the conditional probability of the hidden-layer configuration, we can show that the gradients of the KL divergence satisfy \[\frac{\partial}{\partial a_{i}}D_{\mathrm{KL}}(P_{V}\|Q_{V})=\langle v_{i}\rangle_{Q_{V}}-\langle v_{i}\rangle_{P_{V}}, \tag{10}\] \[\frac{\partial}{\partial b_{i}}D_{\mathrm{KL}}(P_{V}\|Q_{V})=\langle h_{i}\rangle_{Q_{VH}}-\langle h_{i}\rangle_{P_{V}Q_{H|V}}, \tag{11}\] \[\frac{\partial}{\partial W_{ij}}D_{\mathrm{KL}}(P_{V}\|Q_{V})=\langle v_{i}h_{j}\rangle_{Q_{VH}}-\langle v_{i}h_{j}\rangle_{P_{V}Q_{H|V}}, \tag{12}\] where \(\langle\cdot\rangle_{F}\) denotes an average with respect to the probability distribution \(F\). Thus, the training saturates when the first and the second moments of the state variables are equal for both the empirical (\(P_{V}Q_{H|V}\)) and the model (\(Q_{VH}\)) distributions. Since these gradients involve the average \(\langle\cdot\rangle_{Q_{VH}}\), whose computation is difficult for large networks, various approximation methods are used in practice, such as contrastive divergence (CD) [12], persistent contrastive divergence (PCD) [14], fast PCD [15], Bennett's acceptance ratio method [16], and the Thouless-Anderson-Palmer equations [17]. However, to avoid further complications arising from the approximations, in this study we stick to the exact gradients written above. Figure 1: (a) A schematic diagram showing the structure of the RBM. (b) Tradeoff behavior of the GE when the RBM is trained to reproduce the 2-d Ising configurations at temperature \(T=3.6\). (c) Tradeoff behavior of the GE when the RBM is trained to reproduce the TASEP configurations at \(\alpha=\beta=0.7\). The lines are to guide the eye. ### Error components in unsupervised learning The goal of unsupervised learning is to construct a generative model whose statistical properties are as similar as possible to the true distribution underlying an ensemble of objects. We may formally describe the problem as follows. Suppose that there exists a true probability distribution \(P_{V}^{0}\) generating an ensemble of objects. Then we can think of the best model \(Q_{V}^{0}\) that the RBM can express, namely \[Q_{V}^{0}\equiv\underset{Q_{V}}{\arg\min}\;D_{\mathrm{KL}}(P_{V}^{0}\|Q_{V}).
\tag{13}\] Ideally, the RBM should generate \(Q_{V}^{0}\) at the end of the training. However, this may not happen, for three reasons. First, the KL divergence is generally not a convex function of \(\mathbf{a}\), \(\mathbf{b}\), and \(\mathbb{W}\), so the gradient descent shown in Eqs. (6), (7), and (8) may end up in a local minimum far away from \(Q_{V}^{0}\). Second, even if the RBM does evolve towards \(Q_{V}^{0}\), the convergence may take an extremely long time. In this case, the training will have to end before \(Q_{V}^{0}\) is reached. Third, in practical situations, we do not have direct access to the true distribution \(P_{V}^{0}\) generating the observed samples. Due to the sampling error, the distribution \(P_{V}\) used in the training is generally different from \(P_{V}^{0}\). Then, even if the RBM does manage to find a distribution most similar to \(P_{V}\), it may still be quite different from \(Q_{V}^{0}\). For these reasons, the distribution generated by the RBM at the end of the training is not \(Q_{V}^{0}\); rather, each training will result in its own model distribution \(Q_{V}^{\mathrm{m}}\). Then the GE, which quantifies the expected difference of the model distribution from the true distribution, can be defined as \[\mathrm{GE}\equiv\left\langle D_{\mathrm{KL}}(P_{V}^{0}\|Q_{V}^{\mathrm{m}})\right\rangle_{\mathrm{m}}, \tag{14}\] where \(\left\langle\cdot\right\rangle_{\mathrm{m}}\) represents the average with respect to different models obtained by independent trainings. By definition, the GE cannot be smaller than the _model error_ (ME) \[\mathrm{ME}\equiv D_{\mathrm{KL}}(P_{V}^{0}\|Q_{V}^{0}), \tag{15}\] which indicates the fundamental lower bound on how similar the model distribution generated by the RBM can be to the true distribution \(P_{V}^{0}\). Finally, we identify the excess part of the error, \[\mathrm{DE}\equiv\mathrm{GE}-\mathrm{ME}, \tag{16}\] as the _data error_ (DE), which mainly stems from deviations of the training data from the true distribution, as will be shown below. Thus, Eqs. (14), (15), and (16) define a two-component decomposition of the GE for unsupervised learning. ## 3 Methods To examine the behaviors of the error components defined above, we train the RBMs to two basic models of statistical physics: the 2-d Ising model and the TASEP with open boundaries. We chose these models for two reasons. First, these models have exactly known steady-state distributions. The Ising model follows the standard Boltzmann statistics, and the nonequilibrium steady-state statistics of the TASEP can be exactly obtained using the matrix-product ansatz [18]. Thus, for these models, we have full information about \(P_{V}^{0}\). Second, these models provide examples of equilibrium and nonequilibrium phase transitions; depending on the nature of the phases and the transitions between them, we can control the complexity of the data and examine how the tradeoff behavior is affected by it. With these considerations in mind, we describe the two models in the following. ### 2-d Ising model The Ising model, originally proposed as a simplified model of ferromagnetism [19], is a paradigmatic model of equilibrium phase transitions with exact solutions for its free energy [20] and spontaneous magnetization [21] on the 2-d square lattice.
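Because the systems considered below involve only nine visible units each, all \(2^{9}=512\) visible configurations can be enumerated, so the marginal of Eqs. (1)-(3) and the KL divergences entering Eqs. (14)-(16) are exactly computable. The following minimal sketch illustrates this brute-force evaluation; the variable names are hypothetical and this is not the authors' code.

```
import numpy as np
from itertools import product

def exact_rbm_marginal(a, b, W):
    # Exact Q_V(v) of Eqs. (1)-(3): the hidden sum factorises, giving
    # log weight a.v + sum_j log(1 + exp(b_j + (v W)_j)) for each v.
    n_v = len(a)
    vs = np.array(list(product([0, 1], repeat=n_v)))      # all 2^n_v states
    logw = vs @ a + np.sum(np.log1p(np.exp(vs @ W + b)), axis=1)
    logw -= logw.max()                                    # numerical stability
    w = np.exp(logw)
    return vs, w / w.sum()

def kl(p, q, eps=1e-300):
    # KL divergence of Eq. (5), with a tiny floor to avoid log(0).
    return np.sum(p * np.log((p + eps) / (q + eps)))

# Toy usage: KL between a random 'true' distribution and an RBM marginal.
rng = np.random.default_rng(0)
p0 = rng.random(2 ** 9); p0 /= p0.sum()
a, b = rng.normal(size=9), rng.normal(size=3)
W = rng.normal(size=(9, 3))
_, q_v = exact_rbm_marginal(a, b, W)
print(kl(p0, q_v))
```

With \(P_{V}^{0}\) in place of the random p0, the same routine gives the error of a single trained model, and repeating it over independent trainings yields the averages in Eq. (14).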
In this study, we use the Ising model on a \(3\times 3\) square lattice with periodic boundary conditions (\(v_{4,j}=v_{1,j}\) and \(v_{i,4}=v_{i,1}\)), following the Boltzmann statistics \[P_{V}^{0}(\mathbf{v})\propto\exp\Biggl[-\frac{1}{T}\mathcal{H}(\mathbf{v})\Biggr], \tag{17}\] where the Hamiltonian \(\mathcal{H}\) is given by \[\mathcal{H}(\mathbf{v})=-\sum_{i=1}^{3}\sum_{j=1}^{3}\left[(2v_{i,j}-1)(2v_{i+1,j}-1)+(2v_{i,j}-1)(2v_{i,j+1}-1)\right], \tag{18}\] with \(v_{i,j}\in\{0,\,1\}\) for each \(i\) and \(j\). Figure 2: (a) The magnetization and the susceptibility of the 2-d Ising model on the \(3\times 3\) square lattice with periodic boundaries. The vertical lines indicate the values of temperature used for generating the training datasets. (b) A schematic illustration of the TASEP with open boundaries. (c) The phase diagram of the TASEP with open boundaries. The red points indicate the control parameters used for generating the training datasets. We train the RBMs to the equilibrium distributions at three different temperatures, \(T=1.9\), \(T=3.6\), and \(T=16\). These values are chosen so that \(T=1.9\) corresponds to the ferromagnetic (ordered) phase, \(T=3.6\) corresponds to the critical regime, and \(T=16\) generates the paramagnetic (disordered) phase. Even though the size of the system used in our study is small, these three values of temperature generate markedly different ensembles of spin configurations, as shown by the order parameter (magnetization) and its fluctuations (susceptibility) plotted in Fig. 2(a). Using \(P_{V}^{0}\) thus defined, we train the RBM in two different ways. The first way is the usual unsupervised learning. We draw a certain number of equilibrium spin configurations from \(P_{V}^{0}\) by the Metropolis-Hastings algorithm. To remove correlations between different sampled configurations, we saved a snapshot of the system every 90 or more Monte Carlo steps (the interval varied depending on the correlation time). This sampled set of Ising configurations forms \(P_{V}\), which we use in the gradient descent dynamics described by Eqs. (6)-(12). We repeat the process 10 times to obtain 10 independent models \(\{Q_{V}^{\rm m}\}\), with which we calculate the GE according to Eq. (14). Meanwhile, to calculate the ME and the DE, we train the RBM in a different way. Instead of the sampled \(P_{V}\), we use the true distribution \(P_{V}^{0}\) to directly calculate the true gradients in Eqs. (6)-(12). This is done by evaluating the averages over all the \(2^{9}=512\) spin configurations of the 2-d Ising model on the \(3\times 3\) square lattice. Repeating this training 10 times, we choose the resulting model that minimizes the KL divergence as an estimate for \(Q_{V}^{0}\). In fact, we find that the KL divergence obtained by this training method exhibits only small variabilities: the error of the estimated ME is at most about the order of the symbol size in the plots. ### TASEP with open boundaries The TASEP is a simple model of nonequilibrium 1-d transport of hard-core particles. Originally proposed as a model of biopolymerization [22], the model has found numerous applications in various traffic [23] and biological [24] systems. For the case of open boundaries, the process is defined as follows (see Fig. 2(b) for a schematic illustration). Consider a 1-d chain of \(L\) discrete sites. Each site can hold at most a single particle.
From the first site to the \((L-1)\)-th site, every particle moves to the right with rate 1 if the next site is empty, and the movement is forbidden if the next site is occupied. Meanwhile, if the first site is empty (the \(L\)-th site is occupied), a particle moves into the site from the left particle reservoir (moves from the site to the right particle reservoir) with entry rate \(\alpha\) (exit rate \(\beta\)). Depending on the values of \(\alpha\) and \(\beta\), the TASEP exhibits various phases distinguished from each other by the current \(J\) and the bulk density \(\rho_{b}\), as shown in Fig. 2(c). The phase boundaries can be correctly predicted by the simple kinematic wave theory or mean-field arguments, although the exact solution for the steady-state statistics requires the matrix product ansatz [18, 25]. Similarly to the case of the 2-d Ising model, we train the RBM in two different ways. First, we generate a certain number of steady-state particle configurations by directly running the kinetic Monte Carlo simulation of the TASEP. To remove correlations between different sampled configurations, we take a snapshot every 90 Monte Carlo steps. This sampled set is used as \(P_{V}\) in the gradient descent dynamics described by Eqs. (6)-(12). The same process is repeated 10 times to obtain 10 independent models \(\{Q_{V}^{\rm m}\}\), with which we calculate the GE according to Eq. (14). To calculate the ME, we need the true distribution \(P_{V}^{0}\) of the TASEP. Using the recursive relations between matrix-product representations of the steady-state probabilities shown in [25], we can iteratively determine the probabilities of the \(2^{9}=512\) TASEP configurations. Then, based on these probabilities, we directly calculate the true gradients in Eqs. (6)-(12). The ME is then found by choosing the model that minimizes the KL divergence as an estimate for \(Q_{V}^{0}\). Figure 3: Training dynamics of the RBM. (a) The evolution of the GE when the RBM is trained to 512 sampled configurations of the 2-d Ising model at temperature \(T=1.9\). (b) The evolution of the GE when the RBM is trained to the exact gradients of the KL divergence at the same temperature. The insets show the average trajectories of the RBM in terms of the error and the weight width during each dynamics, with the direction of time shown by black arrows. ## 4 Results ### Training dynamics In Fig. 3(a), we show the evolution of the mean error \(\left\langle D_{\mathrm{KL}}(P_{V}^{0}\|Q_{V}^{m})\right\rangle_{m}\) as the RBM is trained to the 512 sampled configurations (using mini-batches of size 256) of the 2-d Ising model at \(T=1.9\) (ferromagnetic phase). When \(N_{H}\leq 2\), the error monotonically decreases as the training proceeds, saturating by epoch \(10^{3}\). However, for \(N_{H}\geq 3\), the error reaches its minimum around epoch 500 and increases again, reaching a higher value as \(N_{H}\) is increased. These tendencies reflect that more complex RBMs (with larger \(N_{H}\)) end up learning even the noisy features of the sampled configurations that deviate from the true \(P_{V}^{0}\). This is equivalent to the overfitting phenomenon in supervised learning. Meanwhile, in Fig. 3(b), we show the evolution of the same mean error as the RBM is trained to the true distribution of the 2-d Ising model at \(T=1.9\). When \(N_{H}\leq 4\), the error saturates to some value that decreases as \(N_{H}\) is increased.
When \(N_{H}\) is increased further, the error keeps decreasing as the training proceeds, never saturating within the observation time span. These results show that the overfitting effect is absent when the true distribution is directly used for the training. The presence of overfitting can also be checked by observing the weights of the RBM. As shown in the inset of Fig. 3(a), when the RBM is trained to the sampled dataset, the width (standard deviation) of the weight distribution tends to increase towards the end of the training. This is because here the RBM is effectively decreasing its temperature, trying to distinguish configurations which happen to appear in the sampled dataset from those which happen not to appear, even if they are very similar to each other. In contrast, when the RBM is trained to the true distribution, the weight width saturates to some value and stops increasing, as shown in Fig. 3(b). This shows that there is no overfitting involved in the dynamics. ### Effects of data volume As stated in Eq. (15), the ME depends only on the true distribution \(P_{V}^{0}\) and the optimal model distribution \(Q_{V}^{0}\); thus the ME does not depend on the volume of the training dataset. The only part of the GE that can be affected by the data volume is the DE, whose behaviors are shown in Fig. 4. As the data volume \(V\) increases, the DE tends to decrease like \(V^{-1}\). This scaling behavior can be understood as follows. By the central limit theorem, the difference between the true distribution \(P_{V}^{0}\) and the sampled distribution \(P_{V}\) is of order \(V^{-1/2}\). The difference between \(Q_{V}^{0}\), related to \(P_{V}^{0}\), and \(Q_{V}^{m}\), related to \(P_{V}\), would be of the same order. Now, the DE defined in Eq. (16) can be rewritten as \[\mathrm{DE}=\left\langle D_{\mathrm{KL}}(P_{V}^{0}\|Q_{V}^{m})\right\rangle_{\mathrm{m}}-D_{\mathrm{KL}}(P_{V}^{0}\|Q_{V}^{0}), \tag{19}\] which reaches its minimum of zero when \(Q_{V}^{0}\) and \(Q_{V}^{m}\) are exactly equal. However, we expect these distributions to differ by a small amount proportional to \(V^{-1/2}\). Since the DE is close to its minimum, this difference of order \(V^{-1/2}\) leads to a correction to the DE of order \((V^{-1/2})^{2}\sim V^{-1}\). Hence, the DE scales like \(V^{-1}\). We note that some deviations from DE \(\sim V^{-1}\) are observed for the RBMs with \(N_{H}=1\) and \(2\) trained to the Ising model at \(T=1.9\) and \(T=3.6\). When \(N_{H}\) is too small compared to the complexity of the training dataset, the dynamics of the RBM may develop glassy features (as is the case for supervised learning [26]), such as a rugged loss-function landscape and the presence of multiple local minima. In such cases, the DE may not be entirely induced by the difference between \(P_{V}^{0}\) and \(P_{V}\). However, in the regime of large \(N_{H}\), where the DE becomes a significant part of the GE, we expect the DE to be dominated by the effects of \(P_{V}^{0}\neq P_{V}\). Figure 4: Dependence of the DE on the training dataset volume. We show results for the 2-d Ising model at (a) \(T=1.9\) (ferromagnetic), (b) \(T=3.6\) (critical), (c) \(T=16\) (paramagnetic) and for the TASEP with open boundaries at (d) \(\alpha=0.3\), \(\beta=0.7\) (low density), (e) \(\alpha=0.5\), \(\beta=0.7\) (phase boundary), (f) \(\alpha=0.7\), \(\beta=0.7\) (maximal current). ### Effects of \(N_{H}\) Now we examine the effects of \(N_{H}\) on the GE and its two components defined by Eqs. (14), (15), and (16), which are shown in Fig. 5. The results for the 2-d Ising model are shown in Fig. 5(a), (b), and (c). In all cases, the ME (DE) monotonically decreases (increases) with \(N_{H}\). For \(T=1.9\) and \(T=3.6\), this leads to the nonmonotonic behavior of the GE, which is minimized at \(N_{H}=3\) for \(T=1.9\) and \(N_{H}=4\) for \(T=3.6\).
Meanwhile, for \(T=16\), the ME is already very small for \(N_{H}=1\), which reflects that a single hidden node is enough to describe the disordered state, where all spins are unbiased i.i.d. random variables. The results for the TASEP are shown in the lower panel of Fig. 5. For \(\alpha=0.3\) and \(\beta=0.7\), the ME is almost always zero, see Fig. 5(d). This is because, in the low-density phase, the occupancies of most of the sites are i.i.d. random variables. The monotonic decrease of the ME becomes more visible as the system crosses the phase boundary (see Fig. 5(e)), although the tradeoff behavior of the GE becomes clear only when the system is well within the maximal-current phase, see Fig. 5(f). For the TASEP, it is known that the mean-field approximations are valid sufficiently far away from the boundaries. While those boundary effects decay exponentially with distance in the low-density and the high-density phases, the decay is algebraic in the maximal-current phase. Thus, the amount of correlations present in the data tends to be larger in the maximal-current phase. To sum up, we observe that the GE exhibits a U-shaped tradeoff behavior as the ME (DE) decreases (increases) monotonically with \(N_{H}\). The GE minimum tends to occur at a higher \(N_{H}\) when the dataset to be modeled is more complex. The situation is analogous to the bias-variance tradeoff observed in supervised learning, as shown in Table 1. Figure 5: Dependence of various error measures on the number of hidden nodes in the RBM. We show results for the 2-d Ising model at (a) \(T=1.9\) (ferromagnetic), (b) \(T=3.6\) (critical), (c) \(T=16\) (paramagnetic) and for the TASEP with open boundaries at (d) \(\alpha=0.3\), \(\beta=0.7\) (low density), (e) \(\alpha=0.5\), \(\beta=0.7\) (phase boundary), (f) \(\alpha=0.7\), \(\beta=0.7\) (maximal current). ### Comparison with the bias-variance decomposition While the behaviors of the ME and the DE shown in Fig. 5 are similar to those shown by the bias and the variance in supervised learning, there are no clear mathematical connections between these quantities. In fact, in 1998, Heskes proposed a bias-variance decomposition scheme for the KL divergence [10], which would read in our problem as \[\left\langle D_{\mathrm{KL}}(P_{V}^{0}\|Q_{V}^{m})\right\rangle_{m}=\underbrace{D_{\mathrm{KL}}(P_{V}^{0}\|\bar{Q}_{V})}_{\equiv\mathrm{Bias}}+\underbrace{\left\langle D_{\mathrm{KL}}(\bar{Q}_{V}\|Q_{V}^{m})\right\rangle_{m}}_{\equiv\mathrm{Variance}}, \tag{20}\] where \[\bar{Q}_{V}(\mathbf{v})\equiv\frac{1}{\mathcal{Z}}\exp\left\langle\log Q_{V}^{m}(\mathbf{v})\right\rangle_{m} \tag{21}\] is the _mean distribution_, with the suitable normalization constant \(\mathcal{Z}\) (a small numerical sketch of this decomposition is given after the list below). This generalizes the bias-variance decomposition originally proposed for the MSE in the following sense: 1. The variance does not depend on \(P_{V}^{0}\) directly. Also, it is nonnegative and equal to zero if and only if the \(Q_{V}^{m}\) are always identical. 2. The bias depends only on \(P_{V}^{0}\) and the "average model" \(\bar{Q}_{V}\), which is defined as the minimizer of the variance.
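As a concrete toy illustration of Eqs. (20) and (21) (not the authors' code; the input distributions here are random), the decomposition can be evaluated directly for any finite set of model distributions, and one can verify numerically that the bias and the variance sum to the GE:

```
import numpy as np

def heskes_decomposition(p0, q_models):
    # p0: true distribution over K states; q_models: (M, K) array of model
    # distributions from M independent trainings.
    log_qbar = np.mean(np.log(q_models), axis=0)   # <log Q^m>_m
    qbar = np.exp(log_qbar)
    qbar /= qbar.sum()                             # mean distribution, Eq. (21)
    kl = lambda p, q: np.sum(p * np.log(p / q))
    bias = kl(p0, qbar)
    variance = np.mean([kl(qbar, q) for q in q_models])
    ge = np.mean([kl(p0, q) for q in q_models])
    return bias, variance, ge                      # ge equals bias + variance

rng = np.random.default_rng(1)
p0 = rng.random(8); p0 /= p0.sum()
qs = rng.random((5, 8)); qs /= qs.sum(axis=1, keepdims=True)
bias, var, ge = heskes_decomposition(p0, qs)
print(bias, var, ge, bias + var)   # the last two agree up to rounding
```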
We reexamine the effects of \(N_{H}\) shown in Fig. 5 using the Heskes decomposition scheme. As shown in Fig. 6, while the variance monotonically increases with \(N_{H}\), the bias exhibits a nonmonotonic behavior as \(N_{H}\) is increased. It seems that, in this case, the bias also contains significant contributions from the sampling fluctuations which were captured by the DE of our decomposition scheme. This might be due to the training outcome \(Q_{V}^{m}\) having a skewed distribution around the optimal distribution \(Q_{V}^{0}\). Thus, to describe the tradeoff behavior of the GE, the decomposition into the ME and the DE seems more appropriate than the Heskes bias-variance decomposition. Figure 6: Decomposition of the GE according to the bias-variance scheme à la Heskes. We show results for the 2-d Ising model at (a) \(T=1.9\) (ferromagnetic), (b) \(T=3.6\) (critical), (c) \(T=16\) (paramagnetic) and for the TASEP with open boundaries at (d) \(\alpha=0.3\), \(\beta=0.7\) (low density), (e) \(\alpha=0.5\), \(\beta=0.7\) (phase boundary), (f) \(\alpha=0.7\), \(\beta=0.7\) (maximal current). ## 5 Summary and outlook In this study, we proposed that the generalization error (GE) in unsupervised learning can be decomposed into two components, namely the model error (ME) and the data error (DE). To examine how these quantities behave as the data and the model complexities are varied, we trained the RBMs with various numbers of hidden nodes to generate the steady-state configurations of the 2-d Ising model at a given temperature and the TASEP with given entry and exit rates, whose statistics are exactly known. For both models, we observed that the DE tends to decrease as the inverse volume of the training dataset, verifying that our decomposition properly distinguishes between the two sources of the GE, namely the inadequacies of the model (ME) and those of the training data (DE). Moreover, we found that the ME (DE) decreases (increases) monotonically as the number of hidden nodes is increased, leading to the tradeoff behavior of the GE. The GE minimum occurs at a higher number of hidden nodes when there exist stronger correlations between different parts of the system. This is analogous to the bias-variance tradeoff in supervised learning--too simple machines fail to capture the essential features of the data, while too complex machines fail to filter out the noise. Thus, our study clarifies the nature of the GE tradeoff in unsupervised learning. Various theoretical studies are possible from here. For example, while our study has reported only numerical results for the RBMs, it would be interesting to think of an analytically tractable example of unsupervised learning for which the location of the GE minimum can be explicitly connected to the statistical properties of the training data. Also notable is the abrupt change in the training dynamics at the onset of the overfitting regime shown in Fig. 3. It would be worthwhile to check whether the RBM crosses any phase boundary as its dynamical behavior changes. The issues of (i) how regularization suppresses overfitting and affects the GE minimum, (ii) how the hidden layer differently encodes information in the underparametrized and the overparametrized regimes [27, 28], and (iii) whether the double descent of the GE observed in supervised learning [6, 7, 8, 9] is also possible in unsupervised learning would also be interesting to investigate. On the practical side, we note that the GE defined as the KL divergence in Eq.
(14) is not easy to calculate, as \(P_{V}^{0}\) is unknown in practice. Instead, one can focus on the tradeoff behavior of the log-likelihood \[\mathcal{L}\equiv\left\langle\frac{1}{M}\sum_{i=1}^{M}\log Q_{V}^{\rm m}(\mathbf{v}_{i})\right\rangle_{\rm m}, \tag{22}\] whose value is obtained using a sampled test dataset \(\{\mathbf{v}_{1},\ldots,\mathbf{v}_{M}\}\). Unlike the GE, \(\mathcal{L}\) is not bounded by zero, but the nonmonotonic behavior of the GE as a function of \(N_{H}\) will manifest itself as the inverted nonmonotonic behavior of \(\mathcal{L}\), as illustrated for the 2-d Ising model and the TASEP in Fig. 7. It would be interesting to check whether the same behavior is observed for a more diverse range of examples.

Table 1: A comparison between the GE tradeoff behaviors in supervised (feed-forward neural network; FNN) and unsupervised learning.

| | Supervised learning (FNN) | Unsupervised learning (RBM) |
| --- | --- | --- |
| Generalization error | Mean-squared error | KL divergence |
| Limitation of the model | Bias | Model error |
| Performance variability | Variance | Data error |
| What it learns | Functional relationship | Distribution |
| What it overfits | Noise | Sampling error |
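For completeness, a minimal sketch of how the estimator in Eq. (22) could be evaluated when the model distributions are available over an enumerable state space, as for the \(2^{9}\)-state systems above; the variable names are illustrative and this is not the authors' code.

```
import numpy as np

def script_L(q_models, test_states):
    # Eq. (22): average over M models of the mean log-probability
    # assigned to the held-out configurations.
    # q_models: (M, K) model distributions; test_states: state indices.
    log_q = np.log(q_models[:, test_states])       # shape (M, n_test)
    return log_q.mean()

# Toy usage with random 'models' over 512 enumerable states:
rng = np.random.default_rng(2)
qs = rng.random((10, 512)); qs /= qs.sum(axis=1, keepdims=True)
test = rng.integers(0, 512, size=100)
print(script_L(qs, test))
```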
2302.05431
NeSe: Near-Sensor Event-Driven Scheme for Low Power Energy Harvesting Sensors
Digital technologies have made it possible to deploy visual sensor nodes capable of detecting motion events in the coverage area cost-effectively. However, background subtraction, as a widely used approach, remains an intractable task due to its inability to achieve competitive accuracy and reduced computation cost simultaneously. In this paper, an effective background subtraction approach, namely NeSe, for tiny energy-harvested sensors is proposed leveraging non-volatile memory (NVM). Using the developed software/hardware method, the accuracy and efficiency of event detection can be adjusted at runtime by changing the precision depending on the application's needs. Due to the near-sensor implementation of background subtraction and NVM usage, the proposed design reduces the data movement overhead while ensuring intermittent resiliency. The background is stored for a specific time interval within NVMs and compared with the next frame. If the power is cut, the background remains unchanged and is updated after the interval passes. Once the moving object is detected, the device switches to the high-powered sensor mode to capture the image.
Sepehr Tabrizchi, Mehrdad Morsali, Shaahin Angizi, Arman Roohi
2023-02-07T19:11:50Z
http://arxiv.org/abs/2302.05431v1
# NeSe: Near-Sensor Event-Driven Scheme for Low Power Energy Harvesting Sensors ###### Abstract Digital technologies have made it possible to deploy visual sensor nodes capable of detecting motion events in the coverage area cost-effectively. However, background subtraction, as a widely used approach, remains an intractable task due to its inability to achieve competitive accuracy and reduced computation cost simultaneously. In this paper, an effective background subtraction approach, namely NeSe, for tiny energy-harvested sensors is proposed leveraging non-volatile memory (NVM). Using the developed software/hardware method, the accuracy and efficiency of event detection can be adjusted at runtime by changing the precision depending on the application's needs. Due to the near-sensor implementation of background subtraction and NVM usage, the proposed design reduces the data movement overhead while ensuring intermittent resiliency. The background is stored for a specific time interval within NVMs and compared with the next frame. If the power is cut, the background remains unchanged and is updated after the interval passes. Once the moving object is detected, the device switches to the high-powered sensor mode to capture the image. ## I Introduction From energy-harvested surveillance and monitoring systems in smart cities to smart human-machine interfaces in mobile devices, smart, low-power, connected sensors are attracting increasing interest in a wide variety of applications. Moreover, our environment is best described through vision, which is becoming increasingly ubiquitous in video monitoring applications. Human observers monitor several cameras to detect unusual activity and provide immediate feedback. Unfortunately, human observers lose 90% of their concentration capability after only 20 minutes of following ten cameras attentively, which defeats the purpose of this approach. Therefore, the automatic detection of unusual events in embedded applications is becoming increasingly significant. Machine vision applications often begin with background subtraction, making it an essential component. The outputs of background subtraction are fed to higher-level processes, such as object tracking. An online video background subtraction pipeline usually consists of two stages: initialization of the background model, in which the bootstrapping is performed, and maintenance of the background model, which involves updating its parameters online. Interpreting a scene, however, requires large amounts of computing power and data-intensive vision algorithms. As they are highly parallelizable, pixel-level foreground detectors are ideal for embedded platforms. CMOS imagers with on-chip feature extraction and compression have been developed extensively in the last decade with the primary goal of optimizing computing resources and reducing overall power consumption [1, 2, 3]. In this work, we propose a near-sensor event-driven architecture, namely _NeSe_, allowing for a trade-off between accuracy and power efficiency. NeSe is capable of operating in 12 different modes, defined by the precision and box size, as explained in the following. To the best of our knowledge, this work is the first that utilizes non-volatile elements to store the static background, which leads to a notable reduction in standby power consumption.
## II Near-Sensor Processing Background In the same way that eyes and brains work together, the sensors that detect the field of view generate a stream of pixels that represents the scenic event and is sent to a backend processor. Although there are 130 million pixels on the retina, only about 1.3 million nerve connections carry their output to the brain, which indicates a high sparsity ratio. This massive sparsity can significantly reduce power consumption and latency. A further improvement can be made by reducing the amount of redundant information sent to the brain. Inspired by these observations, the integration of computing and sensing has been extensively studied as a way to mitigate the abovementioned issues, reducing data movement and ADC bandwidth. The research outcomes can be classified into three designs: processing-near-sensor (PNS) [4, 5], processing-in-sensor (PIS) [6, 7, 8], and processing-in-pixel (PIP) [9, 10]. Most computer vision systems perform background subtraction as a first step in detecting moving objects within a video stream without having prior knowledge of the objects themselves [11]. A background model is generally created during the background subtraction process. The easiest way to do this is to manually set a static image without moving objects as the background. Each video frame is then compared to the static image to compute the absolute difference, referred to as Static Frame Difference and represented by \(|F_{i}-B|>TH\). In the event of changing ambient lighting, a static image may not be the best choice, since the foreground segmentation may fail completely. Alternately, the previous frame may be used instead of a static image, referred to as Frame Difference and expressed by \(|F_{i}-F_{i-1}|>TH\). Due to its sensitivity to the threshold TH, this technique may only work properly under certain frame rates and object speeds; e.g., it fails if the moving object stops suddenly. Thus, the authors in [12] modeled the background more accurately using the average, arithmetic mean, or weighted mean of several previous frames. The equation for the past \(n\) frames is \(B_{j}=\frac{1}{n}\Sigma_{i=0}^{n-1}F_{j-i}\). However, this model requires storing multiple frames in off-chip memory, which conflicts with resource-constrained tiny devices. ## III Proposed Design We propose NeSe as an efficient and reconfigurable always-on intelligent visual perception architecture, as shown in Fig. 1(a), that realizes a near-sensory processing scheme with event detection capabilities. NeSe consists of a \(600\times 600\) pixel array (PA), a non-volatile Spin-Orbit Torque Magnetic Random Access Memory (SOT-MRAM) array, a row controller (Ctrl), a command decoder, a sensor timing Ctrl, a memory/computing unit, and readout/ADC/SA/comparator circuitry. The storage element in the SOT-MRAM is the SHE-MTJ [13]. Each cell located in the MRAM array is connected with a Write Word Line (WWL), Write Bit Line (WBL), Read Word Line (RWL), Read Bit Line (RBL), and Source Line (SL). The bit-cell structure of the 2T1R SOT-MRAM and its biasing conditions are shown in Fig. 1(d). The proposed architecture operates in two modes, i.e., _sensing_ and _event detection_. The NeSe architecture captures the input as a static background and then writes the central pixels, according to the configured settings, into the MRAM cells. Due to the non-volatility of MRAMs, if the power of NeSe is cut, the initial background is still held.
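For reference, the baseline detectors reviewed in Sec. II, together with a NeSe-style thresholded event decision, can be summarized in a few lines. This is an illustrative software sketch only, assuming 8-bit grayscale frames; it is not the in-sensor hardware datapath.

```
import numpy as np

def static_frame_difference(frame, background, th):
    # |F_i - B| > TH with a fixed background image.
    return np.abs(frame.astype(np.int16) - background.astype(np.int16)) > th

def frame_difference(frame, prev_frame, th):
    # |F_i - F_{i-1}| > TH, using the previous frame as the background.
    return np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16)) > th

def running_mean_background(past_frames):
    # B_j = (1/n) * sum_{i=0}^{n-1} F_{j-i} over the past n frames.
    return np.mean(np.stack(past_frames), axis=0)

def event_detected(change_mask, threshold_pixels):
    # NeSe-style decision: enough changed pixels switch on the sensor mode.
    return np.count_nonzero(change_mask) >= threshold_pixels
```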
Once a moving object is detected, the architecture switches to the sensing mode to detect the object(s). To reduce the overall power consumption, NeSe only updates (sends) the modified pixels in the MRAM array. ### _Pixel and MRAM arrays_ As illustrated in Fig. 1(c), NeSe's pixel consists of a conventional three-transistor/one-photodiode (PD) sensor. In the sensing mode, by initially setting Rst="high", the PD connected to the T\({}_{1}\) transistor becomes reverse-biased, and the readout component captures a voltage \(V_{1}\) = VDD. After T\({}_{1}\) is turned off, the PD generates a photo-current proportional to the external light intensity, which in turn leads to a voltage drop (\(V_{PD}\)) at the gate of T\({}_{2}\). Therefore, the voltage values before and after the image light exposure, i.e., \(V_{1}\) and \(V_{2}\), are sampled by the readout circuit, and the difference between the two voltages is sensed, amplified, and then converted to digital data by an ADC. This value is proportional to the voltage drop on \(V_{PD}\). Figure 1(e) depicts the functionality of one proposed pixel. It is worth pointing out that each ADC samples when the voltage drops, then subtracts the pixel reset voltage and converts the output signal. Accordingly, the ADC can skip to the next row of the array. NeSe is equipped with a near-sensor CMOS bit-wise XNOR comparator, as shown in Fig. 1(a), to efficiently compare such row-wise digitized pixel data with the corresponding captured background in the MRAM array and thereby detect events. To enable this, one row of the MRAM array, shown in Fig. 1(b), is selected, sensed out, and loaded as the first operand into a register at the comparator, while the second register is loaded with the pixel data. Accordingly, a single-cycle XNOR operation is accomplished. If a mismatch is detected, i.e., an event is observed, the MRAM array holding the central pixels needs to be updated. Computationally, this stage requires \(n\) MRAM write operations. To achieve an ultra-fast, low-energy write operation, the SOT-MRAM cells are developed with a \(20\,k_{B}T\) energy barrier. As experimentally shown in [14], this reduces the write energy consumption by half compared with the conventional \(40\,k_{B}T\) design, at the cost of lower retention time. ### _Event-Detection Mode_ The primary task of the always-on NeSe architecture is to detect an event using background variations. NeSe supports 12 different implementations to balance the efficiency and accuracy design metrics. The different designs are determined by the \(box\_size\in\{3,5,7\}\) and \(precision\in\{1,2,3,4\}\), where \(box\_size\) represents the height and width of the defined pixel groups, and \(precision\) denotes the bit-width of the ADCs. Each \(n\times n\) pixel box includes only one ON pixel, (\(n-1\)) Disconnect pixels, and (\(n^{2}-n\)) OFF pixels. An implementation with a larger box size reduces power consumption at the cost of accuracy degradation. In NeSe, each column is enabled via its own \(V_{DD}\) line, and each row is chosen using a common row selector (_R_) signal. Thus, an ON pixel is formed when both \(R\) and the column are enabled; a Disconnect pixel when \(R\) is disabled but the column is enabled; and an OFF pixel when the column is disabled, regardless of the \(R\) value. The \(R\) signal takes the values (\(nx-1\)), where \(n\in\{3,5,7\}\) and \(x\) is the row index \(\in\{1,2,\ldots,\left\lfloor 600/n\right\rfloor\}\).
Consequently, all the columns without central pixels are disconnected from the power supply (OFF), while the rest of the pixels in the columns containing a central pixel are disconnected using the \(R\) signal. Figure 1: (a) The NeSe architecture, including (b) an MRAM array and (c) a pixel. (d) Schematic and biasing of an MRAM, and (e) a pixel's transient waveform. For instance, as shown in Fig. 2, by setting \(box\_size\) to 3, all the pixels are grouped into a \(3\times 3\) shape, where the central pixel (e.g., \(P_{2,2}\)) is ON, the other two pixels in the same column (e.g., \(P_{1,2}\) and \(P_{3,2}\)) are disconnected from the ADCs by the \(R\) signal, and the rest (e.g., \(P_{1,1},P_{2,1},P_{3,1},P_{1,3},P_{2,3}\) and \(P_{3,3}\)) are OFF. The power consumption and the total number of boxes for the different box sizes are summarized in Table II. Larger box sizes (e.g., \(7\times 7\)) contain a lower number of central pixels (7396), which leads to more power saving at the cost of accuracy loss. Another reconfigurable capability of NeSe is the _precision_ bit-width, which defines the number of bits compared between a pixel and its previously stored value in the MRAM. A lower _precision_ requires a smaller number of comparisons and write-back operations, which decreases the power consumption, but again at the cost of accuracy loss. Thus, a trade-off between efficiency and accuracy can be determined by the user w.r.t. the available resources, criteria, etc. Figure 3 depicts different scenarios, including various box sizes, precisions, lighting conditions, and background updates. First, NeSe captures Fig. 3(a) and stores it as a static background within the MRAM cells. Then, an event occurs in Fig. 3(b), and the results for different precisions are shown. Interestingly, even 1-bit precision removes the background efficiently. Figure 3(c) illustrates the results for varied box sizes. After comparing the new input (\(t_{i+n+5}\)) with the background stored at time \(t_{i}\), we detect that the chair has moved and a mug has been left on the desk. A smaller box size (e.g., \(3\times 3\)) provides sharper output. After a while, as shown in Fig. 3(d) at time \(t_{i+2n}\), the lighting changes, but NeSe still functions appropriately. Finally, in Fig. 3(e), the background is updated with these pixels because the chair location and the mug remain unchanged for a while. The comparison results using the high-accuracy \(3\times 3\) boxes exhibit no difference between Fig. 3(e) and the new background. Algorithm 1 shows all the steps, including the event-detection and sensing modes, provided by the NeSe architecture. The algorithm takes the size of the box, the precision, and two thresholds, threshold\({}_{\text{pixels}}\) and time\({}_{\tau}\). The former sets the minimum number of changed pixels, whereas the latter is leveraged to update the background. First, every row containing a central pixel is activated (line 9), and the parallel comparison between all the central values and the previously stored values of the same pixels is performed in line 11. The **parallel_comp** function takes the precision, which determines the required number of compared bits. For example, if \(precision=1\), only the most significant bits of the pixels are compared. In line 12, if the number of changes is greater than or equal to threshold\({}_{\text{pixels}}\), the row index is held in the turn_on_list. After checking all rows, the length of the turn_on_list is checked.
If the list is not empty, the mode is changed to the sensor mode, and the time counter is increased by one. This variable indicates how many times NeSe has switched to the sensor mode consecutively. If this variable reaches time\({}_{\tau}\), we need to update the background with the new values (line 7). As shown in Fig. 3(e), after updating the background to (d), most of the compared pixels are black. ```
1:  Inputs: box_size ∈ {3,5,7} and precision ∈ {1,2,3,4}-bit
2:  Inputs: threshold_pixels, time_τ
3:  Output: sensor_mode status
4:  turn_on_list ← ∅
5:  procedure Event-Detection
6:    if time ≥ time_τ then            ▷ Merge steady objects with the background.
7:      update(background)
8:    for x = 1 to ⌊600/n⌋ do
9:      activate row nx − 1            ▷ The row holding the central pixels.
10:     read out the central pixels
11:     changes ← parallel_comp(precision)
12:     if changes ≥ threshold_pixels then turn_on_list ← turn_on_list ∪ {x}
13:   if |turn_on_list| ≠ 0 then
14:     sensor_mode ← ON; time ← time + 1
``` ## IV Evaluation The power consumption of the event-detection mode is dominated by the readout and conversion power, which largely depends on the ADC precision. \(P_{MRAM}\) is the SOT-MRAM's read power and \(P_{compare}\) denotes the power consumed by the near-sensor CMOS bit-wise XNOR comparator. Assuming a 2-bit ADC structure, every central pixel after readout has to be compared with two SOT-MRAM cells holding the background value. This means 2\(\times P_{MRAM}\) is considered in the evaluations. We observe that the higher the ADC precision (here from 4-bit to 2-bit), the higher the power budget required for the edge device to perform such near-sensor computation; within a particular ADC precision, a larger box size brings higher power efficiency to the system at the cost of lower accuracy, as discussed above. ### _Intermittent-Robust Operation_ Power supplies in energy harvesting systems are limited in capacity. Besides, a CMOS-based design loses data when powered down, so restoring (writing back) information after a new power-up consumes power and time. Energy harvesting devices may undergo a charge/discharge cycle hundreds of times per second, which means the system might consume a significant portion of its entire power supply capacity to restore data. Although NV-MRAMs provide power-failure-tolerant designs, the power consumption required by write operations on non-volatile elements remains an issue. Thermal barriers between \(40\) and \(60\,k_{B}T\) are generally chosen for MRAM to provide a retention time (\(\tau=\tau_{0}\exp(\Delta/k_{B}T)\)) of 10-15 years, while the critical spin-current is linearly proportional to the thermal barrier \(\Delta\). Thus, since our application does not require retention times of years, we reduce the thermal barrier of the nanomagnets by means of uniaxial anisotropy. Herein, MRAM components with \(20\,k_{B}T\) energy barriers are investigated, which can achieve retention times ranging from minutes to hours while providing at least a 75% energy reduction. By reducing the charge currents required for the write operation, significant energy savings can be achieved due to the quadratic relationship between the Ohmic (\(I^{2}R\)) losses and the input write currents. ## V Conclusion This paper proposed a practical background subtraction approach, NeSe, for tiny energy-harvested sensors leveraging MRAMs. NeSe allows the accuracy and efficiency of event detection to be adjusted at runtime based on the application's requirements. Furthermore, the proposed design reduces the data movement overhead due to the near-sensor implementation of background subtraction. Moreover, MRAMs ensure intermittent resiliency, meaning that if the power is cut, the background remains unchanged.
Finally, once a moving object is detected, the device switches to the high-powered sensor mode. ## Acknowledgements This work is supported in part by the National Science Foundation under Grant Nos. 2216772 and 2216773.
2308.10701
On the comparison between pre- and post-surgery nasal anatomies via computational fluid dynamics
Nasal breathing difficulties (NBD) are widespread and difficult to diagnose; the failure rate of their surgical corrections is high. Computational Fluid Dynamics (CFD) enables diagnosis of NBD and surgery planning, by comparing a pre-operative (pre-op) situation with the outcome of virtual surgery (post-op). An equivalent comparison is involved when considering distinct anatomies in the search for the functionally normal nose. Currently, this comparison is carried out in more than one way, under the implicit assumption that results are unchanged, which reflects our limited understanding of the driver of the respiratory function. The study describes how to set up a meaningful comparison. A pre-op anatomy, derived via segmentation from a CT scan, is compared with a post-op anatomy obtained via virtual surgery. State-of-the-art numerical simulations for a steady inspiration carry out the comparison under three types of global constraints, derived from the field of turbulent flow control: a constant pressure drop (CPG) between external ambient and throat, a constant flow rate (CFR) through the airways and a constant power input (CPI) from the lungs can be enforced. A significant difference in the quantities of interest is observed depending on the type of comparison. Global quantities (flow rate, pressure drop, nasal resistance) as well as local ones are affected. The type of flow forcing affects the outcome of the comparison between pre-op and post-op anatomies. Among the three available options, we argue that CPG is the least adequate. Arguments favouring either CFR or CPI are presented.
Eric Segalerba, Gabriele Dini Ciacci, Maurizio Quadrio, Jan O. Pralits
2023-08-21T13:10:48Z
http://arxiv.org/abs/2308.10701v1
# On the comparison between pre- and post-surgery nasal anatomies via computational fluid dynamics ###### Abstract Nasal breathing difficulties (NBD) are widespread and difficult to diagnose; the failure rate of their surgical corrections is high. Computational Fluid Dynamics (CFD) enables diagnosis of NBD and surgery planning, by comparing a pre-operative (pre-op) situation with the outcome of virtual surgery (post-op). An equivalent comparison is involved when considering distinct anatomies in the search for the functionally normal nose. Currently, this comparison is carried out in more than one way, under the implicit assumption that results are unchanged, which reflects our limited understanding of the driver of the respiratory function. The study describes how to set up a meaningful comparison. A pre-op anatomy, derived via segmentation from a CT scan, is compared with a post-op anatomy obtained via virtual surgery. State-of-the-art numerical simulations for a steady inspiration carry out the comparison under three types of global constraints, derived from the field of turbulent flow control: a constant pressure drop (CPG) between external ambient and throat, a constant flow rate (CFR) through the airways and a constant power input (CPI) from the lungs can be enforced. A significant difference in the quantities of interest is observed depending on the type of comparison. Global quantities (flow rate, pressure drop, nasal resistance) as well as local ones are affected. The type of flow forcing affects the outcome of the comparison between pre-op and post-op anatomies. Among the three available options, we argue that CPG is the least adequate. Arguments favouring either CFR or CPI are presented. **Keywords: nasal flow, CFD, LES, nasal resistance, constant power input** ## 1 Introduction Nasal breathing difficulties (NBD) are a widespread condition; it is well known (Gray, 1978) that the large majority of the population exhibits some anatomic deformity of the nasal airways. Clinicians are frequently confronted with the issue of whether such deformities are the actual cause of the patient's symptoms: while some situations, e.g. an overly deviated septum, are self-evident, the interpretation of less pronounced deformities is often debatable. The number of surgeries, and in general the burden induced by NBD on the healthcare system, is large. Surgeons rely on their own judgment and experience to take surgical decisions, but errors are unavoidable (Rhee et al., 2014). The subjectivity of such choices leads to several unnecessary surgeries being performed each year worldwide: a large failure rate of the interventions actually carried out is recorded e.g. for septoplasty (Sundh and Sunnergren, 2015) or maxillectomy (Bertazzoni et al., 2017). Numerical analysis, as a tool to investigate bio-mechanical problems, is becoming common practice in several areas, including the nasal airways, where computational fluid dynamics (CFD) is increasingly used in many studies: see e.g. Inthavong et al. (2019) for a recent and authoritative review. CFD makes virtual surgery possible (Radulesco et al., 2020; Moghaddam et al., 2020), by enabling the comparison between the flow in the original (pre-surgery or pre-op) anatomy and the flow in the (post-surgery or post-op) anatomy modified by the surgeon on the computer. However, even though computing the flow in the nasal cavities via CFD may seem straightforward, a clearly defined and standardized procedure is still lacking.
Fundamental questions regarding the flow in the human nose remain unaddressed, reflecting our limited understanding of its physiological driver(s), and this has so far hindered a widespread use of CFD for clinical purposes. This contribution addresses one such question, which has not been identified so far, let alone answered: How should a comparison between pre-op and post-op anatomies be carried out? The question is apparently simple, yet the answer is non-trivial, and requires putting together concepts ranging from numerical analysis to physiology of the entire respiratory system. Once the scope of the question is enlarged to include the comparison of two generic anatomies, it becomes apparent that an appropriate answer is crucial for the successful identification of the functionally normal nose. By surveying the existing literature, and limiting the analysis to the frequent case of steady inspiration (or expiration), one notices that several CFD simulations of the nose flow enforce either a constant pressure difference between the external ambient and some point in the trachea (see e.g. Cannon et al., 2013; Radulesco et al., 2019; Cherobin et al., 2020) or a certain flow rate through the passageways (see e.g. Lindemann et al., 2013; Calmet et al., 2020; Bruning et al., 2020). The first choice does not appear to possess a clear physiological rationale, whereas the second implies a comparison under the constraint of the same oxygen consumption rate. Statistically, about 2/3 of the papers employ the latter approach. The two choices will be shown here not to be equivalent; hence, one has to decide beforehand which global quantity is kept constant across a comparison. Moreover, a third option will be introduced. Although in a vastly different context, the very same question was identified and answered by Frohnapfel et al. (2012) in the field of turbulence and flow control. In that case, the complex nasal anatomy reduces to a much simpler duct (a straight channel or a pipe, for example): still, either a pressure drop must be established across the inlet and the outlet sections, or a flow rate must be imposed, for the fluid to flow through the duct. Surgery can be interpreted as flow control via shape optimization; the pre- and post-op anatomies correspond to the flow without and with flow control. In a duct flow, the comparison can be carried out by either enforcing a Constant Pressure Gradient (CPG) and measuring the flow rate, or enforcing a Constant Flow Rate (CFR) and measuring the pressure drop. A third option, named Constant Power Input (CPI) (Hasegawa et al., 2014), was also proposed as a further alternative, in which the quantity that remains constant across the comparison is the pumping power that enters the system, given by the product of the pressure drop and the flow rate. In an indefinite plane channel flow, the choice between CPG, CFR and CPI has been found to imply minor differences in the statistical description of the same flow (Quadrio et al., 2016), and a similar result is expected for the nasal flow. The point of concern, though, is that the choice of the forcing term becomes crucial when flow control is applied to reduce the skin-friction aerodynamic drag, and two _different_ flows (albeit geometrically similar) need to be compared: in this case, the outcomes of CFR, CPI and CPG simulations differ significantly.
The objective of the present paper is thus to assess whether or not comparing pre-op and post-op anatomies is affected by the choice of one among the CFR, CPI and CPG strategies. We will delineate a simple CFD setup, where for example the clinically important temperature field is not computed, and consider one patient-specific pre-op anatomy with a corresponding post-op anatomy already available. It was created with virtual surgery, specifically with an endoscopic medial maxillectomy, and has been described by Saibene et al. (2020), where it was used as a reference surgical approach to develop alternative options which partially preserve the middle turbinate. The structure of the paper is as follows: in Sec. 2 the anatomies are described, and details about the CT scan and its reconstruction are provided; the computational model (equations, discretization and treatment of turbulence) is illustrated, and the three types of flow forcing are described in detail. In Sec. 3 both quantitative and qualitative results obtained for the two anatomies with the different forcings are described and compared, including the first-ever CPI simulations of the flow in the human nose. Sec. 4 critically discusses the results in terms of the significance of the various comparisons; lastly, Sec. 5 summarizes the study and indicates the importance of a clinical consensus to define the best comparison strategy. ## 2 Methods ### Anatomy and computational domain The pre-op anatomy considered in the present study is obtained from segmentation of a CT scan. The post-op anatomy, instead, is built after a virtual maxillectomy of the former. Both have been described and discussed at length by Saibene et al. (2020). The CT scan of a 67-year-old man, consisting of 384 DICOM images with spatial and coronal resolution of 0.5 \(mm\) and an axial gap of 0.6 \(mm\), is segmented, at constant radiodensity threshold, under supervision of an ENT expert, according to a previously described procedure (Quadrio et al., 2016). Figure 1 portrays the pre-op anatomy, complemented by a spherical air volume surrounding the external nose, designed to move the inlet portion of the computational domain far from the nostrils while minimizing the computational overhead. The post-op anatomy is obtained by virtual surgery of the same patient for endoscopic medial maxillectomy, executed under close guidance of an ENT surgeon (Saibene et al., 2020). This surgery is a standard procedure in the management of maxillary sinus neoplasms, and is sometimes employed to address inflammatory conditions. It has been chosen because it clearly alters the nasal resistance, and thus constitutes a convenient test bed for the present study. ### Computational procedures The pre- and post-op flow fields for a steady-state inspiration are computed with a relatively standard high-fidelity CFD approach, briefly described below. For the numerical solution of the flow equations, we employ the open-source finite-volumes solver OpenFOAM (Weller et al., 1998). The computational domain shown in Figure 1 is discretized into a volume mesh which contains approximately 14.6 million cells for the pre-op anatomy, and 15.4 million for the post-op one, whose volume is slightly larger. Meshing starts from a uniform background mesh of cubic cells, with an edge length of 250 microns, which is deformed and refined near the boundary in the process of adaptation to the curved boundary.
The maximum non-orthogonality is less than 60\({}^{\circ}\), and its mean value is 4.4\({}^{\circ}\); maximum skewness is 3.2. In physiological conditions, the nasal flow is typically transitional with coexisting laminar and subcritical turbulent regions. Moreover, it is often unsteady even when the boundary conditions are steady. Hence we adopt a time-resolved approach, i.e. a high-resolution Large-Eddy Simulation (LES), in which most of the turbulent flow scales are resolved, and only the smallest scales are modelled. The LES turbulence model is the Wall-Adapting Local Eddy-viscosity (Nicoud and Ducros, 1999), which is able to turn itself off in regions where turbulence is absent. It should be realized, though, that with a high-resolution LES the turbulence model becomes relatively unimportant, since the mesh is fine enough to render the model contribution small or negligible. The differential operators are all discretized at second-order accuracy, as it has been shown by Schillaci and Quadrio (2022) that a lower order deteriorates the solution to an unacceptable level, regardless of the turbulence modeling approach. The incompressible LES equations are solved with no-slip and no-penetration conditions applied to the solid boundaries; the boundary conditions enforced at the inflow (sphere) and at the outflow (throat) boundaries depend on the specific forcing, and will be discussed below in Sec. 2.3. The temporal discretization uses a second-order implicit scheme: no stability constraint limits the size of the time step. However, for accuracy a value of the Courant--Friedrichs--Lewy number of 0.3 is imposed in all cases. The average time step is \(1.1\cdot 10^{-5}\) seconds for the pre-op cases, and \(4.5\cdot 10^{-6}\) seconds for the post-op cases. Each simulation computes one second of physical time in about 4000 core hours. Parallel computing is used to reduce the computing time to less than two days. ### Flow forcing The pre-op and post-op anatomies are compared for a case of steady inspiration, in which either the volumetric flow rate \(Q\) (CFR), the pressure difference \(\Delta p\) between inlet and outlet (CPG) or the power input \(P\) entering the system (CPI) are kept constant across the comparison. (Note that, in an incompressible flow, pressure is defined up to an arbitrary constant; a reference pressure \(p=0\) is thus set at the outer ambient, and assigning \(\Delta p\) becomes equivalent to assigning the pressure \(p_{th}\) at the throat.) A close analogy exists between the present problem and the comparison of two turbulent duct flows, where flow control is used to alter the natural relation between the flow rate and the pressure drop, i.e. the friction law (Hasegawa et al., 2014). The first two options are simple from a practical point of view: in a typical CFD flow solver the numerical values for the quantities to be kept constant can be straightforwardly assigned. In CFR, one enforces a constant flow rate \(Q=Q_{0}\), obtains a time-varying outlet pressure \(p_{th}(t)\) and computes its average value \(\overline{p}_{th}\)_a posteriori_. In CPG, one enforces a constant outlet pressure \(p_{th}=p_{0}\), obtains a time-varying flow rate \(Q(t)\) and computes its average value \(\overline{Q}\)_a posteriori_. The third approach, CPI, is less conventional and typically not immediately available in standard CFD solvers, as power cannot usually be prescribed directly, but is at least as physically sound as the others.
In CPI, one enforces a constant power input \(P=P_{0}\), where \(P\) is the product of the pressure drop and the flow rate: \[P=Q\Delta p=-Qp_{th}, \tag{1}\] in which the last identity follows from having set \(p=0\) at the inlet. The solution then yields \(p_{th}(t)\) and \(Q(t)\), from which the values \(\overline{p}_{th}\) and \(\overline{Q}\) are computed both _a posteriori_. In a time-dependent calculation, the simplest numerical implementation of Eq.(1) computes at each time instant \(t\) the throat pressure \(p^{t}_{th}\) needed to drive the flow as a function of the flow rate \(Q^{t-dt}\) at the previous time \(t-dt\), i.e.: \[p^{t}_{th}=-\frac{P_{0}}{Q^{t-dt}}, \tag{2}\] where \(dt\) is the time step. Figure 1: Pre-op (left) and post-op (right) anatomies, including an external spherical volume around the nose tip. In the post-op anatomy, the red circle highlights changes due to the virtual maxillectomy. In analogy with the duct flow problem, it is important to notice that the three options are essentially equivalent as long as a single anatomy is considered. In other words, a given case can be computed with CFR by enforcing a constant \(Q=Q_{0}\) to obtain as a result a certain mean pressure \(\overline{p}_{th}=p_{0}\) and mean power \(\overline{P}=P_{0}\); the same case computed with CPG by enforcing a constant \(p_{th}=p_{0}\) yields \(\overline{Q}=Q_{0}\) and \(\overline{P}=P_{0}\); and when CPI is used by enforcing a constant \(P=P_{0}\), one gets \(\overline{Q}=Q_{0}\) and \(\overline{p}_{th}=p_{0}\). It is only when distinct anatomies are considered that differences may become significant. This is exactly the scenario considered in the present work. ## 3 Results Results from six simulations are presented, comparing the pre-op and the post-op cases computed with the three options (CFR, CPG and CPI) available for the flow forcing. The simulations consider a steady inspiration with a flow rate of about \(16\ l/min\) or \(2.67\cdot 10^{-4}\ m^{3}/s\), corresponding to a mild breathing intensity (Wang et al., 2012). Equivalently, the reference case has \(24.45\ Pa\) of pressure difference between the external ambient and the throat, and \(6.53\ mW\) of mechanical power used to drive the flow through the nasal cavities. The LES approach computes the temporal evolution of the flow; after excluding the initial transient, the time-dependent solution is averaged over one second of physical time to compute flow statistics. Based on previous experience (Covello et al., 2018), we know that at these values of breathing rate averaging the time-dependent solution for \(0.6\) seconds is sufficient to obtain accurate flow statistics. This duration is almost doubled here, as the study aims at appreciating differences in flow statistics. Figure 2 presents the three planes used in the following to illustrate and discuss the flow fields. We will consider two para-sagittal planes, the first (SL) cutting through the left, unaltered side of the airways, and the second (SR) cutting through the right side, modified by virtual surgery. Plane SR is not perpendicular to the \(x\) axis, but is slightly inclined such that it passes through the throat. Lastly, a coronal plane (C) intersects the maxillary sinuses.
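As an implementation aside, the explicit CPI update of Eq. (2) is straightforward to wrap around any incompressible solver. The following minimal sketch illustrates the loop in Python; `solver.step` is a hypothetical stand-in for one time step of a flow solver (it is not an OpenFOAM call), and the initial flow rate `Q_init` is an assumed input.

```python
# Minimal sketch of the explicit CPI forcing of Eq. (2): at each time step,
# the throat pressure is set from the flow rate of the previous step, so that
# the instantaneous power input stays at the target value P0.
# `solver` is a hypothetical stand-in for a CFD solver, not an OpenFOAM API.

def run_cpi(solver, P0, Q_init, n_steps):
    """Advance the simulation enforcing a constant power input P = Q * dp = P0."""
    Q_prev = Q_init                                  # flow rate at the previous step
    p_th_hist, Q_hist = [], []
    for _ in range(n_steps):
        p_th = -P0 / Q_prev                          # Eq. (2): pressure keeping P = P0
        Q_prev = solver.step(outlet_pressure=p_th)   # assumed to return the new flow rate
        p_th_hist.append(p_th)
        Q_hist.append(Q_prev)
    # time-averaged values are computed a posteriori, as described in Sec. 2.3
    return sum(p_th_hist) / n_steps, sum(Q_hist) / n_steps
```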
Figure 2: Planes used throughout the paper to visualize results. Left: para-sagittal plane SL passing through the unaltered left nostril; center: para-sagittal oblique plane SR cutting through the operated right nostril and the throat; right: coronal plane C cutting through the maxillary sinuses. The \(x\)-axis is normal to the sagittal plane and points to the right; the \(y\)-axis is normal to the coronal plane and points towards the nose tip, and the \(z\)-axis is normal to the transverse plane and points upwards. ### Constant Pressure Gradient (CPG) In a CPG simulation, the flow is driven by the pressure difference between the inlet (the external surface of the sphere, where the reference pressure is set to zero) and the outlet (the bottom plane in the throat region), directly enforced as a boundary condition as \(p_{th}=p_{0}\). Assigning the throat pressure is a common practice in the CFD literature of the nasal airflow, see e.g. Otto et al. (2017); Farzal et al. (2019); Li et al. (2019); Plasek et al. (2022) among many others. The total flow rate increases from a pre-op value of \(2.67\cdot 10^{-4}\ m^{3}/s\) (or \(16.02\ l/min\)) to a post-op value of \(3.12\cdot 10^{-4}\ m^{3}/s\) (or \(18.72\ l/min\)), resulting in a percentage increase of \(16.9\%\). The corresponding increase in power input, defined by Eq.(1), is \(16.9\%\). Figure 3 compares the magnitude \(U\) of the mean velocity for the pre-op and post-op anatomies in the para-sagittal planes SL and SR. Differences are minute in the unaltered SL side; the operated side, visible in the SR cut, presents large anatomical differences, as the maxillary sinus is removed; the flow field is obviously quite different as well. Although qualitative differences can be discerned throughout the whole SR plane, the effect of the surgery is particularly evident in the vestibular region and at the nasopharynx, where the post-op flow shows a more uniform velocity distribution. An inspection of the mean sagittal velocity \(U_{y}\) in the coronal plane, shown in Figure 4, reveals that the virtual surgery has created an ample region of intense reverse flow, with air flowing backwards from the rhinopharynx towards the external ambient and reaching the remarkable reverse speed of 1 \(m/s\). ### Constant Flow Rate (CFR) In a CFR simulation, the flow is still driven by a pressure difference between inlet and outlet; however, its value is variable in time and adjusted during the simulation to achieve the target value of the flow rate \(Q=Q_{0}\), which is enforced via the boundary condition. Assigning the flow rate is perhaps the most popular choice in the CFD literature of the nasal airflow, see e.g. Lee et al. (2010); Li et al. (2019); Van Strien et al. (2021); Berger et al. (2021). The computed time-averaged pressure drop in the pre-op case is 24.45 \(Pa\), and as expected is identical to the one enforced in CPG. In the post-op case, the pressure drop reduces to 18.5 \(Pa\), with a decrease of 24.6%. The corresponding decrease in power input is 24.6%. Figure 5 compares the magnitude \(U\) of the mean velocity for the pre-op and post-op anatomies. At a glance, results appear in line with those from CPG. However, while the pre-op cases are, as expected, virtually identical, the post-op ones do show differences. These are quantified in Figure 6, where the local difference between the magnitude of the mean velocity, computed with CPG and CFR, is plotted in planes SL and SR.
Figure 3: CPG comparison: magnitude \(U\) of the mean velocity vector in planes SL and SR. Figure 4: CPG comparison: sagittal component \(U_{y}\) of the mean velocity in the coronal plane C. The black line represents the zero contour level. Figure 5: CFR comparison: magnitude \(U\) of the mean velocity vector in planes SL and SR. Figure 6: Post-op changes of the magnitude \(U\) of mean velocity between CPG and CFR. Plot of \(U_{CPG}-U_{CFR}\) in planes SL and SR. Large local differences are observed, higher than 1 \(m/s\); they are mostly found on the operated SR side, but also the unaltered SL side differs. The latter shows a rather uniform change, stemming from a change of flow rate, which extends down to the rhinopharynx; the former, instead, additionally shows important localized differences, with changing sign over a small area. Notably, even though the post-op CPG flow rate is higher than the CFR one, the quantity \(U_{CPG}-U_{CFR}\) becomes locally negative. ### Constant Power Input (CPI) In a CPI simulation, the flow is driven by a time-varying pressure drop, and achieves an equally time-varying flow rate; both quantities are adjusted in time to instantaneously achieve the target value of the mechanical power \(P=P_{0}\). CPI simulations of the flow in the human nose are reported in this paper for the first time. In the pre-op case, the numerical value for the power input, given by the product of time-averaged flow rate and enforced pressure drop in the pre-op CPG simulation (or, equivalently, by the product of the enforced flow rate and time-averaged pressure drop in the pre-op CFR simulation) is 6.53 \(mW\). The obtained values for the pressure drop and flow rate match, as they should, those of the CPG and CFR simulations. If the same value of power is enforced for a CPI comparison, the post-op simulation yields a 9.5% reduction of the pressure drop, accompanied by a 10.5% increase of the flow rate. Figure 7 compares the magnitude of the post-op mean velocity computed with CPG and CPI, by plotting \(U_{CPG}-U_{CPI}\) in planes SL and SR. Noticeable differences on both sides are evident, which qualitatively resemble those discussed in Figure 6 for CFR. ## 4 Discussion The results presented above demonstrate that post-op velocity and pressure fields depend significantly upon the choice of the flow forcing, in terms of both global and local quantities. What forcing to choose remains to some extent a free decision, but one to be taken consciously; the main goal of the present contribution is to highlight the implications of this important logical step. This issue has gone essentially unnoticed so far, for the main reason that computing a single case with either CFR or CPG is essentially equivalent; differences only appear when a comparison between two cases has to be made. The same applies to CPI, a third alternative introduced in this work for the first time in the context of biological flows. Figure 8 shows isosurfaces for the magnitude of the mean velocity vector, computed with all the forcing strategies, and for the pre- and post-op cases. Obviously, contours are identical in all the pre-op cases (top row), but large and significant differences arise post-op (bottom row).
Figure 9 provides a compact three-dimensional view of such differences, in terms of \(U_{CPG}-U_{CFR}\) (left) and \(U_{CPG}-U_{CPI}\) (right): velocity differences reach up to \(\pm 1\)\(m/s\), and are particularly significant in the whole right meatus, as a direct consequence of the surgically modified anatomy, and in the rhinopharynx, as an indirect effect of the flow exiting the meatal volumes at different rates. Once again, we notice that differences can take either sign throughout the volume. Figure 7: Post-op changes in the magnitude \(U\) of mean velocity between CPG and CPI. Plot of \(U_{CPG}-U_{CPI}\) in planes SL and SR. Figure 8: Magnitude \(U\) of the mean velocity, for all computed cases, in a three-dimensional view. The iso-surfaces correspond to the level of \(U=3\)\(m/s\). We have shown that not only flow details, but global quantities too are affected by the comparison strategy. In fact, flow rate and pressure drop exhibit large relative changes when the flow forcing is changed; the change in power input even differs in sign, being an increase under CPG and a decrease under CFR. A summary of the numerical values of global quantities measured in the numerical experiments is presented in Table 1. A first important point to remark is that comparing different anatomies is a logical step that becomes relevant not only when evaluating virtual surgeries, but also in the more fundamental search for the functionally normal nose. As an example, we mention the study by Zhao and Jiang (2014), in which 22 patients with a normal nasal airflow were compared under the CPG condition. When trying to assess which anatomical differences within a number of individuals imply functional consequences, a flow forcing must be selected for the multi-patient comparison, and the outcome is affected by that choice. Based on the present results, the conclusions drawn by such studies should be carefully checked to be robust with respect to the type of flow forcing. The effects observed in the numerical experiments considered here are quantitatively significant, as the endoscopic medial maxillectomy surgery employed as a testbed is a clinically representative surgical maneuver. In general, as already observed in turbulent flow control (Hasegawa et al., 2014), CFR and CPG are the extreme cases, with CPI occupying an intermediate position. The two most commonly employed forcing strategies, namely CPG and CFR, show large and significant differences in global quantities: a 16.9% increase in flow rate for CPG, and a 24.6% reduction in pressure drop for CFR. In these two cases, changes in power input have a different sign, with a 24.6% reduction for CFR and a 16.9% increase for CPG. Table 1 also reports values of the nasal resistance \(R\), defined as the ratio between pressure drop (between the external ambient and the nasopharynx) and flow rate: \[R=\frac{\Delta p}{Q}.\] The quantity \(R\) is expected to decrease after a surgery like maxillectomy, which enlarges the cross-sectional area of the meati. Indeed, this is found to be the case, regardless of the forcing type. However, if the outcome of the surgery is evaluated through the quantitative change in nasal resistance, the pre-/post-op comparison criterion affects its estimate significantly, with the post-op reduction of \(R\) computed with CFR (a 24.5% reduction) exceeding the one computed with CPG (a 14.4% reduction) by a relative 70%. The effect is even larger if one computes the lateral resistances.
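The arithmetic behind these resistance figures can be checked directly from the global values of Table 1; the short sketch below only reproduces the stated percentages and introduces no data beyond the table.

```python
# Sketch: post-op change of nasal resistance R = dp/Q for each forcing,
# using the values reported in Table 1 (R in units of 1e4 Pa s / m^3).
R_pre_from_def = 24.45 / 2.67e-4     # definition R = dp/Q: ~9.16e4 Pa s/m^3

R = {"CPG": (9.16, 7.84), "CFR": (9.18, 6.93), "CPI": (9.16, 7.50)}
for forcing, (pre, post) in R.items():
    print(f"{forcing}: {100 * (post - pre) / pre:+.1f}%")
# -> CPG: -14.4%, CFR: -24.5%, CPI: -18.1%

rel = (24.5 - 14.4) / 14.4           # CFR estimate of the reduction vs CPG
print(f"relative difference CFR vs CPG: {100 * rel:.0f}%")   # ~70%
```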
Global differences obviously result from the integrated effect of local differences in the flow fields; large velocity differences are found in different parts of the upper airways, as seen in Figure 9. The major differences are located in the operated right side, but the unoperated airway too shows visible differences. Moreover, the increase in flow rate in the CPG case might lead to changes in position or onset of transition from laminar to turbulent flow, which would affect heat transfer and particle deposition. Figure 9: Three-dimensional view of the post-op changes in the magnitude \(U\) of the mean velocity. Left: \(U_{CPG}-U_{CFR}\); right: \(U_{CPG}-U_{CPI}\). The red/blue iso-surfaces correspond to \(+0.35\ m/s\) (red) and \(-0.22\ m/s\) (blue) respectively. Figure 10: Three-dimensional view of the post-op field of turbulent kinetic energy \(k\), computed with CPG (left) and CFR (right), and visualized via the iso-surface at the level \(k=0.25\ m^{2}/s^{2}\). A zoomed view of a subregion is shown in the inset. Not only the mean field is affected, but turbulent quantities too are found to depend on the flow forcing. As an example, Figure 10 shows the field of the turbulent kinetic energy \(k\), and compares the post-op solutions computed with CPG and CFR. One notices a clearly different level of turbulent activity, concentrated right where the (virtual) surgery has modified the anatomy, and significantly larger for the CPG case. It follows that the flow forcing should be considered with extreme care, should CFD be used to assess the outcome of a virtual surgery in terms of the intensity of the turbulent motions. The last point we put up for discussion is perhaps the most important, and concerns our current inability to fully answer the question set out in the Introduction: How should a comparison be carried out? In fact, recognizing the importance and quantifying the effects of the choice among the three forcing strategies is an essential step, but _per se_ does not lead to the right recipe. In other words, we are still unable to identify the _right_ forcing, or at least the _best_ forcing. Nevertheless, we believe that CPG represents the least adequate choice, on the ground that the value of the enforced pressure drop directly depends on the location of the outlet, hence on the CT scan. Since it is difficult to have precise control on the boundaries of the scan, using CPG would at least require identifying and then sticking to one anatomical landmark (e.g. the larynx) and defining the outflow boundary accordingly. A stronger physiological basis exists to support CFR, which implies comparing breathing at the same rate of oxygen consumption. CPI too, proposed here for the first time, inherits the same difficulties as CPG, but is grounded upon a rather convincing physiological criterion for the comparison, as it implies comparing breathing under the constraint that the same mechanical power is provided by the lungs for ventilation. The minor disadvantage of CPI, i.e. being not currently available in most commercial CFD solvers, might be more than compensated by its physical appeal, which extends to other fields in biomechanics, e.g. when assessing the importance of aneurysms, thromboses or stenoses in blood vessels under the same mechanical power provided by the heart. ## 5 Conclusions The present work has discussed the implications of choosing the forcing that drives the flow through the nasal airways when CFD is used to compare two nasal anatomies.
Results are obtained for steady inspiration of mild intensity, by using state-of-the-art, well resolved Large Eddy Simulations. A pair of pre-op and post-op anatomies has been considered, but the same line of reasoning applies to any two anatomies, e.g. when one is seeking to take advantage of CFD to investigate the functionally normal nose. Similarly, our conclusions apply to any type of comparison, regardless of the specific modelling; although we have employed LES simulations in this work, conclusions apply without modifications to RANS simulations. A comparison can be carried out at the same pressure drop (CPG), at the same flow rate (CFR), and at the same power input (CPI). In particular, the possibility of comparing under the same power input is proposed here for the first time. Beyond the nasal flow, CPI should be considered as a sound alternative in other domains of the fluid dynamics of the human body, e.g. when studying malformations or obstructions of blood vessels (and their surgical corrections), which could be assessed under the same pumping power provided by the heart. The forcing criterion affects the outcome of the comparison in a significant way. For example, variations of nasal resistance induced by surgery change up to a relative 70%, being largest under CFR and smallest under CPG, with CPI in the middle. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{CPG} & \multicolumn{3}{c}{CFR} & \multicolumn{3}{c}{CPI} \\ \cline{2-10} & pre & post & \(\%\Delta\) & pre & post & \(\%\Delta\) & pre & post & \(\%\Delta\) \\ \hline \(p_{th}\)\([Pa]\) & \(-24.4\) & \(-24.4\) & - & \(-24.4\) & \(-18.5\) & \(-24.6\) & \(-24.4\) & \(-22.1\) & \(-9.5\) \\ \(Q\)\([10^{-4}m^{3}/s]\) & \(2.67\) & \(3.12\) & \(16.9\) & \(2.67\) & \(2.67\) & - & \(2.67\) & \(2.95\) & \(10.5\) \\ \(P\)\([10^{-3}W]\) & \(-6.53\) & \(-7.63\) & \(16.9\) & \(-6.55\) & \(-4.94\) & \(-24.6\) & \(-6.53\) & \(-6.53\) & - \\ \(R\)\([10^{4}Pa\,s/m^{3}]\) & \(9.16\) & \(7.84\) & \(-14.4\) & \(9.18\) & \(6.93\) & \(-24.5\) & \(9.16\) & \(7.50\) & \(-18.1\) \\ \hline \hline \end{tabular} \end{table} Table 1: Global quantities computed with the three flow forcings. Local, instantaneous and time-averaged flow fields are affected as well. We have discussed how the CPG approach is, in our opinion, the least adequate choice, owing to the lack of an absolute landmark for the outflow boundary of the computational domain. On the other hand, reasonable arguments for both CFR and CPI can be put forward to provide the comparison with a physiological rationale. The approaches are equivalent from a computational standpoint, in terms of both complexity and computational cost, although CPI is less straightforward to implement, as it is not immediately available in out-of-the-box commercial solvers. Choosing between them requires deciding which of the implied physiological constraints is best suited to provide the comparison with clinical significance. Further investigations are needed to arrive at a general community consensus; a clear understanding of the physiological significance of the various boundary conditions might lead to the ability to set them on a patient-specific basis, e.g. in terms of specific oxygen consumption per unit weight.
2307.00397
Improving CNN-based Person Re-identification using score Normalization
Person re-identification (PRe-ID) is a crucial task in security, surveillance, and retail analysis, which involves identifying an individual across multiple cameras and views. However, it is a challenging task due to changes in illumination, background, and viewpoint. Efficient feature extraction and metric learning algorithms are essential for a successful PRe-ID system. This paper proposes a novel approach for PRe-ID, which combines a Convolutional Neural Network (CNN) based feature extraction method with Cross-view Quadratic Discriminant Analysis (XQDA) for metric learning. Additionally, a matching algorithm that employs Mahalanobis distance and a score normalization process to address inconsistencies between camera scores is implemented. The proposed approach is tested on four challenging datasets, including VIPeR, GRID, CUHK01, and PRID450S, and promising results are obtained. For example, without normalization, the rank-20 rate accuracies of the GRID, CUHK01, VIPeR and PRID450S datasets were 61.92%, 83.90%, 92.03%, 96.22%; however, after score normalization, they have increased to 64.64%, 89.30%, 92.78%, and 98.76%, respectively. Accordingly, the promising results on four challenging datasets indicate the effectiveness of the proposed approach.
Ammar Chouchane, Abdelmalik Ouamane, Yassine Himeur, Wathiq Mansoor, Shadi Atalla, Afaf Benzaibak, Chahrazed Boudellal
2023-07-01T18:12:27Z
http://arxiv.org/abs/2307.00397v2
# Improving CNN-based Person Re-identification using score Normalization ###### Abstract Person re-identification (PRe-ID) is a crucial task in security, surveillance, and retail analysis, which involves identifying an individual across multiple cameras and views. However, it is a challenging task due to changes in illumination, background, and viewpoint. Efficient feature extraction and metric learning algorithms are essential for a successful PRe-ID system. This paper proposes a novel approach for PRe-ID, which combines a Convolutional Neural Network (CNN) based feature extraction method with Cross-view Quadratic Discriminant Analysis (XQDA) for metric learning. Additionally, a matching algorithm that employs Mahalanobis distance and a score normalization process to address inconsistencies between camera scores is implemented. The proposed approach is tested on four challenging datasets, including VIPeR, GRID, CUHK01, and PRID450S. The proposed approach has demonstrated its effectiveness through promising results obtained from the four challenging datasets. PRe-ID, Score Normalization, XQDA, CNN, feature extraction. ## I Introduction Person re-identification, or PRe-ID, involves recognizing an individual across different images or videos captured in a surveillance system [1]. This is critical in various real-time applications such as person retrieval, video monitoring, public safety, long-term human behavior analysis, and cross-camera tracking [2, 3, 4]. The use of CNN models has become popular in recent deep learning architectures, either through the development of new models or through the use of pretrained models known as transfer learning [5, 6]. The process of person re-identification, which involves matching individuals detected by different cameras, usually consists of two main steps, as shown in Figure 1. These steps are: (1) Feature extraction (FE), which involves obtaining more reliable, robust, and concise features than raw pixel data, and (2) learning the system with ample data to allow it to perform re-identification automatically during online testing. The Gaussian of Gaussian (GOG) and Local Maximal Occurrence (LOMO) descriptors are the two most commonly used FE methods in the field of person re-identification [7, 8]. The GOG descriptor, proposed by Matsukawa et al. [9], involves dividing the image into k rectangular blocks and representing each block by 4 Gaussian distributions in different color spaces (RGB, Lab, HSV, and nRnG). On the other hand, the LOMO descriptor, introduced by Liao et al. [10], extracts two types of features (scale-invariant local ternary pattern and HSV color histograms) from the images by dividing them into horizontal patches and calculating the occurrence of local geometric features. The aim of metric learning is to learn a metric that can effectively compare two pedestrian images. Popular examples of metric learning approaches include KISSME [10] and Cross-view Quadratic Discriminant Analysis (XQDA) [11]. The PRe-ID system uses three main approaches to tackle the problem, as shown in Fig. 2. These approaches include Feature Descriptor Learning, Metric Learning, and Deep Learning. The Feature Descriptor Learning methods aim to learn distinctive and representative features from pedestrian images that distinguish the appearances of individuals in the datasets [13].
Many effective techniques have been proposed in this category, such as Gaussian of Gaussian (GOG) [7] and LOMO (Local Maximal Occurrence) descriptors [8] as well as various other techniques discussed in [14, 15, 16, 17], and [18]. Metric learning is a technique utilized to enhance the precision of machine learning models. It trains a model to compute the similarity between pedestrian images captured from different cameras to achieve greater matching accuracy [11, 12, 19, 20, 21, 22]. On the other hand, deep learning is a sophisticated approach that has gained popularity for enhancing PRe-ID systems and achieving high performance [23, 24, 25, 26]. The deep learning approaches can further be classified into three categories: CNN-, RNN-, and GAN-based methods. Figure 2 summarizes a taxonomy of PRe-ID approaches. Fig. 1: Person re-identification system outline. Overall, the main contributions of this work can be summarized as follows: * Utilizing CNN features as a transfer learning process for effective feature representation. * Enhancing the discriminative power in person re-identification by implementing Cross-view Quadratic Discriminant Analysis (XQDA), a robust metric learning method that employs the Mahalanobis distance for matching. * Applying a score normalization technique which greatly improved results on four benchmark datasets: VIPeR [11], GRID [27], CUHK01 [28], and PRID450s [29]. * Evaluating and comparing the proposed approach against the state-of-the-art methods. The paper is organized as follows: Section 2 outlines the methodology and details of the proposed approach, including the CNN feature model, the XQDA metric learning technique, and the score normalization process. Experimental results are reported in Section 3. Finally, the conclusions and future work are discussed in Section 4. ## II Methodology and description of the proposed approach ### _FE based on a pretrained model of CNN_ Person Re-identification (PRe-ID) entails the process of recognizing an individual by correlating the identity of a probe image to a set of images. This task is fundamentally confronted by two main challenges: Feature Extraction (FE) and metric learning. In an effort to address these obstacles, this study leverages a pretrained CNN model derived from the ImageNet dataset in conjunction with the XQDA metric learning method, which exhibited efficacy in our experimental trials. Specifically, the study obtained 4,096-dimensional feature vectors by extracting the features from the fully connected layer 7 (FC7) of the learned CNN, as delineated in Figure 3. The XQDA metric learning method was deployed to bolster the discriminative capability of the target dataset for the task of person re-identification. The research utilized the AlexNet architecture, comprising eight layers in total; the initial five are convolutional, while the last three are fully connected. In particular, the fully connected seventh layer serves as the feature vector, representing the unique biometric signature of each individual within the dataset. ### _Features projection and metric learning using XQDA algorithm_ The XQDA metric learning algorithm is a modification of the KISS metric algorithm [30]. XQDA learns a discriminative feature subspace in a low-dimensional space through the use of the Fisher criterion [19]. It is preferred to have a lower dimensional space (\(\Re^{r}\), where \(r<d\)) as the initial dimension of the feature vector \(d\) is quite large, leading to better classification results.
The intra-person difference set (\(X_{s}\)) of \(n\) similar pairs is represented as a matrix \(\Re^{d\times n}\), while the extra-person difference set (\(X_{D}\)) of \(n\) dissimilar pairs is represented as a matrix \(\Re^{d\times n}\), where \(d\) is the dimension of the feature vectors. Each column in \(X_{s}\) indicates the difference between a similar pair, and each column in \(X_{D}\) represents the difference between a dissimilar pair. The covariance matrices are calculated using the following equations: \[\Sigma_{s}=\frac{1}{n}X_{s}X_{s}^{T} \tag{1}\] \[\Sigma_{D}=\frac{1}{n}X_{D}X_{D}^{T} \tag{2}\] XQDA calculates the distance between the training samples (\(\mu\) and \(\nu\)) from two camera views as follows: \[d(\mu,\nu)=(\mu-\nu)^{T}W(\Sigma_{s^{\prime}}^{-1}-\Sigma_{D^{\prime}}^{-1})W^{T}(\mu-\nu) \tag{3}\] The new subspace projection matrix (\(W\)) to be learned by XQDA is represented in this equation. The covariance matrices, \(\Sigma_{s^{\prime}}\) and \(\Sigma_{D^{\prime}}\), are obtained by transforming \(\Sigma_{s}\) and \(\Sigma_{D}\) respectively through \(W^{T}\). These variances of the subspace are used to differentiate between intra-person and extra-person differences. As a result, the projection direction \(W\) is estimated through the following equation. \[J(W)=\frac{W^{T}\Sigma_{s}W}{W^{T}\Sigma_{D}W} \tag{4}\] The equation can be converted into a generalized eigenvalue decomposition problem. The \(r\) eigenvectors corresponding to the largest eigenvalues of \(\Sigma_{D}^{-1}\Sigma_{s}\) form the projection matrix \(W=[w_{1},w_{2},...,w_{r}]\). The output metric matrix is represented as \(M=\Sigma_{s^{\prime}}^{-1}-\Sigma_{D^{\prime}}^{-1}\), which can be used to calculate the distance between two given feature vectors. Fig. 2: PRe-ID classification approaches. Fig. 3: CNN-based AlexNet architecture model. ### _Matching based on Mahalanobis distance_ The Mahalanobis distance is a popular and effective metric for comparing the similarity between two data points. It is often utilized to improve the classification process [29, 38, 39]. Given \(m\) data points \(x_{i}\in\Re^{d}\), the objective is to find a matrix \(M\) such that the distance metric is expressed as follows. \[d_{M}(x_{i},x_{j})=(x_{i}-x_{j})^{T}M(x_{i}-x_{j}) \tag{5}\] where \(x_{i}\) and \(x_{j}\) are two vectors (samples), and \(M\) is a positive semidefinite matrix. ### _Score normalization_ The Mahalanobis distance is calculated using global samples from different cameras with varying image resolutions, resulting in heterogeneous scores. To make these scores comparable, score normalization is performed. A min-max normalization function was applied in this work, which addresses bias and variation that could affect the comparison scores [31]. Score normalization aims to mitigate the variations in similarity scores that arise from different camera viewpoints. By normalizing the scores, we can effectively reduce the influence of these factors and enhance the discriminative power of the system. This normalization step improved the results and is defined as follows. \[N=\frac{x-x_{min}}{x_{max}-x_{min}} \tag{6}\] where \(x_{min}\) and \(x_{max}\) are the minimum and maximum score values, respectively. ## III Implementations and results ### _Datasets and protocols_ The proposed approach utilizing CNN features has been tested on four demanding datasets: PRID450s, VIPeR, GRID and CUHK01. The traits of these datasets are detailed in Table I. Examples from the utilized datasets are depicted in Figure 4. Fig. 4: Sample images from the datasets considered in this study: (A) PRID450s, (B) VIPeR, (C) GRID, (D) CUHK01. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline **Dataset** & Release Time & Identities & Images & Cameras & Size \\ \hline PRID450s & 2011 & 450 & 900 & 2 & 128\(\times\)48 \\ \hline VIPeR & 2014 & 632 & 1264 & 2 & 128\(\times\)48 \\ \hline GRID & 2009 & 1025 & 1275 & 8 & Vary \\ \hline CUHK01 & 2012 & 971 & 3884 & 2 & 160 \(\times\) 60 \\ \hline \end{tabular} \end{table} TABLE I: Datasets and their characteristics.
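Before moving to the experimental protocol, the following minimal sketch renders Eqs. (1)-(6) in NumPy, covering the XQDA covariances and generalized eigen-problem, the Mahalanobis-type distance, and the min-max score normalization. It is an illustrative reconstruction from the equations above, not the authors' released code, and assumes \(\Sigma_{D}\) is invertible.

```python
import numpy as np

def xqda(X_s, X_d, r):
    """Minimal XQDA sketch following Eqs. (1)-(4).
    X_s: (d, n) columns are differences of similar pairs.
    X_d: (d, n) columns are differences of dissimilar pairs.
    Returns the projection W (d, r) and the metric M (r, r)."""
    Sigma_s = X_s @ X_s.T / X_s.shape[1]          # Eq. (1)
    Sigma_d = X_d @ X_d.T / X_d.shape[1]          # Eq. (2)
    # Fisher criterion of Eq. (4): take the r eigenvectors of
    # Sigma_d^{-1} Sigma_s with the largest eigenvalues.
    evals, evecs = np.linalg.eig(np.linalg.solve(Sigma_d, Sigma_s))
    order = np.argsort(-evals.real)[:r]
    W = evecs[:, order].real                      # projection matrix
    S_s = W.T @ Sigma_s @ W                       # subspace covariances
    S_d = W.T @ Sigma_d @ W
    M = np.linalg.inv(S_s) - np.linalg.inv(S_d)   # output metric matrix
    return W, M

def distance(u, v, W, M):
    """Eq. (3): XQDA distance between two feature vectors u and v."""
    diff = W.T @ (u - v)
    return float(diff.T @ M @ diff)

def minmax_normalize(scores):
    """Eq. (6): min-max normalization of a vector of matching scores."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())
```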
### _Evaluation metrics_ To assess the effectiveness of our proposed approach in PRe-ID systems, we employed a 10-fold cross-validation methodology. This involved randomly dividing the image set into ten subsets, with nine sets used for training and one set for testing in each fold. To evaluate the performance, we used the Cumulative Matching Characteristic (CMC) metric, which is a popular method for re-identification performance evaluation. The CMC curve displays the rank matching rates, where a rank "r" matching rate refers to the percentage of probe images that have correct matches within the top "r" ranks among the gallery images. To conduct our experiments, we split the datasets randomly into training and test sets, with an equal number of individuals in each set. It is worth noting that for the GRID dataset, the gallery set includes an additional 775 images. The number of probe images in all datasets is equivalent to the number of gallery images. To evaluate the effectiveness of the proposed strategy, we employed 10-fold cross-validation, with averaged CMC curves being reported. The CMC score represents the success rate of accurately identifying the matched image at each rank. Specifically, we focused on the accuracy at rank-1, since it indicates the probability of the correct image being displayed as the top result. In addition, we also considered the accuracy at ranks 5, 10, and 20 during the evaluation process. ### _Discussions_ To address the challenges of person re-identification, we investigated the effectiveness of the XQDA metric learning technique along with the Mahalanobis distance metric, which together help enhance the classification outcomes. We also employed score normalization to eliminate any potential biases or discrepancies in the scores, which could lead to unfair comparisons. Our results showed that the proposed method was effective in achieving improved performance on the PRID450s, VIPeR, GRID, and CUHK01 datasets, as demonstrated by the CMC curves. In particular, the CMC curves with score normalization were significantly higher than those without it, which highlights the importance of this step in achieving fair comparisons. These findings suggest that the proposed XQDA metric learning approach, along with the Mahalanobis distance metric and score normalization, can significantly improve the accuracy of person re-identification. The application of this technique may have significant implications for the development of robust and effective surveillance systems in the future. However, it is important to note that further studies are needed to assess the generalizability of these findings to other datasets and scenarios. Furthermore, to provide a comprehensive overview of the re-identification accuracy for each dataset, we included
a table (Table II) that reports the performance metrics at various ranks, including rank-1, rank-5, rank-10, and rank-20. To better understand the impact of score normalization on the re-identification accuracy, Figure 5 specifically highlights the rank-1 performance in terms of the CMC curves of each dataset without score normalization. Additionally, Figure 6 portrays the accuracy performance of the proposed scheme on the four datasets. The results of our experiments suggest that the proposed XQDA metric learning approach, in combination with the Mahalanobis distance metric and score normalization, can effectively improve the performance of person re-identification on the four challenging datasets. As indicated in Table II, the score normalization improved the rank-1 rate on all datasets. Without normalization, the rank-1 rates were 42.16%, 25.20%, 41.51% and 64.22% for VIPeR, GRID, CUHK01, and PRID450s respectively. With score normalization, the rates improved to 43.16%, 35.68%, 45.24% and 64.32% respectively. The GRID dataset showed a particularly significant improvement of 10.48% in rank-1 after score normalization. We attribute the improvement in the re-identification system to the use of score normalization on our datasets. ### _Comparison with the state-of-the-art_ In this particular section, we assess the effectiveness of the proposed approach by contrasting it with several established re-identification approaches based on the rank-1 rate. The summarized findings of this comparison can be observed in Table III. From the previous table, we can observe the strength of our approach, as we achieved nearly the best results on all four databases. This confirms the robustness of our proposed approach despite the variations in the data. The impressive performance exhibited by our method across multiple datasets highlights its capability to handle different scenarios and reinforces its potential for real-world applications. ## IV Conclusion This research work addresses the challenging task of PRe-ID in security, surveillance, and retail analysis, which involves identifying an individual across multiple cameras and views. The proposed approach combines a CNN-based FE method with Cross-view XQDA for metric learning. In addition, a matching algorithm that employs the Mahalanobis distance and a score normalization process to address inconsistencies between camera scores are also implemented. The evaluation results on four challenging datasets, including VIPeR, GRID, CUHK01, and PRID450S, demonstrate the effectiveness of the proposed approach. The implementation of score normalization shows significant improvement in the different rank rate accuracies of the datasets.
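As a complement to the evaluation protocol of Section III, the sketch below shows one way the CMC rank-r matching rates could be computed from a probe-gallery distance matrix; the single-correct-match assumption and the function interface are ours for illustration, not taken from the paper.

```python
import numpy as np

def cmc_curve(dist, probe_ids, gallery_ids, max_rank=20):
    """CMC rank-r matching rates from a (n_probe, n_gallery) distance matrix.
    Assumes each probe identity has exactly one correct match in the gallery."""
    n_probe = dist.shape[0]
    hits = np.zeros(max_rank)
    gallery_ids = np.asarray(gallery_ids)
    for i in range(n_probe):
        order = np.argsort(dist[i])                 # gallery sorted by distance
        ranked_ids = gallery_ids[order]
        rank = int(np.where(ranked_ids == probe_ids[i])[0][0])  # 0-based rank
        if rank < max_rank:
            hits[rank:] += 1                        # a hit at rank counts for all r >= rank
    return hits / n_probe                           # cmc[r-1] is the rank-r rate
```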
\begin{table} \begin{tabular}{l|l|l|l|l|l} \hline \multirow{2}{*}{**Normalization**} & \multicolumn{5}{c}{**Ranks**} \\ \cline{2-6} & Rank-1 & Rank-5 & Rank-10 & Rank-15 & Rank-20 \\ \hline \multicolumn{6}{c}{**CUHK01**} \\ \hline Without & 41.51\% & 65.61\% & 75.11\% & 80.39\% & 83.90\% \\ \hline With & **45.24\%** & **74.35\%** & **82.78\%** & **86.95\%** & **89.30\%** \\ \hline \multicolumn{6}{c}{**GRID**} \\ \hline Without & 25.20\% & 45.52\% & 54.08\% & 60.84\% & 61.92\% \\ \hline With & **35.68\%** & **46.96\%** & **53.04\%** & **56.88\%** & **64.64\%** \\ \hline \multicolumn{6}{c}{**PRID450S**} \\ \hline Without & 64.22\% & 86.44\% & 91.78\% & 94.53\% & 96.22\% \\ \hline With & **64.32\%** & **91.11\%** & **96.18\%** & **97.78\%** & **98.76\%** \\ \hline \multicolumn{6}{c}{**VIPeR**} \\ \hline Without & 42.16\% & 71.99\% & 83.01\% & 88.04\% & 92.03\% \\ \hline With & **43.16\%** & **74.11\%** & **85.13\%** & **90.13\%** & **92.78\%** \\ \hline \end{tabular} \end{table} TABLE II: Accuracy on the different datasets with and without score normalization. Fig. 5: CMC curves of all datasets without score normalization: (A) CUHK01, (B) GRID, (C) VIPeR, (D) PRID450s. Fig. 6: Performance of the proposed scheme on different datasets (with and without normalization). \begin{table} \begin{tabular}{l|l|l|l|l} \hline \multirow{2}{*}{**Methods**} & \multicolumn{4}{c}{**Dataset**} \\ \cline{2-5} & **VIPeR** & **CUHK01** & **PRID450S** & **GRID** \\ \hline KEPLER [32], 2015 & 42.40 & - & - & - \\ \hline FT-CNN+XQDA [9], 2016 & 42.50 & 46.80 & 58.20 & 25.20 \\ \hline MKSSL+LOMO [33], 2017 & 31.20 & - & - & 24.60 \\ \hline OneBrank [34], 2017 & 34.30 & - & 41.40 & - \\ \hline EML [36], 2020 & 44.37 & - & 63.58 & 19.47 \\ \hline VISUAI-DAC [37], 2022 & 39.70 & - & - & 34.60 \\ \hline **CNN+XQDA with score normalization (Our), 2023** & 43.16 & 45.24 & 64.22 & 35.68 \\ \hline \end{tabular} \end{table} TABLE III: Comparison with the state-of-the-art of rank-1 identification rates (%).
2310.15433
Off-Policy Evaluation for Large Action Spaces via Policy Convolution
Developing accurate off-policy estimators is crucial for both evaluating and optimizing for new policies. The main challenge in off-policy estimation is the distribution shift between the logging policy that generates data and the target policy that we aim to evaluate. Typically, techniques for correcting distribution shift involve some form of importance sampling. This approach results in unbiased value estimation but often comes with the trade-off of high variance, even in the simpler case of one-step contextual bandits. Furthermore, importance sampling relies on the common support assumption, which becomes impractical when the action space is large. To address these challenges, we introduce the Policy Convolution (PC) family of estimators. These methods leverage latent structure within actions -- made available through action embeddings -- to strategically convolve the logging and target policies. This convolution introduces a unique bias-variance trade-off, which can be controlled by adjusting the amount of convolution. Our experiments on synthetic and benchmark datasets demonstrate remarkable mean squared error (MSE) improvements when using PC, especially when either the action space or policy mismatch becomes large, with gains of up to 5 - 6 orders of magnitude over existing estimators.
Noveen Sachdeva, Lequn Wang, Dawen Liang, Nathan Kallus, Julian McAuley
2023-10-24T01:00:01Z
http://arxiv.org/abs/2310.15433v1
# Off-Policy Evaluation for Large Action Spaces via Policy Convolution ###### Abstract. Developing accurate off-policy estimators is crucial for both evaluating and optimizing for new policies. The main challenge in off-policy estimation is the distribution shift between the logging policy that generates data and the target policy that we aim to evaluate. Typically, techniques for correcting distribution shift involve some form of importance sampling. This approach results in unbiased value estimation but often comes with the trade-off of high variance, even in the simpler case of one-step contextual bandits. Furthermore, importance sampling relies on the common support assumption, which becomes impractical when the action space is large. To address these challenges, we introduce the Policy Convolution (PC) family of estimators. These methods leverage latent structure within actions--made available through action embeddings--to strategically _convolve_ the logging and target policies. This convolution introduces a unique bias-variance trade-off, which can be controlled by adjusting the _amount of convolution_. Our experiments on synthetic and benchmark datasets demonstrate remarkable mean squared error (MSE) improvements when using PC, especially when either the action space or policy mismatch becomes large, with gains of up to \(5-6\) orders of magnitude over existing estimators. + Footnote †: Work done during internship at Netflix. ## 1. Introduction Off-policy estimation (OPE) is a fundamental problem in reinforcement learning and decision making under uncertainty. It involves estimating the expected value of a target policy, given access to only an offline dataset logged by deploying a different policy, often referred to as the logging policy (see (Nelson, 2002) for a comprehensive survey). This decoupling between data collection and policy evaluation is crucial in many real-world applications, as it allows for the assessment of new policies using historical data without having to deploy them in the environment, which can be costly and/or risky. In this paper, we focus on OPE for the one-step contextual bandit setting, _i.e._, we perform decision making with only an observed context that is assumed to be independently sampled (_e.g._, a user coming to a website), and do not consider any recurrent dependencies in the context transitions as is the case in the general formulation of reinforcement learning. A variety of practical applications naturally fall into the off-policy contextual bandit framework, _e.g._, recommender systems (Sandel et al., 2015; Sohn et al., 2016), healthcare (Sandel et al., 2015; Sohn et al., 2016), robotics (Sandel et al., 2015), _etc._ OPE, in its most general setting, can be a very challenging problem due to its inherently counterfactual nature, as we observe the reward for only those actions taken by the logging policy, while we aim to evaluate _any_ target policy. For example, consider a scenario where the logging policy in a movie recommendation platform, for a given segment of users, rarely recommends romantic movies. This can often happen when we think a user will not like certain types of movies.
On the other hand, a target policy--whose value we aim to estimate--may, for numerous potential reasons, choose to recommend romantic movies for the same user segment. This distribution-shift can lead to irrecoverable bias in our estimates (Sandel et al., 2015), making it difficult to accurately evaluate a target policy or learn a better one, which typically involves optimizing over the value estimates (Sandel et al., 2015; Sohn et al., 2016). Typical off-policy estimators utilize Importance Sampling (IS) to correct for the policy mismatch between the target and logging policies (Sandel et al., 2015; Sohn et al., 2016; Sohn et al., 2016; Sohn et al., 2016; Sohn et al., 2016; Sohn et al., 2016; Sohn et al., 2016; Sohn et al., 2016), leading to unbiased value estimation, at the cost of high variance. The variance problem caused by IS is exacerbated if the target and logging policies exhibit significant divergence, and even more so if the action space is large. Notably, large action spaces frequently occur in practical OPE scenarios, _e.g._, recommender systems which can have millions of items (actions) (Sandel et al., 2015; Sohn et al., 2016), extreme classification (Sandel et al., 2015; Sohn et al., 2016; Sohn et al., 2016), discretized continuous action-spaces (Sandel et al., 2015), _etc._ To address the aforementioned limitations of IS, we propose the Policy Convolution (PC) family of estimators. PC strategically convolves the logging and target policies by exploiting the inherent latent-structure amongst actions--available through action-embeddings--to make importance sampling operate in a more favorable bias-variance trade-off region. Such structure can occur naturally in different forms like action meta-data (text, images, _etc._), action hierarchies, categories, _etc._ Alternatively, it can be estimated using domain-specific representation learning techniques (Beng et al., 2016). Notably, the utilization of additional action-structure has also been studied in the online multi-armed bandit literature (Lipschitz bandits) (Sandel et al., 2015; Sohn et al., 2016; Sohn et al., 2016), albeit in the context of regret minimization. To be more specific, the PC framework for OPE consists of two components: (1) conventional IS-based value estimation; and (2) combining _both_ the target and logging policies using action-action similarity. PC allows full freedom over the choice of the _backbone_ IS _estimator_, the convolution function, as well as the _amount of convolution_ to conduct on the target and logging policies respectively. To better understand the practical effectiveness of PC, we compare its performance with various off-policy estimators on synthetic and real-world benchmark datasets, simulating a variety of off-policy scenarios. Our results demonstrate that PC can effectively balance the bias-variance trade-off posited by policy convolutions, leading to up to \(5-6\) orders of magnitude better off-policy evaluation in terms of mean squared error (MSE), particularly when the action space is large or the policy mismatch is high. To summarize, _in this paper_, we make the following contributions: * Introduce the Policy Convolution (PC) family of off-policy estimators that posit a novel bias-variance trade-off controlled by the _amount of convolution_ on the logging and target policies. * Propose four different convolution functions for the PC framework, where each convolution function is accompanied by its unique set of inductive biases, thereby leading to distinct performance comparisons.
* Conduct empirical analyses on synthetic and real-world benchmark datasets that demonstrate the superiority of PC over a variety of off-policy estimators, especially when the action space or policy mismatch becomes large.

## 2. Preliminaries

### OPE in Contextual Bandits

We study OPE in the standard stochastic contextual bandit setting with a context space \(\mathcal{X}\) and a finite action space \(\mathcal{A}\). In each round \(i\), the agent observes a context \(x_{i}\in\mathcal{X}\), takes an action \(a_{i}\in\mathcal{A}\), and observes a reward \(r_{i}\in[0,1]\). The context \(x_{i}\) is drawn from some unknown distribution \(p(x)\). The action \(a_{i}\) follows some policy \(\pi(\cdot|x_{i})\), and the reward \(r_{i}\) is drawn from an unknown distribution \(p(r|x_{i},a_{i})\) with expected value \(\delta(a,x)\triangleq\mathbb{E}_{r\sim p(r|a,x)}\left[r\right]\). The value of a policy is its expected reward \[V(\pi)\triangleq\operatorname*{\mathbb{E}}_{x\sim p(x)}\left[\operatorname*{\mathbb{E}}_{a\sim\pi(\cdot|x)}\left[\delta(a,x)\right]\right].\] In OPE, given a target policy \(\pi\), we aim to estimate its value \(V(\pi)\) using some bandit feedback data \(\mathcal{D}\triangleq\{(x_{i},a_{i},r_{i})\}_{i=1}^{n}\) collected by deploying a different policy \(\mu\). We call \(\mu\) the logging policy and assume it is known.

### Conventional OPE Estimators

We now briefly discuss a few prominent OPE estimators, which will also be used to instantiate our proposed Policy Convolution (PC) estimator discussed in Section 3.

#### 2.2.1. Direct Method (DM)

Taking a model-based approach, DM leverages a reward-model to estimate the value of the target policy. Formally, given a suitable \(\hat{\delta}:\mathcal{A}\times\mathcal{X}\mapsto\mathbb{R}\), the estimator is defined as follows: \[\hat{V}_{\text{DM}}(\pi)\triangleq\operatorname*{\mathbb{E}}_{(x,\cdot)\sim\mathcal{D}}\left[\sum_{a\in\mathcal{A}}\pi(a|x)\cdot\hat{\delta}(a,x)\right],\] where the outer expectation is over the finite set of logged bandit feedback data \(\mathcal{D}\). Notably, the variance of \(\hat{V}_{\text{DM}}(\cdot)\) is often quite low, since \(\hat{\delta}\) is typically bounded. However, it can suffer from a large bias problem due to model misspecification (Bahdan et al., 2017).

#### 2.2.2. Inverse Propensity Scoring (IPS)

The IPS estimator (Han et al., 2017) uses Monte-Carlo approximation and importance sampling to account for the policy mismatch between \(\pi\) and \(\mu\) as follows: \[\hat{V}_{\text{IPS}}(\pi)\triangleq\operatorname*{\mathbb{E}}_{(x,a,r)\sim\mathcal{D}}\left[\frac{\pi(a|x)}{\mu(a|x)}\cdot r\right].\] The IPS estimator is unbiased under the following two assumptions, which we assume throughout the paper unless otherwise specified:

**Assumption 2.1**.: **(Unconfoundedness)** _The action selection procedure is independent of all potential outcomes given the context, i.e., \(\forall x\in\mathcal{X},\pi\in\Pi,a\sim\pi(\cdot|x):\{\delta(a^{\prime},x)\}_{a^{\prime}\in\mathcal{A}}\perp a\mid x\)._

**Assumption 2.2**.: **(Common Support)** _The target policy \(\pi\) shares common support with the logging policy \(\mu\): \(\forall x\in\mathcal{X},a\in\mathcal{A}\), \(\pi(a|x)>0\implies\mu(a|x)>0\)._

However, the IPS estimator can suffer from a large variance problem, since the importance weights \(\pi(a|x)/\mu(a|x)\) can be unbounded and extremely large. Several estimators have been proposed to reduce the variance of the IPS estimator. 
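As a concrete reference point, the following is a minimal NumPy sketch of the DM and IPS estimates on a batch of logged data. The array layout (per-context probability tables `pi` and `mu`, and a reward-model table `delta_hat`) is our own illustration, not an implementation from the paper.

```python
import numpy as np

def dm_estimate(pi, delta_hat):
    """Direct Method: average the reward-model prediction under the target policy.

    pi[i, a]        -- target-policy probabilities on the i-th logged context
    delta_hat[i, a] -- reward-model predictions on the i-th logged context
    """
    return np.mean(np.sum(pi * delta_hat, axis=1))

def ips_estimate(actions, rewards, pi, mu):
    """IPS: reweight each logged reward by the propensity ratio pi/mu."""
    idx = np.arange(len(actions))
    w = pi[idx, actions] / mu[idx, actions]  # importance weights
    return np.mean(w * rewards)
```

The weight `w` is exactly the ratio \(\pi(a|x)/\mu(a|x)\) above; when \(\mu\) places little mass on an action that \(\pi\) favors, `w` explodes, which is the variance problem the following variants try to mitigate.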
#### 2.2.3. Self-normalized Inverse Propensity Scoring (SNIPS)

Built on the observation that the expected propensity weight in IPS equals \(1\), SNIPS (Kumar et al., 2017) uses the empirical average of the propensity weights as a control variate for IPS as follows: \[\hat{V}_{\text{SNIPS}}(\pi)\triangleq\operatorname*{\mathbb{E}}_{(x,a,r)\sim\mathcal{D}}\left[\frac{\pi(a|x)}{\rho\cdot\mu(a|x)}\cdot r\right]\ \text{s.t.}\ \rho\triangleq\operatorname*{\mathbb{E}}_{(x,a,r)\sim\mathcal{D}}\left[\frac{\pi(a|x)}{\mu(a|x)}\right].\] SNIPS typically enjoys smaller variance at the cost of a slight added bias in comparison to IPS, especially when the variance of the propensity weight is large (Bahdan et al., 2017). Further, \(\hat{V}_{\text{SNIPS}}(\pi)\) is a strongly consistent estimator of \(V(\pi)\) by the law of large numbers.

#### 2.2.4. Doubly Robust (DR)

DR combines the benefits of unbiased estimation in IPS and the low-variance, model-based estimation in DM: \[\hat{V}_{\text{DR}}(\pi)\triangleq\operatorname*{\mathbb{E}}_{(x,a,r)\sim\mathcal{D}}\left[\frac{\pi(a|x)}{\mu(a|x)}\cdot(r-\hat{\delta}(a,x))+\Delta(\pi,x)\right]\ \text{s.t.}\ \Delta(\pi,x)\triangleq\sum_{a^{\prime}\in\mathcal{A}}\pi(a^{\prime}|x)\cdot\hat{\delta}(a^{\prime},x),\] where \(\hat{\delta}\) is the same reward-model as used in DM (Section 2.2.1). Intuitively, DR uses the reward-model as a baseline, and performs importance sampling only on the error of the given reward-model. DR is unbiased and can have smaller variance than IPS when the reward-model \(\hat{\delta}\) is close to the true reward \(\delta\) (Bahdan et al., 2017).

#### 2.2.5. Self-normalized Doubly Robust (SNDR)

Similar to the idea behind SNIPS (Section 2.2.3), SNDR (Kumar et al., 2017; Dey et al., 2018) performs the same control variate technique on the DR estimator (Section 2.2.4) as follows: \[\hat{V}_{\text{SNDR}}(\pi)\triangleq\operatorname*{\mathbb{E}}_{(x,a,r)\sim\mathcal{D}}\left[\frac{\pi(a|x)}{\rho\cdot\mu(a|x)}\cdot(r-\hat{\delta}(a,x))+\Delta(\pi,x)\right]\ \text{s.t.}\ \rho\triangleq\operatorname*{\mathbb{E}}_{(x,a,r)\sim\mathcal{D}}\left[\frac{\pi(a|x)}{\mu(a|x)}\right]\ ;\ \Delta(\pi,x)\triangleq\sum_{a\in\mathcal{A}}\pi(a|x)\cdot\hat{\delta}(a,x).\] Hence, SNDR encapsulates the ideas behind all the aforementioned estimators to conduct strongly consistent, low-variance policy value estimation that might perform well (in terms of MSE) in practice.

While effective to some extent, the importance sampling based estimators mentioned above can still suffer from large variance due to large importance sampling weights, especially when the action space is large. In particular, the variance of these importance sampling based estimators grows roughly linearly _w.r.t._ the maximum propensity weight in \(\mathcal{D}\), and the maximum propensity weight can grow linearly _w.r.t._ the size of the action space, \(\Omega(|\mathcal{A}|)\) (Kang et al., 2019), making these estimators undesirable for OPE in large action-space problems. Further, when Assumption 2.2 is violated, the variance of such importance sampling based estimators becomes unbounded, in addition to incurring a bias of \(\mathbb{E}_{x}\left[\sum_{a\in\mathcal{U}(x)}\pi(a|x)\delta(a,x)\right]\), where \(\mathcal{U}(x)\) is the set of actions on which \(\mu(\cdot|x)\) puts no probability mass (blind spots) (Kang et al., 2019). 
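In the same hedged sketch form as above, the self-normalized and doubly robust variants of Sections 2.2.3-2.2.5 differ from plain IPS only by a control variate and a reward-model baseline:

```python
import numpy as np

def snips_estimate(actions, rewards, pi, mu):
    """SNIPS: divide IPS by the empirical mean of the propensity weights."""
    idx = np.arange(len(actions))
    w = pi[idx, actions] / mu[idx, actions]
    return np.sum(w * rewards) / np.sum(w)  # == mean(w * r) / mean(w)

def dr_estimate(actions, rewards, pi, mu, delta_hat):
    """DR: model-based baseline plus importance-weighted model residuals."""
    idx = np.arange(len(actions))
    w = pi[idx, actions] / mu[idx, actions]
    baseline = np.sum(pi * delta_hat, axis=1)     # Delta(pi, x_i)
    residual = rewards - delta_hat[idx, actions]  # r_i - delta_hat(a_i, x_i)
    return np.mean(w * residual + baseline)
```

SNDR combines both ideas: it applies the same normalization `w / w.mean()` inside the residual term of `dr_estimate`.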
To address the aforementioned problems of importance sampling based estimators, we introduce policy convolution, which makes use of the latent structure within actions, in the next section.

## 3. OPE via Policy Convolution

In addition to the offline dataset \(\mathcal{D}\), we further posit access to some embeddings \(\mathcal{E}:\mathcal{A}\mapsto\mathbb{R}^{d}\) of the actions, which map an action \(a\) to a \(d\)-dimensional embedding \(\mathcal{E}(a)\in\mathbb{R}^{d}\). Let \(\mathcal{E}_{\mathcal{A}}\subset\mathbb{R}^{d}\) be the subspace spanned by \(\mathcal{E}\). Ideally, the embedding should capture action-similarity information, _i.e._, smaller distance in the embedding space should imply smaller difference in terms of expected reward for each context \(x\in\mathcal{X}\). Notably, such action-embeddings are typically readily available in many industrial recommender systems, _e.g._, via matrix factorization (Kang et al., 2019). We are now ready to define our Policy Convolution framework for OPE. Taking IPS (Section 2.2.2) as a representative "backbone" estimator for PC, we define PC-IPS as follows: \[\hat{V}_{\text{PC-IPS}}(\pi)\triangleq\operatorname*{\mathbb{E}}_{(x,a,r)\sim\mathcal{D}}\left[\frac{(\pi(\cdot|x)*f_{\tau_{1}})(a)}{(\mu(\cdot|x)*f_{\tau_{2}})(a)}\cdot r\right]=\operatorname*{\mathbb{E}}_{(x,a,r)\sim\mathcal{D}}\left[\frac{\sum_{a^{\prime}}\pi(a^{\prime}|x)\cdot f_{\tau_{1}}(\mathcal{E}(a),\mathcal{E}(a^{\prime}))}{\sum_{a^{\prime}}\mu(a^{\prime}|x)\cdot f_{\tau_{2}}(\mathcal{E}(a),\mathcal{E}(a^{\prime}))}\cdot r\right],\] where '\(*\)' represents the convolution operator specified in the action-embedding domain, and \(f_{\tau}:\mathbb{R}^{d}\times\mathbb{R}^{d}\mapsto\mathbb{R}\) is an action-action similarity (or convolution) function with a parameter \(\tau\) that controls the amount of convolution. Notably, PC is not limited to the IPS backbone estimator discussed hitherto, and we analogously define PC for other backbone estimators, namely Self-Normalized IPS (SNIPS), Doubly Robust (DR), and Self-Normalized DR (SNDR) discussed in Sections 2.2.3 to 2.2.5, and call such estimators PC-SNIPS, PC-DR, and PC-SNDR for convenience. We provide their exact specifications in Appendix A. We also illustrate the intuition behind policy convolutions in Figure 1, where PC _strategically_ biases the logging and target policies toward the uniform policy by leveraging the underlying action structure, in this case specified via a hierarchical grouping of actions.

Figure 1. Intuition for the PC framework demonstrated using a hierarchical action tree, where similar actions (movies in this example) are recursively agglomerated together. The last level (leaf nodes) represents the complete action space, and the higher levels consist of "meta-actions" that represent a group of individual actions. As we go higher, PC defines the convolved policy for a given action as the mean probability of all actions inside its corresponding _meta-action_. Hence, we obtain the uniform policy at the topmost level, and recover the original policy at the last level.

As we will observe in Section 4.2, such convolutions in turn lead to a new bias-variance trade-off, controlled by the amount of convolution (\(\tau\)). We propose four suitable instantiations of the action-action convolution (or pooling) function \(f_{\tau}(\cdot,\cdot)\) for PC:

* **Kernel Smoothing**. Perhaps the most intuitive, we use the idea of multi-variate kernel smoothing (Kang et al., 2019) in the action-embedding 
space to derive our similarity function as: \[f_{\tau}\left(\mathcal{E}(a),\mathcal{E}(a^{\prime})\right)\triangleq\mathcal{K}\left(\mathcal{E}(a),\mathcal{E}(a^{\prime})\right)=\frac{1}{\tau^{d}}\prod_{i=1}^{d}\mathbf{K}\left(\frac{\mathcal{E}(a)_{i}-\mathcal{E}(a^{\prime})_{i}}{\tau}\right),\] where \(\mathbf{K}\) is a suitable kernel function (_e.g._, Gaussian), and \(\tau\in\mathbb{R}\) now corresponds to the bandwidth. It is worth noting that such a formulation can also be derived by viewing actions as continuous treatments, as defined by their embeddings (Beng et al., 2016; Chen et al., 2017). However, since an inverse mapping from \(\mathbb{R}^{d}\mapsto\mathcal{A}\) does not exist in our discrete action problem, having treatments outside \(\mathcal{E}_{\mathcal{A}}\) is meaningless.
* **Tree Smoothing.** In this setting, we use \(\mathcal{E}\) to recursively partition the action-space (see Figure 1 for a depiction) into a tree-like structure of depth \(D\), where each depth is specified by \(\mathcal{T}_{d}\triangleq\{\mathbf{a}_{d,1},\mathbf{a}_{d,2},\ldots,\mathbf{a}_{d,k}\}\), where \(\mathbf{a}_{d,i}\) is a _meta-action_ (set) comprising singular actions, such that \(\mathcal{A}=\bigcup_{i=1}^{k}\mathbf{a}_{d,i}\) and \(\mathbf{a}_{d,i}\cap\mathbf{a}_{d,j}=\emptyset\) for all \(i\neq j\) pairs. Notably, \(\mathcal{T}_{D}\) (root node) consists of a single meta-action containing all actions, and \(\mathcal{T}_{1}\) (last level) consists of each singular action as its own meta-action. The similarity function is then defined as: \[f_{\tau}\left(\mathcal{E}(a),\mathcal{E}(a^{\prime})\right)\triangleq\frac{\mathbb{I}(a^{\prime}\in\mathbf{a}_{\tau}(a))}{|\mathbf{a}_{\tau}(a)|},\] where \(\mathbb{I}(\cdot)\) represents the indicator function, \(\tau\) signifies the depth of the action-tree to use, and \(\mathbf{a}_{\tau}(a)\) represents the meta-action at depth \(\tau\) corresponding to the action \(a\).
* **Ball Smoothing.** In this setting, we define a binary similarity function based on a fixed-radius ball around the given action, as defined by \(\mathcal{E}\), as follows: \[f_{\tau}\left(\mathcal{E}(a),\mathcal{E}(a^{\prime})\right)\triangleq\frac{\mathbb{I}\left(\left\|\mathcal{E}(a)-\mathcal{E}(a^{\prime})\right\|_{2}^{2}<\tau\right)}{\left|\left\{a^{\prime\prime}\in\mathcal{A}:\left\|\mathcal{E}(a)-\mathcal{E}(a^{\prime\prime})\right\|_{2}^{2}<\tau\right\}\right|},\] where \(\tau\) signifies the (squared) radius of the ball around \(\mathcal{E}(a)\).
* **kNN Smoothing.** In this setting, we define a binary similarity function as the k-nearest-neighbors decision function: \[f_{\tau}\left(\mathcal{E}(a),\mathcal{E}(a^{\prime})\right)\triangleq\frac{\mathbb{I}\left(a^{\prime}\in\text{kNN}(a,\tau)\right)}{\tau},\] where \(\tau\) signifies the number of nearest neighbors to use, and \(\text{kNN}(a,\tau)\) represents the set of \(\tau\) nearest neighbors of \(\mathcal{E}(a)\) in \(\mathcal{E}_{\mathcal{A}}\).

We note that our general Policy Convolution framework encompasses the existing OPE estimators designed for large action spaces (Srivastava et al., 2017; Wang et al., 2018). As we will later show in Section 4.2, using the convolution functions proposed above, PC is able to achieve significantly better performance than existing estimators. 
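To make the convolution concrete, here is a minimal sketch of PC-IPS with the kernel and kNN convolution functions. The dense pairwise weight matrices are for illustration only and would be impractical at millions of actions; all names are our own.

```python
import numpy as np

def gaussian_weights(E, tau):
    """Gaussian convolution weights W[a, a'] = f_tau(E(a), E(a'))."""
    sq = ((E[:, None, :] - E[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / tau ** 2) / tau ** E.shape[1]  # 1/tau^d factor

def knn_weights(E, tau):
    """kNN convolution: weight 1/tau on the tau nearest embeddings (incl. self)."""
    sq = ((E[:, None, :] - E[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(sq, axis=1)[:, :tau]          # tau nearest neighbors per action
    W = np.zeros_like(sq)
    np.put_along_axis(W, nn, 1.0 / tau, axis=1)
    return W

def pc_ips_estimate(actions, rewards, pi, mu, W1, W2):
    """PC-IPS: IPS computed on the convolved policies pi*f_tau1 and mu*f_tau2."""
    idx = np.arange(len(actions))
    pi_c = pi @ W1.T  # (pi(.|x) * f_tau1)(a) = sum_a' pi(a'|x) W1[a, a']
    mu_c = mu @ W2.T
    return np.mean(pi_c[idx, actions] / mu_c[idx, actions] * rewards)
```

With `knn_weights` at \(\tau=1\), both weight matrices reduce to the identity and plain IPS is recovered; wider kernels push both policies toward uniform, trading variance for bias as in Figure 1.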
More specifically, among these existing estimators, groupIPS (Srivastava et al., 2017) can be generalized as PC-IPS and offCEM (Wang et al., 2018) as PC-DR, both using a two-depth tree (_i.e._, flat clustering) in the tree convolution function, with the additional constraint \(\tau_{1}=\tau_{2}=1\). As we will further note in our experiments (Section 4.2): (1) the kernel and kNN convolution functions tend to perform better than the others; and (2) convolving the logging and target policies _differently_ (_i.e._, \(\tau_{1}\neq\tau_{2}\)) adds a lot of flexibility to PC, leading to much better estimation than either convolving the two policies equally, or convolving only one of the two policies.

Motivating example. To gain a better intuition of PC, we refer to Figure 1 and construct a four-action, single-context toy example described in Table 1. We conduct OPE using the IPS estimator at various levels of the action-tree, with a sample size of \(|\mathcal{D}|=10\), and repeat the experiment 50k times. The results demonstrate that as we progress to higher levels of the tree (increased pooling), variance decreases, but bias increases. At the leaf level, IPS is unbiased but exhibits high variance. At the top-most level, while variance is the lowest, bias is significantly increased. When \(\tau=2\), we observe the best bias-variance trade-off, leading to the lowest MSE.

\begin{table} \begin{tabular}{c c|c c c c c|c c c} \hline \hline & & \(a_{1}\) & \(a_{2}\) & \(a_{3}\) & \(a_{4}\) & \(V^{*}(\cdot)\) & \multicolumn{3}{c}{IPS with \(|\mathcal{D}|=10\)} \\ & & & & & & & MSE & Bias\({}^{2}\) & Var \\ \hline \multirow{2}{*}{\(\tau=1\)} & \(\delta(\cdot,x_{1})\) & 5 & 10 & 15 & 20 & \(-\) & \(-\) & \(-\) & \(-\) \\ \cline{2-10} & \(\pi(\cdot|x_{1})\) & 0.0 & 0.2 & 0.2 & 0.6 & 17 & \multirow{2}{*}{48.0} & \multirow{2}{*}{\(\approx 0\)} & \multirow{2}{*}{48.0} \\ & \(\mu(\cdot|x_{1})\) & 0.2 & 0.2 & 0.4 & 0.2 & 13 & & & \\ \hline \multirow{2}{*}{\(\tau=2\)} & \((\pi*f_{\tau})(\cdot|x_{1})\) & 0.1 & 0.1 & 0.4 & 0.4 & 15.5 & \multirow{2}{*}{13.4} & \multirow{2}{*}{4.6} & \multirow{2}{*}{8.8} \\ & \((\mu*f_{\tau})(\cdot|x_{1})\) & 0.2 & 0.2 & 0.3 & 0.3 & 13.5 & & & \\ \hline \multirow{2}{*}{\(\tau=3\)} & \((\pi*f_{\tau})(\cdot|x_{1})\) & 0.25 & 0.25 & 0.25 & 0.25 & 12.5 & \multirow{2}{*}{18.6} & \multirow{2}{*}{16} & \multirow{2}{*}{2.6} \\ & \((\mu*f_{\tau})(\cdot|x_{1})\) & 0.25 & 0.25 & 0.25 & 0.25 & 12.5 & & & \\ \hline \hline \end{tabular} \end{table} Table 1. A toy example to intuit PC using the Tree similarity function, constrained to have \(\tau_{1}=\tau_{2}\). Similar to Figure 1, the action tree in this toy example is a \(3\)-level complete binary tree, where the action partitioning is defined as \(\{(a_{1},a_{2},a_{3},a_{4})\}\rightarrow\{(a_{1},a_{2}),(a_{3},a_{4})\}\rightarrow\{(a_{1}),(a_{2}),(a_{3}),(a_{4})\}\) from top to bottom.

## 4. Experiments

### Setup

We measure PC's empirical effectiveness on two datasets. Firstly, we simulate a synthetic contextual bandit setup, beginning by sampling contexts \(x\sim\mathcal{N}(0,I)\). Subsequently, to realize our assumption that there indeed exists a latent structure amongst individual actions, we randomly assign each action to one of 32 latent _topics_, denoted by \(\tilde{a}\). We also assign each latent topic a corresponding mean \(\{\mu_{i}\sim\mathcal{N}(0,I)\}_{i=1}^{32}\) and covariance \(\{\sigma_{i}\sim\mathcal{N}(0,I)\}_{i=1}^{32}\). 
To realize the assigned structure in the action-space, we sample each action's embedding from its correspondingly assigned topic, _i.e._, \(\mathcal{E}(a)\sim\mathcal{N}(\mu_{\tilde{a}},\sigma_{\tilde{a}})\). We then model the reward function \(\delta(a,x)\) as a noisy and non-linear function of the underlying context- and action-embedding: \(\delta(a,x)\triangleq\Phi(x\|\mathcal{E}(a)\|e)\), where "\(\|\)" represents concatenation, \(e\sim\mathcal{N}(0,I)\) is white-noise, and \(\Phi\) is a randomly initialized, two-layer neural network. Such a formulation realizes two crucial assumptions: (a) semantically closer actions are nearby according to \(\mathcal{E}\), and (b) \(\mathcal{E}\) shares a causal connection with the downstream reward function. Finally, we define the logging policy \(\mu(\cdot|x)\) as a temperature-activated softmax distribution on the ground-truth reward distribution \(\delta(\cdot,x)\), and the target policy as the \(\epsilon\)-greedy policy: \[\begin{split}&\mu(a|x)\triangleq\frac{\exp(\beta\cdot\delta(a,x))}{\sum_{a^{\prime}}\exp(\beta\cdot\delta(a^{\prime},x))}\\ &\pi(a|x)\triangleq(1-\epsilon)\cdot\mathbb{I}\left(\delta(a,x)=\sup_{a^{\prime}\in\mathcal{A}}\left\{\delta(a^{\prime},x)\right\}\right)+\frac{\epsilon}{|\mathcal{A}|}\end{split} \tag{1}\] Furthermore, to test the practicality of \(\mathtt{PC}\) on real-world, large-scale data, we also synthesize a bandit-variation of the Movielens-100k dataset (Moriel and others, 2017), which consists of numerous (user, item, rating) tuples. Taking inspiration from previous recommender system \(\rightarrow\) bandit feedback conversion setups (Srivastava et al., 2017), we define a positive reward if the provided rating is \(\geq 4\), or else zero. We then define contexts and action-embeddings as the user- and item-factors attained by performing SVD on the binary user-item rating matrix, respectively. Furthermore, to simulate continuous instead of binary reward, for missing entries, we define the reward as the dot product of the corresponding user- and item-factors, estimated via the same SVD. We define the target policy similarly as in Equation (1), and aiming to follow a realistic two-stage recommender system setup (Kang et al., 2017), we define the logging policy \(\mu(\cdot|x)\) as follows: (1) shortlist a set of 100 best actions defined by \(\delta(\cdot,x)\), and 400 actions at random; (2) sample a logit from \(U(0,1)\) for each positive action, and from \(U(0,0.8)\) for the random actions; (3) take a temperature softmax as in Equation (1) only on the sampled logits; and (4) perform \(\epsilon\)-greedy on the obtained action probabilities to satisfy Assumption 2.2. As per our setup, \(V^{*}(\mu)\propto\beta\) and \(V^{*}(\pi)\propto\epsilon^{-1}\) for both datasets. To avoid clutter, we define \(\mu_{\text{uniform}}\) when \(\beta=0\), and \(\mu_{\text{good}}\) when \(\beta=3\). Similarly, we define \(\pi_{\text{bad}}\) when \(\epsilon=0.8\), and \(\pi_{\text{good}}\) when \(\epsilon=0.05\). Unless specifically mentioned, we use the default values of the remaining hyper-parameters in the bandit data generation procedure, as listed in Appendix B. For evaluating the performance of various estimators, we compute the Mean Squared Error (MSE) between the true and predicted value of the target policy. We reserve a large test-set just to compute the true value of the target policy. 
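For concreteness, a minimal sketch of the logging and target policies of Equation (1), where `delta` stands in for a precomputed table of ground-truth expected rewards (our own naming, not the paper's code):

```python
import numpy as np

def make_policies(delta, beta, epsilon):
    """Softmax logging policy and epsilon-greedy target policy from Equation (1).

    delta -- (n_contexts, n_actions) table of ground-truth expected rewards
    """
    z = beta * delta
    z -= z.max(axis=1, keepdims=True)                      # numerical stability
    mu = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)  # logging policy

    n, k = delta.shape
    pi = np.full((n, k), epsilon / k)                      # uniform exploration mass
    pi[np.arange(n), delta.argmax(axis=1)] += 1.0 - epsilon
    return mu, pi
```

Under this parameterization, \(\beta=0\) yields the uniform logging policy \(\mu_{\text{uniform}}\), and larger \(\beta\) yields \(\mu_{\text{good}}\), matching the naming above.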
We also estimate the squared bias and variance of our predicted estimates by repeating each experiment for 50 random seeds, and also compute the 95% confidence interval for visualization purposes. Note that the bias, variance, and MSE of any estimator are naturally linked to each other by the following decomposition: \(\text{MSE}(\cdot)=\text{Bias}(\cdot)^{2}+\text{Var}(\cdot)\). For estimators in the \(\mathtt{PC}\) framework, we chose the optimal convolution values (_i.e._, \(\tau_{1}\) and \(\tau_{2}\)) using the MSE obtained on a validation set. Notably, while \(\mathtt{PC}\) for any given backbone estimator strictly contains the naive backbone (_i.e._, when \(\tau_{1}=\tau_{2}=0\)), to de-confound the effect of policy convolution and the backbone estimator, we only report results for \(\mathtt{PC}\) with a non-zero amount of pooling.

### Results

**(Figure 2) How does \(\mathtt{PC}\) perform with a varying number of actions?** Observing the effect of increasing the number of actions (\(|\mathcal{A}|\)) on different estimators' performance while keeping the size of the available bandit feedback (\(|\mathcal{D}|\)) constant, we find that the MSE of all IS-based estimators deteriorates, in accordance with the \(\Omega(|\mathcal{A}|)\) growth of their variance. We further note that utilizing \(\mu_{\text{good}}\) rather than \(\mu_{\text{uniform}}\) as the logging policy results in an almost 2-orders-of-magnitude reduction of MSE across all estimators, due to the increased overlap between \(\mu_{\text{good}}\) and \(\pi_{\text{good}}\). Finally, a general trend that holds across various convolution strategies is that the improvement in MSE achieved by PC _w.r.t._ its respective backbone significantly increases with increasing \(|\mathcal{A}|\). Notably, in the extreme scenario of 10k actions, PC-DR outperforms DR by up to 5 orders of magnitude in terms of MSE. While we note that no single backbone estimator or pooling strategy is optimal in every scenario for PC, PC-SNDR and kernel _or_ kNN convolution strategies generally exhibit better performance than others.

Figure 2. Change in MSE while estimating \(V(\pi_{\text{good}})\) with varying number of actions (\(\log\)-\(\log\) scale) for the synthetic dataset, using data logged by (top) \(\mu_{\text{uniform}}\), or (bottom) \(\mu_{\text{good}}\). Results for \(V(\pi_{\text{bad}})\) can be found in Appendix C, Figures 7 and 8.

**(Figure 3) How does PC perform with varying amount of policy-mismatch?** We analyze the impact of increasing the policy mismatch between the target and logging policies on off-policy estimation performance, specifically by tuning the \(\beta\) parameter in Equation (1) that controls the quality of the logging policy, keeping the target policy fixed. We observe that OPE becomes hardest on both ends of the spectrum, _i.e._, when the logging and target policies have large divergence. The difficulties of OPE in such low-overlap scenarios have been well documented in the literature [1, 13, 29].

Figure 4. Change in MSE while estimating \(V(\pi_{\text{good}})\) with varying amounts of bandit feedback (\(\log\)-\(\log\) scale) for the movielens dataset. Results for PC-DR, PC-SNDR, estimating \(V(\pi_{\text{bad}})\), and the synthetic dataset can be found in Appendix C, Figures 17 to 20.

Figure 3. Change in MSE while estimating \(V(\pi_{\text{good}})\) with varying policy-mismatch (\(\log\) scale) for (top) synthetic, and (bottom) movielens dataset. The policy-mismatch is higher when \(\beta\) is lower. 
Results for estimating \(V(\pi_{\text{bad}})\), and the observed bias-variance trade-off, can be found in Appendix C, Figures 9 to 12.

We observe that PC--through the use of latent structure amongst actions--is particularly helpful in such conditions. An analysis of the exact bias-variance trade-off, provided in Appendix C, Figure 10, reveals that PC is able to effectively (1) reduce the variance when policy-mismatch is low, _i.e._, \(|\beta|\approx 0\), and (2) counteract the bias introduced by IS when Assumption 2.2 is violated, _i.e._, when \(|\beta|\) is large. Both of these improvements result in significantly better MSE for PC across the entire policy-mismatch spectrum, as depicted in Figure 3.

**(Figure 4) How does PC perform with a varying amount of bandit feedback?** Keeping all other factors fixed, we investigate the impact of increasing the size of the logged bandit feedback (\(|\mathcal{D}|\)) on the MSE of various off-policy estimators. Like other baseline estimators, we observe that PC exhibits consistency and is effectively able to balance the bias-variance trade-off. Specifically, PC is most advantageous in the low-data regime, due to its variance reduction properties. However, as \(|\mathcal{D}|\) continues to increase, PC converges to its respective backbone estimator, _i.e._, \(\tau_{1}=\tau_{2}=0\) becomes the optimal bias-variance trade-off point. This pattern of a decreasing amount of optimal pooling (\(\tau_{1}\), \(\tau_{2}\)) with increasing \(|\mathcal{D}|\) is anticipated, as the variance of IS-based estimators naturally decreases with growing \(|\mathcal{D}|\), and any reduction in variance at the cost of increased bias negatively impacts the overall MSE. We take further note of this observation for even more kinds of logging policies and backbone estimators in Appendix C, Figures 29 to 31.

**(Figure 5) How does PC perform with a varying amount of deficient support?** To understand the effect of varying amounts of support (or overlap) between the logging and target policies on various estimators, we simulate such a scenario by explicitly forcing the logging policy to only have support (_i.e._, non-zero probability) over a smaller, random set of actions, and have zero probability for all other actions. Varying this deficient action ratio, we first observe an expected increase in the MSE and squared bias of baseline estimators like IPS, SNIPS, DR, _etc._, due to the violation of Assumption 2.2, which has been shown to add irrecoverable bias in importance sampling based estimators [29]. On the other hand, even with an increasing number of deficient actions, the MSE for PC tends to stay relatively constant, with kernel-based convolutions being the best approach.

Figure 5. Change in MSE, Squared Bias, and Variance for PC-IPS and other baseline estimators while estimating \(V(\pi_{\text{good}})\) with varying support (\(\log\)-\(\log\) scale) for the synthetic dataset (with \(2000\) actions), using data logged by \(\mu_{\text{uniform}}\). Results for other backbones in PC, and estimating \(V(\pi_{\text{bad}})\) can be found in Appendix C, Figures 21 and 22.

Figure 6. Visualizing the bias-variance trade-off for PC-SNIPS (Tree pooling) with varying amount of pooling, while estimating \(V(\pi_{\text{good}})\) and using \(\mu_{\text{good}}\) for logging on the synthetic dataset (with \(2000\) actions). When \(\tau_{1}\neq\tau_{2}\), we plot results for \(\tau_{1}\) on the plot. 
The naive SNIPS estimator is the left-most point, _i.e._, when there is no pooling. Results for other backbones, pooling methods, and the movielens dataset can be found in Appendix C, Figures 27 and 28.

This goes to show that PC is able to accurately leverage action-embeddings as a guide for appropriately filling in the blind spots while performing OPE.

**(Figure 6) How does the amount of convolution affect the bias-variance trade-off?** We examine the influence that the amount of convolution in \(\mathsf{PC}\) has on the bias-variance trade-off for three variations of \(\mathsf{PC}\) with the tree pooling function: (1) only the target policy is convolved, _i.e._, \(\tau_{2}=1\), which is equivalent to the similarity estimator (Friedman and Rafter, 2017); (2) both the logging and target policies are convolved equally, _i.e._, \(\tau_{1}=\tau_{2}\), which is equivalent to the offCEM (Kolmogorov, 1959) and groupIPS (Kolmogorov, 1959) estimators; and (3) both the logging and target policies are convolved and \(\tau_{1},\tau_{2}\) can be different. From Figure 6, we observe that the amount of convolution results in a bias-variance trade-off, where larger pooling leads to decreased variance, but increased bias. It is worth mentioning that the initial decrease in bias with convolution is due to the use of \(\mu_{\text{good}}\) for logging, which partially violates Assumption 2.2. This results in a biased SNIPS estimate, which the \(\mathsf{PC}\) family of estimators is able to effectively recover. Further, we note that solely convolving the target policy (_i.e._, the similarity estimator) does not necessarily result in a suitable bias-variance trade-off, with the other two convolution strategies being significantly better, and having \(\tau_{1}\neq\tau_{2}\) consistently being the best approach.

## 5. Related Work

**Off-policy evaluation.** A wide body of literature in operations research, causal inference, and reinforcement learning studies the problem of off-policy evaluation. Prominent off-policy estimators can be grouped into the following three categories: **(1) Model-based:** dubbed the direct method (DM), whose key idea is to use a parametric reward model to extrapolate the reward for unobserved (context, action) pairs (Bengio et al., 2017). DM typically has a low variance, at the cost of uncontrollable bias due to model misspecification. **(2) Inverse propensity scoring (IPS):** IPS uses the propensity ratio between the target and logging policies to account for the distribution mismatch (Kolmogorov, 1959). Though unbiased under mild assumptions, IPS suffers from large variance. Typical remedies for the large variance are propensity clipping (Kolmogorov, 1959; Kolmogorov, 1959) or self-normalization (Kolmogorov, 1959), which might introduce bias. **(3) Hybrid:** some estimators (_e.g._, the doubly robust estimator (Bengio et al., 2017)) combine DM and IPS together to leverage the benefits of both worlds (Bengio et al., 2017; Bengio et al., 2017; Bengio et al., 2017; Kolmogorov, 1959; Kolmogorov, 1959). However, these estimators still suffer from the large variance problem due to the large propensities, especially when the action space is large. **Off-policy evaluation for large action spaces.** Two kinds of major problems occur when attempting to perform OPE in large action spaces. Firstly, the variance of any importance sampling method grows linearly _w.r.t._ the size of the action-space (Kolmogorov, 1959), and the common support assumption tends to become impractical (Kolmogorov, 1959), leading to irrecoverable bias in estimation. 
Recent work (Friedman and Rafter, 2017; Rafter, 2017; Rafter, 2017) attempts to use some notion of latent structure in the action-space to address both of the aforementioned limitations. The MIPS estimator (Rafter, 2017) builds on the randomness in the available action embeddings to improve OPE. However, in a setting where only a 1:1 action-to-embedding mapping is available (as in this paper), MIPS reduces to vanilla IPS. Further, as we discussed in Section 3, offCEM (Kolmogorov, 1959), the similarity estimator (Friedman and Rafter, 2017), and groupIPS (Kolmogorov, 1959) are all specific instantiations of our \(\mathsf{PC}\) family of estimators. **Off-policy evaluation for continuous action spaces.** Another line of work builds off-policy estimators when the action-space is continuous, _e.g._, the dosage of a treatment. If we discretize the action-space into a fixed number of bins as per some resolution, the action-space becomes too large for typical off-policy estimators to work well (Kolmogorov, 1959). Naive use of importance sampling based estimators would be vacuous in this setting, since the probability of selecting any action can be zero for a policy that samples actions according to some probability density function. To this end, typical approaches extend the discrete rejection sampling idea into a smooth rejection operation using standard kernel functions (Kolmogorov, 1959; Kolmogorov, 1959; Kolmogorov, 1959), with the implicit assumption that similar actions (in terms of distance in the continuous action space) should lead to similar reward. Our proposed \(\mathsf{PC}\) also leverages the similarity information between actions through action embeddings, but for problems with discrete and large action spaces. ## 6. Conclusion & Future Work In this paper, we proposed the Policy Convolution (\(\mathsf{PC}\)) family of estimators, which leverage latent action structure specified via action embeddings to perform off-policy evaluation in large action spaces. More specifically, \(\mathsf{PC}\) convolves both the target and logging policies according to an action-action convolution function, which posits a new kind of bias-variance tradeoff controlled by the amount of convolution. Conducting empirical evaluation over a diverse set of off-policy estimation scenarios, we observe that the estimators from the \(\mathsf{PC}\) framework enjoy up to 5 orders of magnitude improvement over existing baseline estimators in terms of MSE, especially when (1) the action-space is large, (2) the policy mismatch between logging and target policies is high, or (3) the common support assumption for importance sampling is violated. We believe that our findings can expand the potential use of off-policy estimators into new and practical scenarios, and also encourage further exploration into the use of additional structure for efficient OPE. We also discuss limitations and unexplored directions in this paper that we believe are promising for future work. Firstly, having a deeper formal understanding of the statistical properties of \(\mathsf{PC}\) might help in designing more robust off-policy estimators. Next, even though we propose four different action convolution functions, having a better understanding of the inductive biases that various convolution functions posit might guide us in designing even better and more principled OPE approaches. 
Finally, understanding and developing principled techniques for automatically selecting the level of convolution to conduct on the target and logging policies is an interesting research direction (Kolmogorov, 1959; Kolmogorov, 1959).
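As a starting point for such selection procedures, the tuning protocol already used in Section 4.1 (validation MSE against the target value, which is computable in simulation) can be written as a simple grid search. The function names below are placeholders, and the oracle `v_true_val` is an assumption that only holds in synthetic settings:

```python
import numpy as np

def select_tau(estimate_fn, tau_grid, v_true_val):
    """Grid-search (tau1, tau2) by squared error on a validation split.

    estimate_fn(tau1, tau2) -- returns the PC value estimate on validation data
    v_true_val              -- true target-policy value (available in simulation)
    """
    best, best_err = None, np.inf
    for tau1 in tau_grid:
        for tau2 in tau_grid:
            err = (estimate_fn(tau1, tau2) - v_true_val) ** 2
            if err < best_err:
                best, best_err = (tau1, tau2), err
    return best
```

Replacing the oracle `v_true_val` with a data-driven surrogate is precisely the open problem noted above.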
2304.09185
Token Imbalance Adaptation for Radiology Report Generation
Imbalanced token distributions naturally exist in text documents, leading neural language models to overfit on frequent tokens. The token imbalance may dampen the robustness of radiology report generators, as complex medical terms appear less frequently but reflect more medical information. In this study, we demonstrate how current state-of-the-art models fail to generate infrequent tokens on two standard benchmark datasets (IU X-RAY and MIMIC-CXR) of radiology report generation. To solve the challenge, we propose the \textbf{T}oken \textbf{Im}balance Adapt\textbf{er} (\textit{TIMER}), aiming to improve generation robustness on infrequent tokens. The model automatically leverages token imbalance by an unlikelihood loss and dynamically optimizes generation processes to augment infrequent tokens. We compare our approach with multiple state-of-the-art methods on the two benchmarks. Experiments demonstrate the effectiveness of our approach in enhancing model robustness overall and on infrequent tokens. Our ablation analysis shows that our reinforcement learning method has a major effect in adapting token imbalance for radiology report generation.
Yuexin Wu, I-Chan Huang, Xiaolei Huang
2023-04-18T23:09:36Z
http://arxiv.org/abs/2304.09185v1
# Token Imbalance Adaptation for Radiology Report Generation ###### Abstract Imbalanced token distributions naturally exist in text documents, leading neural language models to overfit on frequent tokens. The token imbalance may dampen the robustness of radiology report generators, as complex medical terms appear less frequently but reflect more medical information. In this study, we demonstrate how current state-of-the-art models fail to generate infrequent tokens on two standard benchmark datasets (IU X-RAY and MIMIC-CXR) of radiology report generation. To solve the challenge, we propose the **T**oken **Im**balance Adapt**er** (_TIMER_), aiming to improve generation robustness on infrequent tokens. The model automatically leverages token imbalance by an unlikelihood loss and dynamically optimizes generation processes to augment infrequent tokens. We compare our approach with multiple state-of-the-art methods on the two benchmarks. Experiments demonstrate the effectiveness of our approach in enhancing model robustness overall and on infrequent tokens. Our ablation analysis shows that our reinforcement learning method has a major effect in adapting token imbalance for radiology report generation. + Footnote †: c) 2023 Y. Wu, I.-C. Huang & X. Huang. Data and Code Availability In this study, we conduct our experiments on two public datasets, IU X-RAY (Demner-Fushman et al., 2015) and MIMIC-CXR (Johnson et al., 2019). The datasets are available for download at [https://openi.nlm.nih.gov/](https://openi.nlm.nih.gov/) and [https://physionet.org/content/mimic-cxr/2.0.0/](https://physionet.org/content/mimic-cxr/2.0.0/). We publish our code at [https://github.com/woqingdoua/TIMER](https://github.com/woqingdoua/TIMER). Institutional Review Board (IRB) In this study, we use the publicly available datasets after taking required training courses and signing data usage agreements. All the publicly available datasets have been de-identified and anonymized. Our study focuses on computational approaches and does not collect data from human subjects. We obtained an institutional IRB determination that IRB approval is not required for this study. ## 1 Introduction _Radiology report generation_ aims to automatically generate a precise description in natural language given medical images, including computed tomography (CT) and X-RAY images. A growing number of studies have deployed deep encoder-decoder neural architectures to encode medical images and decode the information to generate radiological reports (Jing et al., 2018, 2019; Chen et al., 2020, 2021; Qin and Song, 2022). Overfitting on frequent tokens is a common challenge in text generation, where generators fail to predict infrequent tokens (Yu et al., 2022). Our empirical analysis has demonstrated that over 80% of medical terms are infrequent tokens, while frequent tokens account for over 82% of the corpus (Section 2). Complex and lengthy tokens naturally occur less frequently than simple words (Nikkarinen et al., 2021), which is common across medical scenarios (Demner-Fushman et al., 2015; Johnson et al., 2019). However, existing studies have not explicitly considered infrequent tokens for radiology report generation, which can decrease model robustness and lead to imprecise reports. Methods to reduce token-frequency overfitting commonly deploy post-processing (Mu and Viswanath, 2018) or regularization techniques (Welleck et al., 2020; Wang et al., 2020; Yu et al., 2022) for token embeddings. 
However, the approaches usually work for text-to-text generation given text sentences as inputs, while medical images are inputs in our study. Nishino et al. propose a reinforcement learning approach for the class imbalance issue. However, the approach aims to solve image category imbalance instead of token imbalance. Modeling the token imbalance for the multimodal generation task is an unsolved challenge, especially for medical scenarios. In this study, we propose the Token Imbalance Adapter (TIMER) model using reinforcement learning (Sutton et al., 1999) to adapt infrequent token generation, and evaluate it on radiology report generation using two publicly available datasets, IU X-RAY (Demner-Fushman et al., 2015) and MIMIC-CXR (Johnson et al., 2019). Our approach deploys an unlikelihood loss to penalize incorrect predictions for frequent tokens and develops a dynamic adaptation module to adjust the optimization process automatically. We compare TIMER with four state-of-the-art baselines on overall performance and show overall improvements of our approach. By evaluating the performance on low- and high-frequency tokens, our approach can significantly improve generation performance on infrequent token sets and maintain stable performance on frequent tokens. Our major contributions are summarized as follows: 1) to our best knowledge, this is the first study that explicitly adapts token imbalance for radiology report generation. We have demonstrated the importance of infrequent tokens in radiology datasets, as most medical terms occur infrequently (Figure 1). 2) We propose a reinforcement learning method that effectively improves overall model performance as well as performance on infrequent tokens. 3) We conduct extensive ablation analysis to illustrate the effectiveness of adapting infrequent tokens. ## 2 Data We retrieved two publicly available datasets of radiology report generation, IU X-RAY (Demner-Fushman et al., 2015) and MIMIC-CXR (Johnson et al., 2019). 1) _IU X-RAY_, collected by Indiana University, provides 3,955 reports and 7,470 X-RAY images from the Indiana Network for Patient Care. 2) _MIMIC-CXR_ (denoted as MIMIC) is the largest publicly available radiography dataset to date. The dataset contains 227,835 reports with 377,110 images collected between 2011 and 2016 at the Beth Israel Deaconess Medical Center. Each radiology report may be associated with one or more front and side X-ray images. We summarize the data statistics in Table 1. Overfitting on frequent patterns is a common challenge for deep neural models -- radiology report generators overfit more easily on frequent tokens than on infrequent tokens. A recent text generation study (Yu et al., 2022) has found that text generators commonly perform much worse on infrequent tokens. The low performance on infrequent tokens can significantly impact radiological report generation, where many important medical terms do not appear frequently. In this study, we argue that _infrequent tokens matter_ in radiology report generation. To verify our claim, we conduct a quantitative analysis of medical terms across token distributions. First, we deploy a named entity recognition (NER) model (en_ner_bc5cdr_md) from Sci-Spacy to extract disease-related terms.1 Next, we split the vocabulary of each dataset into five equal segments. 
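A minimal sketch of this frequency-based segmentation follows (whitespace tokenization and the most-frequent-first ordering are our own simplifications, not the paper's code); the empirical findings come next.

```python
from collections import Counter

def frequency_segments(reports, n_segments=5):
    """Split vocabulary types into equal-size segments ordered by corpus frequency.

    reports -- iterable of report strings; returns a list of token lists,
    with segment 0 holding the most frequent vocabulary types.
    """
    counts = Counter(tok for report in reports for tok in report.split())
    vocab = [tok for tok, _ in counts.most_common()]  # frequency-sorted types
    size = -(-len(vocab) // n_segments)               # ceiling division
    return [vocab[i * size:(i + 1) * size] for i in range(n_segments)]
```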
Our empirical counts show that the first 20% (segment) of vocabulary types account for over 82% of tokens in each dataset, while the rest of the vocabulary types (80%) account for less than 18% of corpus tokens.

\begin{table} \begin{tabular}{c|c|c|c|c} & Images & Reports & Length & Vocab \\ \hline \hline IU X-RAY & 7,470 & 3,955 & 35.99 & 1,517 \\ MIMIC & 377,110 & 227,835 & 59.70 & 13,876 \\ \end{tabular} \end{table} Table 1: Statistics of the two radiography datasets. Length is the report’s average length, and Vocab is the vocabulary size.

Figure 1: Ratios of medical terms across five equal splits of vocabulary. 1 represents the most infrequent token set, and 5 refers to the most frequent token split.

We calculate frequency ratios of medical terms in each segment and visualize the results in Figure 1. The figure shows that infrequent tokens contain more medical terms than frequent tokens, especially for complex tokens (measured by average character length per token).2 Footnote 2: In IU X-RAY, the top medical terms in frequent tokens are pleural, effusion, heart, lung, and focal, while the top ranks in the infrequent tokens are diverticula, shrapnel, hemorrhage, prosthesis, arthroplasty. Ignoring the low performance on infrequent tokens can significantly harm the robustness of radiological generators (i.e., baselines fail in infrequent token evaluations in Table 3). This is especially important for medical scenarios, where infrequent tokens are more likely to be complex domain terms, which are not as common as frequent words (Nikkarinen et al., 2021). However, there are few studies in the text generation task adapting infrequent tokens, especially for radiological report generation. This issue inspires us to propose our model, the **T**oken **Im**balance Adapt**er** (_TIMER_). ## 3 Token Imbalance Adapter In this section, we present our approach _TIMER_ in Figure 2. TIMER consists of three major modules: 1) unlikelihood loss, 2) dynamic adaptation, and 3) joint optimization. We deploy the unlikelihood loss to reduce overfitting and token imbalance of the text generator by penalizing a frequent-token set. The dynamic adaptation module deploys reinforcement learning to adjust the frequent-token set automatically. Finally, we elaborate on how to jointly optimize the text generator and the dynamic adaptation module. ### Problem Statement Radiology report generation is an image-to-text generation task. Given a radiology image \(\mathbf{x}\) and the corresponding report \(\boldsymbol{y}=(y_{0},\ldots,y_{L})\) with length \(L\), the task aims to train a text generator (\(p_{\theta}(\mathbf{y}\mid\mathbf{x})\)) by minimizing \[\mathcal{L}_{NLG}(\theta)=-\sum_{l=1}^{L}\log p\left(y_{l}\mid y_{1},\ldots,y_{l-1},\mathbf{x};\theta\right) \tag{1}\] where \(p\left(y_{l}\mid y_{1},\ldots,y_{l-1},\mathbf{x};\theta\right)\) comes from the softmax function for the next-token prediction, and \(\theta\) refers to the generation model parameters. Token imbalance can cause prediction overfitting that leads to low performance on low-frequency tokens. To solve the challenge, we balance the performance by the unlikelihood loss. ### Unlikelihood Loss Inspired by the work (Welleck et al., 2020), we utilize unlikelihood loss to reduce over-fitting effects by penalizing predicted probabilities for frequent tokens. 
Firstly, we calculate the average predicted probability \(p(u)\) for each token \(u\) in a report, \[p(u)=\frac{\sum_{l=1}^{L}\log\left(p\left(u\mid y_{1},...,y_{l}\right)\right)}{L} \tag{2}\] Then, given a set of frequent tokens \(\mathcal{U}_{h}\) in a corpus, the unlikelihood loss punishes each token \(u\in\mathcal{U}_{h}\) by \[\mathcal{L}_{\text{UL}}(\mathcal{U}_{h})=-\sum_{u\in\mathcal{U}_{h}}\log(1-p\left(u\right)) \tag{3}\] To decrease \(\mathcal{L}_{\text{UL}}\), the model predicts lower probabilities \(p(u)\) on the frequent tokens \(u\). However, the unlikelihood loss has two issues: the choice of the frequent token set and the combination of optimization objectives, because statically fixing the frequent token set and equally combining the two optimization objectives (\(\mathcal{L}_{NLG}\) and \(\mathcal{L}_{\text{UL}}\)) are not ideal for the report generation task. For example, Section 5.4 shows that a static set of frequent tokens is not effective at reducing the token-frequency overfitting of report generators. To enable dynamic adaptation, we deploy reinforcement learning to select a frequent token set. We leverage the different training objectives by the joint optimization (Section 3.4). ### Dynamic Adaptation We deploy a reinforcement learning (RL) (Sutton et al., 1999) method allowing for a dynamic unlikelihood token set instead of a fixed set. The dynamic adaptation includes three major components: a policy network, a value network, and a reward. The dynamic adaptation paradigm is to adjust the unlikelihood token set and reduce token frequency effects during the report generation. Specifically, the NLG model is jointly optimized by the generation and unlikelihood losses, where the DA module decides the unlikelihood set. Then our imbalance reward can evaluate the improvements between the trained and untrained model with the unlikelihood loss. This reward helps the DA module to dynamically tune a better unlikelihood set, which can guide the NLG model to better training by balancing performance between infrequent and frequent tokens. Such a setting increases the flexibility and dynamics of adjusting balances between frequent and infrequent tokens. Policy Network, \(\pi_{\theta}\left(a_{t}\mid s_{t}\right)\), aims to predict a frequent token set as an action \(a_{t}\) from a state \(s_{t}\) at step \(t\). We take the dot product of the token distributions between the prediction in Eq. 2 and the ground truth in a sample as our state. The dot product can reflect total token-level prediction errors. The policy network feeds a state (\(s_{t}\)) into a fully connected layer and sigmoid function, and outputs a probability estimate for an action \(a_{t}\). The fully connected layer has the same input and output sizes as the vocabulary size. Finally, the policy network adopts Bernoulli sampling on the estimated probability vector and obtains the final unlikelihood token set \(\mathcal{U}\). The policy network identifies the frequent token set for each sample through learning stages. This setting can help the generation model dynamically adjust the tokens' weights to reduce overfitting during optimization. Value Network. The A2C algorithm (Sutton et al., 1999) utilizes a value network to guide the policy network's learning and reduce variance due to the environment's randomness. In our case, the NLG model may overfit to different tokens after each optimization step, which means the same policy can receive a different reward. To overcome such randomness, we also introduce a value network to help the policy network's stable learning. 
\(Q(s_{t},a_{t})\) predicts a reward given a state (\(s_{t}\)) and an action (\(a_{t}\)). In our case, \(Q(s,a)\) is a 1-D convolutional network with two input channels and one output channel. The kernel size was set to 10 in our experiment. The predicted reward is the average of the convolutional network output. Reward. We design a reward to combine generated report quality and to leverage the performance variations between frequent and infrequent tokens. Our reward consists of a text generation reward (\(R_{g}\)) from Eq. 1 and an imbalanced evaluation reward (\(R_{m}\)). The imbalanced evaluation divides the performance variations into three sets according to the tokens' frequency and calculates the F1 score for each token set, respectively. We denote \(F(\mathcal{U}_{l})\), \(F(\mathcal{U}_{m})\), and \(F(\mathcal{U}_{h})\) as the F1 scores of the low-, medium-, and high-frequency token sets, respectively. To avoid F1-score overestimation due to token repetitions, we restrict each token in the prediction to match at most one token in the ground truth. For example, if the prediction includes a repetitive token, such as "bone is is intact", and the ground truth is "the heart size is abnormal", the number of correctly predicted tokens is 1 according to our definition, because the correct token ("is") occurs only once in the ground truth. Then, we combine the pairwise absolute F1-score differences of \(F(\mathcal{U}_{l})\), \(F(\mathcal{U}_{m})\), and \(F(\mathcal{U}_{h})\) into our imbalanced reward \(R_{m}\): \[R_{m}=\frac{1}{3}\left(|F(\mathcal{U}_{h})-F(\mathcal{U}_{l})|+|F(\mathcal{U}_{m})-F(\mathcal{U}_{l})|+|F(\mathcal{U}_{h})-F(\mathcal{U}_{m})|\right)\]

Figure 2: Illustration of our proposed TIMER model. We use arrows to indicate model workflow. Blue arrows refer to loss value calculations by Equations 1 and 3. Parameter update processes follow red dotted lines. Our learning progress has inner and outer loops. In the inner loop, we update the NLG model by \(\mathcal{L}_{NLG}+\mathcal{L}_{UL}\). In the outer loop, we implement the dynamic adaptation (DA) via reinforcement learning and update the DA module (\(\eta\)) by the updated parameters (\(\theta_{new}\)) and the reward \(r_{t}\) in Eq. 4.

Compared to calculating the F1 score on a single token set, this method can alleviate a biased evaluation due to imbalanced token distribution, since \(R_{m}\) balances performance across different frequency token sets. We can formulate the final reward as follows: \[r_{t}=R_{g}(\theta_{new})+R_{m}(\theta_{new})-R_{g}(\theta)-R_{m}(\theta) \tag{4}\] where \(R_{g}\) is the text generation reward based on Eq. 1, and \(\theta\) refers to the model parameters. We include optimization steps in the following section to learn the policy network and value network with the reward. ### Joint optimization Our optimization includes _inner_ and _outer_ loops. In the inner loop, we update the natural language generation (NLG) model with parameters \(\theta\) according to the loss \(\mathcal{L}_{NLG}\) in Eq. 1 and the unlikelihood loss \(\mathcal{L}_{\text{UL}}\) in Eq. 3 as follows, \[\mathcal{L}_{inner}=\mathcal{L}_{NLG}(\theta)+\mathcal{L}_{\text{UL}}(\mathcal{U};\eta) \tag{5}\] \[\theta_{new}=\theta-\nabla_{\theta}\mathcal{L}_{inner} \tag{6}\] In the outer loop, we update the dynamic adaptation module (DA) with parameters \(\eta\) by the A2C algorithm (Sutton et al., 1999). 
The policy network learns how to interact with the environment within time \(t\) by minimizing the following expectation: \[\mathcal{L}_{policy}=-\mathbb{E}\left[\sum_{t}\log\pi_{\theta}\left(a_{t}\mid s_{t}\right)A(s_{t},a_{t})\right],\] where \(A(s_{t},a_{t})\) is an advantage estimate and equals the real advantage in expectation. A2C utilizes Temporal-Difference (TD) learning to calculate the advantage \(A(s_{t},a_{t})\), \[A(s_{t},a_{t})=r_{t}+\gamma Q\left(s_{t+1},a_{t+1}\right)-Q\left(s_{t},a_{t}\right),\] where \(r_{t}\) is the reward after taking the action \(a_{t}\) in the state \(s_{t}\), and \(Q\left(s_{t},a_{t}\right)\) is a value function. \(\gamma\) is a discount factor that denotes the trade-off between immediate rewards and future returns. We predict an action for each sample. Each sample has an individual unlikelihood token set, therefore each step's action is independent. Thus, we do not need to consider a future return, and the \(A(s_{t},a_{t})\) calculation can be simplified as follows, \[A(s_{t},a_{t})=r_{t}-Q\left(s_{t},a_{t}\right)\] Next, we optimize the value function \(Q(s_{t},a_{t})\). A2C trains a value network by minimizing the MSE loss of the TD error, \[\mathcal{L}_{value}=(r_{t}-Q\left(s_{t},a_{t}\right))^{2}\] Then, we add an entropy loss as a regularization term to promote action diversity of the policy function, \[\mathcal{L}_{entropy}=-\sum\text{P}(\pi_{\theta})\log\text{P}(\pi_{\theta}).\] We integrate the multiple losses together as the outer loss and update the DA module as follows: \[\mathcal{L}_{outer}=\mathcal{L}_{policy}+\mathcal{L}_{value}+\mathcal{L}_{entropy} \tag{7}\] The DA module parameter is optimized by \[\eta_{new}=\eta-\nabla_{\eta}\mathcal{L}_{\text{outer}}\ (\theta_{new}) \tag{8}\] We show the detailed optimization process in Algorithm 1. ``` Require: the training set x_s, maximum iterations I, inner-loop iterations N for i = 1 to I:     for n = 1 to N:         sample a batch from x_s         update θ via Eq. 5 and Eq. 6     sample a batch from x_s     update η via Eq. 7 and Eq. 8 ``` **Algorithm 1** Optimization Process of TIMER. ## 4 Experiment We follow the previous studies (Jing et al., 2018; Chen et al., 2020, 2021) for data preprocessing, data splits (training, development, and test splits), and model evaluations. We use natural language generation (NLG) metrics and clinical efficacy to evaluate our model and the baselines, such as BLEU (Papineni et al., 2002), METEOR (Denkowski and Lavie, 2011), and ROUGE-L (Lin, 2004). To evaluate clinical efficacy, we utilize CheXpert (Irvin et al., 2019) to annotate generated reports for MIMIC and IU X-RAY. While there are other evaluation methods (e.g., RadGraph (Jain et al., 2021) and CheXbert (Smit et al., 2020)), we deploy the same metrics as the baselines for consistency. More details of data preprocessing and our implementation are in the Appendix, which allows for experiment replication. ### Baselines To demonstrate the effectiveness of our model, we compare our approach with four state-of-the-art (SOTA) baselines3 that use the same experimental settings: BiLSTM (Jing et al., 2018), R2Gen (Chen et al., 2020), CMN (Chen et al., 2021), and CMM+RL (Qin and Song, 2022). To ensure fair comparisons, we utilized their open-sourced models and followed their experimental settings. Footnote 3: We chose the SOTA methods that achieved the best performance during our experimental steps. 
**Note** that this direction has evolved rapidly in recent years, and therefore there might be newer methods published during our study's submission and review.

**BiLSTM** (Jing et al., 2018) incorporates semantic tags into visual feature representations to generate radiology reports. To obtain the semantic tags, BiLSTM utilizes Convolutional Neural Networks (CNNs) to predict semantic tags and corresponding visual abnormality regions, which are later fed to a Long Short-Term Memory (LSTM) network for report generation. The model utilizes a co-attention mechanism to localize regions containing abnormalities and generate narrations.

**R2Gen** (Chen et al., 2020) designs a memory-driven Transformer, where a relational memory is used to capture critical information in the generation process. The model then proposes a memory-driven conditional layer normalization to incorporate the memory into the decoder of the Transformer. We reuse the code and released models from the study.

**CMN** (Chen et al., 2021) proposes a shared memory mechanism to record the alignment between images and texts so as to facilitate the interaction and generation across modalities. The memory mechanism is an intermediate medium and contains querying and mapping processes that enhance and smooth mappings between text and image representations. We use the authors' released code and model to reproduce the results.

**CMM+RL** (Qin and Song, 2022) proposes a method for enhancing text and image representation alignments using reinforcement learning (RL). The RL treats the generation model as the agent that interacts with an external environment (image and text representations). The method provides appropriate supervision from NLG evaluation metrics to search for better mappings between features from different modalities. Our TIMER model uses RL differently: our RL dynamically adapts to token imbalance instead of aligning modalities.

## 5 Results and Analysis

This section provides an overview of the performance by both natural language generation (NLG) metrics and clinical efficacy (Irvin et al., 2019). We also present an imbalanced evaluation focusing on high- and low-frequency token sets. Furthermore, we perform an ablation analysis to assess the impact of individual modules in our approach.

\begin{table}
\begin{tabular}{c||c c c c c c c}
Methods & BLEU\_1 & BLEU\_2 & BLEU\_3 & BLEU\_4 & Meteor & Rouge\_L & Clinical Metric \\
\hline \hline
\multicolumn{8}{c}{IU X-RAY} \\
\hline
BiLSTM & 41.83 & 29.30 & 21.27 & 15.49 & 18.75 & 34.26 & 65.06 \\
R2GEN & 48.80 & 31.93 & 23.24 & 17.72 & 20.21 & 37.10 & 63.62 \\
CMN & 45.53 & 29.50 & 21.47 & 16.53 & 18.99 & 36.78 & 64.83 \\
CMM+RL & 49.30 & 30.08 & 21.45 & 16.10 & 20.10 & 38.20 & 40.79 \\
TIMER & **49.34** & **32.49** & **23.84** & **18.61** & **20.38** & **38.25** & **94.40** \\
\hline
\(\Delta\) & 6.42 & 7.57 & 9.07 & 13.06 & 4.45 & 4.55 & 61.16 \\
\hline \hline
\multicolumn{8}{c}{MIMIC} \\
\hline
BiLSTM & 26.81 & 15.77 & 10.12 & 7.00 & 11.26 & 26.00 & 49.50 \\
R2GEN & 35.42 & 21.99 & 14.50 & 10.30 & 13.75 & 27.24 & 34.77 \\
CMN & 35.60 & 21.41 & 14.07 & 9.91 & 14.18 & 27.14 & 41.21 \\
CMM+RL & 38.10 & 22.10 & 14.45 & 10.02 & 14.53 & 27.66 & 28.36 \\
TIMER & **38.30** & **22.49** & **14.60** & **10.40** & **14.70** & **28.00** & **75.86** \\
\hline
\(\Delta\) & 11.27 & 9.66 & 9.01 & 10.50 & 8.64 & 3.54 & 49.30 \\
\end{tabular}
\end{table}
Table 2: Performance summary. \(\Delta\) indicates averaged percentage improvements of TIMER over baselines. Clinical Metric calculates the F1 score.
### Overall Performance

Table 2 presents the overall performance; detailed improvement percentages are shown in Table 5 in the Appendix. The results show that our approach significantly outperforms the baselines, with improvements ranging from 3.54% to 61.16%. Compared to BiLSTM, the Transformer-based models (R2GEN, CMN, CMM+RL) consistently perform well on all datasets among the four baseline generators. However, while CMM+RL's performance is on par with the other baselines, its BLEU_4 scores are relatively lower, indicating inefficiency in generating fluent sentences. One reason could be that CMM+RL employs a greedy sampling approach in self-critic learning, which fails to consider long-term returns. In contrast, our model significantly improves language fluency, achieving a 13.06% and 10.50% increase in BLEU_4 scores on IU-XRAY and MIMIC, respectively. Our model's most significant improvement is in the clinical metric, which we attribute to its focus on improving the prediction of infrequent tokens, particularly clinical vocabulary.

### Imbalanced Performance

Table 3 presents a performance evaluation on infrequent tokens, which demonstrates the effectiveness of our approach in learning under token imbalance. We use the F1 score to evaluate model performance on the high- and low-frequency token sets. We define the high-frequency set at three levels, 1/4, 1/6, and 1/8, where each level refers to the fraction of the most frequent vocabulary tokens treated as high-frequency; the rest are low-frequency tokens (see the code sketch below). This setting demonstrates the effectiveness of our approach in adapting to imbalanced tokens. Comparing against the baselines, we find that our method significantly outperforms them on the low-frequency token set, on both the IU X-RAY and MIMIC datasets, by 1% to 245.6%. Our model improves performance on rare tokens by over 100% on the IU X-RAY dataset and 41% on the MIMIC dataset, highlighting the importance of addressing the token imbalance issue. Neural models trained on skewed token distributions tend to overfit on frequent tokens, resulting in greater prediction errors for rare tokens. Furthermore, our method does not sacrifice model performance on the frequent token set. Compared to CMN, our approach achieves a 10.78% improvement on IU X-RAY and a 15.49% improvement on MIMIC with a ratio of 1/8.

### Ablation Analysis

We performed an ablation analysis to evaluate the effectiveness of individual modules in our model. We use the notation "-DA+UL" to denote removing the DA module and replacing it with a fixed frequent-token set for the unlikelihood loss. We empirically selected the top 100 frequent tokens in IU X-RAY and the top 600 frequent tokens in MIMIC as the token set sizes. We used the same evaluation metrics as in the previous sections and summarize the performance results in Table 4. Our results show that both the unlikelihood loss and the DA module improve model performance across all metrics, proving the effectiveness of the proposed modules in generating radiology reports. However, we observed that adding the DA module yields larger performance improvements than the unlikelihood loss. This suggests that dynamic adaptation contributes more to overall model robustness and infrequent-token adaptation.
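The frequency-bucket split behind the imbalanced evaluation can be sketched in a few lines; the scheme below is a minimal illustration of ranking the vocabulary by corpus frequency and taking the top fraction as the high-frequency set, with names of our own choosing.

```python
from collections import Counter

def split_vocab_by_ratio(corpus_tokens, ratio=1 / 8):
    """Return (high, low) token sets: the top `ratio` fraction of the most
    frequent vocabulary tokens is 'high'; the remainder is 'low'."""
    ranked = [t for t, _ in Counter(corpus_tokens).most_common()]
    k = max(1, int(len(ranked) * ratio))
    return set(ranked[:k]), set(ranked[k:])
```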
\begin{table}
\begin{tabular}{c l c c c c}
\hline \hline
 & & \multicolumn{2}{c}{IU X-RAY} & \multicolumn{2}{c}{MIMIC} \\
Ratio & Method & low & high & low & high \\
\hline \hline
\multirow{5}{*}{1/8} & Bi-LSTM & 3.85 & 47.31 & 1.27 & 37.48 \\
 & R2GEN & 4.46 & **62.73** & 2.52 & 52.01 \\
 & CMN & 5.88 & 55.86 & 2.23 & 45.60 \\
 & CMM + RL & 5.19 & 49.36 & 0.21 & 43.64 \\
 & TIMER & **13.23** & 61.89 & **3.15** & **52.66** \\
\hline
\multirow{5}{*}{1/6} & Bi-LSTM & 1.97 & 44.06 & 0.89 & 30.77 \\
 & R2GEN & 2.80 & 61.62 & **2.00** & 49.86 \\
 & CMN & 5.75 & 65.12 & 0.85 & **52.02** \\
 & CMM + RL & 5.08 & 49.26 & 0.14 & 43.36 \\
 & TIMER & **5.93** & **67.79** & **2.02** & 51.72 \\
\hline
\multirow{5}{*}{1/4} & Bi-LSTM & 0.00 & 44.41 & 0.28 & 30.09 \\
 & R2GEN & 1.16 & 59.98 & 0.00 & 48.77 \\
 & CMN & 2.60 & 63.92 & 0.33 & 51.09 \\
 & CMM + RL & 2.19 & 47.21 & 0.07 & 43.05 \\
 & TIMER & **8.66** & **64.00** & **0.58** & **51.39** \\
\hline \hline
\end{tabular}
\end{table}
Table 3: The imbalanced evaluation on the high- and low-frequency token sets. We evaluate the F1 score by dividing tokens into high- and low-frequency sets with three different bucket sizes (e.g., 1/8 represents the top 1/8 frequent tokens as the high-frequency set).

### Dynamic Adaptation Analysis

TIMER dynamically adjusts the size of the unlikelihood token set to improve generator performance. This ablation analysis evaluates dynamic adaptation by comparing dynamic and static adjustments. For the static adaptation, the size of the unlikelihood token set is fixed at different values for the IU X-RAY and MIMIC datasets. The results for the different fixed sizes are visualized in Figure 3. They show that the fixed sizes yield lower BLEU, METEOR, and ROUGE scores than dynamic adaptation, indicating that the size of the unlikelihood token set impacts generator performance. The effectiveness of dynamic adaptation provides further evidence that TIMER can improve overall performance while incorporating imbalanced-token adaptation.

### Qualitative Analysis

To further investigate the effectiveness of our model's generation, we performed a qualitative analysis by selecting three generated samples from the IU X-RAY and MIMIC-CXR datasets and comparing them with the ground truth, as shown in Figure 4. Our analysis revealed that TIMER can generate descriptions that closely match the ground truth. Furthermore, compared to CMN, TIMER generated more medical terms, such as "pleural effusion," "pneumothorax," and "mediastinal silhouette." Additionally, TIMER accurately diagnosed the medical condition and produced sentences semantically similar to the ground truth. For instance, the sentence "there are no acute bony findings" generated by TIMER is semantically similar to the ground truth sentence "bony structures are intact."

## 6 Related Work

**Radiology report generation** is the task of generating descriptive text from radiology images. An encoder-decoder network is the primary neural architecture for the task. For instance, (Jing et al., 2018) built a multi-task learning framework that employs a hierarchical LSTM model to generate long radiology reports and a co-attention mechanism to jointly perform tag prediction. Recent studies deployed the Transformer architecture (Vaswani et al., 2017) to improve task performance (Lovelace and Mortazavi, 2020; Chen et al., 2020, 2021).
For example, (Chen et al., 2020) proposed a relational memory to record key information in the generation process and applied memory-driven conditional layer normalization to incorporate the memory into the decoder of the Transformer. (Chen et al., 2021) designed a shared memory between the encoder and decoder to record the alignment between images and texts. Several recent works (Dalla Serra et al., 2022; Li et al., 2022; Yang et al., 2022) have enhanced the quality of text generation by developing models that learn clinical knowledge directly from reports. For instance, Yang et al. (2022) devised an automatic information extraction mechanism to extract clinical entities and relations directly from training reports, while (Dalla Serra et al., 2022) extracted entities and relations from images and then generated complete reports based on the extracted entities and relations. To better evaluate the factualness of generated reports, (Smit et al., 2020) trained a BERT-based model to label the diseases in the reports. More recently, some works (Miura et al., 2021; Delbrouck et al., 2022; Qin and Song, 2022) have incorporated reinforcement learning (RL) into radiology report generation to improve report quality. (Delbrouck et al., 2022) improved the quality of report generation by predicting more precise entities, while (Qin and Song, 2022) improved report generation performance by better aligning the images and text with RL. However, none of these studies have explicitly worked on token imbalance, which is the main focus of our study.

\begin{table}
\begin{tabular}{c c c c}
\hline \hline
 & \multicolumn{3}{c}{IU X-RAY} \\
\hline
 & -DA + UL & -DA & full \\
\hline
BLEU\_1 & 44.73 & 47.91 & 49.34 \\
BLEU\_2 & 29.08 & 30.60 & 32.49 \\
BLEU\_3 & 21.08 & 21.76 & 23.84 \\
BLEU\_4 & 16.36 & 15.98 & 18.61 \\
METEOR & 18.72 & 19.73 & 20.38 \\
ROUGE\_L & 35.41 & 36.41 & 38.25 \\
F1 & 82.43 & 89.72 & 94.40 \\
\hline \hline
 & \multicolumn{3}{c}{MIMIC} \\
\hline
 & -DA + UL & -DA & full \\
\hline
BLEU\_1 & 34.02 & 35.31 & 38.30 \\
BLEU\_2 & 20.90 & 21.02 & 22.49 \\
BLEU\_3 & 13.99 & 13.73 & 14.60 \\
BLEU\_4 & 10.02 & 9.54 & 10.40 \\
METEOR & 13.76 & 14.03 & 14.69 \\
ROUGE\_L & 27.43 & 26.66 & 28.00 \\
F1 & 68.86 & 70.28 & 75.86 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Ablation analysis. “-DA+UL” denotes removing DA and using the unlikelihood loss with a fixed token set. “-DA” denotes the model trained with the negative log-likelihood loss in Eq. 1 without the unlikelihood loss. F1 is the clinical metric.

Figure 3: The performance comparison with different sizes of unlikelihood token sets. F1 is the clinical metric.

**Reinforcement Learning (RL)** in NLG tasks is used to improve sequence prediction (Shi et al., 2018; Bahdanau et al., 2016; Hao et al., 2022). The actor-critic approach proposed by Bahdanau et al. considers the task objective during training, improving on maximum likelihood training but suffering from sparse rewards. To address this, Shi et al. use maximum-entropy Inverse Reinforcement Learning to mitigate the issue of reward sparsity. Wu and Huang (2022) employ reinforcement learning to address label imbalance issues across various domains; however, theirs is a classification task, and it is challenging to apply the method directly to generation tasks. In this study, we employ an actor-critic approach to optimize our model, but instead of applying it to the NLG model directly, we propose a novel learning strategy that updates the unlikelihood token set dynamically.
While (Nishino et al., 2020) applies RL to radiology report generation, a key difference is that they focus on document-label imbalance rather than token imbalance, which is the primary target of our study.

**Imbalance modeling** refers to the task of modeling skewed distributions. Various strategies have been proposed to handle imbalanced data in natural language processing tasks, such as oversampling, under-sampling, and few-shot techniques (Tian et al., 2021; Yang et al., 2020). While existing solutions focus on classification tasks, those techniques may not apply to radiology report generation, a multimodal image-to-text generation task in healthcare.

Figure 4: Qualitative comparison between TIMER and CMN. We highlight correct predictions of infrequent tokens. We set the top 20% of tokens in the vocabulary as frequent and the rest as infrequent tokens.

In radiology report generation, the imbalance between normal and abnormal samples has been addressed in prior studies through methods such as data augmentation with reinforcement learning (Nishino et al., 2020) and using separate LSTMs for abnormal and normal sentence generation (Harzig et al., 2019). However, no previous study has focused explicitly on imbalanced token distributions in medical report generation. In this work, we propose a novel approach that leverages reinforcement learning techniques to address this issue.

## 7 Conclusion

In this study, we have demonstrated the importance of infrequent tokens in radiology report generation and proposed a reinforcement-learning method to adapt to token imbalance. We demonstrate the effectiveness of our approach (TIMER) over state-of-the-art baselines on radiology report generation. TIMER automatically penalizes overfitting on frequent tokens and dynamically adjusts rewards on infrequent token generation. Extensive experiments and ablation analysis show that TIMER obtains significant improvements on infrequent token generation while maintaining performance on frequent tokens across multiple evaluation metrics. While we evaluate our approach on radiology report generation, we expect it to be broadly applicable to text generation tasks and will extend its applications in our future work.

### Limitations

While we have demonstrated the effectiveness of our proposed method, three primary limitations must be acknowledged to appropriately interpret our evaluation results: task, preprocessing, and human evaluation.

**Task.** We primarily focus on the task of radiology report generation, which is only one of the downstream evaluations in the text generation field. Experiments on the radiological benchmarks may not generalize to all text generation tasks, and infrequent tokens may not always contain critical and complex medical terms. In this study, to ensure consistency, we compare with the SOTA baselines on the same radiology report generation datasets. **Note** that this task has evolved rapidly in recent years, and therefore there might be newer methods published during our study's publication process.

**Preprocessing.** We utilize the datasets and follow the same preprocessing steps from the previous study (Chen et al., 2020). Existing studies (Nguyen et al., 2021; Qin and Song, 2022) may have variations in their preprocessing steps that can directly impact the final results. For example, (Nguyen et al., 2021) selected the top 900 frequent tokens for performance evaluations, which shows a significant improvement on frequent tokens.
To ensure a consistent comparison, we keep the same experimental settings and deploy the released models from the baselines (Jing et al., 2018; Chen et al., 2020, 2021) that reported state-of-the-art performance on the same datasets. Our future work will develop more comprehensive evaluations of rare tokens across the existing models.

**Human Evaluation.** It is necessary to invite radiologists for human evaluations. However, we did not include this approach due to _subjectivity_ and _domain_ challenges. A common approach is to sample a limited number of generated reports from a pair of methods (Miura et al., 2021; J Kurisinkel et al., 2021). However, the same baselines with different human evaluators may yield varied evaluation results (Liu et al., 2021, 2021). It is a challenge to recruit enough certified radiologists to evaluate the task. We conducted a preliminary evaluation via the qualitative analysis in Figure 4, though we lack sufficient support from radiologists. Conducting human evaluations will be our future work to provide more comprehensive perspectives.

## Ethics and Privacy Concerns

We follow data agreements and training procedures to access the two radiology report datasets. To protect user privacy, we have followed the corresponding data agreements to ensure proper data usage and experimented with de-identified data. Our experiments do not store any data and only use available multimodal entries for research demonstrations. Due to privacy and ethical considerations, we will not release any clinical data associated with patient identities. Instead, we will release our code and provide detailed instructions to replicate our analysis and experiments. This study only uses publicly available and de-identified data.

## Acknowledgments

The authors want to thank the anonymous reviewers for their constructive suggestions. This work was supported by a gift from West Cancer Foundation, a Ralph E. Powe Junior Faculty Enhancement Award, and the National Science Foundation with award number IIS-2245920. With gifts from Adobe Research, we purchased a GPU workstation for this study.
2307.10184
A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives
Backdoor attacks pose serious security threats to deep neural networks (DNNs). Backdoored models make arbitrarily (targeted) incorrect predictions on inputs embedded with well-designed triggers while behaving normally on clean inputs. Many works have explored the invisibility of backdoor triggers to improve attack stealthiness. However, most of them only consider the invisibility in the spatial domain without explicitly accounting for the generation of invisible triggers in the frequency domain, making the generated poisoned images be easily detected by recent defense methods. To address this issue, in this paper, we propose a DUal stealthy BAckdoor attack method named DUBA, which simultaneously considers the invisibility of triggers in both the spatial and frequency domains, to achieve desirable attack performance, while ensuring strong stealthiness. Specifically, we first use Discrete Wavelet Transform to embed the high-frequency information of the trigger image into the clean image to ensure attack effectiveness. Then, to attain strong stealthiness, we incorporate Fourier Transform and Discrete Cosine Transform to mix the poisoned image and clean image in the frequency domain. Moreover, the proposed DUBA adopts a novel attack strategy, in which the model is trained with weak triggers and attacked with strong triggers to further enhance the attack performance and stealthiness. We extensively evaluate DUBA against popular image classifiers on four datasets. The results demonstrate that it significantly outperforms the state-of-the-art backdoor attacks in terms of the attack success rate and stealthiness
Yudong Gao, Honglong Chen, Peng Sun, Junjian Li, Anqing Zhang, Zhibo Wang
2023-07-03T12:28:44Z
http://arxiv.org/abs/2307.10184v1
# A Dual Stealthy Backdoor: From Both Spatial and Frequency Perspectives

###### Abstract.

Backdoor attacks pose serious security threats to deep neural networks (DNNs). Backdoored models make arbitrarily (targeted) incorrect predictions on inputs embedded with well-designed triggers while behaving normally on clean inputs. Many works have explored the invisibility of backdoor triggers to improve attack stealthiness. However, most of them only consider the invisibility in the spatial domain without explicitly accounting for the generation of invisible triggers in the frequency domain, making the generated poisoned images be easily detected by recent defense methods. To address this issue, in this paper, we propose a **DU**al stealthy **BA**ckdoor attack method named **DUBA**, which simultaneously considers the invisibility of triggers in both the spatial and frequency domains, to achieve desirable attack performance, while ensuring strong stealthiness. Specifically, we first use Discrete Wavelet Transform to embed the high-frequency information of the trigger image into the clean image to ensure attack effectiveness. Then, to attain strong stealthiness, we incorporate Fourier Transform and Discrete Cosine Transform to mix the poisoned image and clean image in the frequency domain. Moreover, the proposed DUBA adopts a novel attack strategy, in which the model is trained with weak triggers and attacked with strong triggers to further enhance the attack performance and stealthiness. We extensively evaluate DUBA against popular image classifiers on four datasets. The results demonstrate that it significantly outperforms the state-of-the-art backdoor attacks in terms of the attack success rate and stealthiness.

Backdoor Attack, DNNs, Dual Stealthy, Spatial and Frequency
The work in [33] revealed that most backdoor triggers are high-frequency semantics, and it trained a DNN model that classifies images in the frequency domain to effectively defend against most attacks. That is, most backdoors are perceptible in the frequency domain. Therefore, it is essential to ensure the trigger's invisibility in both the spatial and frequency domains to launch a powerful yet stealthy backdoor attack. A few recent works began to study how to implant stealthy backdoor attacks from the frequency perspective. For example, the work in [33] employs a low-pass filter to implant a backdoor that is invisible in the frequency domain but visible in the spatial domain. Motivated by the above discussions, in this paper, we propose a **DU**al stealthy **BA**ckdoor attack called **DUBA**, which crafts invisible triggers in both the spatial and frequency domains while achieving desirable attack performance. Specifically, we first embed the high-frequency information of the trigger image into the clean image by discrete wavelet transform (DWT), yielding the initial poisoned image. Then, to ensure strong stealthiness, we fuse the initial poisoned image (which carries the high-frequency trigger) with the clean image in the Fourier and Cosine transform domains. Furthermore, we propose an attack strategy that greatly reduces the embedded high-frequency information of the trigger image and randomly masks more parts of the trigger image in the training phase, so that the victim model learns the trigger better.
The major contributions of this paper are summarized as follows:

* We design a **DU**al stealthy **BA**ckdoor attack named DUBA that achieves desirable invisibility in both the spatial and frequency domains by embedding high-frequency trigger information through DWT and smoothing it in the Fourier Transform and Cosine Transform domains.
* We propose a novel attack strategy for DUBA where the model is trained with weak triggers and attacked with strong triggers to attain satisfactory attack performance while ensuring stealthiness.
* We conduct extensive experimental evaluation of DUBA on four datasets and popular models. The results demonstrate its outstanding performance in terms of both attack effectiveness and stealthiness.

## 2. Related Work

### Backdoor Attack

Backdoor attacks have drawn wide attention since their introduction [24]. According to the trigger generation method, existing backdoor attacks can be roughly divided into two categories, i.e., spatial domain backdoors and frequency domain backdoors.

**Spatial Domain Backdoors.** BadNets [11] first revealed the existence of backdoors in DNNs. This attack embeds a visible square in the bottom right corner of the clean image and manipulates the associated label to the target label. Then, the backdoor can be injected into the model after training on the poisoned data. In the inference phase, input images with the same trigger will be misclassified into the attacker-chosen target label. Inspired by BadNets, researchers have also investigated other backdoor attacks. Blend [1] advocates image-blending backdoors, whereas another work [15] employs a fixed watermark as a trigger to insert backdoors. However, these early backdoors are all visually perceptible and thus can be easily detected and removed. Therefore, how to generate and implant visually invisible backdoors has recently become a hot research topic. For example, ISSBA [18] embeds trigger information by steganography; WaNet [22] crafts triggers via warping fields; and LIRA [6] searches for triggers in a highly nonlinear parameter space. Though these methods can successfully generate invisible triggers and bypass mainstream backdoor defenses, none of them explicitly account for the characteristics of the image in the frequency domain. Thus, these backdoor attacks can be easily detected by models empowered by the Fourier transform (which is often employed as a part of the task pipeline) or by frequency-oriented defense methods.

**Frequency Domain Backdoors.** Recently, [33] started to explore backdoor attacks in the frequency domain. To avoid high-frequency artifacts after the Discrete Cosine Transform (DCT) [3], it applies a low-pass filter to generate a smooth trigger. However, this method yields visible artifacts in the spatial domain. FIBA [7] crafts triggers in the frequency domain by mixing the low-frequency components of two images after the Fast Fourier Transform (FFT) [21], which is visually imperceptible in the spatial domain but still visible in the frequency domain. Another work, FTROJAN [29], first transforms the clean image with YUV or UV (two color coding methods), then applies DCT with modifications on the high-frequency or mid-frequency components to generate the poisoned image. However, the required transformations and the frequency components to be modified differ across images, thus increasing the computation overhead. Moreover, the trigger generated by FTROJAN is also visible in the frequency domain.
### Backdoor Defense

To defend DNNs against various backdoor attacks, researchers have proposed many defense methods [19; 30]. Generally, backdoor defenses can be categorized into input-based, model-based, and output-based methods.

**Input-based Defenses.** Input-based defenses focus on input abnormalities [2; 33]. Grad-Cam [25] uses a saliency map to dissect the regions of the input image that the model focuses on. If the model does not focus on the object or keeps focusing on the same region, the image is considered poisoned. FTD [33] employs DCT to distinguish whether the input image has high-frequency artifacts. It uses a DNN-based discriminator to classify images with high-frequency artifacts as poisoned images.

**Model-based Defenses.** These methods focus on the investigation of the victim model [16]. Fine-Pruning [20] mitigates backdoors by pruning dormant neurons, since it is likely that these neurons provide specialized support to backdoors. Neural Cleanse [28] identifies whether there is a backdoor in the model by reverse-engineering the triggers and utilizes anomaly detection to determine the most likely backdoor.

**Output-based Defenses.** Defense methods of this type observe output anomalies [8; 14]. STRIP [8] superimposes various image patterns on the suspicious image to observe its output. Higher poisoning odds yield lower output randomness.

To circumvent the existing backdoor defenses from both the spatial and frequency domains, in this work, we aim to craft a powerful backdoor attack that is invisible in both domains.

## 3. Methodology

### Threat Model

**Attacker's Capabilities.** In the training phase, following prior studies (Krishnan et al., 2017; Wang et al., 2018), we consider that the attacker is only allowed to tamper with a part of the training data, but has no access to other model training components (e.g., the victim model architecture and loss function). In the inference phase, the attacker can only manipulate the input images (i.e., embedding the crafted backdoor trigger). Such a threat model arises in many real-world scenarios, such as the outsourcing of model training to third parties.

**Attacker's Goals.** Generally, an effective backdoor attack should mislead the model into making arbitrarily (targeted) incorrect predictions on the tampered testing images without compromising the model's performance on normal inputs. Furthermore, a powerful backdoor attack should satisfy the following two objectives:

* Invisibility. The poisoned images should be invisible in both the spatial and frequency domains.
* Robustness. The backdoor attack can successfully circumvent state-of-the-art defense methods.

### Problem Formulation

We focus on typical supervised image classification, which is widely used in face recognition, traffic signal recognition, and other security-sensitive fields. Formally, image classification can be described as a mapping function \(f_{\theta}:\mathcal{X}\to C\), where \(\mathcal{X}\) is the input domain and \(C\) is the set of target classes. The model parameters \(\theta\) can be learned from the training dataset \(D_{\text{train}}=\{(x_{i},y_{i})\}_{i=1}^{N}\) of \(N\) data samples, where \(x_{i}\in\mathcal{X}\), \(y_{i}\in C\). The core of backdoor attacks is to craft a set of poisoned data samples \(D_{\text{poison}}=\{(T\,(x_{i}),\,\gamma\,(y_{i}))\}_{i=1}^{M}\), where \(T(\cdot)\) denotes the trigger implantation method and \(\gamma(\cdot)\in C\) represents the designated target label.
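As a concrete reading of this notation, here is a minimal sketch of assembling \(D_{\text{poison}}\) from a trigger implantation function \(T\) and a label map \(\gamma\); the helper names and the constant target class are illustrative assumptions, not the authors' code.

```python
import random

def poison_dataset(train_set, T, gamma, rho):
    """Apply the trigger T and label map gamma to a fraction rho = M/N of
    the training pairs (x, y); the rest of the data is left clean."""
    m = int(rho * len(train_set))
    chosen = set(random.sample(range(len(train_set)), m))
    return [(T(x), gamma(y)) if i in chosen else (x, y)
            for i, (x, y) in enumerate(train_set)]

# All-to-One attack: gamma(y) = c for a fixed target class c
all_to_one = lambda y, c=0: c
```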
Specifically, \(\gamma\,(y_{i})=c\), where \(c\) is a constant, stands for the All-to-One attack, while \(\gamma\,(y_{i})=y_{i}+1\) represents the All-to-All attack. In summary, backdoor attacks aim to manipulate a subset of the training data (note that \(M<N\), and \(\rho=\frac{M}{N}\) is the poisoning ratio) by injecting adversarial triggers such that the model trained on the tampered dataset yields the following behaviors when deployed:

\[f_{\theta}\left(x_{i}\right)=y_{i},\quad f_{\theta}\left(T\,(x_{i})\right)=\gamma\,(y_{i}). \tag{1}\]

In this paper, we focus on designing the trigger implantation method \(T(\cdot)\).

### The Proposed Attack

**Overview of DUBA.** Figure 2 shows the framework of DUBA, which is composed of three steps and an attack strategy. First, to attain desirable backdoor attack performance, we employ DWT to embed the high-frequency information of a fixed trigger image into the clean image to generate the initial poisoned image. Second, to ensure strong stealthiness in both the spatial and frequency domains, we incorporate FFT and DCT to mix the initial poisoned image with the clean image to generate the intermediate poisoned image. Third, to ensure that the victim model learns the backdoor scattered over the entire poisoned image while maintaining good invisibility, we propose to randomly mask the trigger of the intermediate poisoned image to get the final poisoned image. Besides, we propose an attack strategy where the victim model is trained with weak triggers and attacked with strong triggers to achieve higher attack success rates (ASRs).

Figure 2. Schematic diagram of DUBA. Step 1: embedding the high-frequency information of the trigger image into the clean image by DWT. Note that the initial trigger \(x_{p}\) is a randomly selected image. Step 2: smoothing the high-frequency trigger in the FFT and DCT domains. Step 3: random trigger masking. In the training phase, we embed weaker high-frequency triggers and mask more pixels of the trigger image. In the attack phase, we use stronger high-frequency triggers to launch the attack.

**Step 1: High-Frequency Information Embedding.** Inspired by prior works (Zhu et al., 2017; Wang et al., 2018) that utilize high-frequency semantic information for trigger embedding, we propose to extract the high-frequency information from a fixed image (different from the image to be poisoned) as the initial trigger. Specifically, we employ DWT for high-frequency information extraction, given that DWT can finely dissect images at high frequencies but coarsely analyze images at low frequencies. Formally, as shown in Step 1 of Figure 2, given a clean image \(x_{c}\) and a random initial trigger image \(x_{p}\), the idea is to embed the high-frequency part of \(x_{p}\) into the deep high-frequency region of \(x_{c}\). DWT decomposes an image \(x\) into one low-frequency part and three high-frequency parts, which are represented as:

\[W\left(x\right)=\left\{L,H_{1},H_{2},H_{3}\right\}, \tag{2}\]

where \(L\) represents the low-frequency approximate component, while \(H_{1}\), \(H_{2}\), and \(H_{3}\) denote the high-frequency components in the vertical, diagonal, and horizontal directions, respectively. Accordingly, the image can also be recovered by the inverse discrete wavelet transform (IDWT) \(W^{-}\):

\[W^{-}\left(L,H_{1},H_{2},H_{3}\right)=x. \tag{3}\]
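To make Eqs. 2–3 concrete, here is a minimal round-trip sketch with PyWavelets; the wavelet choice (`haar`) is our assumption, as the paper does not state which basis is used, and PyWavelets' own ordering of the detail sub-bands may differ from the \(H_{1},H_{2},H_{3}\) convention above.

```python
import numpy as np
import pywt

x = np.random.rand(64, 64)                 # a single-channel image

# W(x) = {L, H1, H2, H3}: one approximation plus three detail sub-bands
L, (H1, H2, H3) = pywt.dwt2(x, "haar")

# W^-(L, H1, H2, H3) = x: the IDWT reconstructs the original image
x_rec = pywt.idwt2((L, (H1, H2, H3)), "haar")
assert np.allclose(x, x_rec)
```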
To embed a sufficiently hidden trigger, we apply three DWTs to the clean image \(x_{c}\), which is expressed as:

\[\left\{L_{i+1},H_{i+1,1},H_{i+1,2},H_{i+1,3}\right\}=W\left(L_{i}\right),\quad i=0,1,2, \tag{4}\]

where \(L_{0}\) equals \(x_{c}\) and \(i\) stands for the \(i\)-th DWT. Then, we apply one DWT to the trigger image \(x_{p}\). Note that since the image size changes after DWT and we perform three DWTs on \(x_{c}\), the trigger image \(x_{p}\) first needs to be resized into two different sizes, which are referred to as \(x_{p1}\) and \(x_{p2}\). Then, for both \(x_{p1}\) and \(x_{p2}\), we apply DWT once to obtain two different sets of high-frequency trigger information:

\[\begin{cases}\left\{LP_{1},HP_{1,1},HP_{1,2},HP_{1,3}\right\}=W(x_{p1})\\ \left\{LP_{1}^{\prime},HP_{1,1}^{\prime},HP_{1,2}^{\prime},HP_{1,3}^{\prime}\right\}=W(x_{p2})\end{cases}. \tag{5}\]

Next, we embed the high-frequency information of the trigger image, i.e., \(HP_{1,j}\) and \(HP_{1,j}^{\prime}\) (\(j=1,2,3\)), into different high-frequency parts of the clean image \(x_{c}\). Formally, we have:

\[\left\{\begin{array}{l}H_{3,j}^{\prime}=H_{3,j}\times\alpha+HP_{1,j}^{\prime}\times(1-\alpha)\\ H_{2,j}^{\prime}=H_{2,j}\times\beta+HP_{1,j}\times(1-\beta)\end{array}\right.,\quad j=1,2,3, \tag{6}\]

where \(\alpha\) and \(\beta\) indicate the embedding intensity. With \(H_{3,j}^{\prime}\) and \(H_{2,j}^{\prime}\), the initial poisoned image \(P_{i}\) can be derived using the IDWT as:

\[L_{i}^{\prime}=W^{-}\left(L_{i+1}^{\prime},H_{i+1,1}^{\prime},H_{i+1,2}^{\prime},H_{i+1,3}^{\prime}\right),\quad i=0,1,2, \tag{7}\]

where \(L_{3}^{\prime}\) equals \(L_{3}\), \(H_{1,j}^{\prime}\) equals \(H_{1,j}\), and the final low-frequency component \(L_{0}^{\prime}\) is exactly the poisoned image generated in Step 1, which is denoted as \(P_{i}\). Considering that the high-frequency information is stealthier than the low-frequency information, the above method can generate an almost invisible backdoor image in the spatial domain (in fact, this depends on the intensities \(\alpha\) and \(\beta\); we discuss their effect on the visual quality in the ablation experiments). In what follows, we aim to answer the question of how to craft a poisoned image that is also invisible in the frequency domain.

**Step 2: Frequency Domain Smoothing.** Previous studies [10, 21] have shown that the phase spectrum of an image after FFT retains information about the edges and the overall structure of the image, which captures high-level semantic information. Meanwhile, the amplitude spectrum carries the underlying semantic information, preserving the frequency information [7]. Besides, it has been observed in [32] that changes in the amplitude spectrum do not significantly affect the perception of high-level semantics. Moreover, since the result of the FFT is in the complex domain, while the image we usually observe in the frequency domain is actually the amplitude spectrum, we choose a straightforward yet effective approach. Specifically, to ensure both the backdoor attack performance and the amplitude spectrum's invisibility, we directly swap the amplitude spectrums of \(P_{i}\) and \(x_{c}\) after FFT.
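The amplitude swap just described (formalized in Eqs. 8–9 below) is a few lines of NumPy; this sketch assumes single-channel float arrays and is our illustration, not the authors' code.

```python
import numpy as np

def swap_amplitude(p_i, x_c):
    """Combine the clean image's amplitude spectrum with the poisoned
    image's phase spectrum, then invert the FFT (cf. Eqs. 8-9)."""
    s_c = np.abs(np.fft.fft2(x_c))          # amplitude of the clean image
    p_p = np.angle(np.fft.fft2(p_i))        # phase of the poisoned image
    p_mi = np.fft.ifft2(s_c * np.exp(1j * p_p))
    return np.real(p_mi)                    # drop the tiny imaginary residue
```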
Formally, as shown in Step 2 of Figure 2, let \(F^{S}\left(\cdot\right)\) and \(F^{P}\left(\cdot\right)\) be the amplitude and phase components of the FFT result; the amplitude and phase spectra of \(P_{i}\) and \(x_{c}\) after FFT are obtained as:

\[\left\{\begin{array}{l}S^{c}=F^{S}\left(x_{c}\right),\;P^{c}=F^{P}\left(x_{c}\right)\\ S^{p}=F^{S}\left(P_{i}\right),\;P^{p}=F^{P}\left(P_{i}\right)\end{array}\right. \tag{8}\]

Then the smoothed poisoned image \(P_{mi}\) is calculated from the amplitude spectrum of \(x_{c}\) and the phase spectrum of \(P_{i}\). Formally,

\[P_{mi}=F^{-}\left(S^{c},P^{p}\right), \tag{9}\]

where \(F^{-}(\cdot)\) represents the inverse FFT. We would like to note that the FFT-based smoothing is conducted in the complex domain. Thus, although the spectrogram of the image after FFT-based smoothing is theoretically hidden, the image may still be perceivable in the spatial domain (see the ablation experiments for a visual demonstration). Moreover, the poisoned image may still be detected by a DCT-based defense, which is unacceptable even though the detection probability is low. To address this issue, we next incorporate the DCT, which is a special case of the FFT in the real domain, to fuse \(P_{mi}\) and \(x_{c}\). Due to the linearity of the DCT, fusing the two images after only one DCT and then inverting the fused result is equivalent to directly fusing the two images, which cannot achieve the purpose of deep smoothing. Therefore, we apply two DCTs on \(P_{mi}\) and \(x_{c}\) to achieve deeper information fusion. Let \(D\) be the DCT and \(D^{-}\) be the inverse DCT (IDCT); we obtain the deep information in the DCT domain as follows:

\[D_{c}^{k}=D\left(D_{c}^{k-1}\right),\;D_{p}^{k}=D\left(D_{p}^{k-1}\right),\quad k=1,2, \tag{10}\]

where \(k\) stands for the \(k\)-th DCT, \(D_{c}^{0}\) equals \(x_{c}\), and \(D_{p}^{0}\) equals \(P_{mi}\). As shown in Figure 2, the DCT smoothing is then implemented according to:

\[D_{p}^{k-1}=D^{-}\left[D_{p}^{k}\times\lambda+D_{c}^{k}\times(1-\lambda)\right],\quad k=2,1, \tag{11}\]

where \(\lambda\) indicates the fusing intensity. After two steps of IDCT, we obtain the new \(D_{p}^{0}\), which is exactly the intermediate poisoned image (denoted as \(P_{m}\)) generated in Step 2.
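A minimal sketch of the two-level DCT fusion in Eqs. 10–11, using SciPy's n-dimensional DCT; the orthogonal normalization and the helper name are our assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_fuse(p_mi, x_c, lam=0.7):
    """Fuse P_mi and x_c at two DCT depths (Eqs. 10-11): transform both
    twice, then blend with weight lam and invert, depth by depth."""
    d_p, d_c, stack = p_mi, x_c, []
    for _ in range(2):                       # D_p^k, D_c^k for k = 1, 2
        d_p, d_c = dctn(d_p, norm="ortho"), dctn(d_c, norm="ortho")
        stack.append(d_c)
    for d_c in reversed(stack):              # blend at k = 2, then at k = 1
        d_p = idctn(lam * d_p + (1 - lam) * d_c, norm="ortho")
    return d_p                               # the intermediate poisoned image P_m
```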
**Step 3: Random Trigger Masking.** To ensure that the victim model learns the backdoor scattered over the entire image while preserving the desirable attack stealthiness in both the spatial and frequency domains, we propose to randomly mask the trigger image. As shown in Step 3 of Figure 2, we first obtain the trigger embedded in \(P_{m}\) by subtracting \(x_{c}\) from \(P_{m}\), and then randomly mask it. Finally, we embed the masked trigger pattern back into the clean image, yielding the final poisoned image \(P_{f}\).

**Attack Strategy Design.** We further devise an attack strategy for DUBA to enhance both attack performance and stealthiness. Specifically, in the training phase, we adopt a weak trigger pattern via two operations, i.e., shrinking the values of \(\alpha\) and \(\beta\) as much as possible and masking more pixel points of the trigger image in Step 3. We employ a strong trigger pattern in the inference phase, which means that we make \(\alpha\) and \(\beta\) as large as possible while ensuring the trigger's invisibility, and mask fewer pixel points in Step 3. Moreover, considering that the triggers can become visible when the pixel values of points in the clean image are close to \(0\) or \(255\), we further mask the corresponding regions in the trigger image. In particular, in the training phase, we appropriately expand such regions for better attack stealthiness.

## 4. Experiments

### Experimental Settings

**Datasets.** To evaluate the performance of DUBA on different tasks, we conduct experiments on four different datasets: 1) Cifar10, 2) Gtsrb, 3) ImageNet (Dong et al., 2017), and 4) Fer2013 (Fang et al., 2017). Cifar10 and ImageNet are object classification datasets that include horses, aircraft, and other objects. Fer2013 is a facial expression recognition dataset, while Gtsrb is a traffic signal recognition dataset. We present the details of these datasets in Table 1. Note that ImageNet is too large, so we only use a subset of it.

\begin{table}
\begin{tabular}{c c c c}
\hline \hline
**Datasets** & **Training/Testing Size** & **Labels Size** & **Image Size** \\
\hline
Cifar10 & 50000/10000 & 10 & 32\(\times\)32 \\
\hline \hline
\end{tabular}
\end{table}
Table 1. Details of the datasets.

**Models.** We conduct experiments on three models: ResNet18 (He et al., 2016), RepVGG (Cifar et al., 2017), and Conformer (Liu et al., 2017). ResNet18 is a classic classification model. RepVGG is the latest VGG-style model. Conformer is the latest transformer model for image classification.

**Baseline Backdoor Attacks.** We compare DUBA with BadNets (He et al., 2016), Blend (Bengio et al., 2017), WaNet (Wang et al., 2018), and FIBA (Fang et al., 2017). BadNets and Blend are representative visible backdoor attacks. WaNet is the latest invisible backdoor attack in the spatial domain, while FIBA is the latest backdoor attack proposed from the frequency perspective.

**Evaluation Metrics.** We evaluate DUBA and compare it with the baselines from two perspectives, i.e., attack performance and attack stealthiness. For attack performance evaluation, we employ the attack success rate (ASR), which is defined as the proportion of poisoned examples that are misclassified as the target label among all poisoned examples used for testing. Additionally, we utilize the benign accuracy (BA) to characterize the model's performance on clean testing data. For attack stealthiness evaluation, we use the following similarity metrics: peak signal-to-noise ratio (PSNR) (Wang et al., 2018), structural similarity (SSIM) (Liu et al., 2017), and learned perceptual image patch similarity (LPIPS) (Wang et al., 2019). There are some correlations among these three metrics: generally speaking, increased PSNR and SSIM, together with decreased LPIPS, indicate better trigger invisibility.

**Implementation Details.** In our experiments, we randomly select an image of a dog's ear as the initial trigger image. In the training phase, we set both \(\alpha\) and \(\beta\) to 0.4 and mask the regions in the trigger image where the corresponding pixel points of the clean image are lower than 30 or higher than 220. In the attack phase, we set \(\alpha\) and \(\beta\) both to 0.6, set \(\lambda\) to 0.7, and mask the regions in the trigger image corresponding to pixel points lower than 5 or higher than 245 in the clean image. We employ the SGD optimizer to train the victim model for 200 epochs. The learning rate is set to 0.01 with a decay factor of 0.1 and decay epochs of 50, 100, and 150. The batch size is configured as 64. Following other studies (Fang et al., 2017; Li et al., 2017), all attack settings are All-to-One attacks, which is sufficient for evaluating attack effectiveness. The defense experiments are all conducted on the RepVGG model.

### Attack Performance Evaluation

**Attack Effectiveness.** We evaluate the effectiveness of different backdoor attacks with the ASR and BA. The relevant results are summarized in Table 2, which shows that our proposed DUBA achieves higher or comparable ASRs on most datasets and models. In some cases, such as the experiments on ImageNet, the ASRs of DUBA are slightly lower than those of BadNets. Considering that the trigger crafted by DUBA is invisible in both the spatial and frequency domains (which will be validated next), such a result is acceptable. Besides, DUBA only incurs a negligible loss (lower than 1%) of BA compared with the clean benchmark. The above results show that DUBA achieves desirable attack effectiveness.

**Attack Stealthiness.** Now we examine the stealthiness of different backdoor attacks. Figure 1 shows the poisoned images of different methods, and Figure 3 provides more visual comparisons between clean images and images poisoned by DUBA. Compared with other methods, DUBA achieves the best invisibility in both the spatial and frequency domains. The backdoor generated by DUBA is visually invisible in the spatial domain, and the residual image in the frequency domain is also close to a pure black image, indicating that the poisoned image in the frequency domain is very similar to the clean image. In Table 3, the visual outcomes of the various methods are quantified. The PSNR and LPIPS of DUBA are the best in most cases. Although the SSIM of DUBA is slightly lower than that of BadNets, it is also close to 1 and higher than that of most methods. It can be validated from Figure 1 that BadNets has the worst stealthiness due to its obvious square trigger in the corner. In summary, DUBA achieves the best stealthiness in terms of both visual perception and the different metrics.

### Robustness to Defenses

In this subsection, we test DUBA against five state-of-the-art defenses, including GradCam (Cifar et al., 2017), Neural Cleanse (Wang et al., 2018), STRIP (Fang et al., 2017), Fine-Pruning (Wang et al., 2018), and FTD (Liu et al., 2017).

**Robustness to GradCam.** The GradCam-based defense method uses a saliency map to analyze the model's decision process. Specifically, given an input sample to the model, GradCam yields the model's heat map. For a clean image, GradCam will focus on the object. As shown in Figure 4, for small triggers such as BadNets, the heat map locks the highest heat value on the trigger, resulting in an abnormal heat map. The results show that the GradCam heat map for DUBA is similar to that of clean images, and it even locks onto the object more than for clean images. This indicates that GradCam fails to detect DUBA.
### Attack Performance Evaluation **Attack Effectiveness.** We evaluate the effectiveness of different backdoor attacks with ASR and BA. The relevant results are summarized in Table 2, which shows that our proposed DUBA achieves higher or comparable ASRs under most datasets and models. But in some cases such as experiments under ImageNet, the ASRs of DUBA are slightly lower than BadNets. Considering that the crafted trigger by DUBA is invisible in both spatial and frequency domains (which will be validated next), such a result is acceptable. Besides, DUBA only incurs negligible loss (lower than 1%) of BA compared with the clean benchmark. The above results show that DUBA achieves desirable attack effectiveness. **Attack Stealthiness.** Now we examine the stealthiness of different backdoor attacks. Figure 1 shows the poisoned images of different methods and Figure 3 provides more visual comparison between clean images and images poisoned by DUBA. Compared with other methods, DUBA achieves the best invisibility in both the spatial and frequency domains. The backdoor generated by DUBA is visually invisible in the spatial domain and the residual image in the frequency domain is also close to pure black image, indicating that the poisoned image in the frequency domain is very similar to the clean image. In Table 3, the visual outcomes of various methods are quantified. The PSNR and LPIPS of DUBA is the best in most cases. Although the SSIM of DUBA is slightly lower than BadNets, it is also close to 1 and higher than most methods. It can be validated from Figure 1 that BadNets has the worst stealthiness due to its obvious square trigger in the corner. In summary, DUBA achieved the best stealthy results with comprehensive visual perception and different metrics. ### Robustness to Defenses In this subsection, we test DUBA against five state-of-the-art defenses, including GradCam (Cifar et al., 2017), Neural Cleanse (Wang et al., 2018), STRIP (Fang et al., 2017), Fine-Prunning (Wang et al., 2018), and FT (Liu et al., 2017). **Robustness to GradCam.** The GradCam-based defense method uses the saliency map to analyze the model's decision process. Specifically, given an input sample to the model, GradCam yields the model's heat value. For a clean image, GradCam will focus on the object. As shown in Figure 4, for small triggers such as BadNets, the heat map locks the highest heat value on the trigger, resulting in an abnormal heat map. The results show that GradCam for DUBA \begin{table} \begin{tabular}{c c c c} \hline \hline **Datasets** & **Training/Testing Size** & **Lables Size** & **Image Size** \\ \hline Cifar10 & 50000/10000 & 10 & 32\text is similar to clean images, and even locks onto the object more than clean images. This indicates that GradCam fails to detect DUBA. Robustness to Neural Cleanse.Neural Cleanse reconstructs the trigger for each class label, and then checks whether there exists a class with a significantly smaller reverse-engineered trigger, which will be treated as a poisoned sample. Specifically, this method quantifies the deviations of reverse-engineered triggers based on their sizes using the anomaly index and considers models with an anomaly index greater than 2 as poisoned models. Table 4 shows that the anomaly index of DUBA is only 1.22, which is smaller than those of the baseline methods. This validates that our proposed DUBA can effectively circumvent Neural Cleanse. 
**Robustness to STRIP.** STRIP determines whether a model is poisoned or not by superimposing input images and observing the consistency of the predicted classes. Specifically, the entropy of the predictions is used to quantify the level of consistency, and models with an average entropy value lower than 0.2 are classified as poisoned models. Figure 5 shows the entropy values of the different methods on the different datasets, together with the corresponding clean results. All the entropy values of DUBA are larger than 0.2 and very close to those of clean images, which is significantly better than BadNets and Blend. DUBA also achieves entropy values comparable to WaNet and FIBA. Thus, DUBA can effectively bypass STRIP. **Robustness to Fine-Pruning.** Fine-Pruning assumes that the backdoor behavior of the model is related to dormant neurons in the model; by simply pruning these neurons, a clean model should be obtained. Specifically, it records the activation values of clean samples passing through each neuron and considers the neuron with the smallest activation value as the most dormant one. The neurons are then gradually pruned in increasing order of their activation values. Usually, the attack is considered successful if the BA on clean images drops below 50% before (in terms of pruning ratio) the ASR on poisoned images does. Figure 6 shows the results of the different methods on Cifar10. Among all the methods, the ASR of DUBA is the last to decline. In particular, when the ASR of DUBA starts to decrease, the pruning ratio has almost reached 96%. Figure 7 provides more detailed results for DUBA on the four datasets, which show that all the BAs decrease to 50% before the ASRs do. Thus, we can conclude that DUBA remains effective against network-pruning-based defenses. **Robustness to FTD.** FTD detects whether an image has high-frequency artifacts, in which case it is regarded as a poisoned image. It trains a DNN model that can classify poisoned and clean images after the DCT. Table 5 shows that the detection rate of FTD with respect to DUBA is below 50%, implying that DUBA can effectively bypass FTD. This is because the trigger image is smoothed twice in the frequency domain. Note that FIBA also exhibits a low detection ratio (though higher than DUBA's), as it uses a low-frequency trigger. ### Ablation Studies In this section, we conduct ablation experiments to study the impact of some important components of DUBA. The experiments are conducted on Cifar10 with RepVGG. **High-Frequency Embedding Rate.** We first examine the effect of \(\alpha\) and \(\beta\) (the embedding ratios of the trigger image) on the ASR. Table 6 shows that DUBA yields a lower ASR when the embedding ratio during training is small. This can be attributed to the inability of the model to learn the complete backdoor, or to the large difference in the embedding amount between the training and inference phases. When the embedding ratio rises in both the training and attacking phases, DUBA achieves higher ASRs. **DCT Smoothing Parameter.** We also investigate the effect of the DCT smoothing parameter on the ASR. Intuitively, when \(\lambda\) decreases, the poisoned images become closer to the clean ones. Thus, the ASR is compromised while the attack stealthiness is enhanced. Table 7 shows that the ASR increases with \(\lambda\), which is consistent with this intuition. **Initial Trigger Selection.** We explore the effect of different initial trigger images on DUBA.
In addition to the image of a dog's ear used as the initial trigger image in the above experiments, three other images from Cifar10, Gtsrb, and ImageNet are also tested. Table 8 shows that there is no substantial association between the initial trigger and the ASR. **The Necessity of the Three Frequency Domain Transforms.** In the following three subsections, we conduct ablation experiments to show the necessity of the three frequency domain transforms. **Use DWT Only.** We conduct experiments using only the DWT, without applying any subsequent smoothing steps to the output poisoned image, as shown in Step 1 of Figure 2. Figure 8 visualizes the results. Although the PSNR and SSIM values between the poisoned and clean images are high enough when the embedding coefficients are small, the poisoned images inevitably have visible artifacts in the frequency domain, making it impossible to achieve dual stealthiness (similar to previous single-stealth backdoor studies). This demonstrates the necessity of the subsequent smoothing operations. **Use DWT and FFT.** We then conduct experiments where the poisoned image is smoothed only in the FFT domain. Figure 9 shows that even when \(\alpha\) and \(\beta\) are large enough, the residual images in the frequency domain are pure black (i.e., the poisoned images are invisible in the frequency domain). However, even when the embedding ratio is small enough, the poisoned images (first row) still have some line-like artifacts in the spatial domain. Furthermore, when \(\alpha\) and \(\beta\) are set to 0.4, FTD can detect our attack with a probability of about 65%, which is already lower than for most attacks but still unacceptable. This demonstrates the necessity of the DCT-based smoothing operations. **Use DWT, FFT and DCT.** According to Table 7, we set \(\lambda\) to 0.7. As shown in Figure 10, after using all three transforms, the poisoned image is stealthy in both domains (both the PSNR and SSIM are improved). Thus, the three frequency domain transforms are adopted in the proposed DUBA to achieve dual stealthiness. ### Summary of Experiments From the experimental comparison, we can conclude that DUBA is substantially stealthier in the spatial domain than the other attacks, and that it is the only attack that is simultaneously invisible in both the spatial and frequency domains. Furthermore, DUBA achieves remarkable ASRs that outperform the other methods in most cases. It has also been validated that the five advanced defenses fail to detect DUBA and that DUBA is more robust than the other methods. \begin{table} \begin{tabular}{c|c c c c c c} \hline \hline \(\lambda\) & 0.1 & 0.2 & 0.3 & 0.5 & 0.7 & 0.8 \\ \hline **ASR(\%)** & 20 & 52.36 & 70.11 & 95.42 & 99.98 & 99.98 \\ \hline \hline \end{tabular} \end{table} Table 7. Impact of the DCT smoothing parameter \(\lambda\) on the ASR. Figure 8. The visual results of using the DWT only. From left to right, the embedding coefficient gradually rises from 0 to 0.8. The first row shows the poisoned images with different embedding coefficients (the first one is the clean picture). The second row shows the corresponding residual images in the spatial domain, and the third row shows the residual images in the frequency domain. The corresponding PSNR and SSIM values (PSNR/SSIM) are given at the bottom of the figure.
\begin{table} \begin{tabular}{c|c c c c} \hline \hline **Initial Trigger Image** & Dog’s ear & Cifar10 & Gtsrb & ImageNet \\ \hline **ASR on Cifar10 (\%)** & 99.98 & 99.23 & 99.57 & 99.62 \\ \hline \hline \end{tabular} \end{table} Table 8. Impact of different initial trigger images on the ASR. Figure 10. The visual results of using all three transforms. Figure 9. The visual results of using the DWT and FFT. The red circles indicate the line-like artifacts. Although there are some specific cases, such as the robustness against STRIP on Fer2013, where DUBA performs slightly worse than WaNet or FIBA, in most cases, and especially in terms of invisibility in the frequency domain, DUBA significantly outperforms all the other methods. Thus we can conclude that the proposed DUBA is effective and outperforms the state-of-the-art backdoor attacks. ## 5. Conclusion In this paper, we showed that most backdoor attacks are visible in the frequency domain. In order to completely defeat the defenses proposed from the frequency perspective while remaining stealthy in the spatial domain, we proposed a **DU**al stealthy **B**Ackdoor attack called DUBA that is invisible in both the spatial and frequency domains. To hide the high-frequency backdoor information in both the spatial and frequency domains, we leveraged the benefits of different frequency domain transforms. A novel attack strategy was also devised to enhance the effectiveness of DUBA. We conducted an extensive experimental evaluation of DUBA, and the results corroborate its outstanding performance in terms of attack success rate and attack stealthiness.
2308.00512
Grading Structure for Derivations of Group Algebras
In this paper we give a way of equipping the derivation algebra of a group algebra with the structure of a graded algebra. The derived group is used as the grading group. For the proof, the identification of derivations with characters of the adjoint action groupoid is used. These results also allow us to obtain the analogous structure of a graded algebra for outer derivations. A non-trivial grading is obtained for all groups that are not perfect.
Andronick Arutyunov, Igor Zhiltsov
2023-08-01T12:53:37Z
http://arxiv.org/abs/2308.00512v1
# Grading Structure for Derivations of Group Algebras ###### Abstract In this paper we give a way of equipping the derivation algebra of a group algebra with the structure of a graded algebra. The derived group is used as the grading group. For the proof, the identification of derivations with characters of the adjoint action groupoid is used. These results also allow us to obtain the analogous structure of a graded algebra for outer derivations. A non-trivial grading is obtained for all groups that are not perfect. The calculation of derivations of a group algebra is a well-known problem. The present work elaborates on results from the articles [1, 2, 3], which study derivations in terms of characters of the adjoint action groupoid. An important result of this line of research is a set of handy formulas for the quick calculation of derivations. These articles explore the link between derivations and combinatorial properties of the group. Among the applications, note uses in coding theory (see [5, 4]), Novikov algebras (see the recent work [6]) and more general constructions, like \((\sigma,\tau)-\)derivations (see [7]). The aim of the present work is to grade the derivation algebra by identifying derivations with characters on a certain groupoid (all the necessary definitions are given in **Section 1**). The main result of the paper follows. Let \(N\) be a fixed normal subgroup in \(G\) such that \(G/N\) is abelian. **Theorem 1**.: _If \(|G/N|>1\), \(\mathrm{Der}\) is graded with \(G/N\), that is_ \[\mathrm{Der}=\bigoplus_{k\in G/N}\mathrm{Der}_{k},\] \[\forall k,l\in G/N:[\mathrm{Der}_{k},\mathrm{Der}_{l}]\subset \mathrm{Der}_{kl}.\] Here \(\mathrm{Der}_{k}\) is the subalgebra of derivations whose characters' supports are localised entirely in the single coset \(aN=k\). The structure of the work is as follows. **Section 1** provides the main definitions and propositions. **Section 2** describes the construction of the grading and contains the main result and its proof. **Section 3** provides an example of the grading for \(G\) equal to the discrete Heisenberg group, along with an example of localising central derivations for groups \(G\) that are not stem groups. Fix an infinite finitely-generated group \(G\) for the rest of the text. ## 1 Preliminaries Recall that the **group algebra** \(\mathbb{C}[G]\) is the algebra of formal finite sums of the form (\(a_{1},\ldots,a_{n}\in\mathbb{C}\), \(g_{1},\ldots,g_{n}\in G\)) \[a_{1}g_{1}+\cdots+a_{n}g_{n}\] We define a **derivation** as a linear operator \(d\) that satisfies the Leibniz rule (for all \(a,b\in\mathbb{C}[G]\)) \[d(ab)=d(a)\cdot b+a\cdot d(b)\] Derivations of a group algebra form a Lie algebra with respect to the commutator. _We will denote this algebra by \(\mathrm{Der}\) or \(\mathrm{Der}(\mathbb{C}[G])\)._ ### Characters We use the technique of characters following [1, 2, 3]. **Def 1.** For a given group \(G\) consider a **small groupoid** \(\Gamma\): 1. **objects** \((Obj)\) -- elements of \(G\), 2. **arrows** \((Hom)\) -- pairs of elements of \(G\). For an arrow \((u,v)\) its source \(\mathrm{S}(u,v)\) is given by \(v^{-1}u\), and its target \(\mathrm{T}(u,v)\) -- by \(uv^{-1}\) (\(Hom(a,b)\) denotes the set of all arrows whose source is \(a\) and whose target is \(b\)), 3. Consider two arrows \(\varphi=(u_{2},v_{2})\in Hom(b,c),\psi=(u_{1},v_{1})\in Hom(a,b)\) (we will call an (ordered) pair of arrows \(\varphi,\psi\) such that \(\mathrm{S}(\varphi)=\mathrm{T}(\psi)\) **composable**).
The **composition** of these two arrows is given by: \[(u_{2},v_{2})\circ(u_{1},v_{1}):=(v_{2}u_{1},v_{2}v_{1})\] The formula for the composition does not involve \(u_{2}\) since, given a pair of composable arrows \((u_{2},v_{2}),(u_{1},v_{1})\), \(u_{2}\) can be expressed in terms of \(u_{1},v_{1},v_{2}\). The reader may consider this as an exercise. \(\Gamma\) is the groupoid of the group's inner action on itself. Fix an element \(a\) of \(G\). Define the following symbols: **Def 2.** \([a]=\{xax^{-1}:x\in G\}\) is \(a\)**'s conjugacy class** in \(G\), * \[G^{G}:=\{[g]:g\in G\}\] * \(\Gamma_{[a]}\) is the subgroupoid of \(\Gamma\), _informally, a connected component of \(\Gamma\)_, given by: \[Obj(\Gamma_{[a]}) :=[a]=\{x\in Obj:x\in[a]\}\] \[Hom(\Gamma_{[a]}) :=\{(u,v)\in Hom:u,v\in[a]\}\] **Def 3.** A **character** on \(\Gamma\) is a function \(\chi:Hom\to\mathbb{C}\) such that: * _(Composition)_ for each pair of composable arrows \(\varphi,\psi\): \[\chi(\varphi\circ\psi)=\chi(\varphi)+\chi(\psi)\] * _(Locally finite)_ \(\forall\mathbf{y}\in G\) there is a **finite** set of \(\mathbf{x}\in G\) such that \(\chi(x,y)\neq 0\). The following decomposition holds: **Lemma 1**.: _(Decomposition) \(\Gamma=\bigsqcup_{[a]\in G^{G}}\Gamma_{[a]}\)_ _Remark_.: Although characters being locally finite may seem an alien and overly technical detail, it is deliberately placed in the definition to stress that we will **not** consider _"non-locally finite characters"_. The reasons will become clear, among the rest, in **Theorem 2**. We will need the following statement: **Statement 1**.: _Let \((u,v)=\varphi\in Hom\), \(a\in G\). Then the following statements are equivalent: \(\varphi\in Hom(\Gamma_{[a]})\), \(\mathrm{S}(\varphi)\in[a]\), \(\mathrm{T}(\varphi)\in[a]\)._ It is proved by direct calculation. ### Connection between Characters and Derivations The following theorem motivates the consideration of **(locally finite)** characters when studying derivations. _Informally,_ **Theorem 2** _shows that characters may be seen as a generalization of a linear operator's matrix._ **Theorem 2** (Derivation formula and derivation character, [1, section 2]).: _For each derivation \(d\) there exists a unique character \(\chi\) such that for each \(x\in G\) holds_ \[d(x)=\sum_{k\in G}\chi(k,x)k \tag{1}\] Consider \(d\), \(\chi\) from **Theorem 2**. We will say that the character \(\chi\) **gives** the derivation \(d\) (the derivation \(d\) **is given by the character \(\chi\)**; we will omit the words "derivation" and "character"). For a derivation \(d\) let \(\chi^{d}\) be the character such that \(\chi^{d}\) gives \(d\). **Theorem 2** implies: **Corollary 1**.: _Let \(d,\partial\) be derivations given by characters \(\chi^{d},\chi^{\partial}\) correspondingly. Then \(d+\partial\) is given by \(\chi^{d}+\chi^{\partial}\)._ **Def 4**.: Let \(d\) be given by \(\alpha\), \(\partial\) be given by \(\beta\). Then \(\{\alpha,\beta\}\) is the character that gives \([d,\partial]\). **Statement 2** (_"Matrix" product_, [2, Proposition 2.4]).: _Let \(\alpha,\beta\) be characters. Then \(\{\alpha,\beta\}\) satisfies (\(a,b\in G\))_ \[\{\alpha,\beta\}(a,b)=\sum_{k\in G}\alpha(a,k)\beta(k,b)-\beta(a,k)\alpha(k,b)\] Two examples of derivations follow. **Example 1** will be needed to prove **Theorem 1**. Let \(a\in G\). Recall that a derivation \(d_{a}\) is called **inner** if for any \(x\in\mathbb{C}[G]\) \[d_{a}(x)=[x,a]=xa-ax\] _Example 1_ (Character of an inner derivation [3, Proposition 3]).: Let \(a\in G\).
Then the character \(\chi_{a}\) given by the formula \[\chi_{a}(\varphi)=\begin{cases}\ 1,&a=\mathrm{S}(\varphi),\\ -1,&a=\mathrm{T}(\varphi),\\ \ 0,&\mathrm{otherwise}.\end{cases} \tag{2}\] gives \(d_{a}(x)=[x,a]\). _Example 2_.: Another possible example of derivations are _central derivations_. We call a derivation \(d\) central if there exist a central element \(z\in Z(G)\) and a homomorphism \(\tau:G\rightarrow(\mathbb{C},+)\) such that for all _basis elements_ \(g\in G\): \[d(g)=\tau(g)gz\] Such an operator is indeed a derivation, see [3, Proposition 4]. [3, Proposition 5] shows that non-trivial central derivations are not inner. Moreover, [3, Proposition 6] shows that central derivations form a Lie subalgebra in \(\operatorname{Der}(G)\). **Def 5**.: For a given character \(\chi\) we define the support of \(\chi\) as follows: \[\operatorname{supp}\chi=\{\varphi\in Hom:\chi(\varphi)\neq 0\}\] For a given subset \(M\subset Hom\), denote by \(\operatorname{\mathbf{Der}}_{M}\) the set of derivations \(d\) such that for the character \(\chi\) that gives \(d\): \(\operatorname{supp}\chi\subset M\). _Example 3_.: Recall the character \(\chi_{a}\) from Example 1 (where \(a\in G\)). Its support is easily calculated: \[\operatorname{supp}\chi_{a}=\{\varphi:\operatorname{S}(\varphi)=a\}\cup\{ \psi:\operatorname{T}(\psi)=a\}\] By **Statement 1**, we can localize \(\operatorname{supp}\chi_{a}\) in the single conjugacy class \([a]\) (we will need such a localisation later in **Theorem 1**): \[\operatorname{supp}\chi_{a}\subset\Gamma_{[a]}\] ### Applying the Decomposition **Lemma 1** establishes a decomposition of the groupoid \(\Gamma\). The current section presents _decompositions for **(locally finite)** characters and derivations._ The following two lemmas are equivalent. We prove the first one. **Lemma 2**.: _Let \(\chi\) be a character. Then there exist finitely many \(a_{1},\dots,a_{N}\in G\) such that_ \[\operatorname{supp}\chi\leq\bigcup_{k=1}^{N}\Gamma_{[a_{k}]}\] **Lemma 3** (Derivation decomposition).: _Let \(d\) be a derivation given by a character \(\chi\). Then each \(\chi_{u}\) (defined at the end of this section) is a character, the decomposition \(d=\sum_{[u]\in G^{G}}d_{u}\) holds, and the set \(\{[u]\in G^{G}:\exists x\in G:d_{u}(x)\neq 0\}\) is finite._ Proof for **Lemma 2**.: Let \(G=\langle X\mid R\rangle\), where \(X=:\{x_{1},\dots,x_{k}\}\) is finite (\(G\) is assumed to be finitely-generated throughout the text). Consider \((u,v)\in\Gamma\). Let \(n=n(v)\) be the minimal nonnegative integer such that \[\exists y_{0},\dots,y_{n}\in X\cup X^{-1}:v=y_{0}y_{1}\dots y_{n}\] 1. Show that \[\exists z_{0},\dots,z_{n}\in G:(z_{0},y_{0})\circ\dots\circ(z_{n},y_{n})=(u,v)\] (3) _Subproof._ Induction on \(n=n(v)\). **Base:** \(n=0\) -- \(z_{0}=u\). **Step:** Consider \(z_{0}=uv^{-1}y_{0}\). Then \((z_{0},y_{0}),(y_{0}^{-1}u,y_{0}^{-1}v)\) are composable since \[\begin{split}&\mathrm{S}(z_{0},y_{0})=y_{0}^{-1}z_{0}=y_{0}^{-1} uv^{-1}y_{0}=\\ &=y_{0}^{-1}u(y_{0}^{-1}v)^{-1}=\mathrm{T}(y_{0}^{-1}u,y_{0}^{-1 }v)\end{split}\] Moreover, \[(z_{0},y_{0})\circ(y_{0}^{-1}u,y_{0}^{-1}v)=(u,v)\] Notice that \(y_{0}^{-1}v=y_{1}\ldots y_{n}\), thus \(n(y_{0}^{-1}v)<n=n(v)\). Therefore, applying the induction hypothesis, we get eq. (3). 2. Since \(\chi\) is _locally finite_, the set \(B=(G\times(X\cup X^{-1}))\cap\mathrm{supp}\chi\) is finite. By **Lemma 1**, for each arrow \(\varphi\) there exists a unique conjugacy class \([a]\) such that \(\varphi\in Hom(\Gamma_{[a]})\); thus, there exists a finite set \(A=\{a_{1},\ldots,a_{N}\}\) such that for any \(a\notin A\): \(B\cap Hom(\Gamma_{[a]})=\emptyset\).
Thus, by item 1, for any \(a\notin A\): \(\mathrm{supp}\,\chi\cap Hom(\Gamma_{[a]})=\emptyset\). Therefore: \[\mathrm{supp}\,\chi\leq\bigcup_{k=1}^{N}\Gamma_{[a_{k}]}\] A very nice alternative proof for **Lemma 2** was submitted in an anonymous review. Alternative proof for **Lemma 2**. Let \(d\) be the derivation given by the character \(\chi\). Let \(P\) be a union of conjugacy classes such that \[\mathrm{supp}\,\chi\subset\bigcup_{g\in P}\Gamma_{[g]}=:U\] The following statements are equivalent: * \((x,y)\in Hom(U)\), * \(y^{-1}x=\mathrm{S}(x,y)\in P\) Consider an element \(y\in G\) such that \[y^{-1}d(y)\in\langle P\rangle\] _Here \(\langle X\rangle\) for \(X\subset G\) denotes the set of all finite sums \(a_{1}x_{1}+\cdots+a_{n}x_{n}\) such that \(a_{1},\ldots,a_{n}\in\mathbb{C}\) and \(x_{1},\ldots,x_{n}\in X\)._ Let us calculate \(y^{-1}d(y)\) by **Theorem 2**: \[y^{-1}d(y)=y^{-1}\sum_{x\in G}\chi(x,y)x=\sum_{x\in G}\chi(x,y)y^{-1}x=\sum_{x \in G}\chi(x,y)\mathrm{S}(x,y)\in\langle P\rangle\] Therefore, for each \(x\) such that \(\chi(x,y)\neq 0:y^{-1}x\in P\). Consider the set \[H=\left\{y:y^{-1}d(y)\in\langle P\rangle\right\}\] As a simple exercise, one can check that \(H\) is a subgroup of \(G\). To summarize, if \(y\) is in the subgroup \(H\leq G\) (that is, \(y^{-1}d(y)\in\langle P\rangle\)), then for each \(x\) such that \(\chi(x,y)\neq 0:y^{-1}x\in P\). To finish the proof let us choose \(P\) such that \(P\) is a union of a **finite number** of conjugacy classes and \(H=G\). To achieve this, consider a finite generating set \(S\) for \(G\) and a **finite** subset \(M\subset G\) such that for any \(s\in S\) there exist complex numbers \(a_{m},m\in M\) such that \[s^{-1}d(s)=\sum_{m\in M}a_{m}m\] _Informally, calculate all \(s^{-1}d(s),s\in S\), which are finite sums, and store all elements of \(G\) present in at least one of these finite sums._ Since \(M\) is finite, \(P=\bigcup_{m\in M}[m]\) is a union of a finite number of conjugacy classes. Moreover, for such \(P\) \[S\subset H\] Thus, \(G=H\). All in all, there exists a union \(P\) of a **finite** number of conjugacy classes such that \[\operatorname{supp}\chi\subset\bigcup_{g\in P}\Gamma_{[g]}=:U\] \(\blacksquare\) Let \(d\) be the derivation given by the character \(\chi\), and \[\chi_{u}(\varphi):=\begin{cases}\chi(\varphi),&\varphi\in Hom(\Gamma_{[u]}), \\ 0,&otherwise\end{cases}.\] We will denote the derivation given by \(\chi_{u}\) by \(d_{u}\). ## 2 Constructing Graded Algebra ### Grading with Abelian Quotients **Def 6.** Let \(A\) be an abelian group, \(e\) the neutral element of \(A\), and \(\mathfrak{A}\) a Lie algebra that can be expressed as a direct sum \[\mathfrak{A}=\bigoplus_{n\in A}\mathfrak{A}_{n},\text{ such that}\] \[\forall n,l\in A:[\mathfrak{A}_{n},\mathfrak{A}_{l}]\subset \mathfrak{A}_{nl}\] and \(\mathfrak{A}_{e}\neq\mathfrak{A}\). Then \(\mathfrak{A}\) is called **graded (with \(A\))**. The direct sum in Def 6 is called \(\mathfrak{A}\)'s grading with \(A\). Notice that trivial gradings are **excluded**. **Def 7.** The commutator subgroup of a group \(G\) is the subgroup generated by all commutators, \[G^{\prime}:=\langle xyx^{-1}y^{-1}:x,y\in G\rangle\] Recall a few well-known definitions and statements that will be required below. **Statement 3.**_For any group \(G\) the subgroup \(G^{\prime}\) is normal, \(G^{\prime}\triangleleft G\)_ Let \(N\) be a fixed normal subgroup in \(G\) such that \(G/N\) is abelian. _Conceptually, the latter condition derives from **Def 6** above; however, it is also needed for more technical, yet crucial details, like **Lemma 4**._
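As a quick sanity check of this setup, the following minimal pure-Python sketch takes the non-perfect group \(G=S_{3}\) with \(N=G^{\prime}\), verifies that \(G/N\) is abelian (there are only two cosets), and checks that each conjugacy class lies inside a single coset of \(N\), as Lemma 4 below asserts. The encoding of permutations as tuples is an illustrative choice.

```python
# Sketch: checking the setup on G = S3 with N = G' (pure Python).
from itertools import permutations

G = list(permutations(range(3)))                     # S3 as permutation tuples
mul = lambda p, q: tuple(p[q[i]] for i in range(3))  # composition p after q
inv = lambda p: tuple(p.index(i) for i in range(3))

# derived subgroup G': close the set of commutators under multiplication
N = {mul(mul(x, y), mul(inv(x), inv(y))) for x in G for y in G}
while True:
    new = {mul(a, b) for a in N for b in N} | N
    if new == N:
        break
    N = new

cosets = {tuple(sorted(mul(g, n) for n in N)) for g in G}
print(len(G) // len(N), "cosets:", len(cosets))      # |G/N| = 2, so G/N abelian

conj_class = lambda a: {mul(mul(x, a), inv(x)) for x in G}
coset_of = lambda a: {mul(a, n) for n in N}
assert all(conj_class(a) <= coset_of(a) for a in G)  # [a] lies inside aN
print("every conjugacy class sits inside one coset of G'")
```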
**Statement 4**.: \(G^{\prime}\subset N\)_._ **Lemma 4**.: _Let \(a\in G\). Then \([a]\subset aN\)._ Proof.: A calculation for any \(a,t\in G\) proves the lemma: \[tat^{-1}N=tat^{-1}a^{-1}Na=Na=aN\] The first and the last equalities hold since \(N\) is normal; the second equality holds by **Statement 4**, since \(G/N\) is abelian (\(tat^{-1}a^{-1}\in G^{\prime}\subset N\)). _Note that **Lemma 4** would have been false if we did not require \(G/N\) to be abelian. Consider \(G=S_{4},N=V_{4},a=(12)\) for a counterexample._ **Lemma 4** motivates the following symbols: \[\Gamma_{aN}=\bigcup_{k\in aN}\Gamma_{[k]},\] \[\mathrm{Der}_{aN}=\Bigg\{d\in Der:\mathrm{supp}\,\chi^{d}\subset\Gamma_{aN} \Bigg\}.\] **Lemma 5**.: _Let \(a,b\in G\), let \(d\) be given by a character \(\alpha\) with \(\mathrm{supp}\,\alpha\subset\Gamma_{aN}\), and let \(\partial\) be given by a character \(\beta\) with \(\mathrm{supp}\,\beta\subset\Gamma_{bN}\). Then \(\mathrm{supp}\,\{\alpha,\beta\}\subset\Gamma_{abN}\)._ Proof.: **Statement 2** implies: \[\{\alpha,\beta\}(h,g)=\sum_{k\in G}\alpha(h,k)\beta(k,g)-\beta(h,k)\alpha(k,g)\] Consider an arrow \((h,g)\) with \(\{\alpha,\beta\}(h,g)\neq 0\). Then there exists \(k\in G\) such that \[\begin{bmatrix}\alpha(h,k)\beta(k,g)\neq 0\\ \beta(h,k)\alpha(k,g)\neq 0\end{bmatrix} \tag{4}\] \[\begin{bmatrix}\begin{cases}\alpha(h,k)\neq 0\\ \beta(k,g)\neq 0\end{cases}\\ \begin{cases}\beta(h,k)\neq 0\\ \alpha(k,g)\neq 0\end{cases}\end{bmatrix}\] Expressing eq. (4) in terms of \(\mathrm{supp}\): \[\begin{bmatrix}\begin{cases}(h,k)\in\mathrm{supp}\,\alpha\subset Hom(\Gamma_{aN})\\ (k,g)\in\mathrm{supp}\,\beta\subset Hom(\Gamma_{bN})\end{cases}\\ \begin{cases}(k,g)\in\mathrm{supp}\,\alpha\subset Hom(\Gamma_{aN})\\ (h,k)\in\mathrm{supp}\,\beta\subset Hom(\Gamma_{bN})\end{cases}\end{bmatrix} \tag{5}\] By **Statement 1**, an arrow \(\varphi\) belongs to \(\Gamma_{[x]}\subset\Gamma_{xN}\) iff its target \(\mathrm{T}(\varphi)\) belongs to \([x]\subset xN\). Thus, eq. (5) implies: \[\begin{bmatrix}\begin{cases}hk^{-1}=u\in aN\\ kg^{-1}=v\in bN\end{cases}\\ \begin{cases}kg^{-1}=u\in aN\\ hk^{-1}=v\in bN\end{cases}\end{bmatrix} \tag{6}\] Multiplying the equations in eq. (6): \[\begin{bmatrix}hg^{-1}=uv\\ hg^{-1}=vu\end{bmatrix},\ \ \textbf{for}\ u\in aN,v\in bN\] By definition, \(\mathrm{T}(h,g)=hg^{-1}\). By **Statement 1**: \[\begin{bmatrix}(h,g)\in Hom(\Gamma_{[uv]})\\ (h,g)\in Hom(\Gamma_{[vu]})\end{bmatrix}\] Since \([uv]=[u\cdot vu\cdot u^{-1}]=[vu]\) and \(uv\in abN\), by **Lemma 4**: \[[uv]\subset uvN=abN\] Thus: \[(h,g)\in\Gamma_{[uv]}\subset\Gamma_{abN},\ \textbf{for}\ u\in aN,v\in bN\] All in all, \(\{\alpha,\beta\}(h,g)\neq 0\Rightarrow(h,g)\in\Gamma_{abN}\), therefore \(\mathrm{supp}\ \{\alpha,\beta\}\subset\Gamma_{abN}\). **Theorem 1**.: _If \(|G/N|>1\), \(\mathrm{Der}\) is graded with \(G/N\), that is_ \[\mathrm{Der}=\bigoplus_{k\in G/N}\mathrm{Der}_{k},\] \[\forall k,l\in G/N:[\mathrm{Der}_{k},\mathrm{Der}_{l}]\subset \mathrm{Der}_{kl}.\] Proof.: Consider the sum \(\sum_{k\in G/N}\mathrm{Der}_{k}\). 1. First, we show that \(\sum_{k\in G/N}\mathrm{Der}_{k}\) is equal to \(\mathrm{Der}\). * \(\sum_{k\in G/N}\mathrm{Der}_{k}\subset\mathrm{Der}\) -- since \(\mathrm{Der}\) is closed under finite sums. * Now we show the opposite inclusion. Consider an arbitrary \(d\in\mathrm{Der}\).
**Lemmas 3 and 4** imply (see the end of **Section 1** for the definition of \(d_{u}\)) \[d=\sum_{[u]\subset G}d_{u}=\sum_{k\in G/N}\Big{(}\sum_{[u]\subset k}d_{u}\Big{)}\] Consider, for a given \(k\in G/N\), \[s_{k}:=\sum_{[u]\subset k}d_{u}\in\mathrm{Der}_{k}\] By **Lemma 3**, there is only a finite number of classes \([u]\subset G\) such that \(d_{u}\) is not constant \(0\). Thus, there exist an integer \(M\) and \(k_{1},\dots,k_{M}\in G/N\) such that for any \(k\in G/N\) with \(k\neq k_{1},\dots,k_{M}\): \(s_{k}\) is constant \(0\). Thus, \[d=\sum_{i=1}^{M}s_{k_{i}}\] Thus, \[\mathrm{Der}\subset\sum_{k\in G/N}\mathrm{Der}_{k}\] * All in all, \(\mathrm{Der}=\sum_{k\in G/N}\mathrm{Der}_{k}\). 2. The sum \(\sum_{k\in G/N}\mathrm{Der}_{k}\) is direct. _Subproof._ Let \(d_{k}\in\mathrm{Der}_{k}\) be given by \(\chi_{k}\). Suppose that \[\sum_{k\in G/N}d_{k}\equiv 0\] Since the constant \(0\) is a derivation given by the constant character (equal to \(0\)), by **Corollary 1** for any \(\varphi\in Hom\) \[\sum_{k\in G/N}\chi_{k}(\varphi)=0\] Since \(\mathrm{supp}\;\chi_{k},\mathrm{supp}\;\chi_{l}\) are disjoint (for \(k\neq l\)), for each \(\varphi\in Hom\) there exists at most one \(k\in G/N\) such that \(\chi_{k}(\varphi)\neq 0\). Thus, for each \(\varphi\in Hom\) and for each \(k\in G/N:\chi_{k}(\varphi)=0\), and for each \(k\in G/N:d_{k}\equiv 0\). Therefore, \(\sum_{k\in G/N}\mathrm{Der}_{k}=\bigoplus_{k\in G/N}\mathrm{Der}_{k}\) is direct by the definition of a direct sum. \(\square\) 3. We established that \[\mathrm{Der}=\bigoplus_{k\in G/N}\mathrm{Der}_{k}\] Since \(|G/N|>1\), there exists \(m\in G\) such that \(mN\neq N\). By Examples 1 and 3, the inner derivation \([x,m]\) is given by a character \(\chi_{m}\) such that \(\mathrm{supp}\;\chi_{m}\subset\Gamma_{[m]}\subset\Gamma_{mN}\) by **Lemma 4**. Thus, \(\mathrm{Der}\neq\mathrm{Der}_{N}\) _(i.e., the grading is not trivial)._ Finally, check that \[\forall k,l\in G/N:[\mathrm{Der}_{k},\mathrm{Der}_{l}]\subset\mathrm{Der}_{kl}\] _Proof for item 3._ Let \(d_{k}\in\mathrm{Der}_{k},d_{l}\in\mathrm{Der}_{l}\). Let the character \(\chi_{k}\) give \(d_{k}\), and the character \(\chi_{l}\) give \(d_{l}\). Thus, by the definition of \(\mathrm{Der}_{k},\mathrm{Der}_{l}\) \[\mathrm{supp}\;\chi_{k}\leq\Gamma_{k},\qquad\mathrm{supp}\;\chi_{l}\leq\Gamma_ {l}\] Therefore, by **Lemma 5** \[\mathrm{supp}\;\{\chi_{k},\chi_{l}\}\leq\Gamma_{kl}\] Finally, \[[d_{k},d_{l}]\in\mathrm{Der}_{kl}\] \(\square\) _Example 4_.: Let \(G\) be a perfect group (\(|G/G^{\prime}|=1\)), that is, \(G^{\prime}=G\). **Theorem 1** yields _a trivial grading_ (which we do not regard as a grading in this text) for \(\mathrm{Der}(\mathbb{C}[G])\) since \(|G/G^{\prime}|=1\). And vice versa: **Corollary 2**.: _If \(G\) is NOT a perfect group (\(G\neq G^{\prime}\)) then \(\mathrm{Der}\) admits a (non-trivial) grading with \(G/G^{\prime}\)._ _Example 5_.: Let \(G\) be a knot group (i.e. let there exist some knot \(K\) such that \(G\) is the knot group of \(K\)). It is well-known that in this case \(G/G^{\prime}=\mathbb{Z}\), therefore \(\mathrm{Der}(G)\) admits a grading with \(\mathbb{Z}\). ## 3 Examples ### Discrete Heisenberg Group Consider the discrete Heisenberg group (the group of \(3\times 3\) upper unitriangular matrices with integer entries). Following [3], we use this group as a handy example since it admits easy calculations.
**Def 8.** Consider the group of integer unitriangular matrices with respect to matrix multiplication: \[\boldsymbol{H}=\left\{\begin{pmatrix}1&a&c\\ 0&1&b\\ 0&0&1\end{pmatrix}:a,b,c\in\mathbb{Z}\right\}\] \[\begin{pmatrix}1&a&c\\ 0&1&b\\ 0&0&1\end{pmatrix}\begin{pmatrix}1&x&z\\ 0&1&y\\ 0&0&1\end{pmatrix}:=\begin{pmatrix}1&a+x&c+z+ay\\ 0&1&b+y\\ 0&0&1\end{pmatrix}\] Since all the matrices in \(\boldsymbol{H}\) have determinant 1, the inverse is well-defined and given by \[\begin{pmatrix}1&a&c\\ 0&1&b\\ 0&0&1\end{pmatrix}^{-1}=\begin{pmatrix}1&-a&ab-c\\ 0&1&-b\\ 0&0&1\end{pmatrix}\] Our goal is to grade \(\mathrm{Der}(\boldsymbol{H})\). **Def 9.** The centre of \(G\) is \[Z(G)=\{z\in G:\forall g\in G:gz=zg\}\] The following statements are well-known and trivial. **Statement 5.** \[\boldsymbol{H}^{\prime}=Z(\boldsymbol{H})=\left\{\begin{pmatrix}1&0&a\\ 0&1&0\\ 0&0&1\end{pmatrix}:a\in\mathbb{Z}\right\}\] **Statement 6.** \[\boldsymbol{H}/\boldsymbol{H}^{\prime}\simeq\mathbb{Z}\oplus\mathbb{Z}\] Let \(\psi:\mathbb{Z}\oplus\mathbb{Z}\to\boldsymbol{H}/\boldsymbol{H}^{\prime}\) be an isomorphism. Recall the symbols: \[\Gamma_{aN}=\bigcup_{k\in aN}\Gamma_{[k]},\] \[\mathrm{Der}_{aN}=\Bigg\{d\in Der:\mathrm{supp}\,\chi^{d}\subset\Gamma_{aN} \Bigg\}.\] Define: \[\mathrm{Der}_{(i,j)}:=\mathrm{Der}_{\psi(i,j)}\] **Corollary 3** (From **Statement 6** and **Theorem 1**).: \(\mathrm{Der}(\boldsymbol{H})\) is graded with \(\mathbb{Z}\oplus\mathbb{Z}\), that is \[\mathrm{Der}(\boldsymbol{H})=\bigoplus_{(i,j)\in\mathbb{Z}\oplus \mathbb{Z}}\mathrm{Der}_{(i,j)}\] \[\forall(i,j),(k,l)\in\mathbb{Z}\oplus\mathbb{Z}:[\mathrm{Der}_{(i,j)},\mathrm{Der}_{(k,l)}]\subset\mathrm{Der}_{(i+k,j+l)}\] _Example 6_.: **Statement 5** implies: if \(d\) is given by a character \(\chi\) such that \(\operatorname{supp}\chi\leq\Gamma_{[z]}\) for some \(z\in Z(\boldsymbol{H})\), then \(d\in\operatorname{Der}_{(0,0)}\). **Def 10**.: \(G\) is a stem group if \[Z(G)\leq G^{\prime}\] **Example 6** can be generalised under the assumption that \(G\) is a stem group. Note that \(\boldsymbol{H}\) is a stem group since \(\boldsymbol{H}^{\prime}=Z(\boldsymbol{H})\) by **Statement 5**. See [8] for more details on (finite, which is not our case) stem groups. **Proposition 1**.: _Let \(G\) be a stem group. If \(d\) is given by a character \(\chi\) such that \(\operatorname{supp}\chi\leq\Gamma_{[z]}\) for some \(z\in Z(G)\), then \(d\in\operatorname{Der}_{G^{\prime}}\)._ Central derivations were introduced in [3] as operators given on the group algebra generators \(g\in G\), with a homomorphism \(\tau:G\to(\mathbb{C},+)\) and a central element \(z\in Z(G)\), by the formula \[d_{\tau,z}:g\mapsto\tau(g)gz.\] Recall that [3, Proposition 6] shows that central derivations form a Lie subalgebra in \(\operatorname{Der}(G)\) (denoted \(\operatorname{ZDer}(G)\)). **Proposition 2**.: _Let \(G\) be NOT a stem group. Then there is an "induced" non-trivial grading of \(\operatorname{ZDer}(G)\) with \(G/G^{\prime}\)._ Note that if \(G\) is a stem group, _the grading is trivial_, i.e. not a grading at all in our terms. Another example of an induced grading follows. Let \(a\in G\). Recall that a derivation \(d_{a}\) is called **inner** if for any \(x\in\mathbb{C}[G]\) \[d_{a}(x)=[x,a]=xa-ax\] Let InnerDer be the set of all derivations of the form (\(a_{1},\dots,a_{n}\in\mathbb{C};y_{1},\dots,y_{n}\in G\)) \[G\ni x\mapsto a_{1}[x,y_{1}]+\dots+a_{n}[x,y_{n}]=[x,a_{1}y_{1}+\dots+a_{n}y_{n}]\] A direct calculation shows that InnerDer is an ideal in \(\operatorname{Der}\) (i.e.
for any \(d\in\operatorname{InnerDer}\) and for any \(\partial\in\operatorname{Der}\): \([d,\partial],[\partial,d]\in\operatorname{InnerDer}\); recall that \([d,\partial](x)=d(\partial(x))-\partial(d(x))\)). Therefore, there exists the factor-algebra \(\operatorname{OuterDer}:=\operatorname{Der}/\operatorname{InnerDer}\). **Corollary 4**.: _If \(G\) is NOT a stem group, there is an induced (non-trivial) grading of \(\operatorname{OuterDer}\) with \(G/G^{\prime}\) of the form_ \[\operatorname{OuterDer}=\bigoplus_{k\in G/G^{\prime}}\operatorname{Der}_{k}/ \operatorname{InnerDer}_{k},\] _where \(\operatorname{InnerDer}_{k}:=\operatorname{Der}_{k}\cap\operatorname{InnerDer}\)._
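As a small computational companion to the Heisenberg example, the following pure-Python sketch checks Statement 5 numerically by encoding the matrices of Def 8 as integer triples \((a,b,c)\); the encoding and the random sampling are illustrative choices, not constructions from the paper.

```python
# Sketch: numerically checking Statement 5 for the discrete Heisenberg group.
import random

def mul(p, q):
    (a, b, c), (x, y, z) = p, q
    return (a + x, b + y, c + z + a * y)   # matrix product from Def 8

def inv(p):
    a, b, c = p
    return (-a, -b, a * b - c)             # inverse from Def 8

def comm(p, q):                            # p q p^-1 q^-1
    return mul(mul(p, q), mul(inv(p), inv(q)))

random.seed(0)
for _ in range(1000):
    p = tuple(random.randint(-5, 5) for _ in range(3))
    q = tuple(random.randint(-5, 5) for _ in range(3))
    a, b, c = comm(p, q)
    assert (a, b) == (0, 0)                # commutators are central (Statement 5)

# the quotient map H -> H/H' ~ Z (+) Z simply forgets the corner entry c,
# which is the isomorphism psi behind Corollary 3
print("all sampled commutators lie in the centre; (a,b,c) |-> (a,b)")
```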
2308.15889
Sorting Strategies for Interactive Conflict Resolution in ASP
Answer set programs in practice are often subject to change. This can lead to inconsistencies in the modified program due to conflicts between rules, which result in the derivation of strongly complementary literals. To facilitate the maintenance of consistency in answer set programs, in this paper we continue work on a recently presented framework that implements interactive conflict resolution by extending the bodies of conflicting rules by suitable literals, so-called $\lambda$-extensions. More precisely, we present strategies to choose $\lambda$-extensions that allow for resolving several conflicts at a time, in an order that aims at minimizing (cognitive) effort. In particular, we present a graphical representation of the connections between conflicts and their possible solutions. Such a representation can be utilized to efficiently guide the user through the conflict resolution process by displaying conflicts and suggesting solutions in a suitable order.
Andre Thevapalan, Gabriele Kern-Isberner
2023-08-30T09:04:55Z
http://arxiv.org/abs/2308.15889v1
# Sorting Strategies for Interactive Conflict Resolution in ASP ###### Abstract Answer set programs in practice are often subject to change. This can lead to inconsistencies in the modified program due to conflicts between rules, which result in the derivation of strongly complementary literals. To facilitate the maintenance of consistency in answer set programs, in this paper we continue work on a recently presented framework that implements interactive conflict resolution by extending the bodies of conflicting rules by suitable literals, so-called _\(\lambda\)-extensions_. More precisely, we present strategies to choose \(\lambda\)-extensions that allow for resolving several conflicts at a time, in an order that aims at minimizing (cognitive) effort. In particular, we present a graphical representation of the connections between conflicts and their possible solutions. Such a representation can be utilized to efficiently guide the user through the conflict resolution process by displaying conflicts and suggesting solutions in a suitable order. ## 1 Introduction Answer set programming provides valuable features for usage in real-world applications where complex decisions have to be made, thanks to its declarative nature and the availability of both strong negation and default negation. Such programs, though, are often subject to change. Adding rules to an answer set program can, however, potentially lead to inconsistency due to the activation of conflicting rules, i.e., rules with complementary literals in their heads. The approach in [10] deals with inconsistency caused by the derivation of strongly complementary literals. For this purpose, the notion of \(\lambda\)-extensions for conflicting rules has been introduced, which enables an interactive conflict resolution process in which a knowledge expert can help restore the consistency of an updated program in a professionally adequate way. A \(\lambda\)-extension is a set of (default) literals with which the body of a conflicting rule can be extended; this constrains the rule's applicability and thus resolves the conflict. However, the paper [10] focuses on conflicts between two rules only. But conflicts can involve more than two rules, and solutions to one conflict can affect solutions to other conflicts. It is clear that no new conflicts may arise by extending rule bodies. So we might expect even synergetic positive effects when considering all interactions between conflicts in a program at the same time and finding a clever order in which to solve the conflicts. This is exactly the topic of this paper. In this work, we extend the approach of [10] by first defining a graph that shows the connections between the conflicts and their solutions, thereby embedding the conflicts of a program and their possible solutions within the overall context of the program's conflict resolution. This graphical representation of possible solutions is then utilized to define suitable orders over conflicts and the respective \(\lambda\)-extensions, so that one solution can help solve subsequent other conflicts. In particular, the strategies presented in this paper can also help shrink the (sometimes large) sets of possible \(\lambda\)-extensions by choosing extensions that are involved in more than one conflict. In this way, the cognitive burden for knowledge experts during the conflict resolution process can also be reduced.
The main contributions of this paper can be summarized as follows: * We introduce a suitable graph structure named \(\lambda\)-graphs for \(\lambda\)-extensions to display the relationships between conflicts w.r.t. their possible solutions. * We show how \(\lambda\)-graphs provide the necessary, syntax-based information to find suitable orders over conflicts. * Utilizing \(\lambda\)-graphs, we furthermore explain how one can obtain an order over the possible solutions of a conflict. * Based on these results, we present an explicit sorting strategy that defines an order over conflicts and their solutions for the application in a conflict resolution framework as proposed in [9, 10]. This paper begins by laying out the necessary preliminaries in Section 2. Section 3 provides further terminology regarding conflict resolution, which is then used in Section 4 to construct \(\lambda\)-graphs that provide crucial information regarding the connections between conflicts and \(\lambda\)-extensions. Based on these results, a sorting strategy is proposed in Section 5.1 that defines an order over conflicts and their possible solutions by taking their relationships among each other into account. Section 6 gives a brief overview of related work. We conclude this paper by summarizing our findings in Section 7 and briefly outlining possible future work. ## 2 Preliminaries In this paper, we look at non-disjunctive _extended logic programs_ (ELPs) [5]. An ELP is a finite set of rules over a set \(\mathcal{A}\) of propositional atoms. A literal \(L\) is either an atom \(A\) (_positive literal_) or a negated atom \(\overline{A}\) (_negative literal_). For a literal \(L\), the _strongly complementary_ literal \(\overline{L}\) is \(\overline{A}\) if \(L=A\) and \(A\) otherwise. A _default-negated literal_ \(L\), called a _default literal_, is written as \(\sim\!L\). Given a set \(S\) of literals, we say a literal \(L\) _is true in \(S\)_ (symbolically \(S\vDash L\)) iff \(L\in S\), and \(\sim\!L\) is true in \(S\) (symbolically \(S\vDash\sim\!L\)) iff \(L\notin S\). A set of literals is _inconsistent_ if it contains strongly complementary literals. We are now ready to specify the form of ELPs. A _rule_ \(r\) is of the form \[L_{0}\!\leftarrow\!L_{1},\ldots,L_{m},\sim\!L_{m+1},\ldots,\sim\!L_{n}. \tag{1}\] with literals \(L_{0},\ldots,L_{n}\) and \(0\leq m\leq n\). The literal \(L_{0}\) is the _head_ of \(r\), denoted by \(H(r)\), and \(\{L_{1},\ldots,L_{m},\sim\!L_{m+1},\ldots,\sim\!L_{n}\}\) is the _body_ of \(r\), denoted by \(B(r)\).
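To fix the preliminaries computationally, the following is a minimal sketch of one possible encoding of ELP rules together with the body-satisfaction relation, including a check of the conflict notion formalized in Definition 1 below; the string-based representation is an illustrative assumption, not part of the framework of [9, 10].

```python
# Sketch: encoding ELP rules, the satisfaction relation S |= L, and the
# conflict check of Definition 1 (below). Strong negation is a leading "-"
# (so "-a" stands for the literal a-bar); default negation is modelled by
# keeping B(r) split into two sets.
from dataclasses import dataclass

def complement(lit: str) -> str:
    """Strongly complementary literal: a <-> -a."""
    return lit[1:] if lit.startswith("-") else "-" + lit

@dataclass(frozen=True)
class Rule:
    head: str
    pos: frozenset = frozenset()   # literals L_1, ..., L_m
    neg: frozenset = frozenset()   # literals under default negation ~L

def body_satisfied(S: set, r: Rule) -> bool:
    """S satisfies B(r): every positive body literal in S, no default one."""
    return r.pos <= S and not (r.neg & S)

def conflicting(r1: Rule, r2: Rule) -> bool:
    """Strongly complementary heads and jointly satisfiable bodies."""
    if r1.head != complement(r2.head):
        return False
    required = r1.pos | r2.pos
    consistent = not any(complement(l) in required for l in required)
    return consistent and not (required & (r1.neg | r2.neg))

# r1: a <- b, ~c.   r2: -a <- b.   (the first conflict of the running example)
r1 = Rule("a", frozenset({"b"}), frozenset({"c"}))
r2 = Rule("-a", frozenset({"b"}))
print(conflicting(r1, r2))  # True: any S containing b but not c fires both
```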
## 3 Conflicts and \(\lambda\)-extensions In [9, 10], a framework is outlined that supports knowledge experts in restoring consistency interactively in answer set programs. To this aim, the bodies of rules involved in a conflict are extended suitably by known literals, so-called \(\lambda\)-extensions. In this way, conflicts that cause inconsistency can be resolved. In particular, a method is provided that, for every conflict, generates all possible \(\lambda\)-extensions, from which the expert can then choose the most adequate ones. We briefly recall the basic techniques from [9, 10], in particular the terms _conflicts_ and _\(\lambda\)-extensions_, and illustrate them with the following (running) example. **Example 1**.: _Let \(\mathcal{P}_{ex}\) be specified by the following rules:_ \[r_{1}\colon a\!\leftarrow\!b,\!\sim\!c. r_{2}\colon\overline{a}\!\leftarrow\!b. r_{3}\colon x\!\leftarrow\!d,e,f,\!\sim\!c. r_{4}\colon\overline{x}\!\leftarrow\!d,e.\] \[r_{5}\colon y\!\leftarrow\!g,h,f. r_{6}\colon\overline{y}\!\leftarrow\!g. r_{7}\colon z\!\leftarrow\!j,k,\!\sim\!l. r_{8}\colon\overline{z}\!\leftarrow\!j,\!\sim\!l.\] \[r_{9}\colon w\!\leftarrow\!f,m,n. r_{10}\colon\overline{w}\!\leftarrow\!m. r_{11}\colon\overline{w}\!\leftarrow\!n. r_{12}\colon p\!\leftarrow\!o,h,f,\!\sim\!q.\] \[r_{13}\colon\overline{p}\!\leftarrow\!o,\!\sim\!q. r_{14}\colon u\!\leftarrow\!s. r_{15}\colon\overline{u}\!\leftarrow\!s,\overline{t},h. r_{16}\colon\overline{u}\!\leftarrow\!s,t,h.\] Note that \(\mathcal{P}_{ex}\) in Example 1 is trivially consistent because it is a so-called _program core_ [10], i.e., it has no facts. Program cores are usable with different instances by expanding the program by a corresponding set \(\mathcal{F}\) of facts (instance data). A typical example would be a medical expert system where the instance data are provided by the patients and therefore cannot be part of the generic program. **Example 2**.: _Consider the following program \(\mathcal{P}\), describing the symptoms of and treatments for two diseases disA and disB:_ \[\text{disA}\leftarrow\text{sympM},\text{sympN}.\qquad\text{disB}\!\leftarrow\! \text{sympM},\text{sympO}.\qquad\text{treatX}\!\leftarrow\!\text{disA}.\qquad \text{treatY}\!\leftarrow\!\text{disB}.\] _Program \(\mathcal{P}\) is a program core. Adding the instance data \(\mathcal{F}=\{\text{sympM},\text{sympO}\}\) as patient data to \(\mathcal{P}\) yields the unique answer set \(\mathcal{F}\cup\{\text{disB},\text{treatY}\}\), describing that the corresponding patient has condition disB and should be treated with treatment treatY._ However, it is easy to see that there exist multiple instance data \(\mathcal{F}\) for \(\mathcal{P}_{ex}\) that would yield an inconsistent program \(\mathcal{P}_{ex}\cup\mathcal{F}\) because of rules in \(\mathcal{P}_{ex}\) that, if their bodies are satisfied simultaneously, derive strongly complementary literals. **Example 3** (Example 1 contd.).: _Let \(\mathcal{F}_{ex}\) be a set of instance data for \(\mathcal{P}_{ex}\) such that \(b\in\mathcal{F}_{ex}\) and \(c\not\in\mathcal{F}_{ex}\). Then in \(\mathcal{P}_{ex}\cup\mathcal{F}_{ex}\), the bodies of both \(r_{1}\) and \(r_{2}\) are simultaneously satisfied. Hence, \(\mathcal{P}_{ex}\cup\mathcal{F}_{ex}\) is inconsistent, as both a and \(\overline{a}\) are derivable._ Rule sets like \(\{r_{1},r_{2}\}\) of the previous example are called _conflicting rules_ or _conflicts_. **Definition 1** (Conflict and conflict group (cf.
[9])).: _In a logic program \(\mathcal{P}\), two rules \(r,r^{\prime}\) are conflicting iff there exists a set \(S\) of literals such that \(S\) satisfies \(B(r)\) and \(B(r^{\prime})\) simultaneously and the head literals \(H(r)\) and \(H(r^{\prime})\) are strongly complementary. A conflict is a set \(\gamma=\{r,r^{\prime}\}\) of two conflicting rules \(r,r^{\prime}\). For a rule \(r\), the corresponding conflict group \(\Gamma(r)\) is the set of all conflicts \(\gamma\) in \(\mathcal{P}\) with \(r\in\gamma\). The size of a conflict group is the number of different conflicts in the conflict group._ Intuitively, two rules are conflicting if their bodies can be true simultaneously and their head literals are contradictory. The following example illustrates conflict groups and their sizes in \(\mathcal{P}_{ex}\). **Example 4** (Example 1 contd.).: _The conflict groups for \(r_{1}\) and \(r_{4}\) are the sets \(\Gamma(r_{1})=\{\{r_{1},r_{2}\}\}\) and \(\Gamma(r_{4})=\{\{r_{3},r_{4}\}\}\), respectively. Both groups have size 1, while the conflict group \(\Gamma(r_{14})=\{\{r_{14},r_{15}\},\)\(\{r_{14},r_{16}\}\}\) has size 2._ If both rules of a conflict in a program core are simultaneously satisfiable, the program is potentially inconsistent [10]. In the rest of this paper, we deal with programs that are inconsistent due to _conflicts_, and we require a program to be coherent, that is, each given program has at least one answer set (cf. [2]). In order to guarantee that a program has an answer set whenever it is extended by consistent instance data, all conflicts have to be resolved. For that, we use an approach called _\(\lambda\)-extensions_. In the following, we will present the main aspects of \(\lambda\)-extensions and the properties that are relevant to this work. For more details, we refer the reader to [10]. **Definition 2** (\(\lambda\)-extensions, cf. [10]).: _Suppose a program \(\mathcal{P}\) and a rule \(r\in\mathcal{P}\) that is in conflict with a non-empty set of rules \(R\subset\mathcal{P}\). Let \(X\) be a set of (default) literals built from atoms occurring in \(R\), and let \(r^{\prime\prime}\) be a rule obtained from \(r\) where \(B(r)\) is extended by \(X\), viz., \(r^{\prime\prime}\): \(H(r)\!\leftarrow\!B(r)\cup X\). This set \(X\) is a (conflict-resolving) \(\lambda\)-extension for rule \(r\) if for every rule \(r^{\prime}\in R\) it holds that \(r^{\prime}\) and \(r^{\prime\prime}\) are no longer conflicting. Such a rule \(r^{\prime\prime}\) is called a \(\lambda\)-extended rule w.r.t. \(X\). A \(\lambda\)-extension \(X\) for \(r\) is minimal iff there exists no set \(X^{\prime}\subset X\) such that \(X^{\prime}\) is also a \(\lambda\)-extension for \(r\). A conflict group \(\Gamma(r)\) is called resolvable if there exists a rule \(r^{\prime}\) and a \(\lambda\)-extension \(X\) for \(r^{\prime}\) such that replacing \(r^{\prime}\) in every conflict of \(\Gamma(r)\) by the corresponding \(\lambda\)-extended rule w.r.t. \(X\) leads to all conflicts in \(\Gamma(r)\) being resolved. Rule \(r^{\prime}\) is then called the representative of \(\Gamma(r)\)._ Intuitively, a \(\lambda\)-extension \(X\) for a representative \(r\) of a resolvable conflict group \(\Gamma(r)\) is a set of (default) literals such that if the body of \(r\) is expanded by \(X\), all previous conflicts of \(r\) are resolved. In [10], the authors show that a conflict \(\{r,r^{\prime}\}\) can be resolved iff there exists at least one atom in \(B(r^{\prime})\) that is not in \(B(r)\), or vice versa.
They also demonstrate that resolving each conflict in a program core \(\mathcal{P}\) using \(\lambda\)-extensions yields a _uniformly non-contradictory_ program core \(\mathcal{P}^{\star}\), meaning that the program core can be used with any set of consistent instance data that consists of literals that only appear in rule bodies of \(\mathcal{P}\). We will illustrate the workings of \(\lambda\)-extensions in the following example. **Example 5** (Example 4 contd.).: _Consider the conflict group \(\Gamma(r_{14})\) from Example 1, which consists of the conflicts \(\{r_{14},r_{15}\}\) and \(\{r_{14},r_{16}\}\). For these conflicts, we get the \(\lambda\)-extensions \(\{\sim\!h\}\), \(\{\overline{h}\}\) and \(\{\sim\!t,\sim\!\overline{t}\}\) as possible extensions for the rule body of \(r_{14}\). This means both conflicts of \(r_{14}\) can be solved by replacing \(r_{14}\) in \(\mathcal{P}_{ex}\) by one of the following rules:_ \[r^{\prime}_{14}\colon u\!\leftarrow\!s,\sim\!h. r^{\prime\prime}_{14}\colon u\!\leftarrow\!s,\overline{h}. r^{\prime\prime\prime}_{14}\colon u\!\leftarrow\!s,\sim\!\overline{t},\sim\!t.\] Note that no additional conflicts are introduced, as the premise of \(r\) is solely extended, yielding a more specific condition. However, not every conflict group of a program is resolvable. The following example shows that in order to utilize conflict groups for the resolution of conflicts, their proper selection is a crucial step. \begin{table} \begin{tabular}{l l l l l l} \hline \hline _Group_ & _Representative_ & _Conflicts_ & \(\lambda\)_-Ext._ & _Node_ & _Cliques_ \\ \hline \(\Gamma(r_{2})\) & \(r_{2}\) & \(\{r_{1},r_{2}\}\) & \(\{c\}\) & \((\Gamma(r_{2}),1)\) & \(Q_{3}\) \\ \(\Gamma(r_{4})\) & \(r_{4}\) & \(\{r_{3},r_{4}\}\) & \(\{\sim\!f\},\{c\}\) & \((\Gamma(r_{4}),1)\) & \(Q_{2}\),\(Q_{3}\) \\ \(\Gamma(r_{6})\) & \(r_{6}\) & \(\{r_{5},r_{6}\}\) & \(\{\sim\!h\},\{\sim\!f\}\) & \((\Gamma(r_{6}),1)\) & \(Q_{1}\),\(Q_{2}\) \\ \(\Gamma(r_{8})\) & \(r_{8}\) & \(\{r_{7},r_{8}\}\) & \(\{\sim\!k\}\) & \((\Gamma(r_{8}),1)\) & \(Q_{4}\) \\ \(\Gamma(r_{10})\) & \(r_{10}\) & \(\{r_{9},r_{10}\}\) & \(\{\sim\!f\}\) & \((\Gamma(r_{10}),1)\) & \(Q_{1}\) \\ \(\Gamma(r_{11})\) & \(r_{11}\) & \(\{r_{9},r_{11}\}\) & \(\{\sim\!f\}\) & \((\Gamma(r_{11}),1)\) & \(Q_{1}\) \\ \(\Gamma(r_{13})\) & \(r_{13}\) & \(\{r_{12},r_{13}\}\) & \(\{\sim\!h\},\{\sim\!f\}\) & \((\Gamma(r_{13}),1)\) & \(Q_{1}\),\(Q_{2}\) \\ \(\Gamma(r_{14})\) & \(r_{14}\) & \(\{r_{14},r_{15}\},\{r_{14},r_{16}\}\) & \(\{\sim\!h\},\{\sim\!\overline{t},\sim\!t\}\) & \((\Gamma(r_{14}),2)\) & \(Q_{2}\),\(Q_{5}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Components of the \(\lambda\)-graph **Example 6** (Example 4 contd.).: _For rules \(r_{1}\) and \(r_{2}\) it holds that \(\Gamma(r_{1})=\Gamma(r_{2})=\{\{r_{1},r_{2}\}\}\). As \(B(r_{2})\subseteq B(r_{1})\) holds, \(r_{1}\) cannot be picked as a representative. However, \(B(r_{1})\backslash B(r_{2})=\{\sim\!\!c\}\) holds, and thus the \(\lambda\)-extension \(\{c\}\) resolves the conflict of both conflict groups. Similarly, regarding \(\Gamma(r_{4})\), we get the \(\lambda\)-extensions \(\{\overline{f}\}\), \(\{\sim\!\!f\}\), and \(\{c\}\) for \(r_{4}\). Note that conflict group \(\Gamma(r_{9})\) has no possible representative, as there does not exist any \(\lambda\)-extension for \(r_{9}\) that solves all conflicts in \(\Gamma(r_{9})\).
It is therefore necessary to pick the conflict groups \(\Gamma(r_{10})\) and \(\Gamma(r_{11})\), which are resolvable via the representatives \(r_{10}\) and \(r_{11}\), respectively. Table 1 shows the \(\lambda\)-extensions of the different resolvable conflict groups of \(\mathcal{P}_{ex}\)._ In the rest of this paper, for any atom \(a\), we omit the \(\lambda\)-extension that contains \(\overline{a}\) whenever \(\sim\!\!a\) and \(\overline{a}\) can both be used, because \(\sim\!\!a\) is more cautious. In Example 6, we henceforth state only \(\{c\}\) and \(\{\sim\!\!f\}\) as the \(\lambda\)-extensions for \(r_{4}\). Furthermore, note that conflicts like \(\{a\!\leftarrow\!b.,\overline{a}\!\leftarrow\!b.\}\) require an expansion of both rules' bodies by complementary literals in order to be resolved. We argue that suitable expansions for such conflicts can only be determined by the knowledge expert. Thus, conflicts of this type will not be considered in this paper. ## 4 Relationship between \(\lambda\)-extensions Naturally, a practical implementation of the interactive framework for conflict resolution, as presented in [9, 10], has to provide a proper workflow to resolve each conflict and suggest solutions in a suitable order. We propose a syntax-based approach that considers the different connections between possible solutions in order to condense the resolution of multiple conflicts. For this reason, we introduce _\(\lambda\)-graphs_ and corresponding _clique covers_ that can be used to point out such connections. Based on these results, we demonstrate in Section 5 how \(\lambda\)-graphs and clique covers can be used to define explicit strategies that specify in which order conflicts and solutions should be presented to the knowledge expert. For resolving the conflicts in a program thoroughly, we have to make sure that each rule that is involved in a conflict is taken into account. This idea is formalized by conflict group covers. **Definition 3** (Conflict group cover).: _Let \(\mathcal{P}_{\mathit{cf}}\) be the set of all rules in \(\mathcal{P}\) that are part of a conflict, and let \(\mathbf{\Gamma}\) be a set of resolvable conflict groups. Then, \(\mathbf{\Gamma}\) is a conflict group cover of \(\mathcal{P}\) if each rule in \(\mathcal{P}_{\mathit{cf}}\) appears in at least one conflict group of \(\mathbf{\Gamma}\). The set of all inclusion-minimal conflict group covers of a logic program \(\mathcal{P}\) is denoted by \(\mathit{CCG}(\mathcal{P})\)._ A conflict group cover \(\mathbf{\Gamma}\) of \(\mathcal{P}\) therefore implies, via the representative of each conflict group in \(\mathbf{\Gamma}\), which rules in \(\mathcal{P}\) shall be modified. Notice that a conflict group cover \(\mathbf{\Gamma}\) involves a sufficient set of rules that have to be modified to resolve every conflict. **Proposition 1**.: _Given a conflict group cover \(\mathbf{\Gamma}\), expanding the body of each rule \(r\) in \(\{r\mid\text{ there is }\Gamma(r^{\prime})\in\mathbf{\Gamma}\) such that \(r\) is a representative of \(\Gamma(r^{\prime})\}\) by one of its respective \(\lambda\)-extensions yields a consistent program._ Proof.: Let \(\mathbf{\Gamma}\) be a conflict group cover of a program \(\mathcal{P}\) with conflicts, and let \(\mathcal{P}_{\mathit{cf}}\) be the set of conflicting rules in \(\mathcal{P}\).
By Definition 3, a set \(\mathbf{\Gamma}\in\mathit{CCG}(\mathcal{P})\) is a set of conflict groups such that every rule in \(\mathcal{P}_{\mathit{cf}}\) appears in at least one conflict group \(\Gamma\in\mathbf{\Gamma}\). Since every conflict group in \(\mathbf{\Gamma}\) is resolvable by Definition 3, there exists at least one \(\lambda\)-extension for the representative of each conflict group in \(\mathbf{\Gamma}\). This in turn means that applying a respective \(\lambda\)-extension to the representative of each conflict group in \(\mathbf{\Gamma}\) resolves every conflict in \(\mathcal{P}\), thus yielding a conflict-free program \(\mathcal{P}^{\prime}\). This result implies that, regarding conflicting rules, the knowledge expert initially has to decide which rules are allowed to be modified and which rules must stay unaffected. This makes it possible to determine all appropriate conflict groups and, consequently, all appropriate conflict group covers. Choosing the most suitable cover then ensures that a sufficient, and moreover the most suitable, set of rules can be modified using \(\lambda\)-extensions to obtain a uniformly non-contradictory program. We can now define \(\lambda\)-graphs w.r.t. a conflict group cover \(\mathbf{\Gamma}\), making use of weights for the nodes and labels for the edges to store information that is crucial for the resolution process. **Definition 4** (\(\lambda\)-graph).: _Given a conflict group cover \(\mathbf{\Gamma}\in\mathit{CCG}(\mathcal{P})\) of a logic program \(\mathcal{P}\), the \(\mathbf{\Gamma}\)-induced \(\lambda\)-graph \(G(\mathbf{\Gamma})\) is a tuple \((V,E)\) of weighted nodes \(V\) and labeled edges \(E\), where \(V\) contains a weighted node \((\Gamma,w_{\Gamma})\) for each conflict group \(\Gamma\) in \(\mathbf{\Gamma}\) with size \(w_{\Gamma}\), and \(E\) contains a labeled edge \((\Gamma,\Gamma^{\prime},\lambda)\) whenever the representatives of two different conflict groups \(\Gamma\) and \(\Gamma^{\prime}\) have a common \(\lambda\)-extension \(\lambda\); for any \(\lambda\)-extension \(\lambda\) that is an extension for the representative of only one conflict group \(\Gamma\), there exists a self-loop \((\Gamma,\Gamma,\lambda)\)._ Since every conflict group in a program \(\mathcal{P}\) is represented by a weighted node, and each node has either an edge to itself or to another node that shares a common \(\lambda\)-extension, every conflict is considered in a \(\lambda\)-graph. We now illustrate \(\mathbf{\Gamma}\)-induced \(\lambda\)-graphs using the running example. **Example 7** (Example 1 contd.).: _Suppose \(\mathcal{P}_{\mathit{ex}}=\mathcal{P}_{\mathit{cf}}\) and the conflict group cover \(\mathbf{\Gamma}_{\mathit{ex}}=\{\Gamma(r_{2}),\,\Gamma(r_{4}),\)\(\Gamma(r_{6}),\)\(\Gamma(r_{8}),\)\(\Gamma(r_{10}),\)\(\Gamma(r_{11}),\)\(\Gamma(r_{13}),\)\(\Gamma(r_{14})\}\in\mathit{CCG}(\mathcal{P}_{\mathit{ex}})\) are given. Table 1 shows the conflict groups in \(\mathbf{\Gamma}_{\mathit{ex}}\) and the \(\lambda\)-extensions of the corresponding representatives. The \(\mathbf{\Gamma}_{\mathit{ex}}\)-induced \(\lambda\)-graph \(G(\mathbf{\Gamma}_{\mathit{ex}})=(V,E)\) is obtained in the following way: for each conflict group \(\Gamma\) and its size \(w_{\Gamma}\), we define \((\Gamma,w_{\Gamma})\) as the corresponding weighted node._
Then, \(V\) consists of all such weighted nodes, viz., \(V=\{(\Gamma(r_{2}),1),\)\((\Gamma(r_{4}),1),\)\((\Gamma(r_{6}),1),\)\((\Gamma(r_{8}),1),\)\((\Gamma(r_{10}),1),\)\((\Gamma(r_{11}),1),\)\((\Gamma(r_{13}),1),\)\((\Gamma(r_{14}),2)\}\). The set \(E\) consists of all labeled edges \((\Gamma,\Gamma^{\prime},\mathit{lb})\) such that \(\Gamma\) and \(\Gamma^{\prime}\) are pairs of different nodes in \(V\) that share the common label \(\mathit{lb}\), or pairs of identical nodes \(\Gamma\) if the extension corresponding to \(\mathit{lb}\) is only a solution for \(\Gamma\), viz., \(E=\{\)\((\Gamma(r_{2}),\Gamma(r_{4}),\{c\}),\)\((\Gamma(r_{8}),\Gamma(r_{8}),\{\sim\!k\})\), \((\Gamma(r_{14}),\Gamma(r_{14}),\{\sim\!t,\sim\!\overline{t}\})\)\(\}\cup F\cup H\) where \(F=\{(\Gamma(r),\Gamma(r^{\prime}),\{\sim\!f\})\mid r,r^{\prime}\in\{r_{4},r_{6},r_{10},r_{11},r_{13}\},r\neq r^{\prime}\}\) and \(H=\{(\Gamma(r),\Gamma(r^{\prime}),\{\sim\!h\})\mid r,r^{\prime}\in\{r_{6},r_{13},r_{14}\},r\neq r^{\prime}\}\)._ _The graphical representation of the resulting \(\lambda\)-graph \(G(\mathbf{\Gamma}_{\mathit{ex}})\) is displayed in Figure 1._ The graph illustrates several complete subgraphs which are sets of nodes where all nodes are connected to each other by an edge with the same label. We call such subgraphs \(\lambda\)_-cliques_. **Definition 5** (\(\lambda\)-clique).: _Suppose a logic program \(\mathcal{P}\) and a conflict group cover \(\mathbf{\Gamma}\in\mathit{CCG}(\mathcal{P})\). A \(\lambda\)-clique w.r.t. a label lb in a \(\lambda\)-graph \(G(\mathbf{\Gamma})=(V,E)\) is a maximal subgraph \(G(\mathbf{\Gamma},\mathit{lb})=(V^{\prime},E^{\prime})\) of \(G(\mathbf{\Gamma})\) where (1) \(E^{\prime}\subseteq E\) contains all edges in \(E\) with label lb, and (2) \(V^{\prime}\subseteq V\) contains every node that is connected to an edge in \(E^{\prime}\). We define the weight of a \(\lambda\)-clique as the sum of the weights of all nodes that occur in the \(\lambda\)-clique. The set of all \(\lambda\)-cliques in a \(\lambda\)-graph \(G(\mathbf{\Gamma})\) is denoted by \(\mathit{CLQ}(\mathbf{\Gamma})\)._ In the following, given a set of edges \(E\), its subset of all edges with label \(\mathit{lb}\) is denoted by \(E^{\mathit{lb}}\). Since a \(\lambda\)-clique \(G(\mathbf{\Gamma},\mathit{lb})\) contains all edges of \(E^{\mathit{lb}}\) and all their connected nodes, every \(\lambda\)-clique is a complete graph. **Example 8** (Example 7 contd.).: \(G(\mathbf{\Gamma}_{\mathit{ex}})=(V,E)\) _contains the following five \(\lambda\)-cliques:_ \(Q_{1}=(\{(\Gamma(r_{6}),1),(\Gamma(r_{13}),1),(\Gamma(r_{14}),2)\},E^{\sim h},\{\sim\!h\})\) _with weight 4_ \(Q_{2}\) = \((\{(\Gamma(r_{4}),1),(\Gamma(r_{6}),1),(\Gamma(r_{10}),1),(\Gamma(r_{11}),1),(\Gamma(r_{13}),1)\},E^{\sim f},\{\sim\!f\})\) with weight 5 \(Q_{3}\) = \((\{(\Gamma(r_{2}),1),(\Gamma(r_{4}),1)\},E^{c},\{c\})\) with weight 2 \(Q_{4}\) = \((\{(\Gamma(r_{8}),1)\},E^{\sim k},\{\sim\!k\})\) with weight 1 \(Q_{5}\) = \((\{(\Gamma(r_{14}),2)\},E^{\sim t,\sim\overline{t}},\{\sim\!t,\sim\!\overline{t}\})\) with weight 2 _Table 1 shows which conflicts are involved in the different cliques._ As all conflicts represented in a \(\lambda\)-clique \((V^{\prime},E^{\prime})\) with label \(\mathit{lb}\) share the \(\lambda\)-extension \(\mathit{lb}\), its weight \(\omega\) indicates that \(\omega\) different conflicts can be solved by extending the body \(B(r)\) of each representative rule \(r\) with \((\Gamma(r),w)\in V^{\prime}\) by \(\mathit{lb}\).
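The construction of \(\lambda\)-cliques from a \(\lambda\)-graph can be made concrete with a short script. The following is a minimal sketch, not code from [9, 10]: it assumes each conflict group is given abstractly by its weight and the \(\lambda\)-extensions of its chosen representative (the data of Table 1), encodes extensions as plain strings (with `~t,~t_bar` standing for the combined extension \(\{\sim\!t,\sim\!\overline{t}\}\)), and reproduces the cliques and weights of Example 8.

```python
from itertools import combinations

# Conflict groups of the running example, keyed by representative rule:
# weight = number of conflicts the group covers, extensions = the
# lambda-extensions of the representative (Table 1).
groups = {
    "r2":  {"weight": 1, "extensions": {"c"}},
    "r4":  {"weight": 1, "extensions": {"c", "~f"}},
    "r6":  {"weight": 1, "extensions": {"~f", "~h"}},
    "r8":  {"weight": 1, "extensions": {"~k"}},
    "r10": {"weight": 1, "extensions": {"~f"}},
    "r11": {"weight": 1, "extensions": {"~f"}},
    "r13": {"weight": 1, "extensions": {"~f", "~h"}},
    "r14": {"weight": 2, "extensions": {"~h", "~t,~t_bar"}},
}

def lambda_graph(groups):
    """Labeled edges of the lambda-graph: one edge per pair of conflict groups
    whose representatives share an extension, plus a self-loop when a label
    belongs to exactly one group (Definition 4)."""
    edges = []
    for lb in sorted({lb for g in groups.values() for lb in g["extensions"]}):
        members = sorted(n for n, g in groups.items() if lb in g["extensions"])
        if len(members) == 1:
            edges.append((members[0], members[0], lb))
        edges.extend((u, v, lb) for u, v in combinations(members, 2))
    return edges

def lambda_cliques(groups):
    """Lambda-cliques (Definition 5): for each label, all nodes incident to an
    edge with that label, weighted by the sum of the member node weights."""
    members = {}
    for u, v, lb in lambda_graph(groups):
        members.setdefault(lb, set()).update({u, v})
    return {lb: (sorted(ms), sum(groups[n]["weight"] for n in ms))
            for lb, ms in members.items()}

for lb, (ms, w) in sorted(lambda_cliques(groups).items()):
    print(f"{lb}: {ms} (weight {w})")
# Reproduces Example 8, e.g. the ~f-clique {r4, r6, r10, r11, r13} has weight 5.
```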
To find sets of cliques such that every conflict in a program is considered, we introduce _clique covers_ for \(\lambda\)-graphs. **Definition 6** (Clique cover).: _Suppose a logic program \(\mathcal{P}\) and a set \(\mathbf{Q}\subseteq\mathit{CLQ}(\mathbf{\Gamma})\) of \(\lambda\)-cliques in a graph \(G(\mathbf{\Gamma})=(V,E)\) are given. We say \(\mathbf{Q}\) is a clique cover for \(G(\mathbf{\Gamma})\) if every node in \(V\) appears in at least one clique of \(\mathbf{Q}\). Moreover, \(\mathbf{Q}\) is a minimal clique cover if there is no clique cover \(\mathbf{Q}^{\prime}\) for \(G(\mathbf{\Gamma})\) such that \(|\mathbf{Q}^{\prime}|<|\mathbf{Q}|\)._ A minimal clique cover for a graph \(G(\mathbf{\Gamma})\), therefore, provides us with minimal compositions of cliques where every conflict is considered. For this reason, our approach uses clique covers to determine which conflicts can be solved by the same \(\lambda\)-extensions, which in turn can be used to find a suitable order in which conflicts and their solutions can be suggested to the user. We thus arrive at the following result, which will be useful later on. **Proposition 2**.: _Given a program \(\mathcal{P}\) with conflicts and a conflict group cover \(\mathbf{\Gamma}\in\mathit{CCG}(\mathcal{P})\), a minimal clique cover \(\mathbf{Q}\) for \(G(\mathbf{\Gamma})\) provides a minimal set of \(\lambda\)-extensions \(\mathbf{L}=\{lb\mid(V^{\prime},lb)\in\mathbf{Q}\}\) that is required to obtain a program without conflicts._ Proof.: Let \(\mathbf{\Gamma}\in CCG(\mathcal{P})\) be a conflict group cover of a program \(\mathcal{P}\) with conflicts, \(G(\mathbf{\Gamma})=(V,E)\) the induced \(\lambda\)-graph, and \(\mathbf{Q}\subseteq CLQ(\mathbf{\Gamma})\) a clique cover for \(G(\mathbf{\Gamma})\). By Definition 4, \(V\) contains a node for each conflict group of the conflict group cover \(\mathbf{\Gamma}\). By Definition 3, \(\mathbf{\Gamma}\) covers all conflicting rules of \(\mathcal{P}\). Thus, every conflict in \(\mathcal{P}\) is implicitly represented by at least one node in \(V\). By Definition 5, a \(\lambda\)-clique \((V^{\prime},E^{lb},lb)\) in \(G(\mathbf{\Gamma})\) condenses conflict groups that share a common \(\lambda\)-extension \(lb\), and by Definition 6, each node in \(V\) appears in at least one clique of \(\mathbf{Q}\). Hence, \(\mathbf{Q}\) implies a set of \(\lambda\)-extensions \(\mathbf{L}=\{lb\mid(V^{\prime},lb)\in\mathbf{Q}\}\) that is sufficient to resolve all conflicts in \(\mathcal{P}\), by suitably extending the bodies of the representative rules in the conflict groups. Likewise, if such a clique cover is minimal, \(\mathbf{Q}\) implies a smallest such set of solutions for \(\mathcal{P}\). The notion of minimal cover is illustrated in the following example. **Example 9** (Example 7 contd.).: _The minimal clique cover in \(G(\mathbf{\Gamma}_{ex})\) is the set \(\{Q_{1},Q_{2},Q_{3},Q_{4}\}\). Therefore, the four \(\lambda\)-extensions \(\{c\}\), \(\{\sim\!f\}\), \(\{\sim\!h\}\), and \(\{\sim\!k\}\) suffice to obtain a program without conflicts._ Now we are ready to show how these results provide the crucial basis to define suitable orders over conflicts and \(\lambda\)-extensions for their usage in conflict resolution frameworks. ## 5 Sorting conflicts and \(\lambda\)-extensions In this section we show how \(\lambda\)-graphs and clique covers can be utilized to define strategies that compute (1) an order over all conflict groups of a program, and (2) an order over the \(\lambda\)-extensions of each conflict group.
These orders can then be used to define in which sequence conflicts and their solutions should be presented to the knowledge expert. The goal is to improve the efficiency of the interactive conflict resolution process with the expert and to facilitate the resolution process overall by prioritizing those conflict groups whose solutions can be used for other conflict groups, and thereby resolve the largest number of conflicts simultaneously. For that reason, in this section we provide an example of such a strategy that makes use of the technical notions introduced in the preceding sections. We begin by introducing the notion of _relationships_: We say a \(\lambda\)-clique \(Q\) _is related to another \(\lambda\)-clique \(Q^{\prime}\)_ iff \(Q\) and \(Q^{\prime}\) share a common node. Moreover, we say that a conflict group \(\Gamma\) is _part of a \(\lambda\)-clique \(Q\)_ if the node that corresponds to \(\Gamma\) is in \(Q\). With these conventions, we are now able to define a strategy to sort conflicts and their \(\lambda\)-extensions such that the user can resolve all conflicts in a more suitable way. ### Order strategy The goal of an order strategy is to establish an order in which the conflict groups are presented to the user and to additionally obtain an order for each conflict group that specifies how the respective \(\lambda\)-extensions are suggested. The primary objective is to assist the knowledge expert in efficiently finding the correct resolution for each conflict by preferring solutions that resolve the largest number of conflicts simultaneously. Recall that a clique cover of \(G(\mathbf{\Gamma})\) contains a set of conflict groups that considers all conflicts in \(\mathcal{P}\) and, by definition, also implicitly provides possible solutions for every conflict. The clique cover furthermore specifies explicitly which rules of \(\mathcal{P}\) will be modified. The following strategy will illustrate how these properties can be used to obtain an order over conflict groups and \(\lambda\)-extensions. The first step employs two sorting criteria in lexicographical order. First, a general order over the conflict groups in \(\mathbf{\Gamma}\) is determined by viewing the number of edges of each node in \(G(\mathbf{\Gamma})\) in order to prefer conflict groups with fewer possible solutions. Conflicts with only one possible solution can hereby be dealt with first, which can potentially reduce the complexity of subsequent conflict resolutions. However, since many conflict groups can have the same number of solutions, we refine this order in a subsequent step by taking the corresponding clique weights into account. The second step of the strategy determines an order over \(\lambda\)-extensions for each conflict group. Here again, clique weights are utilized. Consequently, this strategy defines an order over conflict groups and possible solutions where those groups with the fewest possible solutions are presented first and, for each conflict group, those solutions are preferred that can be used to resolve related conflicts in parallel. We now define these steps in more detail. Each step is explained by means of our running example. Step 1a: In the first step, the conflict groups are ordered by the number of cliques they are part of. The user is shown those conflict groups first that are related to the fewest cliques. This way, the knowledge expert is presented with as few choices at a time as possible.
**Example 10** (Example 9 contd.).: _The respective representatives of conflict groups \(\Gamma(r_{2})\), \(\Gamma(r_{8})\), \(\Gamma(r_{10})\), and \(\Gamma(r_{11})\) each have only one possible \(\lambda\)-extension. The representatives of all remaining conflict groups have two. For \(G(\mathbf{\Gamma}_{ex})\), we thus get the preliminary order of conflict groups_ \[\Gamma(r_{2}),\Gamma(r_{8}),\Gamma(r_{10}),\Gamma(r_{11})>_{\mathbf{\Gamma}}\Gamma(r_{4}),\Gamma(r_{6}),\Gamma(r_{13}),\Gamma(r_{14})\] _that says that all conflict groups on the left side should be presented before those on the right._ It is easy to see that even in smaller programs, this kind of order can be too coarse-grained. To order conflict groups that have the same number of possible solutions, we propose the following subsequent step. Step 1b: To refine the order obtained in Step 1a, we use the weight of the cliques. As mentioned before, the weight of a clique represents the number of different conflict groups that can be solved by the \(\lambda\)-extension that is represented by the label of the clique. Therefore, the conflict groups with the same number of solutions should additionally be arranged in descending order by the sum of the weights of all cliques they are part of. Thereby, we provide the knowledge expert with the opportunity to resolve as many conflicts as possible as soon as possible. If there are conflict groups with the same total weight, we simply arrange them in alphanumerical order. **Example 11** (Ex. 10 contd.).: _Conflict group \(\Gamma(r_{2})\) is only part of clique \(Q_{3}\) that has weight 2. Conflict group \(\Gamma(r_{8})\) is only part of clique \(Q_{4}\) with weight 1. Conflict groups \(\Gamma(r_{10})\) and \(\Gamma(r_{11})\) are only part of clique \(Q_{2}\) that has weight 5. For conflict group \(\Gamma(r_{4})\), which is both in \(Q_{2}\) and \(Q_{3}\), we get the total weight of 7 as \(Q_{3}\) has weight 2 and \(Q_{2}\) has weight 5. Likewise for \(\Gamma(r_{6})\) and \(\Gamma(r_{13})\), we get a total weight of 9, and for \(\Gamma(r_{14})\) a total weight of 6. This way, for \(G(\mathbf{\Gamma}_{ex})\) we obtain the following, more specific order:_ \[\Gamma(r_{10})>_{\mathbf{\Gamma}}\Gamma(r_{11})>_{\mathbf{\Gamma}}\Gamma(r_{2})>_{\mathbf{\Gamma}}\Gamma(r_{8})>_{\mathbf{\Gamma}}\Gamma(r_{6})>_{\mathbf{\Gamma}}\Gamma(r_{13})>_{\mathbf{\Gamma}}\Gamma(r_{4})>_{\mathbf{\Gamma}}\Gamma(r_{14})\] Step 2: Step 1 provides us with a suitable order over conflict groups. As conflict groups can have multiple \(\lambda\)-extensions (see Example 5), Step 2 defines how one can obtain an order \(\succ_{\Gamma}\) over all \(\lambda\)-extensions for the representative of each conflict group \(\Gamma\). For that we will use the weight of their respective cliques. That is, the \(\lambda\)-extensions for the representative in each group are ordered by the weight of their respective clique in descending order. If for two extensions the clique weight is identical, again, we sort them in alphanumerical order. **Example 12** (Ex. 11 contd.).: _The representatives of conflict groups \(\Gamma(r_{2})\), \(\Gamma(r_{8})\), \(\Gamma(r_{10})\), and \(\Gamma(r_{11})\) each have their own unique solution, viz., \(\{c\}\), \(\{\sim\!k\}\), \(\{\sim\!f\}\), and \(\{\sim\!f\}\), respectively, so no order has to be determined for them. For the remaining conflict groups, the \(\lambda\)-extensions of the representatives are sorted by the weights of their respective cliques: for \(\Gamma(r_{4})\) we obtain \(\{\sim\!f\}\succ_{\Gamma(r_{4})}\{c\}\) since \(Q_{2}\) has weight 5 and \(Q_{3}\) has weight 2, for \(\Gamma(r_{6})\) and \(\Gamma(r_{13})\) we obtain \(\{\sim\!f\}\succ\{\sim\!h\}\), and for \(\Gamma(r_{14})\) we obtain \(\{\sim\!h\}\succ\{\sim\!t,\sim\!\overline{t}\}\) since \(Q_{1}\) has weight 4 and \(Q_{5}\) has weight 2._
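Steps 1a, 1b and 2 amount to two sort keys over the clique data computed earlier. The sketch below, continuing the hypothetical `groups`/`lambda_cliques` encoding from Section 4, reproduces the orders of Examples 11 and 12; the numeric tie-break on rule indices stands in for the alphanumerical order used in the text.

```python
def order_conflict_groups(groups, cliques):
    """Steps 1a and 1b: sort by the number of cliques a group is part of
    (fewest possible solutions first), then by the total weight of those
    cliques in descending order; remaining ties by rule index."""
    def key(n):
        mine = [w for _, (ms, w) in cliques.items() if n in ms]
        return (len(mine), -sum(mine), int(n[1:]))
    return sorted(groups, key=key)

def order_extensions(node, cliques):
    """Step 2: sort a group's lambda-extensions by clique weight, descending;
    ties broken by label."""
    mine = [(lb, w) for lb, (ms, w) in cliques.items() if node in ms]
    return [lb for lb, _ in sorted(mine, key=lambda t: (-t[1], t[0]))]

cliques = lambda_cliques(groups)
print(order_conflict_groups(groups, cliques))
# ['r10', 'r11', 'r2', 'r8', 'r6', 'r13', 'r4', 'r14']  -- Example 11
print(order_extensions("r14", cliques))
# ['~h', '~t,~t_bar']  -- the weight-4 clique Q1 before the weight-2 clique Q5
```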
Postponing a program modification is, however, expedient at times, as the orders of conflict groups and \(\lambda\)-extensions are only a recommendation based on the syntactic properties of the program; an implementation should therefore provide the means to defer the resolution of certain conflicts. Algorithm 1 summarizes the complete workflow. We recommend that in an actual implementation, the choices stated in lines 3 and 5 should be made in interaction with the expert by presenting the conflicts in an appropriate fashion, since Example 6 shows that choosing the representatives of conflicts is a crucial step that is not straightforward, especially if multiple rules of a conflict are eligible for modification by a \(\lambda\)-extension. An explicit implementation should therefore provide the ability to revise the chosen conflict groups and representatives if the final program is not deemed satisfactory. To conclude this section, we illustrate the workings of this strategy by applying it to the running example. This example will also propose possible ways to provide the knowledge expert with additional information using the \(\lambda\)-graph and simulate a possible line of thought of the expert. **Example 13** (Example 12 contd.).: _Assume that a knowledge expert is assigned to resolve all conflicts in \(\mathcal{P}_{ex}\). According to the conflict group order obtained in Example 11, conflict group \(\Gamma(r_{10})\) is presented first. As stated in Table 1, \(r_{10}\) only has the possible solution \(\{\sim\!f\}\), which the expert accepts. Since the \(\lambda\)-graph shows, via clique \(Q_{2}\), that \(\{\sim\!f\}\) is also a solution for \(\Gamma(r_{11})\), \(\Gamma(r_{4})\), \(\Gamma(r_{6})\), and \(\Gamma(r_{13})\), the expert can be asked whether \(\{\sim\!f\}\) should be applied to their representatives as well. The expert accepts this for \(\Gamma(r_{11})\), \(\Gamma(r_{6})\), and \(\Gamma(r_{13})\), resolving these conflicts in parallel, but declines for \(\Gamma(r_{4})\). Next, \(\Gamma(r_{2})\) is resolved by its only solution \(\{c\}\), which, via clique \(Q_{3}\), is also accepted for \(\Gamma(r_{4})\). Conflict group \(\Gamma(r_{8})\) is then resolved by \(\{\sim\!k\}\), and finally \(\Gamma(r_{14})\) by its preferred solution \(\{\sim\!h\}\). Expanding the bodies of the chosen representatives accordingly yields the consistent program \(\mathcal{P}^{\prime}_{ex}\):_ \[r_{1}\colon a\!\leftarrow\!b,\sim\!c. r_{2}\colon\overline{a}\!\leftarrow\!b,c. r_{3}\colon x\!\leftarrow\!d,e,f,\sim\!c. r_{4}\colon\overline{x}\!\leftarrow\!d,e,c.\] \[r_{5}\colon y\!\leftarrow\!g,h,f. r_{6}\colon\overline{y}\!\leftarrow\!g,\sim f. r_{7}\colon z\!\leftarrow\!j,k,\sim\!l. r_{8}\colon\overline{z}\!\leftarrow\!j,\sim\!l,\sim\!k.\] \[r_{9}\colon w\!\leftarrow\!f,m,n. r_{10}\colon\overline{w}\!\leftarrow\!m,\sim f. r_{11}\colon\overline{w}\!\leftarrow\!n,\sim f. r_{12}\colon p\!\leftarrow\!o,h,f,\sim\!q.\] \[r_{13}\colon\overline{p}\!\leftarrow\!o,\sim\!q,\sim f. r_{14}\colon u\!\leftarrow\!s,\sim\!h. r_{15}\colon\overline{u}\!\leftarrow\!s,\overline{t},h. r_{16}\colon\overline{u}\!\leftarrow\!s,t,h.\] Note that Example 13 illustrates just one of many possible ways in which ordering strategies and \(\lambda\)-graphs can be utilized for the implementation of interactive conflict resolution.
For instance, instead of completely omitting conflicts during the process once they are resolved, these conflicts can still be shown, merely flagged as resolved. This would provide the expert with additional information that is otherwise removed once the cliques of resolved conflict groups are removed. Such functionality could also be extended by the possibility to revert previous modifications. ## 6 Related work The method of conflict resolution is closely related to the topic of _ASP debugging_. There we find several approaches that deal with the modification of logic programs [4, 7, 8]. The programs considered there are not necessarily inconsistent; the approaches rather help the knowledge expert to fix the mismatch between the current program's semantics and the semantics intended by the program's modeller. In [3], the authors utilize the notion of incoherence to implement the debugging of programs. All these approaches, however, require the knowledge expert to provide further information in order to detect and resolve the faulty parts of the program. In [6], the authors present an approach to resolve inconsistency (caused by contradictions or incoherence) by finding minimal sets of rules that cause the inconsistency. Similar to [10], it also presents a way to compute possible solutions, which in this case are minimal correction sets of rules whose removal guarantees that the reduced program is consistent. These minimal sets are identical to those found in the presented approach if the inconsistency is caused by contradictory literals. Compared to that work, our approach in this paper helps to preserve information by not removing those rules, but instead exploiting dependencies and subtle differences in conflicting rules so that the knowledge expert is provided with suitable information to sharpen the rules by extending them. In this way, (potential) conflicts are resolved and actually help to make the knowledge expressed by the program more professionally adequate. In this work, we provide an interactive solution strategy by suggesting an order over the problem causes and an order over the possible solutions. Once suitable solutions are available, both the approach using \(\lambda\)-extensions and debugging approaches like those based on the _meta-programming technique_ [4] can be used to obtain a consistent program. ## 7 Conclusion and future work In practice, it is often imperative that a program (core) is usable with different instance data. For example, in the medical sector, decision support systems are often required to be usable with different patient data. Resolving conflicts in this manner ensures that program cores can be used in such real-world applications in a safe manner w.r.t. consistency. This paper extends the work in [9, 10], which proposes a framework for obtaining uniformly non-contradictory logic programs by enabling knowledge experts to interactively resolve conflicts. Since conflicting rules can have a large number of possible solutions, methods are needed that support the knowledge expert in finding the most suitable modifications efficiently.
2310.06301
Dynamical versus Bayesian Phase Transitions in a Toy Model of Superposition
We investigate phase transitions in a Toy Model of Superposition (TMS) using Singular Learning Theory (SLT). We derive a closed formula for the theoretical loss and, in the case of two hidden dimensions, discover that regular $k$-gons are critical points. We present supporting theory indicating that the local learning coefficient (a geometric invariant) of these $k$-gons determines phase transitions in the Bayesian posterior as a function of training sample size. We then show empirically that the same $k$-gon critical points also determine the behavior of SGD training. The picture that emerges adds evidence to the conjecture that the SGD learning trajectory is subject to a sequential learning mechanism. Specifically, we find that the learning process in TMS, be it through SGD or Bayesian learning, can be characterized by a journey through parameter space from regions of high loss and low complexity to regions of low loss and high complexity.
Zhongtian Chen, Edmund Lau, Jake Mendel, Susan Wei, Daniel Murfet
2023-10-10T04:26:04Z
http://arxiv.org/abs/2310.06301v1
# Dynamical versus Bayesian Phase Transitions in a Toy Model of Superposition ###### Abstract We investigate phase transitions in a Toy Model of Superposition (TMS) (Elhage et al., 2022) using Singular Learning Theory (SLT). We derive a closed formula for the theoretical loss and, in the case of two hidden dimensions, discover that regular \(k\)-gons are critical points. We present supporting theory indicating that the local learning coefficient (a geometric invariant) of these \(k\)-gons determines phase transitions in the Bayesian posterior as a function of training sample size. We then show empirically that the same \(k\)-gon critical points also determine the behavior of SGD training. The picture that emerges adds evidence to the conjecture that the SGD learning trajectory is subject to a sequential learning mechanism. Specifically, we find that the learning process in TMS, be it through SGD or Bayesian learning, can be characterized by a journey through parameter space from regions of high loss and low complexity to regions of low loss and high complexity. ## 1 Introduction The apparent simplicity of the Toy Model of Superposition (TMS) proposed in Elhage et al. (2022) conceals a remarkably intricate _phase structure_. During training, a plateau in the loss is often followed by a sudden discrete drop, suggesting some development in the network's internal structure. To shed light on these transitions and their significance, this paper examines the dynamical transitions in TMS during SGD training, connecting them to phase transitions of the Bayesian posterior with respect to sample size \(n\). While the former transitions have been observed in several recent works in deep learning (Olsson et al., 2022; McGrath et al., 2022; Wei et al., 2022a), their formal status has remained elusive. In contrast, phase transitions of the Bayesian posterior are mathematically well-defined in Singular Learning Theory (SLT) (Watanabe, 2009). Using SLT, we can show formally that the Bayesian posterior is subject to an _internal model selection_ mechanism in the following sense: the posterior prefers, for small training sample size \(n\), critical points with low complexity but potentially high loss. The opposite is true for high \(n\) where the posterior prefers low loss critical points at the cost of higher complexity. The measure of complexity here is very specific: it is the _local learning coefficient_, \(\lambda\), of the critical points, first alluded to by Watanabe (2009, §7.6) and clarified recently in Lau et al. (2023). We can think of this internal model selection as a discrete dynamical process: at various critical sample sizes the posterior concentration "jumps" from one region \(\mathcal{W}_{\alpha}\) of parameter space to another region \(\mathcal{W}_{\beta}\). We refer to an event of this kind as a _Bayesian phase transition_ \(\alpha\to\beta\). For the TMS model with two hidden dimensions we show that these Bayesian phase transitions actually occur and do so between phases dominated by weight configurations representing regular polygons (termed here \(k\)-_gons_). The main result of SLT, the asymptotic expansion of the free energy (Watanabe, 2018), predicts phase transitions as a function of the loss and local learning coefficient of each phase.
For TMS, we are in the fortunate position of being able to derive theoretically the exact local learning coefficient of the \(k\)-gons which are most commonly encountered during MCMC sampling of the posterior, and thereby verify that the mathematical theory correctly predicts the empirically observed phases and phase transitions. Altogether, this forms a mathematically well-founded toolkit for reasoning about phase transitions in the Bayesian posterior of TMS. It has been observed empirically in TMS that SGD training also undergoes "phase transitions" (Elhage et al., 2022) in the sense that we often see steady plateaus in the training (and test) loss separated by sudden transitions, associated with geometric transformations in the configuration of the columns of the weight matrix. Figure 1 shows a typical example. We refer to these as _dynamical transitions_. A striking pattern emerges when we observe the evolution of the loss and the estimated local learning coefficient, \(\hat{\lambda}\), over the course of training: we see "opposing staircases" where each drop in the training and test loss is accompanied by a jump in the (estimated) local complexity measure. In essence, during the training process, as SGD reduces the loss, it exhibits an increasing tolerance for complex solutions. On these grounds we propose the **Bayesian antecedent hypothesis**, which says that these dynamical transitions have "standing behind them" a Bayesian phase transition. We begin in Section 3.1 by recalling the TMS, and present a closed form for the population loss in the high sparsity limit. In our **first contribution**, we provide a partial classification of critical points of the population loss (Section 3.2) and document the local learning coefficients of several of these critical points (Section 3.3). In our **second contribution**, we experimentally verify that the main phase transition predicted by the internal model selection theory, using the theoretically derived local learning coefficients, actually takes place (Section 4.2). In Section 5 we present experimental results on dynamical transitions in TMS. Our **third contribution** is to show empirically that SGD training in TMS transitions from high-loss-low-complexity solutions to low-loss-high-complexity solutions, where complexity is measured by the estimated local learning coefficient. This provides support for our proposed relation between Bayesian and dynamical transitions (Section 5.1). Figure 1: In TMS for \(r=2\) hidden dimensions and \(c=6\) feature dimensions, SGD seems to perform an internal form of Occam's Razor: at the beginning of training, high loss solutions are tolerated because they have low complexity (low local learning coefficient \(\hat{\lambda}\)) but at the end of training low loss solutions are attractive despite their high complexity (high local learning coefficient \(\hat{\lambda}\)). The top row shows a visualization of the columns \(W_{i}\) of three snapshots (timestamps shown as red dots in the loss plot). For more examples and a guide to reading these plots, see Appendix B. ## 2 Related work The TMS problem is, with the nonlinearity removed and varying importance factors, solved by computing principal components; it has long been understood that the learning dynamics of computing principal components is determined by a unique global minimum and a hierarchy of saddle points of decreasing loss (Baldi and Hornik, 1989), (Amari, 2016, §13.1.3).
In recent decades an extensive literature has emerged on _Deep Linear Networks_ (DLNs) building on these results, and applying them to explain phenomena in the development of both natural and artificial neural networks (Saxe et al., 2019). Under some hypotheses the saddles of a DLN are strict (Kawaguchi, 2016) and all local minima are global; this suggests a picture of gradient descent dynamics moving through neighbourhoods of saddles of ever-decreasing index until reaching a global minimum. This has been termed "saddle-to-saddle" dynamics by Jacot et al. (2021). Through careful analysis of training dynamics it has been shown for DLNs that there is a general tendency of optimization trajectories towards solutions of lower loss and higher "complexity", which is generally defined in an ad-hoc way depending on the data distribution (Arora et al., 2018; Li et al., 2020; Eftekhari, 2020; Advani et al., 2020). For example, it has been shown that gradient-based optimization introduces a form of implicit regularization towards low-rank solutions in deep matrix factorization (Arora et al., 2019). Viewing the optimization process as a search for solutions which begins at candidates of low complexity, the tendency to gradually increase complexity "only when necessary" has been put forward as a potential explanation for the generalization performance of neural networks (Gissin et al., 2019). This intuition is backed by results such as (Gidel et al., 2019; Saxe et al., 2013), which show that for DLNs the singular values of the model are learned separately at different rates, with features corresponding to larger singular values learned first. Outside of the DLN models, saddle-to-saddle dynamics of SGD training have been studied in toy non-linear models often referred to as single-index or multi-index models. In these models, the target function for input \(x\in\mathbb{R}^{d}\) is generated by a non-linear, low dimensional function \(\varphi:\mathbb{R}^{k}\rightarrow\mathbb{R}\) via \(f(x)=\varphi(\theta^{T}x)\) with \(\theta\in\mathbb{R}^{d\times k}\) where \(k\ll d\). Single-index refers to \(k=1\). In a very recent work, Abbe et al. (2023) showed that for a particular multi-index model with certain restrictions on the input data distribution, SGD follows a saddle-to-saddle dynamic where the learning process adaptively selects target functions of increasing complexity. Their Figure 1 tells the same story as our Figure 1: at the beginning of training, low complexity solutions are preferred, and the opposite preference develops as training progresses. One attempt to put these intuitions in a broader context is (Zhang et al., 2018) which relates the above phenomena to entropy-energy competition in statistical physics. However this approach suffers from a lack of theoretical justification due to an incorrect application of the Laplace approximation (Wei et al., 2022b; Lau et al., 2023). The internal model selection principle (Section 4.1) in singular learning theory provides the correct form of entropy-energy competition for neural networks and potentially gives a theoretical backing for the intuitions developed in the DLN literature. ## 3 Toy Model of Superposition ### The TMS Potential We recall the Toy Model of Superposition (TMS) setup from (Elhage et al., 2022) and derive a closed-form expression for the population loss in the high sparsity limit.
The TMS is an autoencoder with input and output dimension \(c\) and hidden dimension \(r<c\): \[f:X\times\mathcal{W}\longrightarrow\mathbb{R}^{c}\,,\] \[f(x,w)=\mathrm{ReLU}(W^{T}Wx+b)\,, \tag{1}\] where \(w=(W,b)\in\mathcal{W}\subseteq M_{r,c}(\mathbb{R})\times\mathbb{R}^{c}\) and inputs are taken from \(x\in X=[0,1]^{c}\). We suppose that the (unknown) true generating mechanism of \(x\) is given by the distribution \[q(x)=\sum_{i=1}^{c}\frac{1}{c}\delta_{x\in C_{i}} \tag{2}\] where \(C_{i}\) denotes the \(i\)th coordinate axis intersected with \(X\). Sampling from \(q(x)\) can be described as follows: uniformly sample a coordinate \(1\leq i\leq c\) and then uniformly sample a length \(0\leq\mu\leq 1\), return \(\mu e_{i}\) where \(e_{i}\) is the \(i\)th unit vector. This is the high sparsity limit of the TMS input distribution of Elhage et al. (2022), see also Henighan et al. (2023). We posit the probability model \[p(x|w)\propto\exp\big{(}-\tfrac{1}{2}\|x-f(x,w)\|^{2}\big{)}, \tag{3}\] which leads to the expected negative log likelihood \(-\int q(x)\log p(x|w)\,dx\). Dropping terms constant with respect to \(w\) we arrive at the population loss function \[L(w)=\int q(x)\|x-f(x,w)\|^{2}dx\,.\] Given \(W\in M_{r,c}(\mathbb{R})\) we denote by \(W_{1},\ldots,W_{c}\) the columns of \(W\). We set 1. \(P_{i,j}=\{(W,b)\in M_{r,c}(\mathbb{R})\times\mathbb{R}^{c}\,|\,W_{i}\cdot W_{j}>0\text{ and }-W_{i}\cdot W_{j}\leq b_{i}\leq 0\}\); 2. \(P_{i}=\{(W,b)\in M_{r,c}(\mathbb{R})\times\mathbb{R}^{c}\,|\,\|W_{i}\|^{2}>0\text{ and }-\|W_{i}\|^{2}\leq b_{i}\leq 0\}\); 3. \(Q_{i,j}=\{(W,b)\in M_{r,c}(\mathbb{R})\times\mathbb{R}^{c}\,|\,-W_{i}\cdot W_{j}>b_{i}>0\}\) For \(w=(W,b)\) we set \(\delta(P_{i,j})\) to be \(1\) if \(w\in P_{i,j}\) and \(0\) otherwise, similarly for \(\delta(P_{i}),\delta(Q_{i,j})\). **Lemma 3.1**.: _For \(w=(W,b)\in M_{r,c}(\mathbb{R})\times\mathbb{R}^{c}\) we have \(L(w)=\frac{1}{3c}H(w)\) where_ \[H(W,b)=\sum_{i=1}^{c}\delta(b_{i}\leq 0)H_{i}^{-}(W,b)+\delta(b_{i}>0)H_{i}^{+}(W,b) \tag{4}\] _and_ \[H_{i}^{-}(W,b) =\sum_{j\neq i}\delta(P_{i,j})\bigg{[}\frac{1}{W_{i}\cdot W_{j}}(W_{i}\cdot W_{j}+b_{i})^{3}\bigg{]}\] \[\quad+\delta(P_{i})\bigg{[}\frac{b_{i}^{3}}{\|W_{i}\|^{4}}+\frac{b_{i}^{3}}{\|W_{i}\|^{2}}\bigg{]}+(1-\delta(P_{i}))+\delta(P_{i})N_{i}\] \[H_{i}^{+}(W,b) =\sum_{j\neq i}\delta(Q_{i,j})\bigg{[}-\frac{1}{W_{i}\cdot W_{j}}b_{i}^{3}\bigg{]}\] \[\quad+\sum_{j\neq i}(1-\delta(Q_{i,j}))\bigg{[}(W_{i}\cdot W_{j})^{2}+3(W_{i}\cdot W_{j})b_{i}+3b_{i}^{2}\bigg{]}+N_{i}\] _where \(N_{i}=(1-\|W_{i}\|^{2})^{2}-3(1-\|W_{i}\|^{2})b_{i}+3b_{i}^{2}\)_ Proof.: See Appendix G. We refer to \(H(w)\) as the **TMS potential**. While this function is analytic at many of the critical points of relevance when \(r=2\), it is not analytic at the \(4\)-gons (see Appendix J). ### \(k\)-gon critical points We prove that various \(k\)-gons are critical points for \(H\) when \(r=2\). Recall that \(w^{*}\in\mathcal{W}\) is a _critical point_ of \(H\) if \(\nabla H|_{w=w^{*}}=0\). The function \(H\) is clearly \(O(r)\)-invariant: if \(O\) is an orthogonal matrix then \(H(OW,b)=H(W,b)\). The potential is also invariant to jointly permuting the columns and biases. Due to these _generic symmetries_ we may without loss of generality assume that the columns \(W_{i}\) of \(W\) are ordered anti-clockwise in \(\mathbb{R}^{2}\) with zero columns listed last.
For \(i=1,\ldots,c\), let \(\theta_{i}\in[0,2\pi)\) denote the angle between nonzero columns \(W_{i}\) and \(W_{i+1}\), where \(c+1\) is defined to be \(1\). Let \(l_{i}\in\mathbb{R}_{\geq 0}\) denote \(\|W_{i}\|\). In this parametrization \(W\) has coordinate \((l_{1},\ldots,l_{c},\theta_{1},\ldots,\theta_{c},b_{1},\ldots,b_{c})\) with constraint \(\theta_{1}+\cdots+\theta_{c}=2\pi\). Since \(O(2)\) has dimension \(1\) any critical point of \(H\) is automatically part of a \(1\)-parameter family. For convenience we refer to a critical point as non-degenerate (resp. minimally singular) if it has these properties _modulo the generic symmetries_, that is, in the \(\theta,l,b\) parametrization. Thus, a critical point is non-degenerate (resp. minimally singular) if in a local neighbourhood in the \(\theta,l,b\) parametrization \(H\) can be written as a full sum of squares with nonzero coefficients (resp. a non-full sum of squares). For background on the minimal singularity condition see (Wei et al., 2022b, §4) and (Lau et al., 2023, Appendix A). We call \(w\in M_{2,c}(\mathbb{R})\times\mathbb{R}^{c}\) a **standard \(k\)-gon** for \(k\in\{4,5,6,7,8\}\) and \(k\leq c\) if it has coordinate \[l_{1} =\cdots=l_{k}=l^{*}, l_{k+1}=\cdots=l_{c}=0\,,\] \[\theta_{1} =\cdots=\theta_{k-1}=\tfrac{2\pi}{k}, \theta_{k}+\cdots+\theta_{c}=\tfrac{2\pi}{k}\,,\] \[b_{1} =\cdots=b_{k}=b^{*}\,, b_{k+1}<0,\dots,b_{c}<0\] where \(l^{*}\in\mathbb{R}_{>0},b^{*}\in\mathbb{R}_{\leq 0}\) are the unique joint solution, subject to the constraint \(-(l^{*})^{2}\cos(s\tfrac{2\pi}{k})\leq b^{*}\), of the criticality equations of Theorem H.1, where \(s\) is the unique integer in \([\tfrac{k}{4}-1,\tfrac{k}{4})\). For values of \(l^{*},b^{*}\) see Table A.1. Any parameter of this form is proven to be a critical point of \(H\) in Appendix H. For \(k\) as above and \(0\leq\sigma\leq c-k\) a \(k^{\sigma+}\)-**gon** is a parameter with the same specification as the standard \(k\)-gon except that \(\sigma\) of the biases \(b_{k+1},\dots,b_{c}\) are equal to \(1/(2c)\) and the rest have arbitrary negative values. We usually write for example \(k^{++}\) when \(\sigma=2\), noting that the \(k^{0+}\)-gon is the standard \(k\)-gon. These parameters are proven to be critical points of \(H\) when \(k\geq 5\) in Appendix I and for \(k=4\) in Appendix J.2. For \(k=4\) there are a number of additional "exotic" \(4\)-gons. They are parametrized by \(0\leq\sigma\leq c-k\) and \(0\leq\phi\leq 4\). A \(4^{\sigma+,\phi^{-}}\)**-gon** has the same specification as the \(4^{\sigma+}\)-gon, except that a subset of the biases \(I\subseteq\{1,2,3,4\}\) of size \(|I|=\phi\) are special in the following sense: for any \(i\notin I\) the bias \(b_{i}\) has the optimal value \(b_{i}=b^{*}=0\) and the corresponding length is standard \(l_{i}=l^{*}=1\), but if \(i\in I\) then \(b_{i}<0\) and \(l_{i}\) is subject only to the constraint \(l_{i}^{2}<-b_{i}\). We write for example \(4^{++-}\) for the \(4^{2+,1-}\)-gon. These are proven to be critical points of \(H\) in Appendix J.2. In Appendix A, we provide visualizations and a quick guide for recognizing these critical points and their variants. What we know is the following: the standard \(k\)-gon for \(k=c\) is a non-degenerate critical point (modulo the generic symmetries) for \(c\in\{5,6,7,8\}\) in the sense that in local coordinates in the \(l,\theta,b\)-parametrization near the critical point \(H\) can be written as a full sum of squares (Section H.1).
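Before turning to the conjectural cases, the \(k\)-gon critical points can be explored numerically. The following is a minimal sketch, not the authors' code: it estimates the population loss \(L(w)\) by Monte Carlo sampling from \(q(x)\) and grid-searches the common length and bias of a symmetric \(k\)-gon. The grid ranges and the bias assigned to unused columns are illustrative assumptions, and the resulting minimiser approximates the \((l^{*},b^{*})\) of Table A.1, which is not reproduced in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_q(n, c):
    # q(x): pick a coordinate axis uniformly, then a length mu ~ U[0, 1], eq. (2).
    X = np.zeros((n, c))
    X[np.arange(n), rng.integers(0, c, size=n)] = rng.uniform(size=n)
    return X

def mc_loss(W, b, n=100_000):
    # Monte Carlo estimate of L(w) = E_q ||x - ReLU(W^T W x + b)||^2.
    X = sample_q(n, W.shape[1])
    F = np.maximum(X @ (W.T @ W) + b, 0.0)   # f(x, w) for every sample, eq. (1)
    return np.mean(np.sum((X - F) ** 2, axis=1))

def k_gon(k, c, l, b):
    # First k columns at angles 2*pi*j/k with common length l and bias b; the
    # remaining columns are zero with an arbitrary negative (inactive) bias.
    W = np.zeros((2, c))
    ang = 2 * np.pi * np.arange(k) / k
    W[0, :k], W[1, :k] = l * np.cos(ang), l * np.sin(ang)
    bias = np.full(c, -0.1)
    bias[:k] = b
    return W, bias

# Grid search over the common (l, b) of the 6-gon with c = 6; with enough
# samples the minimum should land near the theoretical loss 0.04819 of Table 1.
best = min((mc_loss(*k_gon(6, 6, l, b)), l, b)
           for l in np.linspace(0.8, 1.3, 21)
           for b in np.linspace(-0.5, 0.0, 21))
print(best)
```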
For \(c>8\) and \(c\) being a multiple of \(4\), we conjecture that the \(c\)-gon is a critical point (Section H.2), and we also conjecture that for \(c>8\) and \(c\) not a multiple of \(4\) there is no \(c\)-gon which is a critical point. When \(k\in\{5,6,7,8\}\) and \(k<c\) the standard \(k\)-gon is minimally singular (Appendix H.3). ### Local learning coefficients The local learning coefficient was proposed in Lau et al. (2023) as a general measure to quantify the degeneracy of a critical point in singular models. Table 1 summarises theoretical local learning coefficients \(\lambda\) and losses \(L\) for some critical points.3 For more theoretical values see Table H.2 and Appendix H, and for empirical estimates Appendix K. In minimally singular cases (including \(5,5^{+},6\)) the local learning coefficient agrees with a simple dimension count (half the number of normal directions to the level set, which is locally a manifold). This explains why the coefficient increases by \(\tfrac{3}{2}\) as we move from the \(5\)-gon to the \(6\)-gon: this transition fixes one column of \(W\) (\(2\) parameters) and the corresponding entry in the bias \(b\), and so reduces by \(3\) the number of free parameters, increasing the learning coefficient by \(\tfrac{3}{2}\) (for further discussion see Appendix E). Footnote 3: The \(4\)-gons are on the boundary of multiple chambers (see Appendix J). ## 4 Bayesian Phase Transitions In Bayesian statistics there is a fundamental distinction between the learning process for regular models and singular models. In regular models, as the number of samples \(n\) increases, the posterior concentrates at the MAP estimator and looks increasingly Gaussian. In singular models, which include neural networks, we expect rather that the learning process is dominated by _phase transitions_, where at some critical values \(n\approx n_{cr}\) the posterior "jumps" from one region of parameter space to another.4 This is a universal phenomenon in singular learning theory (Watanabe, 2009, 2020). \begin{table} \begin{tabular}{||c|c|c||} \hline Critical points & Local learning coefficient \(\lambda\) & Loss \(L\) \\ \hline \(5\) & \(7\) & \(0.06874\) \\ \hline \(5^{+}\) & \(8.5\) & \(0.06180\) \\ \hline \(6\) & \(8.5\) & \(0.04819\) \\ \hline \end{tabular} \end{table} Table 1: Critical points and their theoretical \(\lambda\) and \(L\) values for the \(r=2,c=6\) TMS potential. ### Internal Model Selection We present phase transitions of the Bayesian posterior in SLT building on (Watanabe, 2009, §7.6), (Watanabe, 2018, §9.4), Watanabe (2020). We assume \((p(x|w),q(x),\varphi(w))\) is a model-truth-prior triplet with parameter space \(\mathcal{W}\subseteq\mathbb{R}^{d}\) satisfying the fundamental conditions of (Watanabe, 2009) and the relative finite variance condition (Watanabe, 2018). Given a dataset \(\mathcal{D}_{n}=\{x_{1},\ldots,x_{n}\}\), we define the empirical negative log likelihood function \(L_{n}(w)=-\frac{1}{n}\sum_{i=1}^{n}\log p(x_{i}|w)\). The posterior distribution \(p(w|\mathcal{D}_{n})\) is, up to a normalizing constant, given by \(\exp(-nL_{n}(w))\varphi(w)\). The marginal likelihood is the intractable normalizing constant of the posterior distribution. The free energy \(F_{n}\) is defined to be the negative log of the marginal likelihood: \[F_{n}=-\log\int_{\mathcal{W}}\exp(-nL_{n}(w))\varphi(w)dw\,.
\tag{5}\] The asymptotic expansion in \(n\) is given by (Watanabe, 2018, §6.3) \[F_{n}=nL_{n}(w_{0})+\lambda\log n-(m-1)\log\log n+O_{p}(1) \tag{6}\] where \(w_{0}\) is an optimal parameter, \(\lambda\) is the learning coefficient and \(m\) is the multiplicity. We refer to this as the **free energy formula**. The philosophy behind using the marginal likelihood (or equivalently, the free energy) to perform model selection is well established. Thus we could use the first two terms in (6) to choose between two competing models on the basis of their fit (as measured by \(nL_{n}\)) and their complexity (as measured by \(\lambda\)). We can also apply the same principle to different regions of the parameter space in the same model. Let \(\{\mathcal{W}_{\alpha}\}_{\alpha}\) be a finite collection of compact semi-analytic subsets of \(\mathcal{W}\) with nonempty interior, whose interiors cover \(\mathcal{W}\). We assume each \(\mathcal{W}_{\alpha}\) contains in its interior a point \(w_{\alpha}^{*}\) minimising \(L\) on \(\mathcal{W}_{\alpha}\) and that the triple \((p,q,\varphi)\) restricted to \(\mathcal{W}_{\alpha}\) in the obvious sense has relative finite variance. We refer to the \(\alpha\) rather loosely as _phases_. We can choose a partition of unity \(\rho_{\alpha}\) subordinate to a suitably chosen cover, so as to define \(\varphi_{\alpha}(w)=\rho_{\alpha}(w)\varphi(w)\) with \[F_{n} =-\log\int_{\mathcal{W}}e^{-nL_{n}(w)}\varphi(w)dw=-\log\sum_{\alpha}\int_{\mathcal{W}_{\alpha}}e^{-nL_{n}(w)}\varphi_{\alpha}(w)dw\] \[=-\log\sum_{\alpha}V_{\alpha}\int_{\mathcal{W}_{\alpha}}e^{-nL_{n}(w)}\overline{\varphi}_{\alpha}(w)dw=-\log\sum_{\alpha}e^{-F_{n}(\mathcal{W}_{\alpha})-v_{\alpha}}\] where \(\overline{\varphi}_{\alpha}=\frac{1}{V_{\alpha}}\varphi_{\alpha}\) for \(V_{\alpha}=\int_{\mathcal{W}_{\alpha}}\varphi_{\alpha}dw\), \(v_{\alpha}=-\log(V_{\alpha})\) and \[F_{n}(\mathcal{W}_{\alpha})=-\log\int_{\mathcal{W}_{\alpha}}e^{-nL_{n}(w)}\overline{\varphi}_{\alpha}(w)dw \tag{7}\] denotes the free energy of the restricted tuple \((p,q,\overline{\varphi}_{\alpha},\mathcal{W}_{\alpha})\). We will refer to \(F_{n}(\mathcal{W}_{\alpha})\) as the **local free energy**. Using the log-sum-exp approximation, we can write \(F_{n}=-\log\sum_{\alpha}e^{-F_{n}(\mathcal{W}_{\alpha})-v_{\alpha}}\approx\min_{\alpha}\big{[}F_{n}(\mathcal{W}_{\alpha})+v_{\alpha}\big{]}\). Since (6) applies to the restricted tuple \((p,q,\overline{\varphi}_{\alpha},\mathcal{W}_{\alpha})\) we have \[F_{n}(\mathcal{W}_{\alpha})=nL_{n}(w_{\alpha}^{*})+\lambda_{\alpha}\log n-(m_{\alpha}-1)\log\log n+O_{p}(1) \tag{8}\] which we refer to as the **local free energy formula**.5 Footnote 5: In general, deriving the free energy formula requires some sophisticated mathematics (Watanabe, 2009, 2018) but when the critical point \(w_{\alpha}^{*}\) dominating the phase \(\mathcal{W}_{\alpha}\) is minimally singular, simpler techniques similar to (Balasubramanian, 1997) suffice; see (Lau et al., 2023, Appendix A). Many, but not all, of the singularities appearing in this paper are minimally singular. In this paper we absorb the volume constant \(v_{\alpha}\) and terms of order \(\log\log n\) or lower in (8) into a term \(c_{\alpha}\) that we treat as effectively constant, giving \[F_{n}\approx\min_{\alpha}\big{[}nL_{n}(w_{\alpha}^{*})+\lambda_{\alpha}\log n+c_{\alpha}\big{]}\,.
\tag{9}\] A principle of _internal model selection_ is suggested by (9) whereby the Bayesian posterior "selects" a phase \(\alpha\) based on the local free energy of the phase, in the sense that this phase contains most of the probability mass of the posterior for this value of \(n\) (Watanabe, 2009, §7.6).6 At a given value of \(n\) we can order the phases by their posterior concentration, or what is the same, their free energies \(F_{n}(\mathcal{W}_{\alpha})\). We say there is a _local phase transition_ between phases \(\alpha,\beta\) at _critical sample size_ \(n_{cr}\), written \(\alpha\rightarrow\beta\), if the position of \(\alpha,\beta\) in this ordered list of phases swaps. That is, for \(n\approx n_{cr}\) and \(n<n_{cr}\) the Bayesian posterior prefers \(\alpha\) to \(\beta\), and the reverse is true for \(n>n_{cr}\). We say that a phase \(\alpha\) _dominates the posterior_ at \(n\) if it has the highest posterior mass, that is, \(F_{n}(\mathcal{W}_{\alpha})<F_{n}(\mathcal{W}_{\beta})\) for all \(\beta\neq\alpha\). A _global_ phase transition is a local phase transition where \(\alpha\) dominates the posterior for \(n<n_{cr}\) and \(\beta\) dominates for \(n>n_{cr}\) with \(n\) near \(n_{cr}\). Generally when we speak of a phase transition in this paper we mean a _local_ transition. Generically, phase transitions occur when, as \(n\) increases, phases with lower loss and higher complexity are preferred; this expectation is verified in TMS in the next section. For more on the theory of Bayesian phase transitions see Appendix C. ### Experiments There is a fundamental tension in the internal model selection story elaborated above: the free energy formula is asymptotic in \(n\), but a theoretical discussion of phase transitions involves comparing local free energies \(F_{n}(\mathcal{W}_{\alpha})\) at finite \(n\). Whether or not this is valid, in a given range of \(n\) and for a given system, is a question that may be difficult to resolve purely theoretically. We show experimentally for \(r=2,c=6\) that a Bayesian phase transition actually takes place between the \(5\)-gon and the \(6\)-gon, within a range of \(n\) values consistent with the free energy formula. In this section we focus on the case \(r=2,c=6\). For \(c\in\{4,5\}\) see Appendix F.4. We first define regions of parameter space \(\{\mathcal{W}_{\alpha}\}_{\alpha}\). Given a matrix \(W\) we write \(\mathrm{ConvHull}(W)\) for the number of points in the convex hull of the set of columns. For \(3\leq k\leq c,0\leq\sigma\leq c-k\) we define \[\mathcal{W}_{k,\sigma}=\left\{w=(W,b)\in\mathcal{W}\,|\,\,\mathrm{ConvHull}(W)=k\text{ and }b\text{ has }\sigma\text{ positive entries }\right\}\,.\] The set \(\mathcal{W}_{k,\sigma}\subseteq\mathbb{R}^{d}\) is semi-analytic and contains the \(k^{\sigma+}\)-gon in its interior. For \(\alpha=(k,\sigma)\) we let \(w^{*}_{\alpha}\) denote the parameter of the \(k^{\sigma+}\)-gon. We verify experimentally the hypothesis that this parameter dominates the Bayesian posterior of \(\mathcal{W}_{\alpha}\) (see Appendix F), by which we mean that most samples from the posterior for a relevant range of \(n\) values are "close" to \(w^{*}_{\alpha}\).7 In this sense the choice of phases \(\mathcal{W}_{\alpha}\) is appropriate for the range of sample sizes we consider.
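As a quick sanity check of the internal model selection principle, the crossing of the local free energies of the 5-gon and 6-gon phases can be computed directly from formula (9) with the theoretical values of Table 1. Setting the constant terms \(c_{\alpha}\) to zero is an assumption of this sketch; those \(O(1)\) constants shift the crossing, which is one reason the more careful analysis in Appendix C.2 places the transition near \(n\approx 600\).

```python
import numpy as np

# (loss L, local learning coefficient lambda) from Table 1.
phases = {"5-gon": (0.06874, 7.0), "6-gon": (0.04819, 8.5)}

def local_free_energy(n, L, lam, c_alpha=0.0):
    # F_n(W_alpha) ~ n * L + lambda * log(n) + c_alpha, cf. eq. (9);
    # c_alpha = 0 is an assumption, the O(1) terms are unknown here.
    return n * L + lam * np.log(n) + c_alpha

ns = np.arange(100, 1200)
F5 = local_free_energy(ns, *phases["5-gon"])
F6 = local_free_energy(ns, *phases["6-gon"])
n_cr = ns[np.argmax(F5 > F6)]  # first n at which the 6-gon phase is preferred
print(n_cr)  # ~446 with c_alpha = 0; the constants move this toward ~600
```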
Footnote 7: The \(k^{\sigma+,\phi-}\)-gons for \(\phi>0\) have high loss but may nonetheless dominate the posterior for very low \(n\); however, this is outside the scope of our experiments, which ultimately dictates the choice of the set \(\mathcal{W}_{k,\sigma}\). We draw posterior samples using MCMC-NUTS (Homan & Gelman, 2014) with prior distribution \(\mathrm{N}(0,1)\) and sample sizes \(n\). Each posterior sample is then classified into some \(\mathcal{W}_{k,\sigma}\) (our classification algorithm for deciding the appropriate value of \(k\) is not error-free). For each \(n\), \(10\) datasets are generated and the average proportion of the \(k\)-gons, and standard error, is reported in Figure 2. Details of the theoretical proportion plot are given in Appendix F.1. Let \(w^{*}_{\alpha},w^{*}_{\beta}\) be \(k\)-gons dominating phases \(\mathcal{W}_{\alpha},\mathcal{W}_{\beta}\). A _Bayesian phase transition_ \(\alpha\rightarrow\beta\) occurs when the difference between the free energies \(F_{n}(\mathcal{W}_{\beta})-F_{n}(\mathcal{W}_{\alpha})\) swaps sign, from positive to negative, as explained in the previous section. The most distinctive feature in the experimental plot is the \(5\to 6\) transition in the range \(600\leq n\leq 700\). The free energy formula predicts this transition at \(n_{cr}\approx 600\) (Appendix C.2). An alternative visualization of the \(5\to 6\) transition using t-SNE is given in Appendix F.2. As \(n\) decreases past \(400\) the MCMC classification becomes increasingly uncertain, and it is less clear that we should expect the free energy formula to be a good model of the Bayesian posterior, so we should not read too much into any correspondence between the plots for \(n\leq 400\) (see Appendix F.2). Figure 2: Proportion of Bayesian posterior concentrated in regions \(\mathcal{W}_{k,\sigma}\) for \(r=2,c=6\) according to the free energy formula (theory, left) and MCMC sampling of the posterior (experimental, right). Theory predicts, and experiments show, a phase transition \(5\to 6\) in the range \(600\leq n\leq 700\). ## 5 Dynamical Phase Transitions A _dynamical_ transition \(\alpha\rightarrow\beta\) occurs in a trajectory if it is near a critical point \(w_{\alpha}^{*}\) of the loss at some time \(\tau_{1}\) (e.g. there is a visible plateau in the loss) and at some later time \(\tau_{2}>\tau_{1}\) it is near \(w_{\beta}^{*}\) without encountering an intermediate critical point. We conduct an empirical investigation into whether the \(k\)-gon critical points of the TMS potential dominate the behaviour of SGD trajectories for \(r=2,c=6\), and the existence of dynamical transitions. There are two sets of experiments. In the first we draw a training dataset \(\mathcal{D}_{n}=\{x_{1},\dots,x_{n}\}\) where \(n=1000\) from the true distribution \(q(x)\). We also draw a test set of size \(5000\). We use minibatch-SGD initialized at a 4-gon plus Gaussian noise of standard deviation \(0.01\), and run for \(4500\) epochs with batch size \(20\) and learning rate \(0.005\). This initialisation is chosen to encourage the SGD trajectory to pass through critical points with high loss after a small number of SGD steps, allowing us to observe phase transitions more easily. Along the trajectory, we keep track of each iterate's training loss, test set loss and theoretical test loss. Figure 1 is a typical example; additional plots are collected in Figures B.1–B.7.
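A minimal sketch of this first experiment, reusing `rng`, `sample_q` and `k_gon` from the Section 3 sketch: minibatch SGD with the stated hyperparameters, followed by a crude SGLD-based estimate of the local learning coefficient in the spirit of Lau et al. (2023). The initial \((l,b)\) of the 4-gon, the gradient derivation, and the simplified SGLD update are illustrative stand-ins, not the authors' exact implementation.

```python
def loss_and_grads(W, b, X):
    """Empirical TMS loss (mean over the batch X) and its gradients."""
    Z = X @ (W.T @ W) + b            # (m, c) pre-activations
    A = np.maximum(Z, 0.0)           # ReLU outputs f(x, w)
    G = 2.0 * (A - X) * (Z > 0.0)    # per-sample dloss/dZ
    loss = np.mean(np.sum((A - X) ** 2, axis=1))
    gW = W @ (G.T @ X + X.T @ G) / len(X)  # Z depends on W through W^T W
    gb = G.mean(axis=0)
    return loss, gW, gb

# n = 1000 training samples; SGD from a noisy 4-gon, batch size 20, lr 0.005.
Xtrain = sample_q(1000, 6)
W, b = k_gon(4, 6, 1.0, 0.0)         # illustrative (l, b) for the initialization
W = W + 0.01 * rng.normal(size=W.shape)
b = b + 0.01 * rng.normal(size=b.shape)
for epoch in range(4500):
    for batch in rng.permutation(len(Xtrain)).reshape(-1, 20):
        _, gW, gb = loss_and_grads(W, b, Xtrain[batch])
        W, b = W - 0.005 * gW, b - 0.005 * gb

def lambda_hat(W, b, X, steps=500, eps=1e-3, gamma=1.0):
    """Crude local learning coefficient estimate via full-batch SGLD localized
    at (W, b): lambda_hat = (E_sgld[n L_n] - n L_n(w)) / log n, a simplified
    WBIC-style estimator in the spirit of Lau et al. (2023)."""
    n = len(X)
    beta = 1.0 / np.log(n)           # WBIC inverse temperature
    Wc, bc = W.copy(), b.copy()
    L0, _, _ = loss_and_grads(W, b, X)
    acc = 0.0
    for _ in range(steps):
        L, gW, gb = loss_and_grads(Wc, bc, X)
        acc += n * L
        # Langevin step on the tempered posterior with a localizing pull.
        Wc -= eps / 2 * (beta * n * gW + gamma * (Wc - W))
        Wc += np.sqrt(eps) * rng.normal(size=Wc.shape)
        bc -= eps / 2 * (beta * n * gb + gamma * (bc - b))
        bc += np.sqrt(eps) * rng.normal(size=bc.shape)
    return (acc / steps - n * L0) / np.log(n)

print(lambda_hat(W, b, Xtrain))
```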
In the second set of experiments, summarized in Figure 3, we take the same size training dataset but initialize SGD trajectories differently, at random MCMC samples from the Bayesian posterior at \(n=100\) (a small value of \(n\)). The number of epochs is \(5000\). In both cases we estimate the local learning coefficient of the training iterates \(w_{t}\). This is a newly developed estimator (Lau et al., 2023) that uses SGLD (Welling and Teh, 2011) to estimate a version of the WBIC (Watanabe, 2013) localized to \(w_{t}\) and then forms an estimate of the local learning coefficient \(\hat{\lambda}(w_{t})\) based on the approximation that \(\mathrm{WBIC}(w_{t})\approx nL_{n}(w_{t})+\lambda(w_{t})\log n\) where \(\lambda(w_{t})\) is the local RLCT. In the language of Lau et al. (2023), we use a full-batch version of SGLD with hyperparameters \(\epsilon=0.001,\gamma=1\) and \(500\) steps to estimate the local WBIC. These experiments support the following description of SGD training for the TMS potential when \(r=2,c=6\): trajectories are characterised by plateaus associated to the critical points described in Section 3.2 and further discussed in Appendix A. The dynamical transitions encountered are \[4^{++--}\longrightarrow 4^{+--}\longrightarrow 4^{-}\,,\] \[4^{++--}\longrightarrow 4^{+-}\longrightarrow 4\,,\] \[4^{++-}\longrightarrow 4^{+-}\longrightarrow 4\,,\] \[4^{++-}\longrightarrow 4^{+-}\longrightarrow 4^{+}\longrightarrow 5\longrightarrow 5^{+}\,. \tag{10}\] The dominance of the classified critical points, and the general relationship of decreasing loss and increasing complexity, can be seen in Figure 3. Figure 3: Visualization of \(400\) SGD trajectories initialized at MCMC samples from the Bayesian posterior for \(r=2,c=6\) at sample size \(n=100\). We see that SGD trajectories are dominated by plateaus at loss values corresponding to our classification of critical points (Appendix A) and that lower loss critical points have higher estimated local learning coefficients. Note that for highly singular critical points we see that \(\hat{\lambda}\) is unable to provide non-negative values without additional hyperparameter tuning, but the ordinality (more positive is less degenerate) is nonetheless correct. See Appendix K for details and caveats for the \(\hat{\lambda}\) estimation. ### Relation Between Bayesian and Dynamical Transitions Phases of the Bayesian posterior for TMS with \(r=2,c=6\) are dominated by \(k\)-gons which are critical points of the TMS potential (Section 4). The same critical points explain plateaus of the SGD training curves (Section 5). This is not a coincidence: on the one hand SLT predicts that phases of the Bayesian posterior will be associated to singularities of the KL divergence, and on the other hand it is a general principle of nonlinear dynamics that singularities of a potential dictate the global behaviour of solution trajectories (Strogatz, 2018; Gilmore, 1981). However, the relation between _transitions_ of the Bayesian posterior and _transitions_ over SGD training is more subtle. There is no _necessary_ relation between these two kinds of transitions. A Bayesian transition \(\alpha\to\beta\) might not have an associated dynamical transition if, for example, the regions \(\mathcal{W}_{\alpha},\mathcal{W}_{\beta}\) are distant or separated by high energy barriers. For example, the Bayesian phase transition \(5\to 6\) has not been observed as a dynamical transition (it may occur, just with low probability per SGD step).
However, it seems reasonable to expect that for many dynamical transitions there exists a Bayesian transition between the same phases. We call this the _Bayesian antecedent_ of the dynamical transition if it exists. This leads us to: **Bayesian Antecedent Hypothesis (BAH).** The dynamical transitions \(\alpha\to\beta\) encountered in neural network training have Bayesian antecedents. Since a dynamical transition decreases the loss, the main obstruction to having a Bayesian antecedent is that in a Bayesian phase transition \(\alpha\to\beta\) the local learning coefficient should increase (Appendix D). Thus the BAH is in a similar conceptual vein to the expectation, discussed in Section 2, that SGD prefers higher complexity critical points as training progresses. While the dynamical transitions in (10) are all associated with increases in our estimate of the local learning coefficient, we also know that at low \(n\), the constant terms can play a nontrivial role in the free energy formula. Our analysis (Appendix D.1) suggests that all dynamical transitions in (10) have Bayesian antecedents, with the possible exception of \(4^{++---}\to 4^{+--}\) and \(4^{+---}\to 4^{--}\) where the analysis is inconclusive. ## 6 Conclusion Phase transitions and emergent structure are among the most interesting phenomena in modern deep learning (Wei et al., 2022; Barak et al., 2022; Liu et al., 2022) and provide an interesting avenue for fundamental progress in neural network interpretability (Olsson et al., 2022; Nanda et al., 2023) and AI safety (Hoogland et al., 2023). Building on Elhage et al. (2022) we have shown that the Toy Model of Superposition with two hidden dimensions has, in the high sparsity limit, phase transitions in both stochastic gradient-based and Bayesian learning. We have shown that phases are in both cases dominated by \(k\)-gon critical points which we have classified, and we have proposed with the BAH a relation between transitions in SGD training and phase transitions in the Bayesian posterior. Our analysis of TMS also demonstrates the practical utility of the local complexity measure \(\hat{\lambda}\) introduced in (Lau et al., 2023), which is an all-purpose tool for measuring model complexity. In this paper we have shown that this tool reveals in TMS an interesting sequential learning mechanism underlying SGD training, consistent with observations derived in other settings including DLNs (Arora et al., 2019; Gissin et al., 2019) and multi-index models (Abbe et al., 2023). However we emphasise that \(\hat{\lambda}\) has not been specifically engineered for studying complexity in TMS. In principle it can be used to study the development of internal structure over training in _any_ neural network, from toy models like the ones considered here, through to large language models. ## Acknowledgement SW is the recipient of an Australian Research Council Discovery Early Career Researcher Award (project number DE200101253) funded by the Australian Government. SW is also partially funded by an unrestricted gift from Google. We would like to thank Matthew Farrugia-Roberts, Jesse Hoogland and Liam Carroll for discussions and valuable feedback on the manuscript.
2302.06110
Spectrum and stability of travelling pulses in a coupled FitzHugh--Nagumo equation
For a coupled slow--fast FitzHugh--Nagumo (FHN) equation derived from a reaction-diffusion-mechanics (RDM) model, Holzer, Doelman and Kaper in 2013 studied the existence and stability of a travelling pulse, which consists of two fast orbit arcs and two slow ones, where one fast segment passes the unique fold point with algebraic decay and the two slow segments follow normally hyperbolic critical curve segments. Shen and Zhang in 2019 obtained the existence of a travelling pulse whose two fast orbit arcs both decay exponentially, and one of whose slow orbit arcs could be normally hyperbolic or not at the origin. Here we characterize both the nonlinear and the spectral stability of this travelling pulse.
Qi Qiao, Xiang Zhang
2023-02-13T05:27:54Z
http://arxiv.org/abs/2302.06110v1
# Spectrum and stability of travelling pulses in a coupled FitzHugh-Nagumo equation ###### Abstract. For a coupled slow-fast FitzHugh-Nagumo (FHN) equation derived from a reaction-diffusion-mechanics (RDM) model, Holzer, Doelman and Kaper in 2013 studied the existence and stability of a travelling pulse, which consists of two fast orbit arcs and two slow ones, where one fast segment passes the unique fold point with algebraic decay and the two slow segments follow normally hyperbolic critical curve segments. Shen and Zhang in 2019 obtained the existence of a travelling pulse whose two fast orbit arcs both decay exponentially, and one of whose slow orbit arcs could be normally hyperbolic or not at the origin. Here we characterize both the nonlinear and the spectral stability of this travelling pulse. Key words and phrases:coupled FitzHugh-Nagumo equation; singular perturbation; travelling pulse; spectrum; nonlinear stability; spectral stability 2010 Mathematics Subject Classification: 35C07; 35B25; 35B35; 34E15; 34F10; 35B32 \({}^{\dagger}\)Corresponding author: Xiang Zhang ## 1. Introduction The FitzHugh-Nagumo (FHN) equation is a typical reaction-diffusion equation, originally proposed by FitzHugh [18] in 1961 and Nagumo et al. [31] in 1962 as a simplification of the Hodgkin-Huxley model of a nerve axon: \[\begin{array}{l}u_{t}=u_{xx}+u(u-a)(1-u)-w,\\ w_{t}=\epsilon(u-\gamma w),\end{array} \tag{1}\] with \(a\in[0,1],\ \gamma>0\) and \(0<\epsilon\ll 1\). Here \(u(x,t)\) represents the membrane potential and \(w(x,t)\) denotes the recovery variable. Usually, in system (1), it is assumed that \(\gamma\) is so small that the system admits only the trivial steady state. There are many studies on the travelling pulses of the FHN system, which represent the propagation of the action potential along nerve cells. It is well known that, if \(a\in(0,1/2)\), system (1) has a travelling pulse solution, a fact established by a number of different techniques [3, 5, 6, 22, 23, 28]. Hastings [22] showed that system (1) has two pulse solutions with different propagation speeds, called the slow pulse solution and the fast pulse solution, respectively. Hastings [23], by topological methods, investigated the travelling pulses with oscillatory tails in the FHN system for \(0<a,\epsilon\ll 1\). Recently, by using geometric singular perturbation theory, the exchange lemma and geometric blow-up techniques, Carter and Sandstede [5] also discussed the existence of travelling pulses with oscillatory tails for \(0<a,\epsilon\ll 1\). The stability of the pulse solutions has also been studied by many authors [4, 13, 14, 19, 25, 42]. For instance, Jones [25] and Yanagida [42] proved that the fast pulses are stable for \(0<a<1/2\) by means of the Evans function [1, 14]. Carter et al. [4] verified that the fast pulses obtained in [5] are nonlinearly stable for \(a\in[0,1/2)\), where \(a\) can be taken as a small parameter, by applying exponential dichotomies [7, 8] and Lin's method [2, 30, 38]. Nash and Panfilov [32] and Panfilov et al. [37] introduced a reaction-diffusion-mechanics (RDM) model that couples the elasticity equation, which simulates the mechanical deformation of a two-dimensional patch of myocardial fibers, to the modified FHN equations.
Recently, Holzer et al. [24] derived the following RDM model on the real line: \[\begin{array}{l} U_{t}=\frac{1}{F}\frac{\partial}{\partial x}\left(\frac{1}{F}\frac{\partial U}{\partial x}\right)+kU(U-a)(1-U)-UW,\\ \\ W_{t}=\epsilon(kU-W),\end{array} \tag{2}\] where \[F(x,t)=\frac{1}{2}+\frac{M}{4c_{1}}+\frac{1}{2}\sqrt{\left(1+\frac{M}{2c_{1}}\right)^{2}-\frac{2}{c_{1}}W(x,t)}. \tag{3}\] Here \(U\) represents the voltage and \(W\) a recovery variable. The parameter \(a\) measures the degree of excitability of the medium and \(k\) is a rate constant. The function \(F(x,t)\) models the deformation effect under some simplifying assumptions, where \(M\) is the stress constant and \(c_{1}\) is a parameter measuring the internal energy of the deformable medium. The authors established the existence of travelling pulses for system (2) for \(a\in(0,1/2)\) by using geometric singular perturbation theory and geometric blow-up techniques. More precisely, the travelling pulse passes through a region near a non-hyperbolic point (the maximum point). They then discussed the spectral stability of the travelling pulse solution by exponential dichotomies and the Evans function. Recently, Shen and Zhang [40] also discussed the coupled FHN system \[\begin{array}{l} U_{t}=\frac{1}{F}\frac{\partial}{\partial x}\left(\frac{1}{F}\frac{\partial U}{\partial x}\right)+kU(U-a)(1-U)-UW,\\ \\ W_{t}=\epsilon(U-\gamma W),\end{array} \tag{4}\] where \(F(x,t)\) has the expression (3), and \(\gamma\) is so small that the system admits only the trivial steady state. The authors obtained the existence of travelling pulses for \(a\in[0,1/2)\) by using the theory of geometric singular perturbation, for instance Fenichel's invariant manifold theorems and the exchange lemma, together with qualitative analysis and the center manifold theorem, and also provided an explanation of the nonexistence of travelling pulses for \(a\in[1/2,1]\).
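As a concrete illustration (ours, not taken from [24] or [40]), system (4) can be simulated by the method of lines; the sketch below uses a flux-form discretization of the diffusion term \(\frac{1}{F}\partial_{x}(\frac{1}{F}\partial_{x}U)\) with \(F\) evaluated at half grid points, and also checks the identity \(F(0)=1+\frac{M}{2c_{1}}\) implied by (3), which is used in the spectral analysis below. All parameter values and the initial stimulus are illustrative assumptions.

```python
import numpy as np

# Illustrative parameter values (our choice), not those of [24] or [40].
k, a, gam, eps, M, c1 = 1.0, 0.1, 0.5, 0.01, 1.0, 2.0
L, N, T, dt = 200.0, 1000, 150.0, 0.005
dx = L / N
x = np.arange(N) * dx

def F(W):
    # deformation factor (3); real as long as 2*W/c1 <= (1 + M/(2*c1))**2
    return 0.5 + M / (4 * c1) + 0.5 * np.sqrt((1 + M / (2 * c1))**2 - 2 * W / c1)

assert np.isclose(F(0.0), 1 + M / (2 * c1))      # F(0) = 1 + M/(2 c1)

U = np.where(np.abs(x - 30.0) < 5.0, 1.0, 0.0)   # localized stimulus
W = np.zeros(N)

for _ in range(int(T / dt)):
    Fx = F(W)
    Fh = 0.5 * (Fx + np.roll(Fx, -1))             # F at half grid points
    flux = (np.roll(U, -1) - U) / (dx * Fh)       # (1/F) U_x, periodic grid
    diff = (flux - np.roll(flux, 1)) / (dx * Fx)  # (1/F) d/dx((1/F) U_x)
    U, W = (U + dt * (diff + k * U * (U - a) * (1.0 - U) - U * W),
            W + dt * eps * (U - gam * W))

excited = x[U > 0.5]
print("excited region at t=%g:" % T,
      "[%.1f, %.1f]" % (excited.min(), excited.max()) if excited.size else "none")
```

System (2) differs only in the recovery equation, \(W_{t}=\epsilon(kU-W)\); replacing the last update accordingly reproduces it.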
It should be noted here that the travelling pulses of systems (1) and (4) were obtained as homoclinic orbits to the origin of the ordinary differential systems associated with (1) and (4) via the travelling wave transformation. Unlike in [24], the homoclinic orbit in [40] jumps before reaching the non-hyperbolic fold point (the maximum point) of the critical curves for \(a\in[0,1/2)\). Moreover, the ordinary differential systems associated with (1) and (4) differ in a key way when \(a=0\): the trivial equilibrium of the model (1) overlaps with a fold point (i.e., the minimum point) of the critical curves, while the trivial equilibrium of the model (4) overlaps with a transcritical point of the critical curves. In this paper our investigation is based on the results of Shen and Zhang [40]: we discuss the stability of the travelling pulses obtained there, by means of exponential dichotomies [2, 4, 7, 8, 24, 39] and Lin's method [2, 4, 30, 38]. We remark that one of the main techniques of this paper is the exponential dichotomy, which is stated and deeply utilized here for studying the point spectrum of the associated second order linear differential operator; this operator is determined by the variational equation of system (4) along the travelling pulse. Besides exponential dichotomies, there are also the Evans function with geometric and topological methods [1, 20, 21, 25, 41], the Evans function with the NLEP (Nonlocal Eigenvalue Problem) method [9, 10, 11], the SLEP (Singular Limit Eigenvalue Problem) method [33, 34], among others. Compared with the geometric and topological methods [1, 20, 21, 25, 41], the method of exponential dichotomies is analytical and may be simpler. The NLEP method [9, 10, 11] leads to a nonlocal eigenvalue problem, which is a hypergeometric differential equation. Similarly, the SLEP method [33, 34] leads to a singular limit eigenvalue problem, which is not easy to solve. Thus, compared with the NLEP and SLEP methods, the method of exponential dichotomies may be computationally simpler. We remark that the Evans function and exponential dichotomy arguments of [24] are not directly applicable to this article. In [24], the back solution in the relevant layer has an exponential decay rate as \(\xi\to+\infty\) and an algebraic decay rate as \(\xi\to-\infty\), which implies that the back solution does not contribute an eigenvalue to the associated second order linear differential operator \(\mathcal{L}_{a,\epsilon}\) in the region \(R_{1}\) (see Fig. 2). Thus the eigenvalue of \(\mathcal{L}_{a,\epsilon}\) in the region \(R_{1}\) is unique and trivial, and is associated with the front solution, which decays at an exponential rate as \(\xi\to\pm\infty\). In this paper, however, the back solution decays exponentially as \(\xi\to\pm\infty\), which implies that the back solution contributes an eigenvalue to the operator \(\mathcal{L}_{a,\epsilon}\) in the region \(R_{1}\); meanwhile, the front solution also decays exponentially as \(\xi\to\pm\infty\). It follows that there exist two eigenvalues, \(\lambda_{0}=0\) and \(\lambda_{1}\), in the region \(R_{1}\). In order to discuss the stability of the travelling pulse, we must determine the sign of \(\mathrm{Re}\,\lambda_{1}\) by exponential dichotomies and Lin's method. The remaining part of this paper is organized as follows. Section 2 introduces the existence of travelling pulses established by Shen and Zhang [40] for \(a\in[0,1/2)\). Section 3 states the main results on the stability of the travelling pulse, namely, that the pulse is nonlinearly stable for \(a\in(0,1/2)\) and spectrally stable for \(a=0\). Section 4 calculates the essential spectrum of the linearization \(\mathcal{L}_{a,\epsilon}\) of system (4) along the travelling pulse. Section 5 focuses on the point spectrum of the linearized operator, where we divide the region to the right of the essential spectrum into three regions \(R_{1},\ R_{2}\) and \(R_{3}\): the regions \(R_{2}\) and \(R_{3}\) do not intersect the point spectrum, while the region \(R_{1}\) contains at most two eigenvalues; if both exist, one is the translational eigenvalue \(\lambda=0\) and the other is a negative eigenvalue. Section 6 contains the proofs of Theorems 2 and 4. The last section is an appendix which, for the readers' convenience, recalls some related results on exponential dichotomies and on the Sturm-Liouville theorem on an infinite interval. ## 2. Existence of travelling pulses A _travelling wave solution_ of system (4) is a particular non-constant solution of the form \[U(x,t)=u(\xi),\ W(x,t)=w(\xi),\ \xi=x+ct, \tag{5}\] where \(c\) is called the wave speed. The following procedure is standard for studying the existence of travelling waves in slow-fast models, see e.g. [24, 40].
Substituting the ansatz (5) into (4) leads to \[\begin{split}&\frac{1}{F}\left(\frac{1}{F}u^{\prime}\right)^{\prime}-cu^{\prime}-f(u,w)=0,\\ & w^{\prime}=\frac{\epsilon}{c}(u-\gamma w),\end{split} \tag{6}\] where \({}^{\prime}=\frac{d}{d\xi}\) and \(f(u,w)=f(u,w;a)=ku(u-a)(u-1)+uw.\) Let \(v=\frac{u^{\prime}}{F}\); then system (6) can be rewritten as a system of first-order ODEs, namely \[\begin{split}& u^{\prime}=F(w)v,\\ & v^{\prime}=cF^{2}(w)v+F(w)f(u,w),\\ & w^{\prime}=\frac{\epsilon}{c}(u-\gamma w).\end{split} \tag{7}\] System (7) is called the _fast system_, and in the slow scale \(\widetilde{\xi}=\epsilon\xi\) its associated _slow system_ reads \[\begin{split}&\epsilon\dot{u}=F(w)v,\\ &\epsilon\dot{v}=cF^{2}(w)v+F(w)f(u,w),\\ &\dot{w}=\frac{1}{c}(u-\gamma w),\end{split} \tag{8}\] where the dot denotes the derivative with respect to \(\widetilde{\xi}\). The _layer system_ is \[\begin{split}& u^{\prime}=F(w)v,\\ & v^{\prime}=cF^{2}(w)v+F(w)f(u,w),\\ & w^{\prime}=0.\end{split} \tag{9}\] Setting \(\epsilon=0\) in system (8) results in the _reduced system_ \[\begin{split}& 0=F(w)v,\\ & 0=cF^{2}(w)v+F(w)f(u,w),\\ &\dot{w}=\frac{1}{c}(u-\gamma w).\end{split} \tag{10}\] The critical set of system (10) is composed of the straight line \[L_{0}=\{(u,v,w)\in\mathbb{R}_{+}\times\mathbb{R}\times\mathbb{R}_{+}\ |\ u=0,\ v=0\}\] and the parabola \[S_{0}=\{(u,v,w)\in\mathbb{R}_{+}\times\mathbb{R}\times\mathbb{R}_{+}\ |\ w=-k(u-a)(u-1),\ v=0\}.\] The maximum point of \(S_{0}\) is \((\frac{1+a}{2},0,\frac{k}{4}(1-a)^{2})\), which is a non-hyperbolic point. Moreover, we denote the right branch of \(S_{0}\) by \(S_{0}^{r}\) and the left branch of \(S_{0}\) by \(S_{0}^{l}\); recall that \(S_{0}^{r}\) and \(S_{0}^{l}\) do not contain the non-hyperbolic point \((\frac{1+a}{2},0,\frac{k}{4}(1-a)^{2})\), the local maximum point. It is clear that all points of the critical set \(L_{0}\cup S_{0}\) are equilibria of the layer system (9). When \(w_{0}\in\left[0,\frac{k}{4}(1-a)^{2}\right)\), the layer system (9) on \(w=w_{0}\) has three equilibria \((0,0,w_{0}),\ (U_{1}(w_{0}),0,w_{0})\) and \((U_{2}(w_{0}),0,w_{0})\), where \[U_{1,2}(w_{0})=\frac{1+a}{2}\pm\frac{1}{2}\sqrt{(1-a)^{2}-\frac{4w_{0}}{k}}.\] According to [40], in the \(w=w_{f}=0\) plane the layer system has a heteroclinic orbit \(\phi_{f}=(u_{f},v_{f})^{T}\) in the first quadrant, with the wave speed \(c=c_{0}(a)=\frac{\sqrt{2k}}{F(0)}(\frac{1}{2}-a)\), connecting \((0,0,0)\) and \((1,0,0)\), where \[u_{f}(\xi)=\frac{C_{f}e^{F(0)\sqrt{\frac{k}{2}}\xi}}{1+C_{f}e^{F(0)\sqrt{\frac{k}{2}}\xi}},\ \ \ \ v_{f}(\xi)=\frac{u_{f}^{\prime}(\xi)}{F(0)},\] with \(C_{f}\) an integration constant. In the \(w=w_{b}\) plane, where \(w_{b}\) satisfies \[(1-2a)F(w_{b})=(2U_{1}(w_{b})-U_{2}(w_{b}))F(0) \tag{11}\] and \(0<w_{b}<\frac{k(2a^{2}-5a+2)}{9}<\frac{k(1-a)^{2}}{4}\), system (9) on \(w=w_{b}\) has a heteroclinic orbit \(\phi_{b}=(u_{b},v_{b})^{T}\) in the fourth quadrant, with the wave speed \(c=c_{0}(a)\), connecting \((U_{2}(w_{b}),0,w_{b})\) and \((0,0,w_{b})\). Here \[u_{b}(\xi)=\frac{U_{2}(w_{b})}{1+C_{b}e^{F(w_{b})\sqrt{\frac{k}{2}}U_{2}(w_{b})\xi}},\ \ \ \ v_{b}(\xi)=\frac{u_{b}^{\prime}(\xi)}{F(w_{b})},\] with \(C_{b}\) an integration constant. From the above discussion, one can construct the singular homoclinic orbit, consisting of the two heteroclinic fast orbits mentioned above and two slow orbit arcs on \(L_{0}\) and \(S_{0}^{r}\) with \(0\leq w\leq w_{b}\), see Fig. 1.
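As a numerical aside (ours, not part of [40]), the explicit front can be checked directly. Eliminating \(v=u^{\prime}/F(0)\) turns the layer system (9) at \(w=0\) into the scalar equation \(u^{\prime\prime}=cF^{2}(0)u^{\prime}+F^{2}(0)f(u,0)\), and the profile \(u_{f}\) with \(c=c_{0}(a)\) should make the residual vanish identically. Parameter values below are illustrative.

```python
import numpy as np

k, a, M, c1 = 1.0, 0.25, 1.0, 2.0      # illustrative parameter values

F0 = 1 + M / (2 * c1)                  # F(0) for the F of (3)
c0 = np.sqrt(2 * k) * (0.5 - a) / F0   # front speed c_0(a)
theta = F0 * np.sqrt(k / 2)            # exponential rate of the front

def f(u, w):
    return k * u * (u - a) * (u - 1) + u * w

xi = np.linspace(-30, 30, 2001)
uf = np.exp(theta * xi) / (1 + np.exp(theta * xi))   # u_f with C_f = 1
duf = theta * uf * (1 - uf)                          # analytic u_f'
dduf = theta**2 * uf * (1 - uf) * (1 - 2 * uf)       # analytic u_f''

# residual of u'' = c F(0)^2 u' + F(0)^2 f(u,0), the scalar form of (9) at w=0
res = dduf - c0 * F0**2 * duf - F0**2 * f(uf, 0.0)
print("max |residual| =", np.abs(res).max())   # should be at rounding-error level
```

An analogous check applies to the back \(u_{b}\), with \(F(w_{b})\) and the roots \(U_{1,2}(w_{b})\), once \(w_{b}\) has been solved from (11).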
Shen and Zhang [40] proved the following result. **Lemma 1**.: _For \(a\in[0,\ 1/2)\) and any fixed positive values of \((M,\ c_{1},\ k)\), if \(\gamma>0\) then for \(\epsilon>0\) sufficiently small, there is a function \(c(a,\epsilon)=c_{0}(a)+O(\epsilon)\) such that system (6) with \(c=c(a,\epsilon)\) admits a homoclinic orbit to the origin, shown in red in Fig. 1._ The homoclinic orbit in Lemma 1 provides a travelling wave solution of system (4), as stated in the following theorem, which also presents an approximation of the wave by the singular homoclinic orbit. **Theorem 1**.: _Let \(\widetilde{\phi}_{a,\epsilon}(\xi)=(u_{a,\epsilon},w_{a,\epsilon})^{T}(\xi)\) be the travelling pulse solution derived in Lemma 1 for \(a\in[0,1/2)\) and \(\epsilon>0\) sufficiently small. For each sufficiently small \(\sigma_{0},\ \tau>0\), set \(\Xi_{\tau}(\epsilon)=-\tau\log\epsilon\); there exist \(\epsilon_{0}>0\), \(C>1\), and \(\xi_{0},\ Z_{a,\epsilon}>0\), with \(\xi_{0}\) independent of \(a\) and \(\epsilon\) and \(1/C\leq\epsilon Z_{a,\epsilon}\leq C\), such that the following estimates hold._ * _On_ \(J_{f}:=(-\infty,\Xi_{\tau}(\epsilon)]\) _and_ \(J_{b}:=[Z_{a,\epsilon}-\Xi_{\tau}(\epsilon),Z_{a,\epsilon}+\Xi_{\tau}(\epsilon)]\)_, the pulse solution_ \(\widetilde{\phi}_{a,\epsilon}(\xi)\) _satisfies respectively_ \[(u_{a,\epsilon}(\xi),w_{a,\epsilon}(\xi))^{T}=(u_{f}(\xi)+O(\epsilon\log\epsilon),O(\epsilon\log\epsilon))^{T},\] \[(u_{a,\epsilon}(\xi),w_{a,\epsilon}(\xi))^{T}=(u_{b}(\xi)+O(\epsilon\log\epsilon),w_{b}+O(\epsilon\log\epsilon))^{T}.\] * _On_ \(J_{r}:=[\xi_{0},Z_{a,\epsilon}-\xi_{0}]\)_,_ \(\phi_{a,\epsilon}(\xi)=(u_{a,\epsilon},v_{a,\epsilon},w_{a,\epsilon})^{T}(\xi)\) _is approximated by the right slow manifold_ \(S_{0}^{r}\) _with_ \[d(\phi_{a,\epsilon},S_{0}^{r})\leq\sigma_{0}.\] * _On_ \(J_{l}:=[Z_{a,\epsilon}+\xi_{0},\infty)\)_,_ \(\phi_{a,\epsilon}(\xi)\) _is approximated by the left slow manifold_ \(L_{0}\) _with_ \[d(\phi_{a,\epsilon},L_{0})\leq\sigma_{0}.\] Proof.: The proof can be found in Appendix A or obtained using the techniques in [4, Theorem 4.5]. ## 3. The main stability results In this section we state the main results of this paper, which concern the stability of the travelling pulse solution \(\widetilde{\phi}_{a,\epsilon}(\xi)\) obtained in Lemma 1 and Theorem 1. By linearizing the system along the pulse solution we study its stability via the essential spectrum and the point spectrum. Without abuse of notation, we write \((u,w)^{T}(\xi)\) for the travelling wave solution of system (6) obtained in the last section. Let \((U,W)^{T}(x,t)\) be a solution of system (4) which is a perturbation of the travelling wave solution. Utilizing the moving coordinate \(\xi=x+ct\), we write this perturbed solution as \[(U,W)^{T}(x,t)=(u(\xi)+p(\xi,t),w(\xi)+r(\xi,t))^{T},\] with \(p,\ r\in C_{ub}(\mathbb{R},\mathbb{R})\), where \[C_{ub}(\mathbb{R},\mathbb{R}):=\{u:\mathbb{R}\to\mathbb{R}\ |\ u\text{ is bounded and uniformly continuous}\}.\] Figure 1. For \(a\in(0,1/2)\), the green curves are given by \(L_{0}\) and \(S_{0}\), and the two red points on the green curves are non-hyperbolic points. Moreover, when \(a=0\), the red point on the \(w\)-axis coincides with the origin. The blue curves with double arrows represent the front and back solutions of the layer system (9), respectively, which are fast orbits. Here, the red curve is the pulse \(\phi_{a,\epsilon}\), and \(\xi_{1}=\Xi_{\tau}(\epsilon)\), \(\xi_{2}=Z_{a,\epsilon}-\Xi_{\tau}(\epsilon)\) and \(\xi_{3}=Z_{a,\epsilon}+\Xi_{\tau}(\epsilon)\).
Plugging this new expression of \((U,W)^{T}\) into system (4) and replacing \((x,t)\) by \((\xi,t)\), one gets the system of linear partial differential equations that \((p,r)\) satisfy: \[\begin{split} p_{t}&=\frac{1}{F}\frac{\partial}{\partial\xi}\left(\frac{1}{F}p^{\prime}\right)-cp^{\prime}-f_{u}(u,w)p-f_{w}(u,w)r-\frac{1}{F}\frac{\partial}{\partial\xi}\left(\frac{F_{w}u^{\prime}r}{F^{2}}\right)-\left(\frac{F_{w}}{F^{2}}\frac{\partial}{\partial\xi}\left(\frac{u^{\prime}}{F}\right)\right)r,\\ r_{t}&=-cr^{\prime}+\epsilon p-\epsilon\gamma r.\end{split} \tag{12}\] The right-hand side of (12) defines a linear operator, denoted by \(\mathcal{L}_{a,\epsilon}\); that is, \(\mathcal{L}_{a,\epsilon}(p,r)^{T}\) equals the right-hand side of (12). Note that the linear operator \(\mathcal{L}_{a,\epsilon}\) is partially determined by the pulse solution \((u,w)^{T}\). To study the stability of the pulse solution, we investigate the spectrum of this linear operator. To this end we seek the values of \(\lambda\) for which the linearized eigenvalue problem \((\mathcal{L}_{a,\epsilon}-\lambda I)\vec{P}=0\) has a nontrivial bounded solution \(\vec{P}=(p,r)^{T}\). The following is our first main result, on the spectrum of the linear operator \(\mathcal{L}_{a,\epsilon}\). **Theorem 2**.: _Let \(\widetilde{\phi}_{a,\epsilon}(\xi)\) denote the travelling pulse solution obtained from Theorem 1 with the associated linear operator \(\mathcal{L}_{a,\epsilon}\) for \(\epsilon>0\) sufficiently small. For \(a\in(0,1/2)\), there exists \(b_{0}>0\) such that the spectrum of \(\mathcal{L}_{a,\epsilon}\) is contained in_ \[\{0\}\cup\{\lambda\in\mathbb{C}|\ \mathrm{Re}\lambda\leq-b_{0}\epsilon\}.\] Applying Theorem 2 together with the results in [1, 2, 3], we easily obtain the next conclusion, on the nonlinear stability of the travelling pulse \(\widetilde{\phi}_{a,\epsilon}(\xi)\). **Theorem 3**.: _For \(a\in(0,1/2)\), the travelling pulse solution from Theorem 1 is nonlinearly stable for system (4)._ By definition, the travelling pulse solution \(\widetilde{\phi}_{a,\epsilon}(\xi)\) is _nonlinearly stable_ for system (4) if there exists a \(d>0\) such that for any solution \(\phi(\xi,t)\) of system (4) satisfying \(\|\phi(\xi,0)-\widetilde{\phi}_{a,\epsilon}(\xi)\|\leq d\), there exists a \(\xi_{0}\in\mathbb{R}\) such that \(\|\phi(\xi+\xi_{0},t)-\widetilde{\phi}_{a,\epsilon}(\xi)\|\to 0\) as \(t\to\infty\), where the norm is the \(L^{\infty}\) (supremum) norm. For the critical value \(a=0\), we have the following result. **Theorem 4**.: _Let \(\widetilde{\phi}_{a,\epsilon}(\xi)\) denote the travelling pulse solution obtained from Theorem 1. For \(a=0\), the pulse solution \(\widetilde{\phi}_{a,\epsilon}(\xi)\) is spectrally stable for system (4)._ By definition, a travelling wave of a system is _spectrally stable_ if the linearized operator \(\mathcal{L}\) of the system along this wave has spectrum \(\sigma(\mathcal{L})\) satisfying \(\sigma(\mathcal{L})\cap\{\lambda\in\mathbb{C}|\ \mathrm{Re}\lambda>0\}=\emptyset\), i.e., there is no spectrum point in the open right half of the complex plane. Otherwise, the wave is _spectrally unstable_; see e.g. Kapitula and Promislow [27, Definition 4.1.7]. Concerning Theorems 2 and 4, we have the following remarks. * Our proofs below show that the point spectra in the two cases \(a\in(0,1/2)\) and \(a=0\) are the same: each contains two elements, one equal to zero and the other negative.
* In contrast, the essential spectra of the travelling wave in the cases \(a\in(0,1/2)\) and \(a=0\) are different. The former has its essential spectrum in the interior of the left half of the complex plane, which yields the nonlinear stability of the travelling pulse by the results from [1, 13, 14]. The latter has its essential spectrum in the left half of the complex plane with a unique point on the imaginary axis, namely the origin; so no Andronov-Hopf bifurcation occurs. We strongly believe that in this last case the travelling pulse is also nonlinearly stable, but at the moment we cannot prove it. * In the proof of our main theorems we make use of the shifted eigenvalue problem. The proof also shows that in the case \(a=0\) the travelling pulse is stable in the exponentially weighted space \(X_{\eta}:=\{(u,v)\in C_{ub}(\mathbb{R},\mathbb{R})\times C_{ub}(\mathbb{R},\mathbb{R})\ |\ (e^{-\eta\xi}u,e^{-\eta\xi}v)\in C_{ub}(\mathbb{R},\mathbb{R})\times C_{ub}(\mathbb{R},\mathbb{R})\}\), with \(\eta>0\) defined in Lemma 2. To prove Theorem 2, we need to investigate the spectra of these waves via the exponential dichotomy of the linearized operator \(\mathcal{L}_{a,\epsilon}\) of system (4) along the travelling pulse. We further write the linearized eigenvalue problem \((\mathcal{L}_{a,\epsilon}-\lambda I)\vec{P}=0\) as a first-order linear differential system: \[\begin{split} p^{\prime}=& F(w)q,\\ q^{\prime}=& F(w)(f_{u}(u,w)+\lambda)p+cF^{2}(w)q+F(w)f_{w}(u,w)r\\ &+\left(\frac{F_{w}}{F}\frac{\partial}{\partial\xi}\left(\frac{u^{\prime}}{F}\right)\right)r+\frac{\partial}{\partial\xi}\left(\frac{F_{w}}{F^{2}}u^{\prime}r\right),\\ r^{\prime}=&\frac{\epsilon}{c}p-\frac{\epsilon\gamma+\lambda}{c}r,\end{split} \tag{13}\] where \[\left(\frac{F_{w}}{F}\frac{\partial}{\partial\xi}\left(\frac{u^{\prime}}{F}\right)\right)r+\frac{\partial}{\partial\xi}\left(\frac{F_{w}}{F^{2}}u^{\prime}r\right)=\frac{1}{F^{2}}\left(2F_{w}u^{\prime\prime}-3\frac{F_{w}^{2}}{F}u^{\prime}w^{\prime}+F_{ww}u^{\prime}w^{\prime}-\frac{\epsilon\gamma+\lambda}{c}F_{w}u^{\prime}\right)r+\frac{\epsilon F_{w}u^{\prime}}{cF^{2}}p.\] The coefficient matrix of (13) is denoted by \(A_{0}(\xi,\lambda)=A_{0}(\xi,\lambda;a,\epsilon)\). Then system (13) can be written in the form \[\varphi^{\prime}=A_{0}(\xi,\lambda;a,\epsilon)\varphi, \tag{14}\] where \(\varphi=(p,q,r)^{T}\). To study the spectral stability of the travelling wave solutions of the partial differential system (4), one needs to prove the existence of non-trivial solutions of the ordinary differential system (14) with \(p,q,r\in C_{ub}(\mathbb{R},\mathbb{R})\). By [35, 36, 39] the existence of such solutions of the linear nonautonomous system (14) can be characterized in terms of exponential dichotomies (see the Appendix). So spectral properties of \(\mathcal{L}_{a,\epsilon}\), namely the invertibility of \(\mathcal{L}_{a,\epsilon}-\lambda I\) in the Banach space \(C_{ub}(\mathbb{R},\mathbb{R})\times C_{ub}(\mathbb{R},\mathbb{R})\), can be restated in terms of properties of exponential dichotomies of system (14). Recall from [39] the following properties of the spectrum of linear operators.
* \(\lambda\in\mathbb{C}\) is in the _resolvent set_ of \(\mathcal{L}_{a,\epsilon}\) if and only if the asymptotic matrix \(A_{\infty}(\lambda):=\lim_{\xi\to\pm\infty}A(\xi,\lambda)\) is hyperbolic and the projections \(P_{\pm}(\xi,\lambda)\) of the exponential dichotomies of (14) on \(J=\mathbb{R}_{\pm}\) satisfy \(\ker(P_{-}(0,\lambda))\oplus R(P_{+}(0,\lambda))=\mathbb{C}^{n}\). Here \(\ker\) and \(R\) denote respectively the kernel and the range of a linear operator. * \(\lambda\in\mathbb{C}\) is in the _point spectrum_ if and only if the projections \(P_{\pm}(\xi,\lambda)\) of the exponential dichotomies of system (14) on \(J=\mathbb{R}_{\pm}\) satisfy \(\ker(P_{-}(0,\lambda))\cap R(P_{+}(0,\lambda))\neq\{0\}\). * \(\lambda\in\mathbb{C}\) is in the _essential spectrum_ if the asymptotic matrix \(A_{\infty}(\lambda)\) is not hyperbolic. Hereafter \(\mathbb{R}_{+}=[0,\infty)\) and \(\mathbb{R}_{-}=(-\infty,0]\). Note that \(\lambda=0\) is always contained in the spectrum of \(\mathcal{L}_{a,\epsilon}\), with the associated eigenfunction \(\widetilde{\phi}^{\prime}_{a,\epsilon}(\xi)=(u^{\prime}_{a,\epsilon}(\xi),w^{\prime}_{a,\epsilon}(\xi))^{T}\). This fact follows from the properties of solutions of the variational equation of a differential system along a given solution. Since our travelling pulse has a slow-fast structure, and the essential spectrum is partly determined by the asymptotic matrices, we remark here on the contributions of the slow and fast parts of the pulse to its point spectrum. By the proofs below, the two slow motions do not contribute eigenvalues to the associated second order linear differential operator \(\mathcal{L}_{a,\epsilon}\); the contribution to the eigenvalues of \(\mathcal{L}_{a,\epsilon}\) comes from the front and back solutions, which are the fast motions of the pulse near two different layers. To prove our main results, we analyze in the next two sections the essential and point spectra of the linear operator \(\mathcal{L}_{a,\epsilon}\). We remark that the main techniques of the proofs on the analysis of the spectra are from those of [2, 4]. ## 4. Essential Spectrum In this section we prove that the essential spectrum of \(\mathcal{L}_{a,\epsilon}\) is contained in the left half plane and that, for \(a\in(0,1/2)\), it is bounded away from the imaginary axis; when \(a=0\), the essential spectrum of \(\mathcal{L}_{a,\epsilon}\) contains the origin. **Proposition 1**.: _The essential spectrum of \(\mathcal{L}_{a,\epsilon}\) is contained in the half plane \(\{\lambda\in\mathbb{C}|\ \mathrm{Re}(\lambda)\leq\max\{-\epsilon\gamma,-ka\}\}\).
Moreover, for all \(\lambda\in\mathbb{C}\) located to the right of the essential spectrum, the asymptotic matrix \(A_{\infty}(\lambda)=A_{\infty}(\lambda;a,\epsilon)\) of system (14) has precisely one \((\)spatial\()\) eigenvalue with positive real part._ Proof.: The asymptotic matrix of \(A_{0}(\xi,\lambda)\) is \[A_{\infty}(\lambda)=\left(\begin{array}{ccc}0&1+\dfrac{M}{2c_{1}}&0\\ \left(1+\dfrac{M}{2c_{1}}\right)(ka+\lambda)&c\left(1+\dfrac{M}{2c_{1}}\right)^{2}&0\\ \dfrac{\epsilon}{c}&0&-\dfrac{\epsilon\gamma+\lambda}{c}\end{array}\right).\] The essential spectrum of \(\mathcal{L}_{a,\epsilon}\) is given by the solutions of the algebraic equation \[0=\det(A_{\infty}(\lambda)-ilI)=\left(il+\dfrac{\epsilon\gamma+\lambda}{c}\right)\left(l^{2}+c\left(1+\dfrac{M}{2c_{1}}\right)^{2}il+(ka+\lambda)\left(1+\dfrac{M}{2c_{1}}\right)^{2}\right) \tag{15}\] with \(l\in\mathbb{R}\), where \(i=\sqrt{-1}\). Obviously, the solutions of equation (15) are the straight line \[\lambda=-icl-\epsilon\gamma,\ l\in\mathbb{R},\] and the parabola \[\lambda=-icl-ka-l^{2}\bigg{(}1+\frac{M}{2c_{1}}\bigg{)}^{-2},\ l\in\mathbb{R}.\] Thus the essential spectrum is confined to \(\text{Re}(\lambda)\leq\max\{-\epsilon\gamma,-ka\}\). A straightforward computation shows that the eigenvalues of \(A_{\infty}(\lambda)\) are \[\mu_{1}=-\frac{\epsilon\gamma+\lambda}{c},\qquad\mu_{2,3}=\frac{1}{2}\left(c\left(1+\frac{M}{2c_{1}}\right)^{2}\pm\sqrt{c^{2}\left(1+\frac{M}{2c_{1}}\right)^{4}+4(ka+\lambda)\left(1+\frac{M}{2c_{1}}\right)^{2}}\right).\] Since \(\lambda\) belongs to the right hand side of the essential spectrum, one has \(\text{Re}(\epsilon\gamma+\lambda)>0\) and \(\text{Re}(\lambda+ka)>0\). This verifies that \(\text{Re}\,\mu_{1}<0\), and that one of \(\mu_{2,3}\) has negative real part while the other has positive real part. This proves that for all \(\lambda\in\mathbb{C}\) belonging to the right hand side of the essential spectrum, the asymptotic matrix \(A_{\infty}(\lambda)\) has a unique eigenvalue with positive real part. According to Proposition 1, for detecting the spectral stability of the travelling wave solution \(\widetilde{\phi}_{a,\epsilon}(\xi)\) we need to study the point spectrum of the linear operator \(\mathcal{L}_{a,\epsilon}\).
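The content of Proposition 1 is easy to probe numerically. The sketch below (ours; parameter values illustrative, with \(c\) taken as \(c_{0}(a)\) in place of \(c(a,\epsilon)\)) builds \(A_{\infty}(\lambda)\), samples the line and the parabola, and counts the unstable spatial eigenvalues for \(\lambda\) to the right of the essential spectrum.

```python
import numpy as np

k, a, M, c1, gam, eps = 1.0, 0.25, 1.0, 2.0, 0.1, 0.01  # illustrative values
F0 = 1 + M / (2 * c1)                                   # F(0)
c = np.sqrt(2 * k) * (0.5 - a) / F0                     # c ~ c_0(a) proxy

def A_inf(lam):
    return np.array([
        [0.0,                F0,        0.0],
        [F0 * (k * a + lam), c * F0**2, 0.0],
        [eps / c,            0.0,       -(eps * gam + lam) / c],
    ], dtype=complex)

# the essential-spectrum curves from the proof of Proposition 1
l = np.linspace(-3, 3, 101)
line     = -1j * c * l - eps * gam
parabola = -1j * c * l - k * a - l**2 / F0**2
print("max Re over both curves:", max(line.real.max(), parabola.real.max()))
# <= max{-eps*gam, -k*a}, i.e. strictly negative

# to the right of these curves A_inf is hyperbolic with exactly one
# spatial eigenvalue of positive real part
for lam in [0.05, 0.5 + 1j, 2.0]:
    mu = np.linalg.eigvals(A_inf(lam))
    print(lam, "unstable spatial eigenvalues:", int(np.sum(mu.real > 0)))
```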
## 5. Point spectrum In this section we calculate the point spectrum to the right of the essential spectrum for \(a\in[0,1/2)\). First, we show that the point spectrum of \(\mathcal{L}_{a,\epsilon}\) for \(a\in(0,1/2)\) to the right of the essential spectrum consists of at most two eigenvalues; if both exist, one is the simple eigenvalue \(\lambda=0\) and the other is strictly negative. Next, we show that there is no element of the point spectrum of \(\mathcal{L}_{a,\epsilon}\) for \(a=0\) to the right of the essential spectrum. In order to determine the location of the point spectrum, it is useful to split the complex plane into several regions. For \(\widetilde{M}\gg 1\) and \(\delta\ll 1\) fixed and independent of \(a\in[0,1/2)\) and \(\epsilon\), we define the following three regions (see Fig. 2): \[R_{1} =R_{1}(\delta):=B(0,\delta),\] \[R_{2} =R_{2}(\delta,\widetilde{M}):=\{\lambda\in\mathbb{C}|\ \text{Re}(\lambda)\geq-\delta,\ \delta\leq|\lambda|\leq\widetilde{M}\},\] \[R_{3} =R_{3}(\widetilde{M}):=\{\lambda\in\mathbb{C}|\ |\arg(\lambda)|\leq 2\pi/3,\ |\lambda|>\widetilde{M}\}.\] Recall that the point spectrum of \(\mathcal{L}_{a,\epsilon}\) is given by the values of \(\lambda\) such that the linear differential system (13) has an exponentially localized solution. ### The region \(R_{3}(\widetilde{M})\) We start by showing that the region \(R_{3}(\widetilde{M})\) does not intersect the point spectrum, by rescaling the eigenvalue problem (13). Since \(|\lambda|>\widetilde{M}\) in \(R_{3}(\widetilde{M})\), taking the rescaling \(\check{\xi}=\sqrt{|\lambda|}\xi\), \(P=p,\ Q=q/\sqrt{|\lambda|},\ R=r\), system (13) is transformed into \[\begin{split}\frac{dP}{d\check{\xi}}&=F(w)Q,\\ \frac{dQ}{d\check{\xi}}&=\frac{\lambda}{|\lambda|}F(w)P+O(\frac{1}{\sqrt{|\lambda|}}),\\ \frac{dR}{d\check{\xi}}&=-\frac{\lambda}{c\sqrt{|\lambda|}}R+O(\frac{\epsilon}{\sqrt{|\lambda|}}).\end{split} \tag{16}\] Set \[B(\xi;\lambda):=\left(\begin{array}{ccc}0&F(w)&0\\ \frac{\lambda}{|\lambda|}F(w)&0&0\\ 0&0&-\frac{\lambda}{c\sqrt{|\lambda|}}\end{array}\right).\] Obviously, its eigenvalues are \[\check{\mu}_{1}=-\frac{\lambda}{c\sqrt{|\lambda|}},\ \ \ \ \ \ \check{\mu}_{2,3}=\pm F(w)\sqrt{\frac{\lambda}{|\lambda|}}.\] Since \(|\arg(\lambda)|\leq\frac{2\pi}{3}\) for all \(\lambda\in R_{3}\), we obtain \(\mathrm{Re}(\sqrt{\frac{\lambda}{|\lambda|}})\geq\frac{1}{2}\). Combining this with the fact \(F(w)\geq\frac{1}{2}+\frac{M}{4c_{1}}\), it holds that \[|\mathrm{Re}\,\check{\mu}_{2,3}|\geq\frac{1}{4}+\frac{M}{8c_{1}}.\] Next, we distinguish the two cases \(8|\mathrm{Re}\lambda|>c\sqrt{|\lambda|}\) and \(8|\mathrm{Re}\lambda|\leq c\sqrt{|\lambda|}\). _Case \(1\)_. \(8|\mathrm{Re}\lambda|>c\sqrt{|\lambda|}\). Then \(B(\xi;\lambda)\) is hyperbolic with spectral gap larger than \(1/8\). Thus, by Theorems 9 and 7 in Appendix B and the roughness of exponential dichotomies ([?, p. 34]), system (16) has an exponential dichotomy on \(\mathbb{R}\) for \(\lambda\in R_{3}(\widetilde{M})\). Hence, system (16) admits no nontrivial exponentially localized solutions. Consequently, \(\lambda\) is not in the intersection of \(R_{3}(\widetilde{M})\) with the point spectrum of \(\mathcal{L}_{a,\epsilon}\). _Case \(2\)_. \(8|\mathrm{Re}\lambda|\leq c\sqrt{|\lambda|}\). By roughness, system (16) has an exponential trichotomy on \(\mathbb{R}\) with one-dimensional center subspace, and any bounded solution must lie entirely in the center subspace. By continuity, the eigenvalues of the asymptotic matrix \(\check{A}_{\infty}(\lambda;a,\epsilon):=\lim_{\xi\to\pm\infty}\check{A}(\xi,\lambda;a,\epsilon)\) of the coefficient matrix \(\check{A}\) of (16) are separated in the following way: one, say \(\check{\mu}_{\epsilon}\), has the absolute value of its real part less than \(1/8+\kappa\) for some small \(\kappa>0\), and the other two have the absolute values of their real parts larger than \(\frac{1}{4}+\frac{M}{8c_{1}}-\kappa\). Let \(\beta\) be the eigenvector of \(\check{A}_{\infty}(\lambda;a,\epsilon)\) associated with \(\check{\mu}_{\epsilon}\); then any solution \(\check{\varphi}\) in the center subspace satisfies \(\lim_{\check{\xi}\to\pm\infty}\check{\varphi}e^{-\check{\mu}_{\epsilon}\check{\xi}}=b_{\pm}\beta\) for some \(b_{\pm}\in\mathbb{C}\setminus\{0\}\) (see [29], Theorem 1). Hence, system (16) admits no exponentially localized solutions, and consequently \(\lambda\in R_{3}(\widetilde{M})\) is not in the point spectrum of \(\mathcal{L}_{a,\epsilon}\).
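The sector bound driving the \(R_{3}\) argument can be spot-checked numerically (our sketch; \(M,c_{1}\) are illustrative):

```python
import numpy as np

M, c1 = 1.0, 2.0
Fm = 0.5 + M / (4 * c1)                      # lower bound F(w) >= F_m

# sample lambda/|lambda| over the sector |arg(lambda)| <= 2*pi/3
args = np.linspace(-2 * np.pi / 3, 2 * np.pi / 3, 1001)
unit = np.exp(1j * args)
re_root = np.sqrt(unit).real                 # Re sqrt(lambda/|lambda|)
print("min Re sqrt(lambda/|lambda|):", re_root.min())  # = cos(pi/3) = 1/2

# hence |Re mu_{2,3}| = F(w) * Re sqrt(lambda/|lambda|) >= F_m / 2
print("guaranteed gap:", Fm / 2, "= 1/4 + M/(8 c1) =", 0.25 + M / (8 * c1))
```

Together with \(|\mathrm{Re}\,\check{\mu}_{1}|>1/8\) in Case 1, this gives the uniform spectral gap used above.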
### The region \(R_{1}(\delta)\cup R_{2}(\delta,\widetilde{M})\) In this subsection, we introduce a weight \(\eta>0\) and consider the shifted system \[\varphi^{\prime}=A(\xi,\lambda)\varphi, \tag{17}\] where \[A(\xi,\lambda)=A(\xi,\lambda;a,\epsilon)=A_{0}(\xi,\lambda;a,\epsilon)-\eta I=\left(\begin{array}{ccc}-\eta&F(w)&0\\ F(w)(f_{u}+\lambda)+\dfrac{\epsilon F_{w}u^{\prime}}{cF^{2}}&cF^{2}(w)-\eta&\Delta_{a,\epsilon}\\ \dfrac{\epsilon}{c}&0&-\dfrac{\lambda+\epsilon\gamma}{c}-\eta\end{array}\right),\] and \[\Delta_{a,\epsilon}=F(w)f_{w}(u,w)+\dfrac{1}{F^{2}(w)}\left(2F_{w}u^{\prime\prime}-\dfrac{3F_{w}^{2}u^{\prime}w^{\prime}}{F}+F_{ww}u^{\prime}w^{\prime}-\dfrac{\lambda+\epsilon\gamma}{c}F_{w}u^{\prime}\right),\] where we write \(u=u_{a,\epsilon},\ w=w_{a,\epsilon}\) for notational simplicity. We remark that the weight \(\eta\) is introduced to shift the eigenvalues of the matrix \(A_{0}(\xi,\lambda;a,\epsilon)\) to the left. Recall that \(A_{0}\) is defined in (14). #### 5.2.1. The shifted eigenvalue problem To study the eigenvalues of \(A(\xi,\lambda)\), we have the following result, which concerns a modification of \(A(\xi,\lambda)\) with general \(u,\ w\). **Lemma 2**.: _Let \(k_{1},\ \sigma_{0}>0\) be small and define_ \[\mathcal{U}(\sigma_{0},k_{1}):= \left\{(a,u,w)\in\mathbb{R}^{3}|\ a\in\left[0,\dfrac{1}{2}-k_{1}\right],\ u\in[0,\sigma_{0}],\ w\in[0,w_{b}+\sigma_{0}]\right\}\] \[\bigcup\left\{(a,u,w)\in\mathbb{R}^{3}|\ a\in\left[0,\dfrac{1}{2}-k_{1}\right],\ u\in[U_{2}(w_{b})-\sigma_{0},1+\sigma_{0}],\right.\] \[\left.\qquad\qquad\qquad\qquad w\in[-k(u-a)(u-1)-\sigma_{0},-k(u-a)(u-1)+\sigma_{0}]\right\}.\] _Set \(F_{m}:=\frac{1}{2}+\frac{M}{4c_{1}}\) and \(\eta:=\frac{\sqrt{2k}}{4}F_{m}k_{1}\). For \(\sigma_{0},\ \delta>0\) sufficiently small, there exist \(\epsilon_{0}>0\) and \(\mu\in(0,\eta]\) such that the matrix_ \[\hat{A}(u,w,\lambda,a,\epsilon)=\left(\begin{array}{ccc}-\eta&F(w)&0\\ F(w)(f_{u}(u,w)+\lambda)+\frac{\epsilon F_{w}u^{\prime}}{cF^{2}}&cF^{2}(w)-\eta&\Delta\\ \frac{\epsilon}{c}&0&-\frac{\lambda+\epsilon\gamma}{c}-\eta\end{array}\right),\] _where_ \[\Delta=F(w)f_{w}(u,w)+\frac{1}{F^{2}(w)}\left(2F_{w}u^{\prime\prime}-\frac{3F_{w}^{2}u^{\prime}w^{\prime}}{F}+F_{ww}u^{\prime}w^{\prime}-\frac{\lambda+\epsilon\gamma}{c}F_{w}u^{\prime}\right),\] _admits a uniform spectral gap larger than \(\mu>0\) for \((a,u,w)\in\mathcal{U}(\sigma_{0},k_{1}),\ \lambda\in R_{1}(\delta)\cup R_{2}(\delta,\widetilde{M})\), and \(\epsilon\in[0,\epsilon_{0}]\). Moreover, the matrix \(\hat{A}\) has precisely one eigenvalue with positive real part._ Proof.: The matrix \(\hat{A}(u,w,\lambda,a,\epsilon)\) is nonhyperbolic if and only if \[0=\det(\hat{A}(u,w,\lambda,a,\epsilon)-ilI)=\left(\eta+il+\frac{\lambda+\epsilon\gamma}{c}\right)\left((\eta+il)(cF^{2}-\eta-il)+F^{2}(f_{u}+\lambda)+\frac{\epsilon F_{w}u^{\prime}}{cF}\right)+\frac{\epsilon F}{c}\Delta\] is satisfied for some \(l\in\mathbb{R}\). Here \(I\) is the identity operator. In what follows we also use \(1\) to represent the identity operator.
For \(\epsilon=0\), \(\hat{A}(u,w,\lambda,a,0)\) is nonhyperbolic if and only if \(\lambda\) is located on the line \(\ell_{s}\) \[\lambda=-c_{0}\eta-ic_{0}l,\] or on the parabola \(\mathcal{P}_{s}\) \[\lambda=-f_{u}+\frac{1}{F^{2}}(\eta^{2}+2\eta il-c_{0}\eta F^{2}-l^{2}-c_{0}F^{2}il). \tag{18}\] For \(u\in[0,\sigma_{0}],\ w\in[0,w_{b}+\sigma_{0}]\), it holds that \[f_{u}(u,w)\geq f_{u}(u,0)\geq f_{u}(\sigma_{0},0)=k(3\sigma_{0}^{2}-2(a+1)\sigma_{0}+a),\] and hence \[-f_{u}(u,w)\leq 3k\sigma_{0}.\] For \(u\in[U_{2}(w_{b})-\sigma_{0},1+\sigma_{0}],\ w\in[-k(u-a)(u-1)-\sigma_{0},-k(u-a)(u-1)+\sigma_{0}]\), it holds that \[f_{u}(u,w)\geq k\left(3u^{2}-2(a+1)u+a\right)-k(u-a)(u-1)-\sigma_{0}=k\left(2u^{2}-(a+1)u\right)-\sigma_{0}\geq k\left(2\left(\frac{a+1}{2}\right)^{2}-(a+1)\frac{a+1}{2}\right)-\sigma_{0}=-\sigma_{0},\] and hence \[-f_{u}(u,w)\leq\sigma_{0}.\] Thus we obtain \(-f_{u}(u,w)\leq(3k+1)\sigma_{0}\) for any \((a,u,w)\in\mathcal{U}(\sigma_{0},k_{1})\). Taking \(\eta=\frac{\sqrt{2k}}{4}F_{m}k_{1}\) and \((3k+1)\sigma_{0}<\frac{1}{16}kk_{1}^{2}\), where \(F_{m}\triangleq\frac{1}{2}+\frac{M}{4c_{1}}=\frac{1}{2}F(0)\), it holds that \[c_{0}\eta=\frac{kk_{1}}{4}\left(\frac{1}{2}-a\right)\geq\frac{kk_{1}^{2}}{4},\qquad\frac{\eta^{2}}{F_{m}^{2}}=\frac{kk_{1}^{2}}{8}.\] By the expression (18) it holds that \[\operatorname{Re}(\lambda)\leq-f_{u}(u,w)+\eta^{2}\frac{1}{F^{2}(w)}-c_{0}\eta\leq-f_{u}(u,w)+\eta^{2}\frac{1}{F_{m}^{2}}-c_{0}\eta\leq(3k+1)\sigma_{0}+\frac{1}{8}kk_{1}^{2}-\frac{1}{4}kk_{1}^{2}\leq-\frac{1}{16}kk_{1}^{2}.\] Thus the union of the line \(\ell_{s}\) and the parabola \(\mathcal{P}_{s}\) lies in the half plane \[\operatorname{Re}(\lambda)\leq-\frac{1}{16}kk_{1}^{2}\] for any \((a,u,w)\in\mathcal{U}(\sigma_{0},k_{1})\). Hence, provided \(\delta>0\) is sufficiently small, the union of \(\ell_{s}\) and \(\mathcal{P}_{s}\) does not intersect the compact set \(R_{1}\cup R_{2}\) for any \((a,u,w)\) belonging to the compact set \(\mathcal{U}(\sigma_{0},k_{1})\); namely, when \(\lambda\in R_{1}\cup R_{2}\), the matrix \(\hat{A}(u,w,\lambda,a,0)\) is hyperbolic for any \((a,u,w)\in\mathcal{U}(\sigma_{0},k_{1})\). By continuity we conclude that there exists \(\epsilon_{0}>0\) such that the matrix \(\hat{A}(u,w,\lambda,a,\epsilon)\) has, for \((a,u,w)\in\mathcal{U}(\sigma_{0},k_{1}),\ \lambda\in R_{1}\cup R_{2}\) and \(\epsilon\in[0,\epsilon_{0}]\), a uniform spectral gap larger than some \(\mu>0\). Since \(-\eta\) is in the spectrum of \(\hat{A}(0,0,0,a,0)\), necessarily \(\mu\leq\eta\). Moreover, \(\hat{A}(u,w,\lambda,a,0)\) has precisely one eigenvalue with positive real part for sufficiently large \(\lambda>0\). Therefore, by continuity, \(\hat{A}(u,w,\lambda,a,0)\) also has precisely one eigenvalue with positive real part for all \(\lambda\in\mathbb{C}\) lying to the right of \(\ell_{s}\cup\mathcal{P}_{s}\), in particular for \(\lambda\in R_{1}\cup R_{2}\). So \(\hat{A}(u,w,\lambda,a,\epsilon)\) has precisely one eigenvalue with positive real part for \((a,u,w)\in\mathcal{U}(\sigma_{0},k_{1}),\ \lambda\in R_{1}\cup R_{2},\ \epsilon\in[0,\epsilon_{0}]\). This proves the lemma.
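The location of \(\ell_{s}\) and \(\mathcal{P}_{s}\) in this proof can also be examined numerically. The sketch below (ours; all values illustrative, sampling only the first piece of \(\mathcal{U}(\sigma_{0},k_{1})\)) evaluates \(\operatorname{Re}(\lambda)\) along the line and along the parabola (18), confirming a uniform negative upper bound.

```python
import numpy as np

k, a, M, c1 = 1.0, 0.2, 1.0, 2.0          # illustrative values
k1, sigma0 = 0.1, 0.0001                  # so that (3k+1)*sigma0 < k*k1^2/16
Fm = 0.5 + M / (4 * c1)
eta = np.sqrt(2 * k) / 4 * Fm * k1        # the weight of Lemma 2
F0 = 1 + M / (2 * c1)
c0 = np.sqrt(2 * k) * (0.5 - a) / F0      # c_0(a)

def F(w):
    return 0.5 + M / (4 * c1) + 0.5 * np.sqrt((1 + M / (2 * c1))**2 - 2 * w / c1)

def fu(u, w):                             # f_u(u,w) = k(3u^2 - 2(a+1)u + a) + w
    return k * (3 * u**2 - 2 * (a + 1) * u + a) + w

l = np.linspace(-5, 5, 201)
worst = -c0 * eta                         # Re(lambda) on the line ell_s
for u in np.linspace(0.0, sigma0, 5):     # first piece of U(sigma0, k1)
    for w in np.linspace(0.0, 0.1, 5):
        re = -fu(u, w) + (eta**2 - c0 * eta * F(w)**2 - l**2) / F(w)**2
        worst = max(worst, re.max())      # Re(lambda) on the parabola P_s
print("max Re(lambda) on ell_s and P_s:", worst)   # strictly negative
```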
To characterize the point spectrum, we fix \[\eta=\frac{\sqrt{2k}}{4}F_{m}k_{1},\] and take \[\nu\geq\max\left\{\sqrt{\frac{2}{k}}\frac{2}{F(0)},\ \sqrt{\frac{2}{k}}\frac{2}{F(w_{b})U_{2}(w_{b})},\frac{2}{\mu}\right\}. \tag{19}\] **Proposition 2**.: _For \(a\in(0,1/2)\), \(\lambda\in R_{1}\cup R_{2}\) is in the point spectrum of \(\mathcal{L}_{a,\epsilon}\) if and only if it is an eigenvalue of the shifted eigenvalue problem (17); correspondingly, the shifted eigenvalue problem (17) then has a bounded solution associated with \(\lambda\)._ Proof.: The relation between the asymptotic matrices \(A_{\infty}(\lambda,a,\epsilon)\) and \(\hat{A}(0,0,\lambda,a,\epsilon)\) is \[\sigma(\hat{A}(0,0,\lambda,a,\epsilon))=\sigma(A_{\infty}(\lambda,a,\epsilon))-\eta.\] Moreover, when \(a\in(0,1/2)\), for \(\lambda\in R_{1}\cup R_{2}\), both \(A_{\infty}(\lambda,a,\epsilon)\) and \(\hat{A}(0,0,\lambda,a,\epsilon)\) have precisely one eigenvalue with positive real part, by Proposition 1 and Lemma 2. Thus system (14) admits a nontrivial exponentially localized solution \(\varphi(\xi)\) if and only if system (17) admits the one given by \(e^{-\eta\xi}\varphi(\xi)\), for \(\lambda\in R_{1}\cup R_{2}\) with \(a\in(0,1/2)\). Let \(\Omega\) be the region to the right of the essential spectrum of \(\mathcal{L}_{a,\epsilon}\) for \(a=0\). We have the following equivalent result. **Proposition 3**.: _For \(a=0\), \(\lambda\in(R_{1}\cup R_{2})\cap\Omega\) is in the point spectrum of \(\mathcal{L}_{a,\epsilon}\) if and only if it is an eigenvalue of the shifted eigenvalue problem (17)._ Proof.: When \(a=0\), for \(\lambda\in(R_{1}\cup R_{2})\cap\Omega\), the matrices \(A_{\infty}(\lambda,0,\epsilon)\) and \(\hat{A}(0,0,\lambda,0,\epsilon)\) both have precisely one eigenvalue with positive real part, by Proposition 1 and Lemma 2. Thus system (14) admits a nontrivial exponentially localized solution \(\varphi(\xi)\) if and only if system (17) admits the one given by \(e^{-\eta\xi}\varphi(\xi)\), for \(\lambda\in(R_{1}\cup R_{2})\cap\Omega\) with \(a=0\). In order to prove Theorem 4, we only need to prove that the point spectrum of the operator \(\mathcal{L}_{0,\epsilon}\) does not intersect the region \(\Omega_{+}:=\Omega\cap\{\lambda:\ \mathrm{Re}(\lambda)>0\}\). By these last results, it suffices to show that the shifted eigenvalue problem (17) with \(a=0\) has no point spectrum in the region \(\Omega_{+}\). #### 5.2.2. Exponential dichotomies along the right and left slow manifolds In this subsection we prove that system (17) has exponential dichotomies on the intervals \(I_{r}=[L_{\epsilon},Z_{a,\epsilon}-L_{\epsilon}]\) and \(I_{l}=[Z_{a,\epsilon}+L_{\epsilon},\infty)\), where \(L_{\epsilon}=-\nu\log\epsilon\) and \(Z_{a,\epsilon}\) is as in Theorem 1, for \(\lambda\in R_{1}\cup R_{2}\). By Lemma 2, the matrix \(A(\xi,\lambda)\) is pointwise hyperbolic for \(\xi\in I_{r}\cup I_{l}\) and has slowly varying coefficients. According to Theorem 9, the shifted eigenvalue system then admits exponential dichotomies. The main results are the following.
**Proposition 4**.: _For the shifted eigenvalue system (17), the following statements hold._ * _There exists an_ \(\epsilon_{0}>0\) _such that for_ \(0<\epsilon<\epsilon_{0}\)_, system (_17_) admits exponential dichotomies on the intervals_ \(I_{r}=[L_{\epsilon},Z_{a,\epsilon}-L_{\epsilon}]\) _and_ \(I_{l}=[Z_{a,\epsilon}+L_{\epsilon},\infty)\) _with exponential decay rate_ \(\mu>0\)_, as in Lemma_ 2_._ * _The projections_ \(\mathcal{Q}_{r,l}^{u,s}(\xi,\lambda)=\mathcal{Q}_{r,l}^{u,s}(\xi,\lambda;a,\epsilon)\) _associated with the exponential dichotomies are analytic in_ \(\lambda\in R_{1}\cup R_{2}\) _and are approximated by_ \(\mathcal{P}\) _at the endpoints_ \(L_{\epsilon},\ Z_{a,\epsilon}\) _in the following way:_ \[\|(\mathcal{Q}_{r}^{s}-\mathcal{P})(L_{\epsilon},\lambda)\|,\ \|(\mathcal{Q}_{r}^{s}-\mathcal{P})(Z_{a,\epsilon}-L_{\epsilon},\lambda)\|,\ \|(\mathcal{Q}_{l}^{s}-\mathcal{P})(Z_{a,\epsilon}+L_{\epsilon},\lambda)\|\leq C\epsilon|\log\epsilon|,\] _where_ \(\mathcal{P}(\xi,\lambda)=\mathcal{P}(\xi,\lambda;a,\epsilon)\) _is the spectral projection onto the stable eigenspace of the coefficient matrix_ \(A(\xi,\lambda)\) _of system (_17_), and_ \(C>0\) _is a constant independent of_ \(\lambda,a\) _and_ \(\epsilon\)_._ Proof.: The proof is similar to that of Proposition 6.5 in [4] or Proposition 5.2 in [2]. ### The Region \(R_{1}(\delta)\) #### 5.3.1. The reduced eigenvalue problem The spectra of the reduced problems along the front and the back will be of critical importance for the full system. In this subsection we construct a reduced eigenvalue problem by setting \(\epsilon=0\) in system (17) for \(\xi\) in \(I_{f}\) or \(I_{b}\). Note that since \(\lambda\in R_{1}\) is sufficiently small, the reduced eigenvalue problem, i.e. (17) with \(\lambda=\epsilon=0\), does not depend on \(\lambda\). The reduced eigenvalue problem reads \[\varphi^{\prime}=A_{j}(\xi)\varphi,\ j=f,\ b, \tag{20}\] where \[A_{j}(\xi)=A_{j}(\xi;a)=\left(\begin{array}{ccc}-\eta&F(w_{j})&0\\ F(w_{j})f_{u}(u_{j}(\xi),w_{j})&c_{0}F^{2}(w_{j})-\eta&\Delta_{1,j}\\ 0&0&-\eta\end{array}\right) \tag{21}\] and \[\Delta_{1,j}=F(w_{j})f_{w}(u_{j},w_{j})+2\frac{F_{w}(w_{j})}{F^{2}(w_{j})}u_{j}^{\prime\prime}.\] Here \(u_{j}(\xi)\) denotes the \(u\)-component of \(\phi_{j}\), and \(a\in[0,\frac{1}{2}-k_{1}]\) with \(k_{1}>0\) small as given in Lemma 2. For \(\xi\in I_{f}=(-\infty,L_{a,\epsilon}]\), the shifted eigenvalue system (17) can be written as the perturbed system \[\varphi^{\prime}=(A_{f}(\xi)+B_{f}(\xi,\lambda))\varphi,\] where \[B_{f}(\xi,\lambda)=B_{f}(\xi,\lambda;a,\epsilon)=\left(\begin{array}{ccc}0&F(w_{a,\epsilon})-F(0)&0\\ \widetilde{\Delta}_{f}&cF^{2}(w_{a,\epsilon})-c_{0}F^{2}(0)&\Delta_{a,\epsilon}-\Delta_{1,f}\\ \frac{\epsilon}{c}&0&-\frac{\lambda+\epsilon\gamma}{c}\end{array}\right),\] and \[\widetilde{\Delta}_{f}=\left.\left(F(w)(f_{u}+\lambda)+\frac{\epsilon F_{w}u^{\prime}}{cF^{2}}\right)\right|_{(u,w)=(u_{a,\epsilon},w_{a,\epsilon})}-F(0)f_{u}(u_{f},0).\] For \(\xi\in[-L_{a,\epsilon},L_{a,\epsilon}]\), system (17) can be written as the perturbed system \[\varphi^{\prime}=A(\xi+Z_{a,\epsilon},\lambda)\varphi=(A_{b}(\xi)+B_{b}(\xi,\lambda))\varphi, \tag{22}\] where \[B_{b}(\xi,\lambda)=B_{b}(\xi,\lambda;a,\epsilon):=A(\xi+Z_{a,\epsilon},\lambda)-A_{b}(\xi).\] The upper triangular block structure of the coefficient matrix (21) of the reduced eigenvalue problem (20) implies the existence of the two-dimensional invariant subspace \(\mathbb{C}^{2}\times\{0\}\subset\mathbb{C}^{3}\) for system (20), i.e.
the \(w=0\) plane; the dynamics of system (20) on this invariant subspace is given by \[\psi^{\prime}=C_{j}(\xi)\psi,\ j=f,\ b, \tag{23}\] with \[C_{j}(\xi)=\left(\begin{array}{cc}-\eta&F(w_{j})\\ F(w_{j})f_{u}(u_{j},w_{j})&c_{0}F^{2}(w_{j})-\eta\end{array}\right).\] Obviously, system (23) admits a one-dimensional invariant subspace formed by its bounded solutions, which is spanned by \[\psi_{j}(\xi)=\psi_{j}(\xi;a):=e^{-\eta\xi}\phi_{j}^{\prime}(\xi),\ j=f,b.\] Then the adjoint system of system (23), \[\psi^{\prime}=-C_{j}^{*}(\xi)\psi,\ j=f,b,\] with \(C_{j}^{*}\) the conjugate transpose of \(C_{j}\), also has a one-dimensional invariant subspace formed by its bounded solutions, which is spanned by \[\psi_{j,ad}(\xi)=\psi_{j,ad}(\xi;a):=e^{(\eta-c_{0}F^{2}(w_{j}))\xi}\left(\begin{array}{c}v_{j}^{\prime}(\xi)\\ -u_{j}^{\prime}(\xi)\end{array}\right),\ j=f,b.\] Recall that the \(v_{j}\) are defined in the formulas above and below (11). Note that the inner product of \(\psi_{j}\) and \(\psi_{j,ad}\) vanishes, i.e. \(\langle\psi_{j},\psi_{j,ad}\rangle=0\). Next, we construct exponential dichotomies for subsystem (23) on the two half-lines \(\mathbb{R}_{+}\) and \(\mathbb{R}_{-}\). **Proposition 5**.: _Take \(k_{1}>0\) small. For each \(a\in[0,\frac{1}{2}-k_{1}]\),_ * _system (_23_) admits exponential dichotomies on the two half-lines_ \(\mathbb{R}_{\pm}\)_,_ _with \(a\)-independent decay rate \(\mu>0\) and coefficient \(C>0\), and with projections \(\Pi^{u,s}_{j,\pm}(\xi)=\Pi^{u,s}_{j,\pm}(\xi;a),\ j=f,b,\) which satisfy_ \[\begin{split} R(\Pi^{s}_{j,+}(0))&=\operatorname{Span}(\psi_{j}(0))=R(\Pi^{u}_{j,-}(0)),\\ R(\Pi^{u}_{j,+}(0))&=\operatorname{Span}(\psi_{j,ad}(0))=R(\Pi^{s}_{j,-}(0)),\ j=f,\ b.\end{split} \tag{24}\] _Here \(\mu,C>0\) are the quantities given in the definition of exponential dichotomy in Appendix B._ Proof.: Let \(C_{j,\pm\infty}:=\lim_{\xi\to\pm\infty}C_{j}(\xi)\), \(j=f,\ b\), be the asymptotic matrices. According to Lemma 2, the spectra of \(C_{f,-\infty}\) and \(C_{f,+\infty}\) are contained in the spectra of \(\hat{A}(0,0,0,a,0)\) and \(\hat{A}(1,0,0,a,0)\), respectively, namely \[\sigma(C_{f,-\infty})\subset\sigma(\hat{A}(0,0,0,a,0))\quad\text{and}\quad\sigma(C_{f,+\infty})\subset\sigma(\hat{A}(1,0,0,a,0)).\] In the same way, it follows that \[\sigma(C_{b,-\infty})\subset\sigma(\hat{A}(U_{2}(w_{b}),w_{b},0,a,0))\quad\text{and}\quad\sigma(C_{b,+\infty})\subset\sigma(\hat{A}(0,w_{b},0,a,0)).\] Moreover, the fact that \(\hat{A}(u,w,0,a,0)\) has a uniform spectral gap larger than \(\mu>0\) for \((a,u,w)\in\mathcal{U}(\sigma_{0},k_{1})\), again by Lemma 2, is inherited by the asymptotic matrices \(C_{j,\pm\infty},\ j=f,\ b\); namely, these asymptotic matrices have a uniform spectral gap larger than \(\mu>0\). By Theorems 9 and 8, system (23) admits exponential dichotomies on the two half-lines with the constants \(C,\ \mu>0\) (see the definition of exponential dichotomy in Appendix B) and the projections as in (24). Since the interval \([0,\frac{1}{2}-k_{1}]\) is compact, we can choose the constant \(C>0\) independent of \(a\). Returning to system (20), observe that \[\omega_{j}(\xi):=\left(\begin{array}{c}\psi_{j}(\xi)\\ 0\end{array}\right)=\left(\begin{array}{c}e^{-\eta\xi}\phi^{\prime}_{j}(\xi)\\ 0\end{array}\right),\ j=f,\ b,\] is a bounded solution of (20).
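As a sanity check (ours), one can verify numerically that \(\psi_{f}=e^{-\eta\xi}\phi_{f}^{\prime}\) indeed solves (23): along the explicit front of Section 2, the residual of \(\psi^{\prime}-C_{f}(\xi)\psi\) should vanish to rounding error for any weight \(\eta\). Parameter values are illustrative.

```python
import numpy as np

k, a, M, c1 = 1.0, 0.25, 1.0, 2.0              # illustrative values
F0 = 1 + M / (2 * c1)                          # F(0)
c0 = np.sqrt(2 * k) * (0.5 - a) / F0           # c_0(a)
theta = F0 * np.sqrt(k / 2)                    # front rate
eta = 0.1                                      # any weight works for this check

xi = np.linspace(-20, 20, 2001)
u = 1 / (1 + np.exp(-theta * xi))              # front u_f with C_f = 1
du = theta * u * (1 - u)                       # u_f'
ddu = theta**2 * u * (1 - u) * (1 - 2 * u)     # u_f''
dddu = theta**3 * u * (1 - u) * (1 - 6 * u + 6 * u**2)  # u_f'''
fu = k * (3 * u**2 - 2 * (a + 1) * u + a)      # f_u(u_f, 0)

e = np.exp(-eta * xi)
psi1, psi2 = e * du, e * ddu / F0              # psi_f = e^{-eta xi} phi_f'
dpsi1 = e * (ddu - eta * du)                   # analytic derivatives of psi_f
dpsi2 = e * (dddu - eta * ddu) / F0

r1 = dpsi1 - (-eta * psi1 + F0 * psi2)
r2 = dpsi2 - (F0 * fu * psi1 + (c0 * F0**2 - eta) * psi2)
print(np.abs(r1).max(), np.abs(r2).max())      # both at rounding-error level
```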
In addition, using the variation of constants formula, the exponential dichotomies of the subsystem (23) can be extended to system (20). **Proposition 6**.: _Take \(k_{1}>0\) small. For each \(a\in[0,\frac{1}{2}-k_{1}]\),_ * _system (_20_) admits exponential dichotomies on the two half-lines_ \(\mathbb{R}_{\pm}\)_,_ _with \(a\)-independent constants \(C,\ \mu>0\) and projections \(Q^{u,s}_{j,\pm}(\xi)=Q^{u,s}_{j,\pm}(\xi;a),\ j=f,b,\) which satisfy_ \[\begin{split}& Q^{s}_{j,+}(\xi)=\left(\begin{array}{cc}\Pi^{s}_{j,+}(\xi)&\int_{\infty}^{\xi}e^{\eta(\xi-\hat{\xi})}\Phi^{u}_{j,+}(\xi,\hat{\xi})F_{j}d\hat{\xi}\\ 0&1\end{array}\right)=1-Q^{u}_{j,+}(\xi),\ \xi\geq 0,\\ & Q^{s}_{j,-}(\xi)=\left(\begin{array}{cc}\Pi^{s}_{j,-}(\xi)&\int_{0}^{\xi}e^{\eta(\xi-\hat{\xi})}\Phi^{u}_{j,-}(\xi,\hat{\xi})F_{j}d\hat{\xi}\\ 0&1\end{array}\right)=1-Q^{u}_{j,-}(\xi),\ \xi\leq 0.\end{split} \tag{25}\] _Here_ \[F_{j}=\left(\begin{array}{c}0\\ \Delta_{1,j}\end{array}\right),\ j=f,\ b,\] _and \(\Phi^{u,s}_{j,\pm}(\xi,\hat{\xi})=\Phi^{u,s}_{j,\pm}(\xi,\hat{\xi};a)\) denotes the \((\)un\()\)stable evolution of subsystem (23) under the exponential dichotomies established in Proposition 5. Moreover, the projections satisfy_ \[\begin{split}&\mathrm{R}(Q^{u}_{j,+}(0))=\mathrm{Span}(\varphi_{1,j}),\ \ \mathrm{R}(Q^{s}_{j,+}(0))=\mathrm{Span}(\omega_{j}(0),\varphi_{2}),\\ &\mathrm{R}(Q^{u}_{j,-}(0))=\mathrm{Span}(\omega_{j}(0)),\ \mathrm{R}(Q^{s}_{j,-}(0))=\mathrm{Span}(\varphi_{1,j},\varphi_{2}),\end{split}\qquad j=f,\ b. \tag{26}\] _Here_ \[\varphi_{1,j}=\varphi_{1,j}(a):=\left(\begin{array}{c}\psi_{j,ad}(0)\\ 0\end{array}\right),\qquad\varphi_{2}:=\left(\begin{array}{c}0\\ 0\\ 1\end{array}\right).\] Proof.: Denote by \(\Phi_{j}(\xi,\hat{\xi})\) the evolution of subsystem (23) with its second column belonging to \(R(\Pi^{s}_{j,+}(\xi))\). By the variation of constants formula, the evolution \(T_{j}(\xi,\hat{\xi})=T_{j}(\xi,\hat{\xi};a)\) of system (20) is given by \[T_{j}(\xi,\hat{\xi})=\left(\begin{array}{cc}\Phi_{j}(\xi,\hat{\xi})&\int_{\hat{\xi}}^{\xi}\Phi_{j}(\xi,z)F_{j}e^{-\eta(z-\hat{\xi})}dz\\ 0&e^{-\eta(\xi-\hat{\xi})}\end{array}\right),\ j=f,\ b.\] Some calculations yield \[\Pi^{s}_{j,+}(\xi)\Phi_{j}(\xi,z)F_{j}=\Phi_{j}(\xi,z)F_{j},\ \xi\geq z,\ j=f,b,\] and \[\Pi^{s}_{j,-}(\xi)\Phi_{j}(\xi,z)F_{j}=0,\ \xi\leq z,\ j=f,b.\] Then the projections defined in (25) yield exponential dichotomies on the two half-lines for (20) with constants \(C,\ \mu>0\), where \(C\) is independent of \(a\). #### 5.3.2. Along the front By Proposition 6, system (20) admits an exponential dichotomy on \((-\infty,0]\); hence, by the variation of constants formula, the solutions of system (17) can be expressed on the interval \((-\infty,0]\). The exponentially decaying solution of (17) in backward time admits an exit condition at \(\xi=0\). Here the _exit condition_ is the condition under which the exponentially decaying solution (27) of (17) constructed below leaves the neighborhood of the critical curve in negative time; the time at which the orbit leaves the critical curve in negative time is called the _exit time_. Next, we establish the entry and exit conditions for the existence of solutions of system (17) on \([0,Z_{a,\epsilon}]\), and for the existence of exponentially decaying solutions of (17) in forward time on \([Z_{a,\epsilon},\infty)\).
Then, equating these exit and entry conditions at \(\xi=0\) and \(\xi=Z_{a,\epsilon}\), we obtain matching equations whose solutions indicate that system (17) admits an exponentially localized solution. **Proposition 7**.: _Let \(T^{u,s}_{f,-}(\xi,\hat{\xi})=T^{u,s}_{f,-}(\xi,\hat{\xi};a)\) be the \((\)un\()\)stable evolution of system (20) under the exponential dichotomy on \(I_{f}=(-\infty,0]\) established in Proposition 6, with associated projections \(Q^{u,s}_{f,-}(\xi)=Q^{u,s}_{f,-}(\xi;a)\). The following statements hold._ \((i)\) _There exist_ \(\delta,\ \epsilon_{0}>0\) _such that any solution_ \(\varphi_{f,-}(\xi,\lambda)\) _of (17) decaying exponentially in backward time for_ \(\lambda\in R_{1}(\delta)\) _and_ \(\epsilon\in(0,\epsilon_{0})\) _satisfies_ \[\begin{split}&\varphi_{f,-}(0,\lambda)=\beta_{f,-}\omega_{f}(0)+\beta_{f,-}\int_{-\infty}^{0}T^{s}_{f,-}(0,\hat{\xi})B_{f}(\hat{\xi},\lambda)\omega_{f}(\hat{\xi})d\hat{\xi}+\mathcal{H}_{f,-}(\beta_{f,-}),\\ & Q^{u}_{f,-}(0)\varphi_{f,-}(0,\lambda)=\beta_{f,-}\omega_{f}(0),\end{split} \tag{27}\] _for some \(\beta_{f,-}\in\mathbb{C}\), where \(\mathcal{H}_{f,-}\) is a linear map satisfying the bound_ \[\|\mathcal{H}_{f,-}(\beta_{f,-})\|\leq C(\epsilon|\log\epsilon|+|\lambda|)^{2}|\beta_{f,-}|, \tag{28}\] _with \(C>0\) independent of \(\lambda\), \(a\) and \(\epsilon\). Moreover, \(\varphi_{f,-}(\xi,\lambda)\) is analytic in \(\lambda\)._ \((ii)\) _The derivative_ \(\phi^{\prime}_{a,\epsilon}:=(u^{\prime}_{a,\epsilon},v^{\prime}_{a,\epsilon},w^{\prime}_{a,\epsilon})^{T}\) _of the pulse solution satisfies_ \[Q^{s}_{f,-}(0)\phi^{\prime}_{a,\epsilon}(0)=\int_{-\infty}^{0}T^{s}_{f,-}(0,\hat{\xi})B_{f}(\hat{\xi},0)e^{-\eta\hat{\xi}}\phi^{\prime}_{a,\epsilon}(\hat{\xi})d\hat{\xi}. \tag{29}\] Proof.: For \((i)\), take \(0<\hat{\mu}<\mu\). Denote by \(C_{\hat{\mu}}(I_{f,-},\mathbb{C}^{3})\) the space of \(\hat{\mu}\)-exponentially decaying continuous functions defined on \(I_{f,-}\) with range in \(\mathbb{C}^{3}\), equipped with the norm \[\|\varphi\|_{\hat{\mu}}=\sup_{\xi\leq 0}\|\varphi(\xi)\|e^{\hat{\mu}|\xi|}.\] By Theorem 1\((i)\), the perturbation matrix \(B_{f}(\xi,\lambda;a,\epsilon)\) satisfies the bound \[\|B_{f}(\xi,\lambda;a,\epsilon)\|\leq C(\epsilon|\log\epsilon|+|\lambda|),\quad\xi\in I_{f,-}. \tag{30}\] Taking \(\beta\in\mathbb{C}\) and \(\lambda\in R_{1}(\delta)\), one can check that the map \[\mathcal{G}_{\beta,\lambda}:\ C_{\hat{\mu}}(I_{f,-},\mathbb{C}^{3})\to C_{\hat{\mu}}(I_{f,-},\mathbb{C}^{3})\] with \[\mathcal{G}_{\beta,\lambda}(\varphi)(\xi)=\beta\omega_{f}(\xi)+\int_{0}^{\xi}T^{u}_{f,-}(\xi,\hat{\xi})B_{f}(\hat{\xi},\lambda)\varphi(\hat{\xi})d\hat{\xi}+\int_{-\infty}^{\xi}T^{s}_{f,-}(\xi,\hat{\xi})B_{f}(\hat{\xi},\lambda)\varphi(\hat{\xi})d\hat{\xi}\] is well-defined and is a contraction for each \(\delta,\ \epsilon>0\) sufficiently small (with the contraction constant independent of \(\beta\) and \(a\)). By the Banach contraction theorem, the mapping \(\mathcal{G}_{\beta,\lambda}\) has a unique fixed point \(\varphi_{f,-}\) in \(C_{\hat{\mu}}(I_{f,-},\mathbb{C}^{3})\), i.e. \[\varphi_{f,-}=\mathcal{G}_{\beta,\lambda}(\varphi_{f,-}),\quad\xi\in I_{f,-}. \tag{31}\] Since the perturbation matrix \(B_{f}(\xi,\lambda;a,\epsilon)\) is analytic in \(\lambda\), the fixed point \(\varphi_{f,-}(\xi,\lambda)\) is analytic in \(\lambda\).
Moreover, \(\varphi_{f,-}\) is linear in \(\beta\) by construction, and we derive \[\|\varphi_{f,-}\|\leq\|\beta\omega_{f}\|+\frac{2}{\mu}\|B_{f}\|\|\varphi_{f,-}\|.\] Combining this with (30) yields \[\|\varphi_{f,-}(\xi,\lambda)-\beta\omega_{f}(\xi)\|\leq C|\beta|(\epsilon|\log \epsilon|+|\lambda|),\quad\xi\in I_{f,-}.\] The family of fixed points of (31), parameterized by \(\beta\in\mathbb{C}\), forms a one-dimensional space, which consists of exponentially decaying solutions as \(\xi\to-\infty\) to (17). From Lemma 2, the asymptotic matrix \(\hat{A}(0,0,\lambda,a,\epsilon)\) of system (17) has exactly one eigenvalue with positive real part. Thus, the space of solutions to (17) that decay exponentially in backward time is one-dimensional. This proves that there exists some \(\beta\in\mathbb{C}\) such that any solution \(\varphi_{f,-}(\xi,\lambda)\) to (17) that converges to \(0\) as \(\xi\to-\infty\) satisfies (31). Using (31) and the estimate above, we arrive at \[\varphi_{f,-}(0) =\beta_{f,-}\omega_{f}(0)+\int_{-\infty}^{0}T^{s}_{f,-}(0,\hat{ \xi})B_{f}(\hat{\xi},\lambda)\varphi_{f,-}(\hat{\xi})d\hat{\xi}\] \[=\beta_{f,-}\omega_{f}(0)+\beta_{f,-}\int_{-\infty}^{0}T^{s}_{f,- }(0,\hat{\xi})B_{f}(\hat{\xi},\lambda)\omega_{f}(\hat{\xi})d\hat{\xi}+\mathcal{ H}_{f,-}(\beta_{f,-}),\] \[Q^{u}_{f,-}(0)\varphi_{f,-}(0,\lambda)=\beta_{f,-}\omega_{f}(0),\] where \[\|\mathcal{H}_{f,-}(\beta_{f,-})\|\leq C(\epsilon|\log\epsilon|+|\lambda|)^{2 }|\beta_{f,-}|.\] For \((ii)\), since \(e^{-\eta\xi}\phi^{\prime}_{a,\epsilon}\) is an eigenfunction of system (17) at \(\lambda=0\), it follows that \(e^{-\eta\xi}\phi^{\prime}_{a,\epsilon}\) satisfies equation (31) at \(\lambda=0\) for some \(\beta\in\mathbb{C}\). Setting \(\xi=0\), we obtain \[Q^{s}_{f,-}(0)\phi^{\prime}_{a,\epsilon}(0)=\int_{-\infty}^{0}T^{s}_{f,-}(0, \hat{\xi})B_{f}(\hat{\xi},0)e^{-\eta\hat{\xi}}\phi^{\prime}_{a,\epsilon}(\hat {\xi})d\hat{\xi}.\] This proves the proposition. #### 5.3.3. Passage near the right slow manifold This part focuses on the expressions of the solution of system (17) at the end of the orbit along the right critical manifold, and of the derivative of the pulse solution. **Proposition 8**.: _Let \(T^{u,s}_{j,\pm}(\xi,\hat{\xi})=T^{u,s}_{j,\pm}(\xi,\hat{\xi};a)\) be the \((\)un\()\)stable evolution of system (20) under the exponential dichotomy established in Proposition 6, and let the associated projections be \(Q^{u,s}_{j,\pm}(\xi)=Q^{u,s}_{j,\pm}(\xi;a)\), \(j=f,\ b\). 
The following statements hold._ * _There exist_ \(\delta,\ \epsilon_{0}>0\)_, such that any solution_ \(\varphi^{sl}(\xi,\lambda)\) _to (_17_) for_ \(\lambda\in R_{1}(\delta)\) _and_ \(\epsilon\in(0,\epsilon_{0})\) _satisfies_ (32) \[\varphi^{sl}(0,\lambda) =\beta_{f}\omega_{f}(0)+\zeta_{f}Q^{s}_{f,+}(0)\varphi_{2}\] \[\qquad\qquad+\beta_{f}\int_{L_{\epsilon}}^{0}T^{u}_{f,+}(0,\hat{ \xi})B_{f}(\hat{\xi},\lambda)\omega_{f}(\hat{\xi})d\hat{\xi}+\mathcal{H}_{f}( \beta_{f},\zeta_{f},\beta_{b}),\] \[Q^{u}_{f,-}(0)\varphi^{sl}(0,\lambda) =\beta_{f}\omega_{f}(0),\] _and_ (33) \[\varphi^{sl}(Z_{a,\epsilon},\lambda) =\beta_{b}\omega_{b}(0)+\beta_{b}\int_{-L_{\epsilon}}^{0}T^{s}_{b,- }(0,\hat{\xi})B_{b}(\hat{\xi},\lambda)\omega_{b}(\hat{\xi})d\hat{\xi}+ \mathcal{H}_{b}(\beta_{f},\zeta_{f},\beta_{b}),\] \[Q^{u}_{b,-}(0)\varphi^{sl}(Z_{a,\epsilon},\lambda) =\beta_{b}\omega_{b}(0),\] _for some_ \(\beta_{f},\zeta_{f},\beta_{b}\in\mathbb{C}\)_, where_ \(\mathcal{H}_{f}\) _and_ \(\mathcal{H}_{b}\) _are linear maps satisfying the estimations_ \[\|\mathcal{H}_{f}(\beta_{f},\zeta_{f},\beta_{b})\|\leq C\left(( \epsilon|\log\epsilon|+|\lambda|)|\zeta_{f}|+(\epsilon|\log\epsilon|+|\lambda| )^{2}|\beta_{f}|+e^{-q/\epsilon}|\beta_{b}|\right),\] \[\|\mathcal{H}_{b}(\beta_{f},\zeta_{f},\beta_{b})\|\leq C\left(( \epsilon|\log\epsilon|+|\lambda|)^{2}|\beta_{b}|+e^{-q/\epsilon}(|\beta_{f}|+| \zeta_{f}|)\right),\] _with_ \(q,\ C>0\) _independent of_ \(\lambda,\ a\) _and_ \(\epsilon\)_. Moreover,_ \(\varphi^{sl}(\xi,\lambda)\) _is analytic in_ \(\lambda\)_._ \((ii)\) _The derivative_ \(\phi^{\prime}_{a,\epsilon}\) _of the pulse solution satisfies_ \[\begin{split} Q^{u}_{f,+}(0)\phi^{\prime}_{a,\epsilon}(0)=& T^{u}_{f,+}(0,L_{\epsilon})e^{-\eta L_{\epsilon}}\phi^{\prime}_{a, \epsilon}(L_{\epsilon})\\ &\qquad+\int_{L_{\epsilon}}^{0}T^{u}_{f,+}(0,\hat{\xi})B_{f}(\hat {\xi},0)e^{-\eta\hat{\xi}}\phi^{\prime}_{a,\epsilon}(\hat{\xi})d\hat{\xi}. \end{split}\] \[\begin{split} Q^{s}_{b,-}(0)\phi^{\prime}_{a,\epsilon}(Z_{a, \epsilon})=& T^{s}_{b,-}(0,-L_{\epsilon})e^{\eta L_{\epsilon}} \phi^{\prime}_{a,\epsilon}(Z_{a,\epsilon}-L_{\epsilon})\\ &\qquad+\int_{-L_{\epsilon}}^{0}T^{s}_{b,-}(0,\hat{\xi})B_{b}( \hat{\xi},0)e^{-\eta\hat{\xi}}\phi^{\prime}_{a,\epsilon}(Z_{a,\epsilon}+\hat{ \xi})d\hat{\xi}.\end{split} \tag{34}\] Proof.: \((i)\). Since the front \(\phi_{f}(\xi)\) is a heteroclinic orbit connecting the equilibria \((0,0)\) and \((1,0)\) of the layer system on \(w=0\), and converges to them at the exponential rate \(\sqrt{\frac{k}{2}}F(0)\) as \(\xi\to\pm\infty\), the coefficient matrix \(A_{f}(\xi)\) of (20) converges at the exponential rate \(\sqrt{\frac{k}{2}}F(0)\) to some asymptotic matrix \(A_{f,\infty}\) as \(\xi\to\infty\). Hence, by Lemma 3.4 of [35] and its proof, the associated projections \(Q^{u,s}_{f,+}\) of the exponential dichotomy to system (20) admit \[\|Q^{u,s}_{f,+}(\xi)-P^{u,s}_{f}\|\leq C\left(e^{-\sqrt{\frac{k}{2}}F(0)\xi}+ e^{-\mu\xi}\right),\ \xi\geq 0. \tag{35}\] Here \(P^{u,s}_{f}=P^{u,s}_{f}(a)\) is the spectral projection on the (un)stable eigenspace of the asymptotic matrix \(A_{f,\infty}\). 
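The role of the lower bound on \(\nu\) in the endpoint estimates below is a one-line computation: with \(L_{\epsilon}=-\nu\log\epsilon\) and the convergence rate \(\sqrt{\frac{k}{2}}F(0)\) recorded above, \[e^{-\sqrt{\frac{k}{2}}F(0)L_{\epsilon}}=\epsilon^{\nu\sqrt{\frac{k}{2}}F(0)}\leq\epsilon^{2}\leq C\epsilon|\log\epsilon|\quad\text{whenever}\quad\nu\geq\sqrt{\frac{2}{k}}\frac{2}{F(0)},\] and likewise \(e^{-\mu L_{\epsilon}}=\epsilon^{\nu\mu}\leq\epsilon^{2}\) under the condition \(\nu\geq 2/\mu\) used later, so the exponentially decaying tails in (35) can be absorbed into the \(O(\epsilon|\log\epsilon|)\) error terms.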
At the endpoint \(L_{\epsilon}=-\nu\log(\epsilon)\), the coefficient matrix \(A(\xi,\lambda)\) of (17) satisfies \[\|A(L_{\epsilon},\lambda)-A_{f,\infty}\|\leq C(\epsilon|\log\epsilon|+|\lambda|),\] where we have used the fact that the coefficient matrix \(A_{f}(\xi)\) of (20) converges at the exponential rate \(\sqrt{\frac{k}{2}}F(0)\) to the asymptotic matrix \(A_{f,\infty}\) as \(\xi\to\infty\) and \(\nu\geq\sqrt{\frac{2}{k}}\frac{2}{F(0)}\). Therefore, the spectral projections associated with the matrices \(A(L_{\epsilon},\lambda)\) and \(A_{f,\infty}\) admit the same bound by continuity, namely, \[\|\mathcal{Q}^{u,s}_{r}(L_{\epsilon},\lambda)-P^{u,s}_{f}\|\leq C(\epsilon| \log\epsilon|+|\lambda|).\] Combining this with (35), we obtain \[\|\mathcal{Q}^{u,s}_{r}(L_{\epsilon},\lambda)-Q^{u,s}_{f,+}(L_{\epsilon})\| \leq C(\epsilon|\log\epsilon|+|\lambda|). \tag{36}\] In a similar way, at \(\xi=Z_{a,\epsilon}-L_{\epsilon}\), we obtain \[\|\mathcal{Q}^{u,s}_{r}(Z_{a,\epsilon}-L_{\epsilon},\lambda)-Q^{u,s}_{b,-}(-L_ {\epsilon})\|\leq C(\epsilon|\log\epsilon|+|\lambda|).\] By the variation of constants formula, any solution \(\varphi_{f}^{sl}(\xi,\lambda)\), on \(I_{f,+}\), to the shifted eigenvalue system (17) must satisfy \[\varphi_{f}^{sl}(\xi,\lambda)= T_{f,+}^{u}(\xi,L_{\epsilon})\alpha_{f}+\beta_{f}\omega_{f}(\xi)+ \zeta_{f}T_{f,+}^{s}(\xi,0)\varphi_{2}+\int_{0}^{\xi}T_{f,+}^{s}(\xi,\hat{\xi})B_{f}(\hat{\xi},\lambda )\varphi_{f}^{sl}(\hat{\xi},\lambda)d\hat{\xi}+\int_{L_{\epsilon}}^{\xi}T_{f,+ }^{u}(\xi,\hat{\xi})B_{f}(\hat{\xi},\lambda)\varphi_{f}^{sl}(\hat{\xi},\lambda )d\hat{\xi} \tag{37}\] for some \(\beta_{f},\ \zeta_{f}\in\mathbb{C}\) and \(\alpha_{f}\in\mathrm{R}(Q_{f,+}^{u}(L_{\epsilon}))\). By Theorem 1\((i)\), the perturbed matrix \(B_{f}(\xi,\lambda;a,\epsilon)\) satisfies \[\|B_{f}(\xi,\lambda;a,\epsilon)\|\leq C(\epsilon|\log\epsilon|+|\lambda|),\ \xi\in I _{f,+}. \tag{38}\] Then, by the contraction mapping principle, equation (37) has a unique solution \(\varphi_{f}^{sl}\) for all sufficiently small \(|\lambda|,\ \epsilon>0\). Note that \(\varphi_{f}^{sl}\) is linear in \((\alpha_{f},\ \beta_{f},\ \zeta_{f})\). Estimate (38) yields \[\sup_{\xi\in[0,L_{\epsilon}]}\|\varphi_{f}^{sl}(\xi,\lambda)\|\leq C(\|\alpha_ {f}\|+|\beta_{f}|+|\zeta_{f}|), \tag{39}\] after taking \(\delta,\ \epsilon_{0}>0\) smaller if necessary. Denote by \(\mathcal{T}_{r}^{u,s}(\xi,\hat{\xi},\lambda)=\mathcal{T}_{r}^{u,s}(\xi,\hat{ \xi},\lambda;a,\epsilon)\) the (un)stable evolution of system (17) under the exponential dichotomy on \(I_{r}\) established in Proposition 4. Then any solution \(\varphi_{r}(\xi,\lambda)\) to (17) on \(I_{r}\) is \[\varphi_{r}(\xi,\lambda)=\mathcal{T}_{r}^{u}(\xi,Z_{a,\epsilon}-L_{\epsilon}, \lambda)\alpha_{r}+\mathcal{T}_{r}^{s}(\xi,L_{\epsilon},\lambda)\beta_{r}, \tag{40}\] where \(\alpha_{r}\in\mathrm{R}(\mathcal{Q}_{r}^{u}(Z_{a,\epsilon}-L_{\epsilon}, \lambda))\) and \(\beta_{r}\in\mathrm{R}(\mathcal{Q}_{r}^{s}(L_{\epsilon},\lambda))\). Applying the projection \(\mathcal{Q}_{r}^{u}(L_{\epsilon},\lambda)\) to \(\varphi_{r}(L_{\epsilon},\lambda)-\varphi_{f}^{sl}(L_{\epsilon},\lambda)\), we obtain the matching condition \[\alpha_{f}=\mathcal{H}_{1}(\alpha_{f},\beta_{f},\zeta_{f},\alpha_{r}),\qquad\|\mathcal{H}_{1}(\alpha_{f},\beta_{f},\zeta_{f},\alpha_{r})\|\leq C[( \epsilon|\log\epsilon|+|\lambda|)(\|\alpha_{f}\|+|\beta_{f}|+|\zeta_{f}|)+e^{- \frac{q}{\epsilon}}\|\alpha_{r}\|], \tag{41}\] by using (36), (38), (39), (19) and \(Z_{a,\epsilon}=O(\epsilon^{-1})\). 
Similarly, applying the projection \(\mathcal{Q}_{r}^{s}(L_{\epsilon},\lambda)\) to \(\varphi_{r}(L_{\epsilon},\lambda)-\varphi_{f}^{sl}(L_{\epsilon},\lambda)\) yields \[\beta_{f} =\mathcal{H}_{2}(\alpha_{f},\beta_{f},\zeta_{f}),\] \[\|\mathcal{H}_{2}(\alpha_{f},\beta_{f},\zeta_{f})\| \leq C(\epsilon|\log\epsilon|+|\lambda|)(\|\alpha_{f}\|+|\beta_{f}| +|\zeta_{f}|). \tag{42}\] Consider the translated version (22) of system (17). Then any solution \(\varphi_{b}^{sl}(\xi,\lambda)\) to (22) on \([-L_{\epsilon},0]\) must satisfy \[\varphi_{b}^{sl}(\xi,\lambda)= T_{b,-}^{s}(\xi,-L_{\epsilon})\alpha_{b}+\beta_{b}\omega_{b}(\xi)+ \int_{0}^{\xi}T_{b,-}^{u}(\xi,\hat{\xi})B_{b}(\hat{\xi},\lambda)\varphi_{b}^{ sl}(\hat{\xi},\lambda)d\hat{\xi}\] \[+\int_{-L_{\epsilon}}^{\xi}T_{b,-}^{s}(\xi,\hat{\xi})B_{b}(\hat{ \xi},\lambda)\varphi_{b}^{sl}(\hat{\xi},\lambda)d\hat{\xi} \tag{43}\] for some \(\beta_{b}\in\mathbb{C}\) and \(\alpha_{b}\in R(Q_{b,-}^{s}(-L_{\epsilon}))\) by the variation of constants formula. According to Theorem 1\((i)\) we obtain \[\|B_{b}(\xi,\lambda;a,\epsilon)\|\leq C(\epsilon|\log\epsilon|+|\lambda|),\ \xi\in I _{b,-}=[-L_{\epsilon},0]. \tag{44}\] Then the contraction mapping principle verifies that there exists a unique solution \(\varphi_{b}^{sl}\) to equation (43) for all sufficiently small \(|\lambda|,\ \epsilon>0\). Note that \(\varphi_{b}^{sl}\) is linear in \((\alpha_{b},\ \beta_{b})\) and satisfies the estimation \[\sup_{\xi\in[-L_{\epsilon},0]}\|\varphi_{b}^{sl}(\xi,\lambda)\|\leq C(\|\alpha_ {b}\|+|\beta_{b}|) \tag{45}\] via (44), taking \(\delta,\ \epsilon_{0}>0\) smaller if necessary. Similarly, applying the projections \(\mathcal{Q}_{r}^{u}(Z_{a,\epsilon}-L_{\epsilon},\lambda)\) and \(\mathcal{Q}_{r}^{s}(Z_{a,\epsilon}-L_{\epsilon},\lambda)\) to \(\varphi_{r}(Z_{a,\epsilon}-L_{\epsilon},\lambda)-\varphi_{b}^{sl}(Z_{a,\epsilon }-L_{\epsilon},\lambda)\) respectively produce the matching conditions \[\alpha_{r} =\mathcal{H}_{3}(\alpha_{b},\beta_{b}),\] \[\|\mathcal{H}_{3}(\alpha_{b},\beta_{b})\| \leq C(\epsilon|\log\epsilon|+|\lambda|)(\|\alpha_{b}\|+|\beta_{b }|), \tag{46}\] \[\alpha_{b} =\mathcal{H}_{4}(\alpha_{b},\beta_{b},\beta_{r}),\] \[\|\mathcal{H}_{4}(\alpha_{b},\beta_{b},\beta_{r})\| \leq C[(\epsilon|\log\epsilon|+|\lambda|)(\|\alpha_{b}\|+|\beta_{ b}|)+e^{-\frac{\sigma}{\epsilon}}\|\beta_{r}\|], \tag{47}\] where \(\mathcal{H}_{3}\) and \(\mathcal{H}_{4}\) are linear maps in their variables. Next, we will combine these last results about the solution on \([0,Z_{a,\epsilon}]\) to obtain the relevant conditions satisfied at \(\xi=0\) and \(\xi=Z_{a,\epsilon}\). Combining (42) and (47) yields \[\alpha_{b} =\mathcal{H}_{5}(\alpha_{b},\beta_{b},\alpha_{f},\beta_{f},\zeta _{f}),\] \[\|\mathcal{H}_{5}(\alpha_{b},\beta_{b},\alpha_{f},\beta_{f}, \zeta_{f})\| \leq C\left((\epsilon|\log\epsilon|+|\lambda|)(\|\alpha_{b}\|+| \beta_{b}|)+e^{-\frac{\sigma}{\epsilon}}(\|\alpha_{f}\|+|\beta_{f}|+|\zeta_{f} |)\right),\] which induce that for all sufficiently small \(|\lambda|,\ \epsilon>0\) \[\alpha_{b} =\alpha_{b}(\alpha_{f},\beta_{b},\beta_{f},\zeta_{f}),\] \[\|\alpha_{b}(\alpha_{f},\beta_{b},\beta_{f},\zeta_{f})\| \leq C\left((\epsilon|\log\epsilon|+|\lambda|)|\beta_{b}|+e^{- \frac{\sigma}{\epsilon}}(\|\alpha_{f}\|+|\beta_{f}|+|\zeta_{f}|)\right). 
\tag{48}\] By (41), (46) and (48), we obtain a linear map \(\mathcal{H}_{6}\) satisfying \[\alpha_{f} =\mathcal{H}_{6}(\alpha_{f},\beta_{b},\beta_{f},\zeta_{f}),\] \[\|\mathcal{H}_{6}(\alpha_{f},\beta_{b},\beta_{f},\zeta_{f})\| \leq C\left((\epsilon|\log\epsilon|+|\lambda|)(\|\alpha_{f}\|+| \beta_{f}|+|\zeta_{f}|)+e^{-\frac{\sigma}{\epsilon}}|\beta_{b}|\right),\] which further imply that for all sufficiently small \(|\lambda|,\ \epsilon>0\) \[\alpha_{f} =\alpha_{f}(\beta_{f},\zeta_{f},\beta_{b}),\] \[\|\alpha_{f}(\beta_{f},\zeta_{f},\beta_{b})\| \leq C\left((\epsilon|\log\epsilon|+|\lambda|)(|\beta_{f}|+|\zeta_{ f}|)+e^{-\frac{\sigma}{\epsilon}}|\beta_{b}|\right). \tag{49}\] Substituting (49) into (37) at \(\xi=0\) gives \[\varphi_{f}^{sl}(0,\lambda)= \beta_{f}\omega_{f}(0)+\zeta_{f}Q_{f,+}^{s}(0)\varphi_{2}+\beta_{f}\int_{L_{\epsilon}}^{0}T_{f,+}^{u}(0,\hat{\xi})B_{f}( \hat{\xi},\lambda)\omega_{f}(\hat{\xi})d\hat{\xi}+\mathcal{H}_{f}(\beta_{f}, \zeta_{f},\beta_{b}),\] where we have used (26), (38) and (39). Applying the projection \(Q_{f,-}^{u}(0)\) and (26) to the above equation, one gets \[Q_{f,-}^{u}(0)\varphi_{f}^{sl}(0,\lambda)=\beta_{f}\omega_{f}(0).\] Thus, any solution \(\varphi^{sl}\) to system (17) satisfies the entry condition (32). Similarly, substituting (49) into (48) at \(\xi=0\) gives \[\alpha_{b} =\alpha_{b}(\alpha_{f}(\beta_{f},\zeta_{f},\beta_{b}),\beta_{f}, \zeta_{f},\beta_{b}),\] \[\|\alpha_{b}(\alpha_{f}(\beta_{f},\zeta_{f},\beta_{b}),\beta_{f}, \zeta_{f},\beta_{b})\| \leq C\left((\epsilon|\log\epsilon|+|\lambda|)|\beta_{b}|+e^{- \frac{\sigma}{\epsilon}}(|\beta_{f}|+|\zeta_{f}|)\right).\] Substituting the above expression of \(\alpha_{b}\) and its associated estimate into (43) at \(\xi=0\), together with (26), (44) and (45), verifies \[\varphi^{sl}_{b}(0,\lambda)=\beta_{b}\omega_{b}(0)+\beta_{b}\int_{-L_{\epsilon }}^{0}T^{s}_{b,-}(0,\hat{\xi})B_{b}(\hat{\xi},\lambda)\omega_{b}(\hat{\xi})d \hat{\xi}+\mathcal{H}_{b}(\beta_{f},\zeta_{f},\beta_{b}).\] Applying the projection \(Q^{u}_{b,-}(0)\) and (26) to the above equation shows \[Q^{u}_{b,-}(0)\varphi^{sl}_{b}(0,\lambda)=\beta_{b}\omega_{b}(0).\] Thus, any solution \(\varphi^{sl}\) to system (17) satisfies the condition (33). Since all quantities, the perturbed matrices \(B_{j}(\xi,\lambda),\ j=f,\ b,\) the evolution \(\mathcal{T}(\xi,\hat{\xi},\lambda)\) of system (17) and the projections \(\mathcal{Q}^{u,s}_{r}(\xi,\lambda)\) associated with the exponential dichotomy of (17), involved in the above proofs depend analytically on \(\lambda\), \(\varphi^{sl}(\xi,\lambda)\) is analytic in \(\lambda\). Statement \((i)\) follows. \((ii)\). Note that \(e^{-\eta\xi}\phi^{\prime}_{a,\epsilon}(\xi)\) is an eigenfunction of system (17) at \(\lambda=0\). Thus there exist \(\beta_{f,0},\ \zeta_{f,0}\in\mathbb{C},\ \alpha_{f,0}\in \mathrm{R}(Q^{u}_{f,+}(L_{\epsilon}))\) such that (37) holds at \(\lambda=0\) with \(\varphi^{sl}_{f}(\xi,0)=e^{-\eta\xi}\phi^{\prime}_{a,\epsilon}(\xi)\), where \((\alpha_{f},\beta_{f},\zeta_{f})=(\alpha_{f,0},\beta_{f,0},\zeta_{f,0})\). 
Applying the projection \(Q^{u}_{f,+}(L_{\epsilon})\) and (26) to (37) at \(\xi=L_{\epsilon}\) yields \[\alpha_{f,0}=Q^{u}_{f,+}(L_{\epsilon})e^{-\eta L_{\epsilon}}\phi^{\prime}_{a, \epsilon}(L_{\epsilon}),\] and applying the projection \(Q^{u}_{f,+}(0)\) to (37) at \(\xi=0\) yields \[Q^{u}_{f,+}(0)\phi^{\prime}_{a,\epsilon}(0)=T^{u}_{f,+}(0,L_{\epsilon})e^{- \eta L_{\epsilon}}\phi^{\prime}_{a,\epsilon}(L_{\epsilon})+\int_{L_{\epsilon }}^{0}T^{u}_{f,+}(0,\hat{\xi})B_{f}(\hat{\xi},0)e^{-\eta\hat{\xi}}\phi^{\prime }_{a,\epsilon}(\hat{\xi})d\hat{\xi}.\] In a similar fashion, there exist \(\beta_{b,0}\in\mathbb{C},\ \alpha_{b,0}\in\mathrm{R}(Q^{s}_{b,-}(-L_{\epsilon}))\) such that (43) holds at \(\lambda=0\) with \(\varphi^{sl}_{b}(\xi,0)=e^{-\eta(\xi+Z_{a,\epsilon})}\phi^{\prime}_{a,\epsilon }(\xi+Z_{a,\epsilon})\). Applying the projection \(Q^{s}_{b,-}(-L_{\epsilon})\) to (43) at \(\xi=-L_{\epsilon}\), together with (26), gives \[\alpha_{b,0}=Q^{s}_{b,-}(-L_{\epsilon})e^{-\eta(Z_{a,\epsilon}-L_{\epsilon})} \phi^{\prime}_{a,\epsilon}(Z_{a,\epsilon}-L_{\epsilon}),\] and applying the projection \(Q^{s}_{b,-}(0)\) to (43) at \(\xi=0\) yields \[Q^{s}_{b,-}(0)\phi^{\prime}_{a,\epsilon}(Z_{a,\epsilon})= T^{s}_{b,-}(0,-L_{\epsilon})e^{\eta L_{\epsilon}}\phi^{\prime}_{a,\epsilon}(Z_{a, \epsilon}-L_{\epsilon})+\int_{-L_{\epsilon}}^{0}T^{s}_{b,-}(0,\hat{\xi})B_{b}(\hat{\xi},0)e^{-\eta\hat{\xi}}\phi^{\prime}_{a,\epsilon}(Z_{a,\epsilon}+\hat{ \xi})d\hat{\xi}.\] Statement \((ii)\) follows. This completes the proof of the proposition. #### 5.3.4. Along the back Similar to the last subsection, here we study the properties of the solution along the heteroclinic orbit of the layer system on the layer \(w=w_{b}\). **Proposition 9**.: _Let \(T^{u,s}_{b,\pm}(\xi,\hat{\xi})=T^{u,s}_{b,\pm}(\xi,\hat{\xi};a)\) be the \((\)un\()\)stable evolution of system (20) under the exponential dichotomy established in Proposition 6 and the associated projections are \(Q^{u,s}_{b,\pm}(\xi)=Q^{u,s}_{b,\pm}(\xi;a)\). The following statements hold._ \[\sup_{\xi\in[0,L_{\epsilon}]}\|\hat{\varphi}_{b,+}(\xi,\lambda)\|\leq C(\|\alpha_{b, +}\|+|\beta_{b,+}|+|\zeta_{b,+}|), \tag{54}\] following (53), and taking \(\delta,\ \epsilon_{0}>0\) smaller if necessary. By Proposition 4, system (17) has an exponential dichotomy on \(I_{l}=[Z_{a,\epsilon}+L_{\epsilon},\infty)\) with the associated projections \(\mathcal{Q}_{l}^{u,s}(\xi,\lambda)\). Similar to the derivation of (36) in the proof of Proposition 8, we arrive at \[\|\mathcal{Q}_{l}^{u,s}(Z_{a,\epsilon}+L_{\epsilon},\lambda)-Q_{b,+}^{u,s}(L_{ \epsilon})\|\leq C(\epsilon|\log\epsilon|+|\lambda|). \tag{55}\] Since any exponentially decaying solution of system (17) at \(\xi=Z_{a,\epsilon}+L_{\epsilon}\) under the action of \(\mathcal{Q}_{l}^{u}(Z_{a,\epsilon}+L_{\epsilon},\lambda)\) must be \(0\), it follows that any solution \(\varphi_{l}(\xi,\lambda)\) of system (17) decaying exponentially in forward time can be written as \[\varphi_{l}(\xi,\lambda)=\mathcal{T}_{l}^{s}(\xi,Z_{a,\epsilon}+L_{\epsilon},\lambda)\beta_{l}, \tag{56}\] with some \(\beta_{l}\in\mathrm{R}(\mathcal{Q}_{l}^{s}(Z_{a,\epsilon}+L_{\epsilon},\lambda))\). Here \(\mathcal{T}_{l}^{s}(\xi,\hat{\xi},\lambda)\) represents the stable evolution of system (17). 
Applying \(\mathcal{Q}_{l}^{u}(Z_{a,\epsilon}+L_{\epsilon},\lambda)\) to \(\hat{\varphi}_{b,+}(L_{\epsilon},\lambda)\) shows \[\alpha_{b,+}=\mathcal{H}_{1}(\alpha_{b,+},\beta_{b,+},\zeta_{b,+}),\qquad\|\mathcal{H}_{1}(\alpha_{b,+},\beta_{b,+},\zeta_{b,+})\|\leq C( \epsilon|\log\epsilon|+|\lambda|)(\|\alpha_{b,+}\|+|\beta_{b,+}|+|\zeta_{b,+} |), \tag{57}\] by using (26), (53), (54) and (55). Therefore, solving (57) for \(\alpha_{b,+}\) yields \[\alpha_{b,+}=\alpha_{b,+}(\beta_{b,+},\zeta_{b,+}),\qquad\|\alpha_{b,+}(\beta_{b,+},\zeta_{b,+})\|\leq C(\epsilon|\log \epsilon|+|\lambda|)(|\beta_{b,+}|+|\zeta_{b,+}|), \tag{58}\] for sufficiently small \(|\lambda|,\ \epsilon>0\). Substituting (58) into (52), we obtain \[\hat{\varphi}_{b,+}(\xi,\lambda)= T_{b,+}^{u}(\xi,L_{\epsilon})\alpha_{b,+}(\beta_{b,+}, \zeta_{b,+})+\beta_{b,+}\omega_{b}(\xi)+\zeta_{b,+}T_{b,+}^{s}(\xi,0)\varphi_ {2}+\int_{0}^{\xi}T_{b,+}^{s}(\xi,\hat{\xi})B_{b}(\hat{\xi}, \lambda)\hat{\varphi}_{b,+}(\hat{\xi},\lambda)d\hat{\xi}+\int_{L_{\epsilon}}^{\xi}T_{b,+}^{u}(\xi,\hat{\xi})B_{b}(\hat{ \xi},\lambda)\hat{\varphi}_{b,+}(\hat{\xi},\lambda)d\hat{\xi}.\] Recall that \[\hat{\varphi}_{b,+}(\xi-Z_{a,\epsilon},\lambda)=\varphi_{b,+}(\xi,\lambda), \ \xi\in[Z_{a,\epsilon},Z_{a,\epsilon}+L_{\epsilon}].\] Then, by using (53), (54) and (26), we derive \[\varphi_{b,+}(Z_{a,\epsilon},\lambda)= \beta_{b,+}\omega_{b}(0)+\zeta_{b,+}Q_{b,+}^{s}(0)\varphi_{2}+\beta_{b,+}\int_{L_{\epsilon}}^{0}T_{b,+}^{u}(0,\hat{\xi})B_{b} (\hat{\xi},\lambda)\omega_{b}(\hat{\xi})d\hat{\xi}+\mathcal{H}_{b,+}(\beta_{ b,+},\zeta_{b,+}).\] Since all quantities, the perturbed matrices \(B_{b}(\xi,\lambda)\), the evolution \(\mathcal{T}(\xi,\hat{\xi},\lambda)\) of system (17) and the projections \(\mathcal{Q}_{l}^{u,s}(\xi,\lambda)\) associated with the exponential dichotomy of (17), occurring in the above proofs depend analytically on \(\lambda\), it follows that \(\varphi_{b,+}(\xi,\lambda)\) is analytic in \(\lambda\). \((ii)\). Similar to the proof of Proposition 8, there exist \(\beta_{b,+},\ \zeta_{b,+}\in\mathbb{C}\) and \(\alpha_{b,+}\in\mathrm{R}(Q_{b,+}^{u}(L_{\epsilon}))\) such that equation (52) holds at \(\lambda=0\) with \[\hat{\varphi}_{b,+}(\xi,0)=e^{-\eta(\xi+Z_{a,\epsilon})}\phi_{a,\epsilon}^{ \prime}(\xi+Z_{a,\epsilon}). \tag{59}\] Applying the projection \(Q_{b,+}^{u}(L_{\epsilon})\) and (26) to the above equation at \(\xi=L_{\epsilon}\) yields \[\alpha_{b,+}=Q_{b,+}^{u}(L_{\epsilon})e^{-\eta(L_{\epsilon}+Z_{a,\epsilon})} \phi_{a,\epsilon}^{\prime}(L_{\epsilon}+Z_{a,\epsilon}).\] Then, applying the projection \(Q_{b,+}^{u}(0)\) to (59) at \(\xi=0\) gives (51). This proves the proposition. #### 5.3.5. The matching procedure In the previous subsections, we divided the real line \(\mathbb{R}\) into three intervals and constructed a piecewise continuous, exponentially localized solution of system (17) for any \(\lambda\in R_{1}(\delta)\). At the two discontinuities \(\xi=0\) and \(\xi=Z_{a,\epsilon}\), we obtained expressions for the left and right limits of the solution, which serve as the entry and exit conditions along the right branch of the critical curve. Determining the existence of eigenvalues is now reduced to finding \(\lambda\in R_{1}(\delta)\) such that the exit and entry conditions match. Equating the exit and entry conditions yields a single analytic matching equation in \(\lambda\), which is derived in the next result. 
**Theorem 5**.: _There exist \(\delta,\ \epsilon_{0}>0\) such that for \(\epsilon\in(0,\epsilon_{0})\) the shifted eigenvalue system (17) has precisely two different eigenvalues \(\lambda_{0},\ \lambda_{1}\in R_{1}(\delta)\)._ * _The eigenvalue_ \(\lambda_{0}\) _equals_ \(0\) _and the corresponding eigenspace is spanned by the solution_ \(e^{-\eta\xi}\phi^{\prime}_{a,\epsilon}(\xi)\) _of system (_17_)._ * _The eigenvalue_ \(\lambda_{1}\) _is a-uniformly approximated by_ \[\lambda_{1}=-\frac{M_{b,2}}{M_{b,1}}+O(|\epsilon\log\epsilon|^{2}),\] _where_ (60) \[\begin{split} M_{b,1}&=\int_{-\infty}^{+\infty}F(w _{b})(u^{\prime}_{b}(\xi))^{2}e^{-c_{0}F^{2}(w_{b})\xi}d\xi,\\ M_{b,2}&=\langle\Psi_{*},\ \phi^{\prime}_{a, \epsilon}(Z_{a,\epsilon}-L_{\epsilon})\rangle,\end{split}\] _with_ \[\Psi_{*}=\left(\begin{array}{c}e^{c_{0}F^{2}(w_{b})L_{\epsilon}}v^{\prime }_{b}(-L_{\epsilon})\\ -e^{c_{0}F^{2}(w_{b})L_{\epsilon}}u^{\prime}_{b}(-L_{\epsilon})\\ \int_{\infty}^{-L_{\epsilon}}u^{\prime}_{b}(z)e^{-c_{0}F^{2}(w_{b})z}\Delta_{ b}(z)dz\end{array}\right).\] _The corresponding eigenspace associated to_ \(\lambda_{1}\) _is spanned by a solution_ \(\varphi_{1}(\xi)\) _to system (_17_) satisfying_ (61) \[\begin{split}\|\varphi_{1}(\xi+Z_{a,\epsilon})-\omega_{b}(\xi) \|&\leq C\epsilon|\log\epsilon|,\quad\xi\in[-L_{\epsilon},L_{ \epsilon}],\\ \|\varphi_{1}(\xi+Z_{a,\epsilon})\|&\leq C\epsilon|\log \epsilon|,\quad\xi\in\mathbb{R}\setminus[-L_{\epsilon},L_{\epsilon}],\end{split}\] _where_ \(C>0\) _is a constant independent of_ \(a\) _and_ \(\epsilon\)_. Moreover,_ \(M_{b,1}\) _and_ \(M_{b,2}\) _satisfy the bounds_ \[1/C\leq M_{b,1}\leq C,\ |M_{b,2}|\leq C\epsilon|\log\epsilon|.\] Proof.: By Theorem 1\((i)\), we derive \[\begin{split}\|B_{f}(\xi,\lambda;a,\epsilon)\|&\leq C (\epsilon|\log\epsilon|+|\lambda|),\ \ \xi\in(-\infty,L_{\epsilon}],\\ \|B_{b}(\xi,\lambda;a,\epsilon)\|&\leq C(\epsilon|\log \epsilon|+|\lambda|),\ \ \xi\in[-L_{\epsilon},L_{\epsilon}].\end{split} \tag{62}\] Then it follows that \[\begin{split}\left\|\left(\begin{array}{c}\phi^{\prime}_{f}( \xi)\\ 0\end{array}\right)-\phi^{\prime}_{a,\epsilon}(\xi)\right\|&\leq C \epsilon|\log\epsilon|,\quad\xi\in(-\infty,L_{\epsilon}],\\ \left\|\left(\begin{array}{c}\phi^{\prime}_{b}(\xi)\\ 0\end{array}\right)-\phi^{\prime}_{a,\epsilon}(Z_{a,\epsilon}+\xi)\right\|& \leq C\epsilon|\log\epsilon|,\quad\xi\in[-L_{\epsilon},L_{\epsilon}].\end{split}\] According to Proposition 7, any exponential decay solution \(\varphi_{f,-}(\xi,\lambda)\) of system (17) in backward time satisfies (27) at \(\xi=0\) with some constant \(\beta_{f,-}\in\mathbb{C}\). Then, by Proposition 8, it follows that there exist some \(\beta_{f},\ \zeta_{f}\in\mathbb{C}\) and \(\beta_{b}\in\mathbb{C}\) such that any solution \(\varphi^{sl}(\xi,\lambda)\) to (17) satisfies (32) at \(\xi=0\) and satisfies (33) at \(\xi=Z_{a,\epsilon}\), respectively. Finally, from Proposition 9 any exponential decay solution \(\varphi_{b,+}(\xi,\lambda)\) of system (17) in forward time satisfies (50) at \(\xi=Z_{a,\epsilon}\) with some \(\beta_{b,+},\ \zeta_{b,+}\in\mathbb{C}\). Next, we will match the solutions \(\ \varphi_{f,-}\) and \(\varphi^{sl}\) at \(\xi=0\), and match \(\varphi^{sl}\) and \(\varphi_{b,+}\) at \(\xi=Z_{a,\epsilon}\). 
To do so, it suffices to require that \[\begin{split}& Q^{u,s}_{f,-}(0)\left(\varphi_{f,-}(0,\lambda)- \varphi^{sl}(0,\lambda)\right)=0,\\ & Q^{u,s}_{b,-}(0)\left(\varphi^{sl}(Z_{a,\epsilon},\lambda)- \varphi_{b,+}(Z_{a,\epsilon},\lambda)\right)=0.\end{split} \tag{63}\] To solve these two equations, we need their concrete expressions. By computing the expressions of \(Q^{u}_{f,-}(0)\left(\varphi_{f,-}(0,\lambda)-\varphi^{sl}(0,\lambda)\right)=0\) and of \(Q^{u}_{b,-}(0)\big{(}\varphi^{sl}(Z_{a,\epsilon},\lambda)-\varphi_{b,+}(Z_{a, \epsilon},\lambda)\big{)}=0\), one can immediately obtain \(\beta_{f}=\beta_{f,-}\) and \(\beta_{b}=\beta_{b,+}\) by using (27), (32), (33) and (50). Next, we consider the matching conditions in (63) involving the stable projections. We define the vector \[\varphi_{j,\perp}:=\varphi_{1,j}-\int_{\infty}^{0}e^{-\eta\xi}\langle\psi_{j, ad}(\xi),F(\xi)\rangle d\xi\ \varphi_{2},\quad j=f,\ b.\] Since \(\mathrm{R}(Q^{s}_{j,-}(0))=\mathrm{Span}(\varphi_{1,j},\varphi_{2})\), it holds that \(\varphi_{j,\perp}\) and \(\varphi_{2}\) also span \(\mathrm{R}(Q^{s}_{j,-}(0))\). Moreover, some calculations show that \[\varphi_{j,\perp}\in\mathrm{Ker}(Q^{s}_{j,+}(0)^{*})=\mathrm{R}(Q^{u}_{j,+}(0 )^{*})\subset\mathrm{R}(Q^{s}_{j,-}(0)^{*}),\quad j=f,\ b.\] Then the equations in (63) involving the stable projections can be rewritten as \[\begin{split}&\big{\langle}\varphi_{2},\varphi_{f,-}(0,\lambda)- \varphi^{sl}(0,\lambda)\big{\rangle}=0,\\ &\big{\langle}\varphi_{2},\varphi^{sl}(Z_{a,\epsilon},\lambda)- \varphi_{b,+}(Z_{a,\epsilon},\lambda)\big{\rangle}=0,\\ &\big{\langle}\varphi_{f,\perp},\varphi_{f,-}(0,\lambda)-\varphi^ {sl}(0,\lambda)\big{\rangle}=0,\\ &\big{\langle}\varphi_{b,\perp},\varphi^{sl}(Z_{a,\epsilon}, \lambda)-\varphi_{b,+}(Z_{a,\epsilon},\lambda)\big{\rangle}=0.\end{split} \tag{64}\] By the identities (27), (32), (33) and (50), we can further write the first two equations as \[\begin{split} 0&=\langle\varphi_{2},\varphi_{f,-}(0, \lambda)-\varphi^{sl}(0,\lambda)\rangle=-\zeta_{f}+\mathcal{H}_{1}(\beta_{b}, \beta_{f},\zeta_{f}),\\ 0&=\langle\varphi_{2},\varphi^{sl}(Z_{a,\epsilon}, \lambda)-\varphi_{b,+}(Z_{a,\epsilon},\lambda)\rangle=-\zeta_{b,+}+\mathcal{H }_{2}(\beta_{b},\beta_{f},\zeta_{f},\zeta_{b,+}),\end{split} \tag{65}\] where \[|\mathcal{H}_{1}(\beta_{b},\beta_{f},\zeta_{f})| \leq C((\epsilon|\log\epsilon|+|\lambda|)(|\beta_{f}|+|\zeta_{ f}|)+e^{-q/\epsilon}|\beta_{b}|),\] \[|\mathcal{H}_{2}(\beta_{b},\beta_{f},\zeta_{f},\zeta_{b,+})| \leq C((\epsilon|\log\epsilon|+|\lambda|)(|\beta_{b}|+|\zeta_{ b,+}|)+e^{-q/\epsilon}(|\beta_{f}|+|\zeta_{f}|)),\] with \(q>0\) a constant independent of \(\lambda,\ a,\ \epsilon\). 
Hence system (65) is solvable for \(\zeta_{f}\) and \(\zeta_{b,+}\), provided that \(|\lambda|,\ \epsilon>0\) are sufficiently small, and the solutions satisfy \[\begin{split}&\zeta_{f}=\zeta_{f}(\beta_{b},\beta_{f}),\ \zeta_{b,+}=\zeta_{b,+}(\beta_{b},\beta_{f}),\\ &|\zeta_{f}(\beta_{b},\beta_{f})|\leq C\left((\epsilon|\log \epsilon|+|\lambda|)|\beta_{f}|+e^{-q/\epsilon}|\beta_{b}|\right),\\ &|\zeta_{b,+}(\beta_{b},\beta_{f})|\leq C\left((\epsilon|\log \epsilon|+|\lambda|)|\beta_{b}|+e^{-q/\epsilon}|\beta_{f}|\right).\end{split} \tag{66}\] Combining (27), (32) and (19) with \(\varphi_{f,\perp}\in\mathrm{R}(Q^{u}_{f,+}(0)^{*})\), the third equation of (64) can be written as \[\begin{split} 0&=\langle\varphi_{f,\perp},\varphi_{f,-}(0, \lambda)-\varphi^{sl}(0,\lambda)\rangle\\ &=\beta_{f}\int_{-L_{\epsilon}}^{L_{\epsilon}}\langle T_{f}(0,\hat {\xi})^{*}\varphi_{f,\perp},B_{f}(\hat{\xi},\lambda)\omega_{f}(\hat{\xi}) \rangle d\hat{\xi}+\mathcal{H}_{3}(\beta_{f},\beta_{b}),\end{split} \tag{67}\] where \[|\mathcal{H}_{3}(\beta_{f},\beta_{b})|\leq C\left((\epsilon|\log\epsilon|+| \lambda|)^{2}|\beta_{f}|+e^{-q/\epsilon}|\beta_{b}|\right).\] Similarly, by using (33), (50), (19) and \(\varphi_{b,\perp}\in\mathrm{R}(Q^{u}_{b,+}(0)^{*})\), the fourth equation of (64) can be written as \[\begin{split} 0&=\big{\langle}\varphi_{b,\perp},\varphi^{sl}(Z_{a, \epsilon},\lambda)-\varphi_{b,+}(Z_{a,\epsilon},\lambda)\big{\rangle}\\ &=\beta_{b}\int_{-L_{\epsilon}}^{L_{\epsilon}}\Big{\langle}T_{b} (0,\hat{\xi})^{*}\varphi_{b,\perp},B_{b}(\hat{\xi},\lambda)\omega_{b}(\hat{ \xi})\Big{\rangle}\,d\hat{\xi}+\mathcal{H}_{4}(\beta_{f},\beta_{b}),\end{split} \tag{68}\] where \[|\mathcal{H}_{4}(\beta_{f},\beta_{b})|\leq C\left((\epsilon|\log\epsilon|+|\lambda |)^{2}|\beta_{b}|+e^{-q/\epsilon}|\beta_{f}|\right).\] To obtain clean approximate expressions for (67) and (68), we first derive the following approximations \[\begin{split} 0&=\left\langle\varphi_{f,\perp}, \phi^{\prime}_{a,\epsilon}(0)-\phi^{\prime}_{a,\epsilon}(0)\right\rangle\\ &=\left\langle\varphi_{f,\perp},Q^{s}_{f,-}(0)\phi^{\prime}_{a, \epsilon}(0)-Q^{u}_{f,+}(0)\phi^{\prime}_{a,\epsilon}(0)\right\rangle\\ &=\int_{-L_{\epsilon}}^{L_{\epsilon}}\left\langle e^{-\eta\hat{ \xi}}T_{f}(0,\hat{\xi})^{*}\varphi_{f,\perp},B_{f}(\hat{\xi},0)\phi^{\prime}_ {a,\epsilon}(\hat{\xi})\right\rangle d\hat{\xi}+O(\epsilon^{2}),\end{split} \tag{69}\] \[\begin{split} 0&=\left\langle\varphi_{b,\perp},\phi^{\prime}_{a, \epsilon}(Z_{a,\epsilon})-\phi^{\prime}_{a,\epsilon}(Z_{a,\epsilon})\right\rangle \\ &=\left\langle\varphi_{b,\perp},Q^{s}_{b,-}(0)\phi^{\prime}_{a, \epsilon}(Z_{a,\epsilon})-Q^{u}_{b,+}(0)\phi^{\prime}_{a,\epsilon}(Z_{a, \epsilon})\right\rangle\\ &=\int_{-L_{\epsilon}}^{L_{\epsilon}}\left\langle e^{-\eta\hat{ \xi}}T_{b}(0,\hat{\xi})^{*}\varphi_{b,\perp},B_{b}(\hat{\xi},0)\phi^{\prime}_ {a,\epsilon}(Z_{a,\epsilon}+\hat{\xi})\right\rangle d\hat{\xi}\\ &\qquad+\langle e^{\eta L_{\epsilon}}T_{b}(0,-L_{\epsilon})^{*} \varphi_{b,\perp},\phi^{\prime}_{a,\epsilon}(Z_{a,\epsilon}-L_{\epsilon}) \rangle+O(\epsilon^{2}).\end{split} \tag{70}\] Next, we simplify the expressions in (67) and (68) by means of (69) and (70). 
Direct calculations yield \[\begin{split} e^{-\eta\xi}T_{j}(0,\xi)^{*}\varphi_{j,\perp}& =\left(\begin{array}{c}e^{-\eta\xi}\psi_{j,ad}(\xi)\\ \int_{\xi}^{\infty}e^{-\eta z}\langle\psi_{j,ad}(z),F_{j}(z) \rangle dz\end{array}\right)\\ &=\left(\begin{array}{c}e^{-c_{0}F^{2}(w_{j})\xi}v^{\prime}_{j}( \xi)\\ -e^{-c_{0}F^{2}(w_{j})\xi}u^{\prime}_{j}(\xi)\\ \int_{\infty}^{\xi}e^{-c_{0}F^{2}(w_{j})z}u^{\prime}_{j}(z)\Delta_{j}(z)dz \end{array}\right),\ \xi\in\mathbb{R},\ j=f,\ b.\end{split} \tag{71}\] Recall that \(\phi^{\prime}_{f}(\xi)\) converges to \(0\) at the exponential rate \(F(0)\sqrt{\frac{k}{2}}\) as \(\xi\to\pm\infty\), and \(\phi^{\prime}_{b}(\xi)\) converges to \(0\) at the exponential rate \(F(w_{b})U_{2}(w_{b})\sqrt{\frac{k}{2}}\) as \(\xi\to\pm\infty\). Note that \(c_{0}=\frac{\sqrt{2k}}{F(0)}(\frac{1}{2}-a)\), and \(w_{b}\) satisfies (11). Thus, for all \(a\geq 0\), there exists an \(a\)-independent constant \(C>0\) such that the upper two entries of (71) are bounded by \(C\) on \(\mathbb{R}\), and the last entry is bounded by \(C|\log\epsilon|\) on \([-L_{\epsilon},L_{\epsilon}]\). Combining these bounds together with (19), (69) and (71), we get the next \(a\)-uniform approximation \[\begin{split}&\int_{-L_{\epsilon}}^{L_{\epsilon}}\left\langle T_{ f}(0,\xi)^{*}\varphi_{f,\perp},B_{f}(\xi,\lambda)\omega_{f}(\xi)\right\rangle d \xi\\ =&\int_{-L_{\epsilon}}^{L_{\epsilon}}\left\langle e^{- \eta\xi}T_{f}(0,\xi)^{*}\varphi_{f,\perp},B_{f}(\xi,\lambda)\phi^{\prime}_{a, \epsilon}(\xi)\right\rangle d\xi+O(|\epsilon\log\epsilon|^{2})\\ =&\int_{-L_{\epsilon}}^{L_{\epsilon}}\left\langle e^{- \eta\xi}T_{f}(0,\xi)^{*}\varphi_{f,\perp},B_{f}(\xi,0)\phi^{\prime}_{a, \epsilon}(\xi)\right\rangle d\xi\\ &\qquad-\lambda\int_{-L_{\epsilon}}^{L_{\epsilon}}F(0)e^{-c_{0}F ^{2}(0)\xi}(u^{\prime}_{f}(\xi))^{2}d\xi+O(|\epsilon\log\epsilon|^{2})\\ =&-\lambda\int_{-\infty}^{\infty}F(0)e^{-c_{0}F^{2}(0) \xi}(u^{\prime}_{f}(\xi))^{2}d\xi+O(|\epsilon\log\epsilon|^{2}).\end{split} \tag{72}\] By similar calculations together with (19), (70) and (71), one has the next \(a\)-uniform approximation \[\begin{split}&\int_{-L_{\epsilon}}^{L_{\epsilon}}\left\langle T_{b}(0, \xi)^{*}\varphi_{b,\perp},B_{b}(\xi,\lambda)\omega_{b}(\xi)\right\rangle d\xi \\ =&\int_{-L_{\epsilon}}^{L_{\epsilon}}\left\langle e ^{-\eta\xi}T_{b}(0,\xi)^{*}\varphi_{b,\perp},B_{b}(\xi,\lambda)\phi^{\prime}_ {a,\epsilon}(Z_{a,\epsilon}+\xi)\right\rangle d\xi+O(|\epsilon\log\epsilon|^{ 2})\\ =&\int_{-L_{\epsilon}}^{L_{\epsilon}}\left\langle e ^{-\eta\xi}T_{b}(0,\xi)^{*}\varphi_{b,\perp},B_{b}(\xi,0)\phi^{\prime}_{a, \epsilon}(Z_{a,\epsilon}+\xi)\right\rangle d\xi\\ &\qquad-\lambda\int_{-L_{\epsilon}}^{L_{\epsilon}}F(w_{b})e^{-c _{0}F^{2}(w_{b})\xi}(u^{\prime}_{b}(\xi))^{2}d\xi+O(|\epsilon\log\epsilon|^{2} )\\ =&-\left\langle e^{\eta L_{\epsilon}}T_{b}(0,-L_{ \epsilon})^{*}\varphi_{b,\perp},\phi^{\prime}_{a,\epsilon}(Z_{a,\epsilon}-L_ {\epsilon})\right\rangle\\ &\qquad-\lambda\int_{-\infty}^{\infty}F(w_{b})e^{-c_{0}F^{2}(w_ {b})\xi}(u^{\prime}_{b}(\xi))^{2}d\xi+O(|\epsilon\log\epsilon|^{2}).\end{split} \tag{73}\] Using (72) and (73), the matching conditions (67) and (68) can be written in the next form \[\begin{split}&\left(\begin{array}{cc}\lambda M_{f}+O((\epsilon| \log\epsilon|+|\lambda|)^{2})&O(e^{-q/\epsilon})\\ O(e^{-q/\epsilon})&-\lambda M_{b,1}-M_{b,2}+O((\epsilon|\log\epsilon|+| \lambda|)^{2})\end{array}\right)\left(\begin{array}{c}\beta_{f}\\ \beta_{b}\end{array}\right)\\ &=\vec{0},\end{split} \tag{74}\] where the approximations are 
\(a\)-uniformly, \[M_{f}=\int_{-\infty}^{\infty}F(0)e^{-c_{0}F^{2}(0)\xi}(u^{\prime}_{f}(\xi))^{ 2}d\xi, \tag{75}\] and \(M_{b,1}\) and \(M_{b,2}\) are those defined in (60). Hence, any nontrivial solution \((\beta_{f},\beta_{b})\) to system (74) corresponds to an eigenfunction of the shifted eigenvalue system (17). Since all quantities, the perturbed matrices \(B_{j}(\xi,\lambda),\ j=f,\ b\), the evolution \(\mathcal{T}(\xi,\hat{\xi},\lambda)\) of system (17) and the projections \(\mathcal{Q}^{u,s}_{r,l}(\xi,\lambda)\) associated with the exponential dichotomy of (17), occurring in this section are analytic in \(\lambda\), it follows that the matrix in (74) and its determinant \(D(\lambda)=D(\lambda;a,\epsilon)\) are analytic in \(\lambda\). Since \(u^{\prime}_{j}(\xi)\) converges to \(0\) as \(\xi\to\pm\infty\) at an exponential rate, the \(\epsilon\)-independent quantities \(M_{f}\) and \(M_{b,1}\) are at leading order bounded away from \(0\). It follows that \(1/C\leq M_{f},\ M_{b,1}\leq C\). Combining (62) and (70) yields the \(a\)-uniform estimate \(M_{b,2}=O(\epsilon|\log\epsilon|)\). Hence we have \[|D(\lambda)-\lambda M_{f}(\lambda M_{b,1}+M_{b,2})|<|\lambda M_{f}(\lambda M_ {b,1}+M_{b,2})|\] for \(\lambda\in\partial R_{1}(\delta):=\{\lambda\in\mathbb{C}:\ |\lambda|=\delta\}\) with \(\delta,\ \epsilon>0\) sufficiently small. Since the roots of the quadratic equation \(\lambda M_{f}(\lambda M_{b,1}+M_{b,2})=0\) in \(\lambda\) are \(0\) and \(-M_{b,2}M_{b,1}^{-1}\), \(D(\lambda)\) has precisely two roots \(\lambda_{0},\ \lambda_{1}\) in \(R_{1}(\delta)\) by Rouché's Theorem, which are \(a\)-uniformly \(O(|\epsilon\log\epsilon|^{2})\)-close to \(0\) and \(-M_{b,2}M_{b,1}^{-1}\). Thus system (17) has two eigenvalues \(\lambda_{0},\ \lambda_{1}\) in the region \(R_{1}(\delta)\). Let \(\lambda_{1}\) be the eigenvalue that is \(a\)-uniformly \(O(|\epsilon\log\epsilon|^{2})\)-close to \(-M_{b,2}M_{b,1}^{-1}\), and let \(\varphi_{1}(\xi)\) be the associated eigenfunction of system (17). The eigenvector \((\beta_{f},\beta_{b})^{T}=(O(e^{-q/\epsilon}),1)^{T}\) is the associated solution to system (74). Propositions 7, 8 and 9 provide a piecewise continuous eigenfunction to system (17) for any prospective eigenvalue \(\lambda\in R_{1}(\delta)\). Thus, the eigenfunction \(\varphi_{1}(\xi)\) to (17) satisfies (27) on \(I_{f,-}\), (37) on \(I_{f,+}\), (40) on \(I_{r}\), (43) on \(I_{b,-}\), (52) on \(I_{b,+}\), and (56) on \(I_{l}\). Moreover, all variables occurring in these six expressions can be expressed in terms of \(\beta_{f}=O(e^{-q/\epsilon})\) and \(\beta_{b}=1\), and we obtain the approximation (61) of \(\varphi_{1}(\xi)\). By translational invariance, it holds that \(e^{-\eta\xi}\phi^{\prime}_{a,\epsilon}(\xi)\) is an eigenfunction of the shifted eigenvalue system (17) at \(\lambda=0\). Therefore, \(\lambda=0\) is one of the two eigenvalues \(\lambda_{0}\) and \(\lambda_{1}\). According to (61), the eigenfunction \(\varphi_{1}(\xi)\) is not a multiple of \(e^{-\eta\xi}\phi^{\prime}_{a,\epsilon}(\xi)\). By Lemma 2, the asymptotic matrix \(\hat{A}(0,0,\lambda,a,\epsilon)\) of the shifted eigenvalue system (17) has precisely one eigenvalue with positive real part, so the space of solutions to (17) decaying exponentially in backward time is one-dimensional. Therefore, \(\varphi_{1}(\xi)\) and \(e^{-\eta\xi}\phi^{\prime}_{a,\epsilon}(\xi)\) must correspond to different eigenvalues. Consequently, \(\lambda_{0}=0\) and \(\lambda_{1}\neq\lambda_{0}\). 
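To spell out the root-counting step in the proof above: with \(g(\lambda):=\lambda M_{f}(\lambda M_{b,1}+M_{b,2})\), the two roots of \(g\) are computed directly, and the bounds \(1/C\leq M_{f},\ M_{b,1}\leq C\) and \(M_{b,2}=O(\epsilon|\log\epsilon|)\) locate them: \[g(\lambda)=0\iff\lambda=0\ \text{ or }\ \lambda=-\frac{M_{b,2}}{M_{b,1}},\qquad\Big{|}-\frac{M_{b,2}}{M_{b,1}}\Big{|}\leq C^{2}\epsilon|\log\epsilon|<\delta\] for \(\epsilon\) sufficiently small, so both roots lie in \(R_{1}(\delta)\); the inequality \(|D(\lambda)-g(\lambda)|<|g(\lambda)|\) on \(\partial R_{1}(\delta)\) then transfers this count of two zeros from \(g\) to \(D\).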
This completes the proof of the theorem. #### 5.3.6. The translational eigenvalue \(\lambda_{0}=0\) is simple. In this section, we prove that the eigenvalue \(\lambda_{0}=0\) of \(\mathcal{L}_{a,\epsilon}\) is simple. Recall that \(\lambda_{0}=0\) has geometric multiplicity one by the proof of Theorem 5. **Proposition 10**.: _The translational eigenvalue \(\lambda_{0}=0\) of \(\mathcal{L}_{a,\epsilon}\) is simple._ Proof.: According to Theorem 5, the eigenspace of the shifted eigenvalue system (17) at \(\lambda_{0}=0\) is spanned by \(e^{-\eta\xi}\phi^{\prime}_{a,\epsilon}(\xi)\). Thus, translating back to system (14), the kernel \(\ker(\mathcal{L}_{a,\epsilon})\) is one-dimensional and is spanned by \(\widetilde{\phi}^{\prime}_{a,\epsilon}(\xi)=(u^{\prime}_{a,\epsilon}(\xi),w^{\prime}_{a, \epsilon}(\xi))^{T}\). Thus, the geometric multiplicity of \(\lambda_{0}=0\) for \(\mathcal{L}_{a,\epsilon}\) equals one. Next, we will prove that the algebraic multiplicity of \(\lambda_{0}=0\) is also equal to one, i.e., there are no exponentially localized solutions \(\widetilde{\psi}(\xi)\) to the generalized eigenvalue problem \(\mathcal{L}_{a,\epsilon}\widetilde{\psi}=\widetilde{\phi}^{\prime}_{a, \epsilon}(\xi)\). This problem can be rewritten as \[\check{\psi}_{\xi}=A_{0}(\xi,0)\check{\psi}+\partial_{\lambda}A_{0}(\xi,0) \phi^{\prime}_{a,\epsilon}(\xi), \tag{76}\] with \(A_{0}(\xi,0)\) as that in system (14). Recall from Proposition 1 and Lemma 2 that the asymptotic matrices of \(A_{0}(\xi,\lambda)\) and its shifted version \(A(\xi,\lambda)\) have precisely one eigenvalue with positive real part at \(\lambda=0\). Since \(e^{-\eta\xi}\phi^{\prime}_{a,\epsilon}(\xi)\) is exponentially localized, it follows that \(\check{\psi}(\xi)\) is an exponentially localized solution to system (76) if and only if \(\psi(\xi)=e^{-\eta\xi}\check{\psi}(\xi)\) is an exponentially localized solution to the system \[\psi_{\xi}=A(\xi,0)\psi+e^{-\eta\xi}\partial_{\lambda}A(\xi,0)\phi^{\prime}_{ a,\epsilon}(\xi), \tag{77}\] where \(A(\xi,0)\) is the coefficient matrix of the shifted eigenvalue problem (17) at \(\lambda=0\). Again using the fact that \(e^{-\eta\xi}\phi^{\prime}_{a,\epsilon}(\xi)\) is an exponentially localized solution to (17) at \(\lambda=0\), together with Propositions 7, 8 and 9, one can get the solutions of system (17) on different intervals, \(\varphi_{f,-}(\xi,\lambda)\), \(\varphi^{sl}(\xi,\lambda)\) and \(\varphi_{b,+}(\xi,\lambda)\), which are analytic in \(\lambda\) and satisfy \[e^{-\eta\xi}\phi^{\prime}_{a,\epsilon}(\xi)=\varphi_{f,-}(\xi,0), \quad\xi\in(-\infty,0],\] \[e^{-\eta\xi}\phi^{\prime}_{a,\epsilon}(\xi)=\varphi^{sl}(\xi,0), \quad\xi\in[0,Z_{a,\epsilon}],\] \[e^{-\eta\xi}\phi^{\prime}_{a,\epsilon}(\xi)=\varphi_{b,+}(\xi,0), \quad\xi\in[Z_{a,\epsilon},+\infty),\] for some \(\beta_{f,-},\ \beta_{f},\ \zeta_{f},\ \beta_{b},\ \beta_{b,+},\ \zeta_{b,+}\in\mathbb{C}\), which are given in Propositions 7, 8 and 9. As in the proof of Theorem 5, applying the projections \(Q^{u}_{j,-}(0),\ j=f,\ b,\) to the differences \(\varphi_{f,-}(0,0)-\varphi^{sl}(0,0)\) and \(\varphi^{sl}(Z_{a,\epsilon},0)-\varphi_{b,+}(Z_{a,\epsilon},0)\) yields \(\beta_{f,-}=\beta_{f}\) and \(\beta_{b,+}=\beta_{b}\). According to (66), \(\zeta_{f}\) and \(\zeta_{b,+}\) can also be treated as functions of \(\beta_{b}\) and \(\beta_{f}\). 
Moreover, we derive \[\zeta_{f}=\zeta_{f}(\beta_{b},\,\beta_{f}),\ \zeta_{b,+}=\zeta_{b,+}( \beta_{b},\beta_{f}),\] \[|\zeta_{f}(\beta_{b},\beta_{f})|\leq C\left(\epsilon|\log\epsilon ||\beta_{f}|+e^{-q/\epsilon}|\beta_{b}|\right),\] \[|\zeta_{b,+}(\beta_{b},\beta_{f})|\leq C\left(\epsilon|\log \epsilon||\beta_{b}|+e^{-q/\epsilon}|\beta_{f}|\right).\] where \(C>0\) is a constant independent of \(a\) and \(\epsilon\). Note that \(\partial_{\lambda}\varphi_{f,-}(\xi,0),\ \partial_{\lambda}\varphi^{sl}(\xi,0)\) and \(\partial_{\lambda}\varphi_{b,+}(\xi,0)\) are particular solutions to equation (77) on \((-\infty,0],\ [0,Z_{a,\epsilon}]\) and \([Z_{a,\epsilon},+\infty)\) respectively, and that the space of exponentially localized solutions to the homogeneous problem (17) associated to (77) is spanned by \(e^{-\eta\xi}\phi^{\prime}_{a,\epsilon}(\xi)\). Suppose \(\psi(\xi)\) is an exponentially localized solution to (77). Then, it holds \[\psi(\xi)=\partial_{\lambda}\varphi_{f,-}(\xi,0)+\alpha_{1}e^{- \eta\xi}\phi^{\prime}_{a,\epsilon}(\xi), \quad\xi\in(-\infty,0],\] \[\psi(\xi)=\partial_{\lambda}\varphi^{sl}(\xi,0)+\alpha_{2}e^{- \eta\xi}\phi^{\prime}_{a,\epsilon}(\xi), \quad\xi\in[0,Z_{a,\epsilon}],\] \[\psi(\xi)=\partial_{\lambda}\varphi_{b,+}(\xi,0)+\alpha_{3}e^{- \eta\xi}\phi^{\prime}_{a,\epsilon}(\xi), \quad\xi\in[Z_{a,\epsilon},+\infty), \tag{78}\] for some \(\alpha_{1},\ \alpha_{2},\ \alpha_{3}\in\mathbb{C}\). Differentiating the analytic expressions (27) and (32) with respect to \(\lambda\) derives \[\partial_{\lambda}\varphi_{f,-}(0,0)=\beta_{f}\int_{-\infty}^{0} T^{s}_{f,-}(0,\hat{\xi})\partial_{\lambda}B_{f}(\hat{\xi},0)\omega_{f}(\hat{\xi})d \hat{\xi}+\mathcal{H}_{1}(\beta_{f}),\] \[\partial_{\lambda}\varphi^{sl}(0,0)=\beta_{f}\int_{L_{\epsilon}} ^{0}T^{u}_{f,+}(0,\hat{\xi})\partial_{\lambda}B_{f}(\hat{\xi},0)\omega_{f}( \hat{\xi})d\hat{\xi}+\mathcal{H}_{2}(\beta_{f},\beta_{b}),\] \[\|\mathcal{H}_{1}(\beta_{f})\|\leq C\epsilon|\log\epsilon||\beta_ {f}|,\] \[\|\mathcal{H}_{2}(\beta_{f},\beta_{b})\|\leq C(\epsilon|\log \epsilon||\beta_{f}|+e^{-q/\epsilon}|\beta_{b}|), \tag{79}\] where \[\partial_{\lambda}B_{f}(\hat{\xi},0)=\left(\begin{array}{ccc}0&0&0\\ F(w_{a,\epsilon}(\xi))&0&0\\ 0&0&-\frac{1}{c}\end{array}\right)=:\widetilde{B}.\] By Theorem 1\((i)\), for \(\xi\in J_{f}=(-\infty,L_{\epsilon}]\), we obtain \[\|\varphi_{f,a,\epsilon}-\varphi_{1,f}\|\leq C\epsilon|\log\epsilon|,\ \text{where}\ \varphi_{f,a,\epsilon}=\left(\begin{array}{c}v^{\prime}_{a,\epsilon}(0)\\ -u^{\prime}_{a,\epsilon}(0)\\ 0\end{array}\right). \tag{80}\] Some calculations show that \(\varphi_{f,a,\epsilon}\perp\phi^{\prime}_{a,\epsilon}(0)\). 
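The orthogonality just claimed can be checked in one line: since \(\phi^{\prime}_{a,\epsilon}(0)=(u^{\prime}_{a,\epsilon}(0),v^{\prime}_{a,\epsilon}(0),w^{\prime}_{a,\epsilon}(0))^{T}\), we compute \[\langle\varphi_{f,a,\epsilon},\ \phi^{\prime}_{a,\epsilon}(0)\rangle=v^{\prime}_{a,\epsilon}(0)u^{\prime}_{a,\epsilon}(0)-u^{\prime}_{a,\epsilon}(0)v^{\prime}_{a,\epsilon}(0)+0\cdot w^{\prime}_{a,\epsilon}(0)=0.\]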
Moreover, by expressions (25), it holds \[\varphi_{1,f}\in\left(\mathrm{R}(Q^{s}_{f,-}(0))\right)\cap\left(\mathrm{R}(Q^{u}_{f,+}(0))\right).\] Combining these results with (78), (79), (80) and (19) yields \[\begin{split} 0&=\left\langle\varphi_{f,a, \epsilon},\ \partial_{\lambda}\varphi_{f,-}(0,0)-\partial_{\lambda}\varphi^{sl}(0,0)+( \alpha_{1}-\alpha_{2})\phi^{\prime}_{a,\epsilon}(0)\right\rangle\\ &=\left\langle\varphi_{f,a,\epsilon},\ \partial_{\lambda} \varphi_{f,-}(0,0)-\partial_{\lambda}\varphi^{sl}(0,0)\right\rangle\\ &=\beta_{f}\left(\int_{-\infty}^{L_{\epsilon}}\left\langle T_{f} (0,\xi)^{*}\varphi_{1,f},\widetilde{B}\omega_{f}(\xi)\right\rangle d\xi+O( \epsilon|\log\epsilon|)\right)+\beta_{b}O(e^{-q/\epsilon})\\ &=\beta_{f}(-M_{f}+O(\epsilon|\log\epsilon|))+\beta_{b}O(e^{-q/ \epsilon}),\end{split} \tag{81}\] with the asymptotic expression holding \(a\)-uniformly, where \(M_{f}\) is defined in (75). Let \(\varphi_{b,a,\epsilon}=(v^{\prime}_{a,\epsilon}(Z_{a,\epsilon}),-u^{\prime}_{ a,\epsilon}(Z_{a,\epsilon}),0)^{T}\). Similar calculation as above shows \[\begin{split} 0&=\left\langle\varphi_{b,a, \epsilon},\ \partial_{\lambda}\varphi^{sl}(Z_{a,\epsilon},0)-\partial_{\lambda} \varphi_{b,+}(Z_{a,\epsilon},0)+(\alpha_{2}-\alpha_{3})e^{-\eta Z_{a,\epsilon} }\phi^{\prime}_{a,\epsilon}(Z_{a,\epsilon})\right\rangle\\ &=\beta_{b}(-M_{b,1}+O(\epsilon|\log\epsilon|))+\beta_{f}O(e^{-q /\epsilon}),\end{split} \tag{82}\] with the asymptotic expression holding \(a\)-uniformly, where \(M_{b,1}\) is defined in (60). The conditions (81) and (82) form a system \[\left(\begin{array}{cc}-M_{f}+O(\epsilon|\log\epsilon|)&O(e^{-q/\epsilon} )\\ O(e^{-q/\epsilon})&-M_{b,1}+O(\epsilon|\log\epsilon|)\end{array}\right) \left(\begin{array}{c}\beta_{f}\\ \beta_{b}\end{array}\right)=\vec{0}. \tag{83}\] Since \(M_{f},\ M_{b,1}>0\) are independent of \(\epsilon\) and bounded below away from \(0\) uniformly in \(a\), system (83) has only the trivial solution \(\beta_{f}=\beta_{b}=0\). This contradicts the fact that \(e^{-\eta\xi}\phi^{\prime}_{a,\epsilon}(\xi)\) is not the zero solution to the shifted eigenvalue system (17). We conclude that system (77) has no exponentially localized solution and that the algebraic multiplicity of the eigenvalue \(\lambda=0\) of \(\mathcal{L}_{a,\epsilon}\) is also equal to one. #### 5.3.7. Approximate calculation of \(\lambda_{1}\) By Theorem 5, the second eigenvalue \(\lambda_{1}\in R_{1}(\delta)\) of the shifted eigenvalue system (17) is \(a\)-uniformly \(O(|\epsilon\log\epsilon|^{2})\)-close to \(-M_{b,2}M_{b,1}^{-1}\). Thus, to prove our main stability results, we need to show that \(-M_{b,2}M_{b,1}^{-1}\leq-b_{0}\epsilon\) for some constant \(b_{0}>0\) independent of \(a\) and \(\epsilon\). 
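Concretely, once Proposition 11 below gives \(M_{b,2}\geq\epsilon/k_{2}\), this bound combines with \(1/C\leq M_{b,1}\leq C\) from Theorem 5 to yield \[\lambda_{1}=-\frac{M_{b,2}}{M_{b,1}}+O(|\epsilon\log\epsilon|^{2})\leq-\frac{\epsilon}{k_{2}C}+O(|\epsilon\log\epsilon|^{2})\leq-b_{0}\epsilon,\qquad b_{0}:=\frac{1}{2k_{2}C},\] for all \(\epsilon>0\) sufficiently small, with \(b_{0}\) independent of \(a\) and \(\epsilon\).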
**Proposition 11**.: _For \(k_{1}>0\) small given in Lemma 2, there exists an \(\epsilon_{0}>0\) such that for each \((a,\epsilon)\in[0,\frac{1}{2}-k_{1}]\times(0,\epsilon_{0})\), \(M_{b,2}\) in Theorem 5 has the \(a\)-uniformly approximated expression_ \[\begin{split} M_{b,2}=&-\frac{\epsilon}{c_{0}}(u_{b} ^{1}-\gamma w_{b})\left(\int_{-\infty}^{\infty}u^{\prime}_{b}(z)e^{-c_{0}F^{2 }(w_{b})z}F(w_{b})u_{b}(z)dz\right.\\ &\qquad\qquad\left.+c_{0}F_{w}(w_{b})\int_{-\infty}^{\infty}(u^{ \prime}_{b}(z))^{2}e^{-c_{0}F^{2}(w_{b})z}dz\right)+O\left(\epsilon^{2}|\log \epsilon|\right).\end{split}\] _In particular, we have \(M_{b,2}\geq\epsilon/k_{2}\) for some \(k_{2}>1\), independent of \(a\) and \(\epsilon\)._ Proof.: The back solution \(\phi_{b}(\xi)\) to system (3.5) converges to \((u_{b}^{1},0)^{T}\) as \(\xi\to-\infty\) with the exponential rate \(\sqrt{\frac{k}{2}}F(w_{b})U_{2}(w_{b})\), where \[u_{b}^{1}\triangleq U_{2}(w_{b})=\frac{1+a}{2}+\frac{1}{2}\sqrt{(1-a)^{2}-\frac {4w_{b}}{k}}.\] Combining this with Theorem 1\((i)\), the condition (19) and \(c=c_{0}+O(\epsilon)\), we obtain the estimate \[w_{a,\epsilon}^{\prime}(Z_{a,\epsilon}-L_{\epsilon}) =\frac{\epsilon}{c}(u_{a,\epsilon}(Z_{a,\epsilon}-L_{\epsilon})- \gamma w_{a,\epsilon}(Z_{a,\epsilon}-L_{\epsilon}))\] \[=\frac{\epsilon}{c_{0}}(u_{b}(-L_{\epsilon})-\gamma w_{b})+O( \epsilon^{2}|\log\epsilon|)\] \[=\frac{\epsilon}{c_{0}}(u_{b}^{1}-\gamma w_{b})+O(\epsilon^{2}| \log\epsilon|).\] It follows that \[M_{b,2} =\left\langle\left(\begin{array}{c}e^{c_{0}F^{2}(w_{b})L_{ \epsilon}}v_{b}^{\prime}(-L_{\epsilon})\\ -e^{c_{0}F^{2}(w_{b})L_{\epsilon}}u_{b}^{\prime}(-L_{\epsilon})\\ \int_{\infty}^{-L_{\epsilon}}u_{b}^{\prime}(z)e^{-c_{0}F^{2}(w_{b})z}\Delta_{ b}(z)dz\end{array}\right),\left(\begin{array}{c}u_{a,\epsilon}^{\prime}(Z_{a, \epsilon}-L_{\epsilon})\\ v_{a,\epsilon}^{\prime}(Z_{a,\epsilon}-L_{\epsilon})\\ w_{a,\epsilon}^{\prime}(Z_{a,\epsilon}-L_{\epsilon})\end{array}\right)\right\rangle\] \[=\frac{\epsilon}{c_{0}}\left(u_{b}^{1}-\gamma w_{b}\right)\int_{ \infty}^{-\infty}u_{b}^{\prime}(z)e^{-c_{0}F^{2}(w_{b})z}\left(F(w_{b})u_{b}(z )+\frac{2F_{w}(w_{b})}{F^{2}(w_{b})}u_{b}^{\prime\prime}(z)\right)dz\] \[\qquad\qquad+O(\epsilon^{2}|\log\epsilon|)\] \[=-\frac{\epsilon}{c_{0}}(u_{b}^{1}-\gamma w_{b})\left(\int_{- \infty}^{+\infty}u_{b}^{\prime}(z)e^{-c_{0}F^{2}(w_{b})z}F(w_{b})u_{b}(z)dz \right.\] \[\qquad\qquad\left.+c_{0}F_{w}(w_{b})\int_{-\infty}^{+\infty}(u_{ b}^{\prime}(z))^{2}e^{-c_{0}F^{2}(w_{b})z}dz\right)+O(\epsilon^{2}|\log \epsilon|).\] Recall that \(u_{b}^{1}-\gamma w_{b}>0\), \(u_{b}(z)>0\), \(u_{b}^{\prime}(z)=v_{b}(z)<0\) and \[F_{w}(w_{b})=-\frac{1}{2c_{1}}\frac{1}{\sqrt{(1+\frac{M}{2c_{1}})^{2}-\frac{2} {c_{1}}w_{b}}}<0.\] It clearly holds that \(M_{b,2}\geq\epsilon/k_{2}\) for some \(k_{2}>1\), independent of \(a\) and \(\epsilon\). ### The Region \(R_{2}(\delta,\widetilde{M})\) The purpose of this section is to prove that the region \(R_{2}(\delta,\widetilde{M})\) does not contain any eigenvalue of system (17) for any \(\widetilde{M}>0\) and each \(\delta>0\) sufficiently small. As mentioned in the previous sections, our method is to prove that system (17) admits exponential dichotomies on each of the intervals \(I_{f},\ I_{r},\ I_{b}\) and \(I_{l}\), which together form a partition of the entire real line \(\mathbb{R}\). Recall that system (17) admits exponential dichotomies on \(I_{r}\) and \(I_{l}\) by Proposition 4. 
Using the roughness results, the exponential dichotomies of the reduced eigenvalue problem generate the exponential dichotomies of system (17) on \(I_{f}\) and \(I_{b}\). Our plan is to compare the projections of the above exponential dichotomies at the endpoints of the intervals \(I_{f},\ I_{r},\ I_{b}\) and \(I_{l}\). The resulting estimates show that, for \(\lambda\in R_{2}\), any exponentially localized solution of system (17) must be trivial. #### 5.4.1. A reduced eigenvalue problem Similar to Section 5.3.1, we obtain a reduced eigenvalue problem by setting \(\epsilon\) to \(0\) in system (17) for \(\xi\) in \(I_{f}\) or \(I_{b}\) and \(\lambda\in R_{2}.\) Thus, the reduced eigenvalue problem is of the form \[\varphi^{\prime}=A_{j}(\xi,\lambda)\varphi,\ j=f,\ b, \tag{84}\] where \[A_{j}(\xi,\lambda)=A_{j}(\xi,\lambda;a)=\left(\begin{array}{cc}-\eta&F(w_{j })&0\\ F(w_{j})\left(f_{u}(u_{j},w_{j})+\lambda\right)&c_{0}F^{2}(w_{j})-\eta&\Delta_{ 2,j}\\ 0&0&-\frac{\lambda}{c_{0}}-\eta\end{array}\right),\] and \[\Delta_{2,j}=F(w_{j})f_{w}(u_{j},w_{j})+2\frac{F_{w}(w_{j})}{F^{2}(w_{j})}u_{j }^{\prime\prime}-\frac{\lambda}{c_{0}F_{w}(w_{j})}u_{j}^{\prime},\ j=f,\ b.\] Here \(u_{j}(\xi)\) denotes the \(u\)-component of \(\phi_{j}\), \(a\in[0,\frac{1}{2}-k_{1}]\) and \(\lambda\in R_{2}(\delta,\widetilde{M})\). By the particular structure of the coefficient matrix \(A_{j}(\xi,\lambda)\), the linear differential system (84) admits an invariant subspace \(\mathbb{C}^{2}\times\{0\}\subset\mathbb{C}^{3}\) on which the dynamics are given by \[\psi^{\prime}=C_{j}(\xi,\lambda)\psi,\ \ \ \ j=f,\ b, \tag{85}\] with \[C_{j}(\xi,\lambda)=\left(\begin{array}{cc}-\eta&F(w_{j})\\ F(w_{j})\left(f_{u}(u_{j},w_{j})+\lambda\right)&c_{0}F^{2}(w_{j})-\eta\end{array} \right).\] Next, we will show that systems (84) and (85) admit exponential dichotomies on both half-lines. The equation for \(u_{f}\) is \[\begin{array}{l}u_{f}^{\prime}=F(0)v_{f},\\ v_{f}^{\prime}=c_{0}F^{2}(0)v_{f}+f(u_{f},0)F(0).\end{array}\] The reduced linear eigenvalue problem along the front \(u_{f}\) is given by \[\left(\begin{array}{c}p\\ q\end{array}\right)^{\prime}=\left(\begin{array}{cc}0&F(0)\\ F(0)\left(f_{u}(u_{f},0)+\lambda\right)&c_{0}F^{2}(0)\end{array}\right)\left( \begin{array}{c}p\\ q\end{array}\right),\] and the associated shifted eigenvalue problem is \[\left(\begin{array}{c}p\\ q\end{array}\right)^{\prime}=\left(\begin{array}{cc}-\eta&F(0)\\ F(0)(f_{u}(u_{f},0)+\lambda)&c_{0}F^{2}(0)-\eta\end{array}\right)\left( \begin{array}{c}p\\ q\end{array}\right).\] Analogously, the reduced equation along the back is given by \[\begin{array}{l}u_{b}^{\prime}=F(w_{b})v_{b},\\ v_{b}^{\prime}=c_{0}F^{2}(w_{b})v_{b}+f(u_{b},w_{b})F(w_{b}),\end{array}\] and the associated shifted eigenvalue problem is \[\left(\begin{array}{c}p\\ q\end{array}\right)^{\prime}=\left(\begin{array}{cc}-\eta&F(w_{b})\\ F(w_{b})\left(f_{u}(u_{b},w_{b})+\lambda\right)&c_{0}F^{2}(w_{b})-\eta\end{array} \right)\left(\begin{array}{c}p\\ q\end{array}\right).\] Thus \(e^{-\eta\xi}\phi_{j}^{\prime}(\xi)\) is an exponentially localized solution of system (85) at \(\lambda=0\), and it has no zeros. According to the Sturm-Liouville theorem (Theorem 10), the eigenvalues of system (85) are finitely many and simple. Moreover, \(\lambda=0\) is the maximum eigenvalue. Thus, system (85) admits no exponentially localized solutions for \(\lambda\in R_{2}(\delta,\widetilde{M})\) with \(\delta>0\) sufficiently small. Then system (85) admits an exponential dichotomy on \(\mathbb{R}\). 
This is the content of the following proposition. **Proposition 12**.: _Let \(k_{1},\ \widetilde{M}>0\). For each \(\delta>0\) sufficiently small, \(a\in[0,\frac{1}{2}-k_{1}]\) and \(\lambda\in R_{2}(\delta,\widetilde{M})\) system (84) admits exponential dichotomies on \(\mathbb{R}\) with \(\lambda\)- and \(a\)-independent constants \(C,\ \mu/2>0\)._ Proof.: According to Lemma 2, it follows that the asymptotic matrices \[C_{j,\pm\infty}(\lambda)=C_{j,\pm\infty}(\lambda;a):=\lim_{\xi\to\pm\infty}C_{ j}(\xi,\lambda)\] of system (85) admit a uniform spectral gap larger than \(\mu>0\) for \(a\in[0,\frac{1}{2}-k_{1}]\) and \(\lambda\in R_{2}(\delta,\widetilde{M})\). So system (85) admits exponential dichotomies on both the half-lines with constants \(C,\ \mu>0\) and projections \(\Pi^{u,s}_{j,\pm}(\xi,\lambda)=\Pi^{u,s}_{j,\pm}(\xi,\lambda;a),\ j=f,\ b\) by Theorems 9 and 8. Since \(R_{2}(\delta,\widetilde{M})\times[0,\frac{1}{2}-k_{1}]\) is compact, the constant \(C>0\) can be chosen independent of \(\lambda\) and \(a\). Since \(\lambda=0\) is the maximum eigenvalue of the coefficient matrix associated to system (85), this linear differential system admits no bounded solutions for \(\lambda\in R_{2}(\delta,\widetilde{M})\). According to [8, p.16-19], we can paste the exponential dichotomies by defining \(\Pi^{s}_{j}(0,\lambda)\) to be the projection onto \(\mathrm{R}(\Pi^{s}_{j,+}(0,\lambda))\) along \(\mathrm{R}(\Pi^{u}_{j,-}(0,\lambda))\). Thus, there exists an exponential dichotomy for \((\lambda,a)\in R_{2}(\delta,\widetilde{M})\times[0,\frac{1}{2}-k_{1}]\) to system (85) on \(\mathbb{R}\) with \(\lambda\)- and \(a\)-independent constants \(C,\ \mu>0\) and projections \(\Pi^{u,s}_{j}(\xi,\lambda)=\Pi^{u,s}_{j}(\xi,\lambda;a),\ j=f,\ b\). Similar to the proof of Proposition 6, by the variation of constants formula the exponential dichotomy of the subsystem (85) on \(\mathbb{R}\) can be transferred to the full system (84). By taking \(\delta>0\) sufficiently small, the exponential dichotomy on \(\mathbb{R}\) of system (84) has the constant \(C\) independent of \(a\) and \(\lambda\) and the constant \(\min\{\mu,\eta-\frac{\delta}{c_{0}}\}\geq\frac{\mu}{2}\). Next, we will show that the shifted eigenvalue system (17) admits no nontrivial exponentially localized solution in the region \(R_{2}(\delta,\widetilde{M})\), namely, the operator \(\mathcal{L}_{a,\epsilon}\) admits no spectrum in the region \(R_{2}(\delta,\widetilde{M})\). ### Absence of point spectrum in \(R_{2}(\delta,\widetilde{M})\) For proving our results we need the next result, which provides an estimation on projections of the evolution operator of a linear differential system in case of exponential dichotomy. **Lemma 3**.: ([4, **Lemma** 6.19]) _Let \(n\in\mathbb{N},\ a,\ b\in\mathbb{R}\) with \(a<b\) and \(A\in C([a,b],\ \mathbf{Mat}_{n\times n}(\mathbb{C}))\). Suppose that the linear differential system_ \[\varphi_{x}=A(x)\varphi, \tag{86}\] _has an exponential dichotomy on \([a,b]\) with constants \(C,m>0\) and projections \(P_{1}^{u,s}(x)\). Denote by \(T(x,y)\) the evolution of system (86). Let \(P_{2}\) be a projection such that \(\|P_{1}^{s}(b)-P_{2}\|\leq\delta_{0}\) for some \(\delta_{0}>0\), and let \(\nu\in\mathbb{C}^{n}\) be a vector such that \(\|P_{1}^{s}(a)\nu\|\leq k\|P_{1}^{u}(a)\nu\|\) for some \(k\geq 0\). 
If \(\delta_{0}(1+kC^{2}e^{-2m(b-a)})<1\), then it holds_ \[\|P_{2}T(b,a)\nu\|\leq\frac{\delta_{0}+kC^{2}e^{-2m(b-a)}(1+\delta_{0})}{1- \delta_{0}(1+kC^{2}e^{-2m(b-a)})}\|(1-P_{2})T(b,a)\nu\|.\] Recall again that \(1\) denotes the unit matrix or the identity operator in case of no confusion. **Proposition 13**.: _Let \(k_{1},\widetilde{M}>0\). For each \(\delta>0\) sufficiently small, \(a\in[0,\frac{1}{2}-k_{1}]\) and \(\lambda\in R_{2}(\delta,\widetilde{M})\), system (17) admits no nontrivial exponentially localized solution._ Proof.: By Theorem 1\((i)\), there exists an \(\epsilon_{0}>0\) such that for \(\epsilon\in(0,\epsilon_{0})\) one has the next estimation \[\|A(\xi,\lambda)-A_{f}(\xi,\lambda)\| \leq C\epsilon|\log\epsilon|,\quad\xi\in(-\infty,L_{\epsilon}],\] \[\|A(Z_{a,\epsilon}+\xi,\lambda)-A_{b}(\xi,\lambda)\| \leq C\epsilon|\log\epsilon|,\quad\xi\in[-L_{\epsilon},L_{ \epsilon}], \tag{87}\] with \(C>0\) a constant independent of \(\lambda,\ a,\ \epsilon\). From Proposition 12, there exists an exponential dichotomy to system (84) on \(\mathbb{R}\) with \(\lambda\)- and \(a\)- independent constants \(C,\ \mu/2>0\) and projections \(Q_{j}^{u,s}(\xi,\lambda)=Q_{j}^{u,s}(\xi,\lambda;a),\ j=f,\ b.\) The spectral projection onto the (un)stable eigenspace of the asymptotic matrices \(A_{j,\pm}(\lambda)=A_{j,\pm}(\lambda;a)\) of system (84) is denoted by \(P_{j,\pm}^{u,s}(\lambda)=P_{j,\pm}^{u,s}(\lambda;a)\). As in the proof of Proposition 8, it follows that \[\begin{split}\left\|Q_{f}^{u,s}(\pm\xi,\lambda)-P_{f,\pm}^{u,s}( \lambda)\right\|&\leq C\left(e^{-\sqrt{\frac{k}{2}}F(0)\xi}+e^{- \frac{\mu}{2}\xi}\right),\\ \left\|Q_{b}^{u,s}(\pm\xi,\lambda)-P_{b,\pm}^{u,s}(\lambda)\right\| &\leq C\left(e^{-\sqrt{\frac{k}{2}}F(w_{b})U_{2}(w_{b})\xi}+e^{- \frac{\mu}{2}\xi}\right)\end{split} \tag{88}\] for \(\xi\geq 0\). By the estimations in (87) and Theorem 7, the shifted eigenvalue problem (17) admits exponential dichotomies on \(I_{f}=(-\infty,L_{\epsilon}]\) and on \(I_{b}=[Z_{a,\epsilon}-L_{\epsilon},Z_{a,\epsilon}+L_{\epsilon}]\) with \(\lambda\)- and \(a\)-independent constants \(C,\mu/2>0\) and projections \(\mathcal{Q}_{j}^{u,s}(\xi,\lambda)=\mathcal{Q}_{j}^{u,s}(\xi,\lambda;a, \epsilon),\ j=f,\ b,\) which satisfy \[\begin{split}\left\|\mathcal{Q}_{f}^{u,s}(\xi,\lambda)-Q_{f}^{u,s }(\xi,\lambda)\right\|&\leq C\epsilon|\log\epsilon|,\\ \left\|\mathcal{Q}_{b}^{u,s}(Z_{a,\epsilon}+\xi,\lambda)-Q_{b}^{u,s }(\xi,\lambda)\right\|&\leq C\epsilon|\log\epsilon|,\end{split} \left|\xi\right|\leq L_{\epsilon}. \tag{89}\] Simultaneously, by Proposition 4, system (17) admits exponential dichotomies on \(I_{r}=[L_{\epsilon},Z_{a,\epsilon}-L_{\epsilon}]\) and on \(I_{l}=[Z_{a,\epsilon}+L_{\epsilon},\infty)\) with \(\lambda\)- and \(a\)-independent constants \(C,\mu>0\) and projections \(\mathcal{Q}_{r,l}^{u,s}(\xi,\lambda)=\mathcal{Q}_{r,l}^{u,s}(\xi,\lambda;a, \epsilon)\), which satisfy \[\|(\mathcal{Q}_{r}^{s}-\mathcal{P})(L_{\epsilon},\lambda)\|,\ \|(\mathcal{Q}_{r}^{s}- \mathcal{P})(Z_{a,\epsilon}-L_{\epsilon},\lambda)\|,\ \|(\mathcal{Q}_{l}^{s}- \mathcal{P})(Z_{a,\epsilon}+L_{\epsilon},\lambda)\|\leq C\epsilon|\log \epsilon|, \tag{90}\] where \(\mathcal{P}(\xi,\lambda)=\mathcal{P}(\xi,\lambda;a,\epsilon)\) is the spectral projection onto the stable eigenspace of \(A(\xi,\lambda)\). Since \(A_{f}(\xi,\lambda)\) (resp. \(A_{b}(\xi,\lambda)\)) converges at the exponential rate \(\sqrt{\frac{k}{2}}F(0)\) (resp. 
\(\sqrt{\frac{k}{2}}F(w_{b})U_{2}(w_{b})\)) to the asymptotic matrix \(A_{f,\pm}(\lambda)\) (resp. \(A_{b,\pm}(\lambda)\)) as \(\xi\to\pm\infty\). Combining this with (87) and (19), we obtain \[\|A(L_{\epsilon},\lambda)-A_{f,+}(\lambda)\|,\ \|A(Z_{a,\epsilon}\pm L_{\epsilon}, \lambda)-A_{b,\pm}(\lambda)\|\leq C\epsilon|\log\epsilon|.\] By continuity, the spectral projections associated with these matrices admit the same bound, namely, \[\begin{array}{l}\|(\mathcal{P}-P_{f,+}^{s})(L_{\epsilon},\lambda)\|,\,\|[(1- \mathcal{P})-P_{f,+}^{u}](L_{\epsilon},\lambda)\|\leq C\epsilon|\log\epsilon|,\\ \|(\mathcal{P}(Z_{a,\epsilon}\pm L_{\epsilon},\lambda)-P_{f,\pm}^{s}(\lambda) \|,\,\|(1-\mathcal{P})(Z_{a,\epsilon}\pm L_{\epsilon},\lambda)-P_{b,\pm}^{u}( \lambda)\|\leq C\epsilon|\log\epsilon|.\end{array} \tag{91}\] By (88), (89), (90), (91) and (19), it holds \[\begin{array}{l}\|(\mathcal{Q}_{r}^{u,s}-\mathcal{Q}_{f}^{u,s})(L_{\epsilon },\lambda)\|\leq C\epsilon|\log\epsilon|,\\ \|(\mathcal{Q}_{l}^{u,s}-\mathcal{Q}_{b}^{u,s})(Z_{a,\epsilon}+L_{\epsilon}, \lambda)\|,\,\|(\mathcal{Q}_{r}^{u,s}-\mathcal{Q}_{b}^{u,s})(Z_{a,\epsilon}- L_{\epsilon},\lambda)\|\leq C\epsilon|\log\epsilon|.\end{array} \tag{92}\] Denote by \(\varphi(\xi)\) an exponentially localized solution of system (17) at some \(\lambda\in R_{2}(\delta,\widetilde{M})\), and it holds \(\mathcal{Q}_{f}^{s}(0,\lambda)\varphi(0)=0\). Combining (92) with \(\nu\geq 2/\mu\), together with Lemma 3, we obtain \[\|\mathcal{Q}_{r}^{s}(L_{\epsilon},\lambda)\varphi(L_{\epsilon})\|\leq C \epsilon|\log\epsilon|\|\mathcal{Q}_{r}^{u}(L_{\epsilon},\lambda)\varphi(L_{ \epsilon})\|. \tag{93}\] At the endpoint \(Z_{a,\epsilon}-L_{\epsilon}\), applying Lemma 3 to the inequality (93) and using (92) we get a similar inequality as that in (93) \[\|\mathcal{Q}_{b}^{s}(Z_{a,\epsilon}-L_{\epsilon},\lambda)\varphi(Z_{a, \epsilon}-L_{\epsilon})\|\leq C\epsilon|\log\epsilon|\|\mathcal{Q}_{b}^{u}(Z_{ a,\epsilon}-L_{\epsilon},\lambda)\varphi(Z_{a,\epsilon}-L_{\epsilon})\|.\] Applying Lemma 3 again yields \[\begin{array}{ll}\|\mathcal{Q}_{l}^{s}(Z_{a,\epsilon}+L_{\epsilon},\lambda) \varphi(Z_{a,\epsilon}+L_{\epsilon})\|&\leq C\epsilon|\log\epsilon|\|\mathcal{ Q}_{l}^{u}(Z_{a,\epsilon}+L_{\epsilon},\lambda)\varphi(Z_{a,\epsilon}+L_{ \epsilon})\|\\ &=0.\end{array}\] This proves that \(\varphi(\xi)\) is the trivial solution of system (17). Now we have enough preparations to prove our main results. ## 6. Proof of the main results ### Proof of Theorem 2 In the regime \(a\in(0,1/2)\), the essential spectrum of \(\mathcal{L}_{a,\epsilon}\) is contained in the half plane \(\{\lambda\in\mathbb{C}|\ \text{Re}(\lambda)\leq\max\{-\epsilon\gamma,-ka\}\}\) by Proposition 1. From subsection 5.1 and Proposition 13 the regions \(R_{2}\) and \(R_{3}\) do not intersect the point spectrum of \(\mathcal{L}_{a,\epsilon}\). By Theorem 5 and Proposition 10, the point spectrum in \(R_{1}(\delta)\) to the right-hand side of the essential spectrum admits at most two eigenvalues. One of the eigenvalues is the simple translational eigenvalue \(\lambda_{0}=0\) and the other real eigenvalue is \(\lambda_{1}\), which is approximated by \(-M_{b,2}M_{b,1}^{-1}\), where \(M_{b,1}>0\) is \(\epsilon\)-independent and bounded by an \(a\)-independent constant. Finally, applying Proposition 11 to estimate \(M_{b,2}\) arrives the conclusion that there exists a constant \(b_{0}>0\) such that \(\lambda_{1}\leq-\epsilon b_{0}\). This finishes the proof of the theorem. 
### Proof of Theorem 4 The proof follows directly from Propositions 1 and 3, subsection 5.1 and the fact that the shifted eigenvalue system (17) does not have an eigenvalue in the region \(\Omega_{+}\). We are done. ### Appendix A: The proof of Theorem 1 For \(a\in(0,\ 1/2)\), by Fenichel theory [15, 16, 17], the segments \(S^{r}_{0}\) and \(L_{0}\) persist for sufficiently small \(\epsilon>0\) as locally invariant manifolds \(S^{r}_{\epsilon}\) and \(L_{\epsilon}\). When \(a=0\), this argument holds too, which was obtained by Shen and Zhang [40] combining Fenichel's three theorems and the center manifold theorem. In both cases \(L_{\epsilon}\) approaches the origin as \(\xi\to+\infty\). In addition, the center-stable manifold \(W^{s}(S^{r}_{0})\) and the center-unstable manifold \(W^{u}(S^{r}_{0})\) persist as locally invariant center-stable and center-unstable manifolds \(W^{s}(S^{r}_{\epsilon})\) and \(W^{u}(S^{r}_{\epsilon})\), respectively. Similarity, the center-stable manifold \(W^{s}(L_{0})\) and the center-unstable manifold \(W^{u}(L_{0})\) persist as locally invariant center-stable and center-unstable manifolds \(W^{s}(L_{\epsilon})\) and \(W^{u}(L_{\epsilon})\), respectively. For any \(r\in\mathbb{N}\), there exists a \(C^{r}\)-change of coordinates \(\Psi_{\epsilon}:\mathcal{U}\to\mathbb{R}^{3}\), with \(\mathcal{U}\) an \(\epsilon\)-independent open neighborhood of \(S^{r}_{\epsilon}\), such that the flow under this new coordinates is given by the Fenichel normal form system [17, 26] (with notation abuse) \[\begin{split} U^{\prime}&=-\Lambda(U,V,W;c,a, \epsilon)U,\\ V^{\prime}&=\Gamma(U,V,W;c,a,\epsilon)V,\\ W^{\prime}&=\epsilon(1+H(U,V,W;c,a,\epsilon)UV). \end{split} \tag{94}\] Here the functions \(\Lambda,\ \Gamma\) and \(H\) are \(C^{r}\), and \(\Lambda\) and \(\Gamma\) are bounded below away from zero. The slow manifold \(S^{r}_{\epsilon}\) is represented by \(U=V=0\), and the manifolds \(W^{u}(S^{r}_{\epsilon})\) and \(W^{s}(S^{r}_{\epsilon})\) are given by \(U=0\) and \(V=0\), respectively. Consider the Fenichel neighborhood \(\Psi_{\epsilon}(\mathcal{U})\), which contains the box \[\left\{(U,V,W)\in\mathbb{R}^{3}|\ U,V\in[-\Theta,\Theta],\ W\in[-\Theta,W^{*} +\Theta]\right\}\] for \(W^{*}>0,\ 0<\Theta\ll W^{*}\), both independent of \(\epsilon\). And then, we define two manifolds \[\begin{split} N_{1}&:=\left\{(U,V,W)\in\mathbb{R}^{3 }|\ U=\Theta,\ V\in[-\Theta,\Theta],\ W\in[-\Theta,\Theta]\right\},\\ N_{2}&:=\left\{(U,V,W)\in\mathbb{R}^{3}|\ U,V\in[- \Theta,\Theta],\ W=W_{0}\right\},\end{split}\] with the flow of the Fenichel normal form system entering in \(N_{1}\) and exiting from \(N_{2}\), called _entry manifold_ and _exit manifold_, where \(0<W_{0}<W^{*}\). The next result will also be used in the proof of Theorem 1. **Theorem 6**.: ([4, 12, 24]) _Assume that_ * \(\Xi(\epsilon)\) _is a continuous function of_ \(\epsilon\) _satisfying_ (95) \[\lim_{\epsilon\to 0}\Xi(\epsilon)=\infty,\ \lim_{\epsilon\to 0}\epsilon\Xi( \epsilon)=0.\] * _there is a one-parameter family of solutions_ \((U,V,W)(\xi,\cdot)\) _to the Fenichel normal form system (_94_) with_ \[(U,V,W)(\xi_{1},\epsilon)\in N_{1},\ (U,V,W)(\xi_{2}(\epsilon),\epsilon) \in N_{2}\] _and_ \(\lim_{\epsilon\to 0}W(\xi_{1},\epsilon)=0\) _for some_ \(\xi_{1},\ \xi_{2}(\epsilon)\in\mathbb{R}\)_._ _Let \(U_{0}(\xi)\) be the solution to the system_ \[U^{\prime}=-\Lambda(U,0,0;c,a,0)U, \tag{96}\] _satisfying \(U_{0}(\xi_{1})=\Theta+\widetilde{U}_{0}\) with \(|\widetilde{U}_{0}|\ll\Theta\). 
Then, for \(0<\epsilon\ll 1\), there holds the next estimation_ \[\|(U,V,W)(\xi,\epsilon)-(U_{0}(\xi),0,0)\|\leq C\left(\epsilon\Xi(\epsilon)+| \widetilde{U}_{0}|+|W(\xi_{1},\epsilon)|\right),\] _for \(\xi\in[\xi_{1},\Xi(\epsilon)]\), where \(C>0\) is a constant independent of \(a\) and \(\epsilon\)._ Having the above preparation, we can prove Theorem 1. Proof.: Recall that \(\Xi_{\tau}(\epsilon):=-\tau\log\epsilon\), for every \(\tau>0\), satisfies the condition (95). \((i)\). By the geometric singular perturbation theory, we know that the solution \(\phi_{a,\epsilon}(\xi)\) is \(a\)-uniformly \(O(\epsilon)\)-close to \((\phi_{f}(\xi),0)\) upon entry in \(N_{1}\) at \(\xi_{f}=O(1)\). Since \((\phi_{f}(\xi),0)\) exponentially converges to \((1,0,0)\) for \(\epsilon=0\), it must be located in \(W^{s}(S^{r}_{0})\). Thus, it holds that \(\Psi_{0}(\phi_{f}(\xi),0)=(U_{0}(\xi),0,0)\), where \(U_{0}(\xi)\) is a solution to system (96). Let \(\Psi_{\epsilon}(\phi_{a,\epsilon}(\xi))=(U_{a,\epsilon}(\xi),V_{a,\epsilon}( \xi),W_{a,\epsilon}(\xi))\). Then we derive \[\|(U_{a,\epsilon}(\xi),V_{a,\epsilon}(\xi),W_{a,\epsilon}(\xi))-(U_{0}(\xi),0,0)\|\leq C\epsilon\Xi_{\tau}(\epsilon)\] for \(\xi\in[\xi_{f},\Xi_{\tau}(\epsilon)]\) by Theorem 6. Since the transformation \(\Psi_{\epsilon}\) for getting the Fenichel normal form is \(C^{r}\)-smooth in \(\epsilon\), when transforming back to the \((u,v,w)\)-coordinates there incur at most \(O(\epsilon)\) error. Therefore, \(\phi_{a,\epsilon}(\xi)\) is \(a\)-uniform \(O(\epsilon\Xi_{\tau}(\epsilon))\)-close to \((\phi_{f}(\xi),0)\) for \(\xi\in[\xi_{f},\Xi_{\tau}(\epsilon)]\). This proves the first estimation of statement \((i)\). Again by the geometric singular perturbation theory, the pulse solution \(\phi_{a,\epsilon}(\xi)\) arrives a neighborhood of the slow manifold \(S^{r}_{\epsilon}\), where the flow spends the time being of order \(\epsilon^{-1}\). Let \(Z_{a,\epsilon}=O(\epsilon^{-1})\) be the leading order of the time at which the pulse solution along the back exits the Fenichel neighborhood \(\mathcal{U}\) of \(S^{r}_{\epsilon}\). Using the similar manner as treating the flow in a neighborhood of the left slow manifold \(L_{\epsilon}\), we can obtain the second estimation in \((i)\) by using similar arguments as those in the proof of the first estimation. \((ii)\). Taking the \(a\)- and \(\epsilon\)-independent neighborhood \(\mathcal{U}\) smaller if necessary and choosing the \(a\)- and \(\epsilon\)-independent \(\xi_{0}\) sufficiently large, there holds that \(\phi_{a,\epsilon}(\xi)\) lies in the region \(\mathcal{U}\) for \(\xi\in[\xi_{0},Z_{a,\epsilon}-\xi_{0}]\). This verifies the estimation in \((ii)\) along the right branch \(S^{r}_{0}\). \((iii)\). The estimation could be proved using similar arguments as those in the proof of \((ii)\) along the left branch \(L_{0}\). It completes the proof of Theorem 1. ### Appendix B For readers' convenience we recall here some results on exponential dichotomy and trichotomy. For details, we refer readers to [4]. We also recall a theorem on Sturm-Liouville eigenvalue problem on an infinite interval. 
The linear differential system \[\varphi^{\prime}=A(x)\varphi \tag{97}\] is said to have an _exponential dichotomy_ on \(J\) if there are constants \(C,\mu>0\) and linear projections \(P^{s}(x),\ P^{u}(x):\ \mathbb{C}^{n}\to\mathbb{C}^{n},\ x\in J\) satisfying, for all \(x,\ y\in J\), * \(P^{s}(x)+P^{u}(x)=1\), * \(P^{s,u}(x)T(x,y)=T(x,y)P^{s,u}(y)\), * \(\|T(x,y)P^{s}(y)\|,\ \|T(y,x)P^{u}(x)\|\leq Ce^{-\mu(x-y)}\) for \(x\geq y\). Here, \(T(x,y)\) is the evolution operator of the linear system (97), and the interval \(J\) is typically taken to be either \(\mathbb{R}\), or \(\mathbb{R}_{+}\) or \(\mathbb{R}_{-}\). Equation (97) has an _exponential trichotomy_ on \(J\) with constants \(C,\ \mu,\ \nu>0\) and projections \(P^{s}(x),\ P^{u}(x),\ P^{c}(x):\ \mathbb{C}^{n}\to\mathbb{C}^{n},\ x\in J\) if the next conditions hold for all \(x,\ y\in J\), * \(P^{s}(x)+P^{u}(x)+P^{c}(x)=1\), * \(P^{s,u,c}(x)T(x,y)=T(x,y)P^{s,u,c}(y)\), * \(\|T(x,y)P^{s}(y)\|,\ \|T(y,x)P^{u}(x)\|\leq Ce^{-\mu(x-y)}\) for \(x\geq y\), * \(\|T(x,y)P^{c}(y)\|\leq Ce^{-\nu|x-y|}\). We often use the abbreviations \(T^{s,u,c}(x,y)=T(x,y)P^{s,u,c}(y)\) to implicitly represent the corresponding projections of the dichotomy or of the trichotomy. There are many known results on existence of exponential dichotomies. Here we only recall those we have used in the proof of our main results. **Theorem 7**.: ([8, 24]) _Let \(\varphi\in\mathbb{R}^{n}\) and suppose that the linear system (97) has an exponential dichotomy on \(J\). Then the linear system \(\varphi^{\prime}=(A(x)+B(x))\varphi\) also has an exponential dichotomy on \(J\) provided \(\|B(x)\|<\delta\) for all \(x\in J\), where \(\delta>0\) is chosen to be sufficiently small._ **Theorem 8**.: ([7, 24]) _If the linear system (97) has an exponential dichotomy on \(J\), and_ \[\int_{J}\|B(x)\|dx<\infty,\] _then the system \(\varphi^{\prime}=(A+B(x))\varphi\) also has an exponential dichotomy on \(J\) with the same decay rates as those for system (97)._ **Theorem 9**.: ([8, 24]) _Suppose that_ * _the_ \(n\times n\) _matrix_ \(A(x)\) _is bounded and hyperbolic for all_ \(x\in J\) _with_ \(k\) _eigenvalues having their real parts less than_ \(-\alpha<0\) _and_ \(n-k\) _eigenvalues having their real parts greater than_ \(\beta>0\)_;_ * \(A(x)\) _is continuously differentiable and there exists a_ \(\delta>0\) _such that_ \(\|A(x)\|<\delta\) _for all_ \(x\in J\)_._ _Then the linear system (97) has an exponential dichotomy on \(J\)._ Finally we recall Sturm-Liouville theorem on infinite interval. A Sturm-Liouville operator \(\mathcal{L}\) is a second order differential operator of the form \[\mathcal{L}u:=\partial_{x}^{2}u+a_{1}(x)\partial_{x}u+a_{0}(x)u.\] Consider the Sturm-Liouville operator \(\mathcal{L}\) acting on \(H^{2}(\mathbb{R})\) with smooth coefficients \(a_{0}(x)\) and \(a_{1}(x)\), which decay exponentially to constants \(a_{0}^{\pm},\ a_{1}^{\pm}\in\mathbb{R}\) as \(x\to\pm\infty\). Recall that \(H^{2}(\mathbb{R})\) is the Hilbert space formed by second order differentiable functions defined on \(\mathbb{R}\) with its element \(u\) having the norm The associated eigenvalue problem \[\mathcal{L}u=\lambda u\] has the following well-known properties, see e.g. [27]. 
**Theorem 10**.: ([27]) _Consider the eigenvalue problem \(\mathcal{L}u=\lambda u\) on the space \(H^{2}(\mathbb{R})\), with the coefficients of \(\mathcal{L}\) decaying exponentially to the constants \(a_{0}^{\pm},\ a_{1}^{\pm}\in\mathbb{R}\) as \(x\to\pm\infty.\) The following statements hold._ * _The point spectrum of_ \(\mathcal{L}\)_,_ \(\sigma_{p}(\mathcal{L})\)_, consists of a finite number, possibly zero, of simple eigenvalues, which can be enumerated in a strictly descending order_ \[\lambda_{0}>\lambda_{1}>\cdots>\lambda_{N}>b:=\max\{a_{0}^{-},a_{0}^{+}\}.\] * _For_ \(j=0,1,\cdots,N\)_, the eigenfunction_ \(u_{j}(x)\) _associated with the eigenvalue_ \(\lambda_{j}\) _can be normalized and it has exactly_ \(j\) _simple zeros._ ## Acknowledgments The authors sincerely appreciate the referees for their nice comments and suggestions which greatly improve this paper both in mathematics and presentations. This work is partially supported by National Key R&D Program of China grant number 2022YFA1005900. The authors are partially supported by National Natural Science Foundation (NNSF) of China grant numbers 12071284 and 12161131001. The second author is also partially supported by NNSF of China grant number 11871334, by Innovation Program of Shanghai Municipal Education Commission grant number 2021-01-07-00-02-E00087.
2308.13220
Improved Lerey inequality and Trudinger-Moser type inequality involving the Leray potential
We obtain three types of results in this paper. Firstly we improve Leray's inequality by providing several types of reminder terms, secondly we introduce several Hilbert spaces based on these improved Leray inequalities and discuss their embedding properties, thirdly we obtain some Trudinger-Moser type inequalities in the unit ball of R2 associated with the norms of these Hilbert spaces where the Leray potential is used. Our approach is based on analysis of radially symmetric functions.
Huyuan Chen, Yihong Du, Feng Zhou
2023-08-25T07:34:33Z
http://arxiv.org/abs/2308.13220v1
# Improved Leray Inequality and Trudinger-Moser type inequalities involving the Leray potential ###### Abstract. We obtain three types of results in this paper. Firstly we improve Leray's inequality by providing several types of reminder terms, secondly we introduce several Hilbert spaces based on these improved Leray inequalities and discuss their embedding properties, thirdly we obtain some Trudinger-Moser type inequalities in the unit ball of \(\mathbb{R}^{2}\) associated with the norms of these Hilbert spaces where the Leray potential is used. Our approach is based on analysis of radially symmetric functions. 1 Footnote 1: Email: [email protected] 2 Footnote 2: Email: [email protected] 3 Footnote 3: Email: [email protected] **Keywords**: Leray Inequality; Trudinger-Moser Inequalities; Leray Potential. **MSC2010**: 46E35; 26D15. ## 1. Introduction The well-known Leray inequality states: \[\int_{B_{1}}|\nabla w|^{2}dx-\frac{1}{4}\int_{B_{1}}\frac{w^{2}}{|x|^{2}(\ln \frac{1}{|x|})^{2}}dx>0,\quad\forall\,w\in C_{c}^{\infty}(B_{1}),\,w\not\equiv 0, \tag{1.1}\] where \(B_{1}\) is the unit ball in \(\mathbb{R}^{2}\), and more generally, we will write \(B_{r}=B_{r}(0)\) with \(B_{r}(x_{0})\) denoting the open ball with radius \(r\) centered at \(x_{0}\) in \(\mathbb{R}^{2}\). Leray used this inequality in his study of two dimensional viscous flows [7]. Thanks to the logarithmic function, the potential function \(\frac{1}{|x|^{2}(-\ln|x|)^{2}}\) in (1.1) has a weaker singularity at the origin than the usual inverse-square potential \(1/|x|^{2}\). On the other hand, since \(-\ln|x|\sim 1-|x|\) near \(|x|=1\), the potential function \(\frac{1}{|x|^{2}(-\ln|x|)^{2}}\) has a singularity of order \((1-|x|)^{-2}\) at the boundary \(\mathbb{S}^{1}=\partial B_{1}\). Note that \(\frac{1}{4}\) is a critical coefficient both for \(\frac{1}{|x|^{2}(-\ln|x|)^{2}}\) at the origin and for \(\frac{1}{(1-|x|)^{2}}\) at the boundary. For fixed \(R\in(0,1)\), since any function \(w\in C_{c}^{\infty}(B_{R})\) can be regarded as a function in \(C_{c}^{\infty}(B_{1})\) after extension by \(0\) to \(B_{1}\setminus B_{R}\), it follows from (1.1) that \[\int_{B_{R}}|\nabla w|^{2}dx-\frac{1}{4}\int_{B_{R}}\frac{w^{2}}{|x|^{2}(\ln \frac{1}{|x|})^{2}}dx>0,\quad\forall\,w\in C_{c}^{\infty}(B_{R}),\,w\neq 0. \tag{1.2}\] We note that in (1.2), the potential function in \(B_{R}\) has only one singularity at \(x=0\). For this inequality, there is an improved version with a remainder term, which follows from a more general result of Barbatis, Filippas and Tertikas [3] (whose left side can be reduced to the form of (1.2) after a change of variable; see(1.5)): \[\int_{B_{1}}|\nabla w|^{2}dx-\frac{1}{4}\int_{B_{1}}\frac{w^{2}}{|x|^{2}(\ln \frac{e}{|x|})^{2}}dx\geq\frac{1}{4}\sum_{i=2}^{\infty}\int_{B_{1}}\frac{|w|^{2 }}{|x|^{2}}\prod_{j=1}^{i}X_{j}^{2}(|x|)dx,\quad\forall\,w\in C_{c}^{\infty}(B_ {1}), \tag{1.3}\] where \[X_{1}(r)=(\ln\frac{e}{r})^{-1},\ X_{k}(r)=X_{1}(X_{k-1}(r))\ \text{for}\ k=2,.... \tag{1.4}\] Under the change of variable \(x\to ex\), (1.3) is reduced to the following equivalent form, which is consistent to (1.2) with \(R=r_{0}:=e^{-1}\): \[\int_{B_{r_{0}}}|\nabla w|^{2}dx-\frac{1}{4}\int_{B_{r_{0}}}\frac{w^{2}}{|x|^ {2}(\ln\frac{1}{|x|})^{2}}dx\geq\frac{1}{4}\sum_{i=2}^{\infty}\int_{B_{r_{0} }}\frac{|w|^{2}}{|x|^{2}}\prod_{j=1}^{i}X_{j}^{2}(e|x|)dx,\ \forall\,w\in C_{c}^{\infty}(B_{r_{0}}). 
\tag{1.5}\] Let us note that in (1.1), the ball \(B_{1}\) can be replaced by any domain \(\Omega\subset B_{1}\) containing the origin, since any function \(w\in C_{c}^{\infty}(\Omega)\) can be regarded as a function in \(C_{c}^{\infty}(B_{1})\) after extension by \(0\) over \(B_{1}\setminus\Omega\). Similarly in (1.5), the ball \(B_{r_{0}}\) can be replaced by any domain \(\Omega\subset B_{r_{0}}\) containing the origin. ### Leray's inequality with a remainder term In the first part of this paper, we obtain an improved version of (1.1), where the potential function has a double singularity (at both \(x=0\) and \(|x|=1\)), and also an improved version of (1.2) which is different from (1.5). More precisely we prove the following theorem. **Theorem 1.1**.: _The following inequalities hold:_ * (Leray's inequality with a remainder term) _There exists_ \(\mu_{2}>0\) _such that for any_ \(u\in C_{c}^{1}(B_{1})\)__ (1.6) \[\int_{B_{1}}\Big{(}|\nabla u|^{2}-\frac{1}{4}\frac{|u|^{2}}{|x|^{2}(\ln\frac{ 1}{|x|})^{2}}\Big{)}dx\geq\mu_{2}\int_{B_{1}}\frac{|u|^{2}}{|x|^{2}(\ln\frac{ 1}{|x|})^{2}\big{(}1+|\ln\ln\frac{1}{|x|}|\big{)}^{2}}\,dx.\] * (Leray's inequality with a remainder term for radial functions) _For any_ \(q>2\)_, there exists_ \(\mu_{q}>0\) _such that for every_ \(u\in C_{\text{rad},c}^{1}(B_{1})\)_,_ (1.7) \[\int_{B_{1}}\Big{(}|\nabla u|^{2}-\frac{1}{4}\frac{|u|^{2}}{|x|^{2}(\ln\frac{ 1}{|x|})^{2}}\Big{)}dx\geq\mu_{q}\Big{(}\int_{B_{1}}\frac{|u|^{q}}{|x|^{2} \Big{[}(\ln\frac{1}{|x|})\big{(}1+|\ln\ln\frac{1}{|x|}|\big{)}\Big{]}^{1+\frac {q}{2}}}\,dx\Big{)}^{\frac{2}{q}}.\] * (Leray's inequality with a remainder term and singularity at \(0\) only) _For any_ \(q>2\) _and_ \(r_{0}=e^{-1}\)_, there exists_ \(\mu_{q}>0\) _such that for every_ \(u\in C_{c}^{1}(B_{r_{0}})\)_,_ (1.8) \[\int_{B_{r_{0}}}\Big{(}|\nabla u|^{2}-\frac{1}{4}\frac{|u|^{2}}{|x|^{2}(\ln \frac{1}{|x|})^{2}}\Big{)}dx\geq\mu_{q}\Big{(}\int_{B_{r_{0}}}\frac{|u|^{q}}{| x|^{2}\Big{[}\big{(}\ln\frac{1}{|x|}\big{)}\big{(}1+|\ln\ln\frac{1}{|x|}|\big{)} \Big{]}^{1+\frac{q}{2}}}\,dx\Big{)}^{\frac{2}{q}}.\] Motivated by the Leray inequality (1.1), we denote by \(\mathcal{H}^{1}_{\mu,0}(B_{1})\), for \(\mu\geq-\frac{1}{4}\), the completion of \(C_{c}^{\infty}(B_{1})\) under the norm \[\|u\|_{\mu}=\sqrt{\int_{B_{1}}\Big{(}|\nabla u|^{2}dx+\mu\frac{u^{2}}{|x|^{2}(- \ln|x|)^{2}}\Big{)}dx},\] and so \(\mathcal{H}^{1}_{\mu,0}(B_{1})\) is a Hilbert space with inner product \[\langle u,v\rangle_{\mu}=\int_{B_{1}}\Big{(}\nabla u\cdot\nabla v\,dx+\mu\frac{ uv}{|x|^{2}(-\ln|x|)^{2}}\Big{)}dx.\] Set \[\mathcal{H}^{1}_{0}(B_{1})=\mathcal{H}^{1}_{0,0}(B_{1})\quad\text{and}\quad \mathcal{H}^{1}_{0}(B_{1})=\mathcal{H}^{1}_{-\frac{1}{4},0}(B_{1}).\] For \(q\in[1,+\infty)\), as usual we denote by \(W^{1,q}_{0}(B_{1})\) the completion of \(C^{\infty}_{c}(B_{1})\) under the norm \[\|u\|_{W^{1,q}}=\Big{(}\int_{B_{1}}|\nabla u|^{q}dx\Big{)}^{\frac{1}{q}},\] and denote \(\mathcal{H}^{1}_{0}(B_{1})=W^{1,2}_{0}(B_{1})\). We will show that \[\mathcal{H}^{1}_{\mu,0}(B_{1})=\mathcal{H}^{1}_{0}(B_{1})\text{ for }\mu>-\tfrac{1}{4}, \text{ but }\mathcal{H}^{1}_{0}(B_{1})\subsetneqq\hat{\mathcal{H}}^{1}_{0}(B_{1}). \tag{1.9}\] ### Some embedding results The second part of this paper gives some embedding results for the Hilbert space \(\hat{\mathcal{H}}^{1}_{0}(B_{1})\). 
**Theorem 1.2**.: \((i)\) _The embedding \(\hat{\mathcal{H}}^{1}_{0}(B_{1})\hookrightarrow W^{1,q}_{0}(B_{1})\) is continuous for any \(q\in[1,\,2)\)._ \((ii)\) _The embedding \(\hat{\mathcal{H}}^{1}_{0}(B_{1})\hookrightarrow L^{p}(B_{1},|x|^{-\alpha}dx)\) is compact for any \(p\in[1,+\infty)\) and \(\alpha\in[0,2)\)._ The following embeddings are compact and involve more general weight functions. **Theorem 1.3**.: _Let \(V:\,B_{1}\setminus\{0\}\to[0,+\infty)\) be a continuous nonzero potential._ * _If_ \[\begin{cases}\lim_{|x|\to 0^{+}}V(x)|x|^{2}(-\ln|x|)^{2}\big{(}\ln\ln\frac{1}{|x |}\big{)}^{2}=0,\\ \lim_{|x|\to 1^{-}}V(x)(-\ln|x|)^{2}\big{(}\ln\ln\frac{1}{|x|}\big{)}^{2}=0, \end{cases}\] _then the embedding_ \(\hat{\mathcal{H}}^{1}_{0}(B_{1})\hookrightarrow L^{2}(B_{1},Vdx)\) _is compact._ * _If_ \(q>2\) _and_ \[\lim_{|x|\to 0^{+}}V(x)|x|^{2}\Big{[}(-\ln|x|)\big{(}\ln\ln\frac{1}{|x |}\big{)}\Big{]}^{1+\frac{q}{2}}=0,\] _then for any_ \(r\in(0,1)\)_, the embedding_ \(\hat{\mathcal{H}}^{1}_{0}(B_{r})\hookrightarrow L^{q}(B_{r},Vdx)\) _is compact._ **Remark 1.4**.: _It follows in particular that_ * _for any_ \(\beta>1\)_, the following embedding is compact:_ \[\hat{\mathcal{H}}^{1}_{0}(B_{1})\hookrightarrow L^{2}\big{(}B_{1},|x|^{-2}(- \ln|x|)^{-2}\big{(}1+|\ln\ln\frac{1}{|x|}|\big{)}^{-2\beta}dx\big{)}.\] _Moreover, the embedding inequality (_1.6_) holds for_ \(u\in\hat{\mathcal{H}}^{1}_{0}(B_{1})\)_._ * _For any_ \(q>2\)_,_ \(r\in(0,1)\) _and_ \(\beta>1\)_, the following embedding is compact:_ \[\hat{\mathcal{H}}^{1}_{0}(B_{r})\hookrightarrow L^{q}\big{(}B_{r},|x|^{-2}(- \ln|x|)^{1+\frac{q}{2}}\big{(}1+|\ln\ln\frac{1}{|x|}|\big{)}^{-\beta(1+\frac{q }{2})}dx\big{)}.\] _Moreover, the embedding inequality (_1.8_) holds for_ \(u\in\hat{\mathcal{H}}^{1}_{0}(B_{1})\)_._ The above embedding results and their proofs can be used to obtain the following conclusions. **Theorem 1.5**.: \((i)\) _Let \(w\in C^{1}_{c}(B_{1})\), then there exists \(c>0\) independent of \(w\) such that_ \[\int_{B_{1}}|\nabla w|^{2}\ln\frac{1}{|x|}dx\geq c\int_{B_{1}}|x|^{-2}(\ln \frac{1}{|x|})^{-1}\big{(}1+\big{|}\ln\ln\frac{1}{|x|}\big{|}\big{)}^{-2}w^{2}dx \tag{1.10}\] _and for \(r\in(0,1)\)_ \[\int_{B_{r}}|\nabla w|^{2}\ln\frac{1}{|x|}dx\geq c\int_{B_{r}}|x|^{-2}(\ln \frac{1}{|x|})^{-1}\big{(}1+\big{|}\ln\ln\frac{1}{|x|}\big{|}\big{)}^{-1-\frac{ q}{2}}w^{q}dx. 
\tag{1.11}\] \((ii)\) _If \(\mathcal{H}^{1}_{\mathrm{in},0}(B_{r})\) is the completion of \(C^{\infty}_{c}(B_{r})\) under the norm_ \[\|w\|_{\mathrm{in}}=\sqrt{\int_{B_{r}}|\nabla w|^{2}\ln\frac{1}{|x|}dx},\] _and \(V:\,B_{1}\setminus\{0\}\to[0,+\infty)\) is a continuous nonzero potential, then_ \((a)\) _the embedding_ \(\mathcal{H}^{1}_{\ln,0}(B_{1})\hookrightarrow L^{2}(B_{1},Vdx)\) _is compact if_ \[\begin{cases}\lim_{|x|\to 0^{+}}V(x)|x|^{2}(\ln\frac{1}{|x|})\big{(}1+\big{|}\ln \ln\frac{1}{|x|}\big{|}\big{)}^{2}=0,\\ \lim_{|x|\to 1^{-}}V(x)(\ln\frac{1}{|x|})\big{(}1+\big{|}\ln\ln\frac{1}{|x|} \big{|}\big{)}^{2}=0;\end{cases}\] \((b)\) _for_ \(r\in(0,1)\)_, the embedding_ \(\mathcal{H}^{1}_{\ln,0}(B_{r})\hookrightarrow L^{q}(B_{r},Vdx)\) _is compact if_ \[\lim_{|x|\to 0^{+}}V(x)|x|^{2}(\ln\frac{1}{|x|})\big{(}\big{|}\ln\ln\frac{1}{|x|} \big{|}\big{)}^{1+\frac{q}{2}}=0.\] ### Trudinger-Moser type inequalities The Trudinger-Moser inequality ([10, 14]) \[\sup_{\int_{\Omega}|\nabla u|^{2}dx\leq 1,\ u\in C^{\infty}_{c}(\Omega)}\int_{ \Omega}e^{4\pi u^{2}}dx<\infty, \tag{1.12}\] where \(\Omega\) is a bounded domain in \(\mathbb{R}^{2}\), is an analogue of the following limiting Sobolev inequality in dimensions \(N\geq 3\): \[\sup_{\int_{\mathbb{R}^{N}}|\nabla u|^{2}dx\leq 1,\ u\in C^{\infty}_{c}( \mathbb{R}^{N})}\int_{\mathbb{R}^{N}}|u|^{2^{*}}dx<\infty,\ 2^{*}=\frac{2N}{N-2}.\] There are several extensions of (1.12) in the literature, where the constraint \(\int_{\Omega}|\nabla u|^{2}dx\leq 1\) is replaced by \[\int_{\Omega}\Big{[}|\nabla u|^{2}-V(x)u^{2}\Big{]}dx\leq 1\] with a suitable potential function \(V(x)\). For example, in Wang and Ye [16], it was shown that (1.12) remains valid for \(\Omega=B_{1}\) if \(\int_{\Omega}|\nabla u|^{2}dx\leq 1\) is replaced by \[\int_{B_{1}}\Big{[}|\nabla u|^{2}-V_{1}u^{2}\Big{]}dx\leq 1\ \text{with}\ V_{1}:=(1-|x|^{2})^{-2}.\] The potential function \(V_{1}\) here is related to the Leray potential \[V_{\text{Leray}}:=\frac{1}{4(\ln|x|)^{2}(\ln\frac{1}{|x|})^{2}}\] in the following way: \[\lim_{r\to 1^{-}}V_{1}(r)/V_{\text{Leray}}(r)=1.\] Tintarev [13] showed that (1.12) remains valid if \(\Omega=B_{1}\) and \(\int_{\Omega}|\nabla u|^{2}dx\leq 1\) is replaced by \[\int_{B_{1}}\Big{[}|\nabla u|^{2}-V_{2}u^{2}\Big{]}dx\leq 1\ \text{with}\ V_{2}:=\frac{V_{\text{ Leray}}(|x|)}{\max\{\sqrt{-\ln|x|},1\}}.\] Clearly \[\lim_{|x|\to 1^{-}}V_{2}(|x|)/V_{\text{Leray}}(|x|)=1,\ \lim_{|x|\to 0^{+}}V_{2}(|x|)/V_{\text{ Leray}}(|x|)=0.\] Let us note that \(V_{1}\) is regular at the origin with \(V_{1}(0)=1\) while \(V_{2}\) is singular at the origin. Set \[V_{3}(r):=\frac{1}{4r^{2}(1-\ln r)^{2}}.\] Then it is easily seen that \(V_{3}(r)\) is decreasing in \(r\) for \(r\in(0,1]\), \(V_{3}(1)=1/4\) and \[\lim_{r\to 0^{+}}V_{3}(r)/V_{\text{Leray}}(r)=1.\] For \(V=V_{3}\), it was shown by Psaradakisa and Spectora [11] that for any domain \(\Omega\subset B_{1}\), and any \(\epsilon>0\), there exists \(A_{\epsilon}>0\) such that \[\sup_{\int_{\Omega}[|\nabla u|^{2}-V_{3}u^{2}]dx\leq 1,\ u\in C^{\infty}_{c}( \Omega)}\int_{\Omega}e^{A_{\epsilon}u^{2}(x)/(1-\ln|x|)^{\epsilon}}dx<\infty,\] and the result is false for \(\epsilon=0\). 
Mallick and Tontarev [8] subsequently proved that for any domain \(\Omega\subset B_{1}\), there exists \(A>0\) such that \[\sup_{f_{\Omega}\|\nabla u|^{2}-V_{3}u^{2}\|dx\leq 1,\ u\in C^{\infty}_{c}( \Omega)}\int_{\Omega}e^{Au^{2}(x)/E(|x|)}dx<\infty,\] where \[E(|x|):=1-\ln(1-\ln|x|).\] In a different direction, Adimurthi and Druet [1] proved the following result: For any bounded domain \(\Omega\subset\mathbb{R}^{2}\), \[\sup_{f_{\Omega}\|\nabla u\|^{2}dx\leq 1,\ u\in C^{\infty}_{c}(\Omega)}\int_{ \Omega}e^{4\pi u^{2}(1+\alpha\|u\|_{2})}dx\begin{cases}<\infty\text{ if }\alpha\in[0,\lambda_{1}(\Omega)),\\ =\infty\text{ if }\alpha\geq\lambda_{1}(\Omega),\end{cases}\] where \(\lambda_{1}(\Omega)\) stands for the first eigenvalue of \(-\Delta\) in \(H^{1}_{0}(\Omega)\), and \(\|u\|_{2}=(\int_{\Omega}u^{2}dx)^{1/2}\). Inspired by these results, we will prove some related but different Trudinger-Moser type inequalities involving a potential \(V\) behaving like the Leray potential near \(0\) and/or near \(\partial B_{1}\). Let \(\Omega\subset B_{1}\) be a bounded domain containing the origin, \(\mu\geq-\frac{1}{4}\), and \(V:(0,1)\to[0,\infty)\) a continuous function such that \[\mu V(r)\geq-V_{\text{Leray}}(r)=-\frac{1}{4r^{2}(\ln\frac{1}{r})^{2}}\text{ for }r\in(0,1).\] We denote by \(\mathcal{H}^{1}_{V,\mu,0}(\Omega)\) the completion of \(C^{\infty}_{c}(\Omega\setminus\{0\})\) under the norm \[\|u\|_{V,\mu}=\sqrt{\int_{\Omega}\Big{(}|\nabla u|^{2}dx+\mu Vu^{2}\Big{)}dx},\] which is a Hilbert space with inner product \[\langle u,v\rangle_{{}_{V},\mu}=\int_{\Omega}\Big{(}\nabla u\cdot\nabla v\,dx +\mu Vuv\Big{)}dx.\] For \(\mu\geq-\frac{1}{4}\), we denote \[m_{\mu}:=4\pi\sqrt{1+4\mu}.\] **Theorem 1.6**.: _Assume that \(\mu>0\) and \(V:(0,1)\to[0,+\infty)\) is a continuous function satisfying_ \[V(r)\geq\frac{1}{r^{2}(-\ln r)^{2}}\quad\text{in }(0,1). \tag{1.13}\] _Then the following conclusions hold:_ * _For_ **radially symmetric** _functions in_ \(\mathcal{H}^{1}_{V,\mu,0}(B_{1})\) _we have_ \[\sup_{u\operatorname{is\,radial},\|u\|_{V,\mu}\leq 1}\int_{B_{1}}e^{m_{\mu}|u|^{2}} dx<\infty,\] _and this result is optimal: If_ \(\alpha>m_{\mu}\) _and_ (1.14) \[\lim_{r\to 0^{+}}V(r)r^{2}(-\ln r)^{2}=1,\] _then there exists a sequence of radially symmetric functions which concentrate at the origin such that_ \(\|u_{n}\|_{V,\mu}\leq 1\) _and_ \[\int_{B_{1}}e^{\alpha|u_{n}|^{2}}dx\to\infty\quad\text{as }\,n\to+\infty.\] * _For general functions in_ \(\mathcal{H}^{1}_{V,\mu,0}(B_{1})\) _we have_ \[\sup_{\|u\|_{V,\mu}\leq 1}\int_{B_{1}}e^{4\pi|u|^{2}}dx<\infty,\] _and this result is optimal: If_ \(\alpha>4\pi\) _and (_1.14_) holds, then there exists a sequence of functions concentrating at some point away from the origin, such that_ \(\left\|u_{n}\right\|_{{}_{V,\mu}}\leq 1\) _and_ \[\int_{B_{1}}e^{\alpha|u_{n}|^{2}}dx\to+\infty\quad\text{as}\;\;n\to+\infty.\] Here a sequence \(\{u_{n}\}\) is said to be concentrating at some point \(x_{0}\), if for any \(r\in(0,1)\) and any \(\epsilon>0\) there exists \(n_{0}>0\) such that \[\int_{B_{1}\setminus B_{r}(x_{0})}\Big{(}|\nabla u|^{2}dx+\mu Vu^{2}\Big{)}dx<\epsilon.\] Next we consider the case \(\mu\in(-\frac{1}{4},0)\). **Theorem 1.7**.: _Suppose \(\mu\in(-\frac{1}{4},0)\) and \(V:(0,1)\to[0,+\infty)\) is continuous and verifies_ \[V(r)\leq\frac{1}{r^{2}(-\ln r)^{2}}. 
\tag{1.15}\] _Then the following conclusions hold:_ * _For_ **radially symmetric** _functions,_ \[\sup_{u\operatorname{is\,radial},\left\|u\right\|_{V,\mu}\leq 1}\int_{B_{1}}e^{m _{\mu}|u|^{2}}dx<\infty,\] _and this result is optimal: If_ \(\alpha>m_{\mu}\) _and (_1.14_) holds, then there exists a sequence of radially symmetric functions which concentrate at the origin such that_ \(\left\|u_{n}\right\|_{{}_{V,\mu}}\leq 1\) _and_ \[\int_{B_{1}}e^{\alpha|u_{n}|^{2}}dx\to\infty\quad\text{as}\;\;n\to+\infty.\] * _For general functions, if_ \(V\) _is_ **decreasing** _in_ \((0,1)\) _and verifies (_1.15_), then_ \[\sup_{\left\|u\right\|_{V,\mu}\leq 1}\int_{B_{1}}e^{m_{\mu}|u|^{2}}dx<\infty.\] * _The result in_ (ii) _is optimal: If (_1.14_) holds, then for any_ \(\alpha>m_{\mu}\)_, there exists a sequence_ \(\{u_{n}\}_{n}\) _concentrating at the origin such that_ \(\left\|u_{n}\right\|_{{}_{V,\mu}}\leq 1\) _and_ \[\int_{B_{1}}e^{\alpha|u_{n}|^{2}}dx\to\infty\quad\text{as}\;\;n\to\infty.\] Let us note that \(V(r):=\frac{1}{r^{2}(1-\ln r)^{2}}\) is decreasing in \((0,1]\) and satisfies both (1.14) and (1.15). **Corollary 1.8**.: _Let \(\mu\in(-\frac{1}{4},0)\), \(r_{0}=1/e\) and \(V(r)=\frac{1}{r^{2}(\ln\frac{1}{r})^{2}}\). Then_ \[\sup_{\left\|u\right\|_{\mathcal{H}_{V,\mu,0}^{1}(B_{r_{0}})}\leq 1}\int_{B_{ r_{0}}}e^{m_{\mu}|u|^{2}}dx<\infty,\] _and the exponent \(m_{\mu}\) is optimal._ Proof.: Under the change of variable \(x\to ex\), the inequality is changed to an equivalent one over \(B_{1}\) with \(V(r)\) replaced by \(V(r/e)\), which is decreasing over \((0,1)\). Therefore we can use Theorem 1.7 (ii) to conclude. Finally we consider the critical case \(\mu=-\frac{1}{4}\). **Theorem 1.9**.: _Suppose that \(\mu=-\frac{1}{4}\) and \(V\in C((0,1))\) is nonnegative and verifies (1.15). Then the following conclusions hold:_ * _For_ **radially symmetric** _functions and_ \(p\in(0,1)\)_,_ \(\alpha>0,\)__ \[\sup_{u\operatorname{is\,radial},\left\|u\right\|_{V,-1/4}\leq 1}\int_{B_{1}}e^{ \alpha|u|^{p}}dx<\infty.\] _._ 2. _For general functions, if_ \(V\) _is_ **decreasing** _in_ \((0,1)\)_, then for_ \(p\in(0,1)\) _and_ \(\alpha>0\)_,_ \[\sup_{\|u\|_{V,-1/4}\leq 1}\int_{B_{1}}e^{\alpha|u|^{p}}dx<\infty.\] 3. _If there exist_ \(\theta>0\) _and_ \(C>0\) _such that_ (1.16) \[|V(r)r^{2}(-\ln r)^{2}-1|\leq C(-\ln r)^{-\theta}\ \ \ \text{for}\ r\in(0,\frac{1}{4}),\] _then there exists a sequence_ \(\{u_{n}\}\subset\mathcal{H}^{1}_{V,-1/4,0}(B_{1})\) _such that_ \(\|u_{n}\|_{\mathcal{H}^{1}_{V,-1/4}(B_{1})}=1\) _and for any_ \(p\geq 1\) _and any_ \(\alpha>0\)_,_ \[\int_{B_{1}}e^{\alpha|u_{n}|^{p}}dx\to\infty\ \ \ \text{as}\ n\to\infty.\] **Remark 1.10**.: _We end this subsection with some remarks:_ 1. _Theorem_ 1.6__(ii) _contrasts sharply with Theorem_ 1.6__(i) _and Theorem_ 1.7__(ii) _in that, the best components in the latter results are both given by_ \(m_{\mu}\) _while that in Theorem_ 1.6__(ii) _is_ \(4\pi\;(=m_{0})\)_._ 2. _By Corollaries_ 3.2 _and_ 3.3 _below, under condition (_1.15_), for_ \(\mu>-1/4\)_,_ \(\mathcal{H}^{1}_{V,\mu,0}(B_{1})=\mathcal{H}^{1}_{0}(B_{1})\)_, but_ \(\mathcal{H}^{1}_{0}(B_{1})\subsetneqq\mathcal{H}^{1}_{V,-1/4,0}(B_{1})\) _if additionally (_1.16_) holds. These clearly imply (_1.9_)._ ### Organisation of the paper The rest of the paper is organised as follows. In Section 2, we prove Theorem 1.1 based on a general inequality [9, Theorem 3, p.44]. 
In Section 3, we consider the embedding properties of \(\hat{\mathcal{H}}^{1}_{0}(B_{1})\) and related spaces, where we in particular prove Theorems 1.2, 1.3 and 1.5. Section 4 is devoted to the proof of Trudinger-Moser type inequalities for radially symmetric functions, where rather subtle and involved analysis is used to obtain the desired results; this is perhaps the most technically demanding part of the paper. Section 5 extends the results of Section 4 for radial functions to general functions, where significant difference is revealed (see Remark 1.8 (i) above). With all the main ingredients ready, the proofs of Theorems 1.6, 1.7 and 1.9 are completed at the end of Section 5. ## 2. Leray's inequality with remainder terms We prove Theorem 1.1 in this section. Our analysis will be based on the following proposition. **Proposition 2.1**.: [9, Theorem 3, p.44] _Suppose \(1\leq p\leq q\leq\infty\), \(\gamma\) and \(\nu\) are measures such that_ \[\mathcal{B}=\sup_{r>0}\gamma\big{(}(0,r)\big{)}^{\frac{1}{q}}\Big{(}\int_{r}^ {\infty}(\frac{d\nu^{*}}{dr})^{-\frac{1}{p-1}}\Big{)}^{\frac{p-1}{p}}<+\infty,\] _where \(\nu^{*}\) is the absolutely continuous part of \(\nu\). Then there exists \(c>0\) such that for any \(f\in C(\mathbb{R}_{+})\cap L^{p}(\mathbb{R}_{+},d\nu)\)_ \[\Big{(}\int_{0}^{\infty}\Big{|}\int_{0}^{r}f(t)dt\Big{|}^{q}d\gamma(r)\Big{)} ^{\frac{1}{q}}\leq c\Big{(}\int_{0}^{\infty}|f(r)|^{p}d\nu(r)\Big{)}^{\frac{1 }{p}}.\] _Moreover, if \(c\) is the best constant in the above inequality, then_ \[\mathcal{B}\leq c\leq\mathcal{B}(\frac{q}{q-1})^{\frac{p-1}{p}}q^{\frac{1}{q}},\] _and \(c=\mathcal{B}\) if \(p=1\) or \(q=\infty\)._ **Lemma 2.2**.: _Suppose that \(q\geq 2\), \(a_{0}:=e^{-e}\), and the functions \(a(r)\) and \(b(r)\) satisfy_ \[a(r)\geq\beta r(-\ln r),\ \ b(r)\geq\beta r(\ln\frac{1}{r})(\ln\ln\frac{1}{r})^{ \frac{q}{2}+1}\] _for all \(r\in(0,a_{0}]\) and some constant \(\beta>0\). Then there exists \(c_{1}=c_{1}(q,\beta)>0\) such that for any \(v\in C^{1}_{0}\big{(}[0,1)\big{)}\),_ \[\int_{0}^{a_{0}}v^{\prime}(r)^{2}a(r)dr\geq c_{1}\Big{(}\int_{0}^{a_{0}}\frac{ |v(r)|^{q}}{b(r)}dr\Big{)}^{\frac{2}{q}}. \tag{2.1}\] Proof.In what follows, for any interval \(I\subset\mathbb{R}\), \(\chi_{I}\) will denote the characteristic function of \(I\) : \[\chi_{I}(r)=\begin{cases}1,&r\in I,\\ 0,&r\not\in I.\end{cases}\] Set \(I=[0,a_{0}]\) and \[d\gamma=\frac{\chi_{{}_{I}}(s)}{b(s)}ds,\quad d\nu=\frac{s(-\ln s)}{\chi_{{}_{ I}}(s)}ds.\] Then for \(r\in I\), \[\gamma([0,r])\leq\frac{1}{\beta_{1}}\int_{0}^{r}\frac{1}{s\ln\frac{1}{s}(\ln \ln\frac{1}{s})^{\frac{q}{2}+1}}ds=\frac{2}{\beta_{1}q}\left(\ln\ln\frac{1}{r} \right)^{-\frac{q}{2}}\] and for \(r>a_{0}\), \[\gamma([0,r])=\gamma([0,a_{0}])\leq\frac{2}{\beta_{1}q}.\] Moreover, for \(r\in I\), \[\int_{r}^{+\infty}\Big{(}\frac{d\nu^{*}}{ds}\Big{)}^{-1}ds=\int_{r}^{a_{0}} \frac{1}{s(-\ln s)}ds=\ln\ln\frac{1}{r}-1\] and for \(r>a_{0}\), \[\int_{r}^{+\infty}\Big{(}\frac{d\nu^{*}}{ds}\Big{)}^{-1}ds=0.\] So we have that \[\mathcal{B} = \sup_{r>0}\gamma([0,r])^{\frac{1}{q}}\left(\int_{r}^{+\infty} \Big{(}\frac{d\nu^{*}}{ds}\Big{)}^{-1}ds\right)^{\frac{1}{2}}\] \[= \Big{(}\frac{2}{\beta_{1}q}\Big{)}^{\frac{1}{q}}\sup_{r\in I} \sqrt{\frac{\ln\ln\frac{1}{r}-1}{\ln\ln\frac{1}{r}}}\] \[\leq \Big{(}\frac{2}{\beta_{1}q}\Big{)}^{\frac{1}{q}}.\] Now we can apply Proposition 2.1 with \(q\geq 2=p\) and \(f(r)=\chi_{I}(r)\nu^{\prime}(r)\) to obtain (2.1). 
**Lemma 2.3**.: _Suppose \(q\geq 2\) and the functions \(\tilde{a}(r)\) and \(\tilde{b}(r)\) satisfy_ \[\tilde{a}(r)\geq\tilde{\beta}(-\ln r),\ \ \tilde{b}(r)\geq\tilde{\beta}(1-r)( \ln\frac{1}{1-r})^{\frac{q}{2}+1}\] _for all \(r\in[a_{0},1)\) and some constant \(\tilde{\beta}>0\). Then there exists \(c_{2}=c_{2}(q,\tilde{\beta})>0\) such that for any \(v\in C^{1}_{0}\big{(}(0,1)\big{)}\),_ \[\int_{a_{0}}^{1}|v^{\prime}(r)|^{2}\tilde{a}(r)dr\geq c_{2}\Big{(}\int_{a_{0} }^{1}\frac{|v(r)|^{q}}{\tilde{b}(r)}dr\Big{)}^{\frac{2}{q}}. \tag{2.2}\] Proof.In order to use Proposition 2.1, we set \[d\gamma=\frac{\chi_{{}_{J}}(s)}{\tilde{b}(1-s)}dr\quad\text{and}\quad d\nu= \frac{s}{\chi_{{}_{J}}(r)}dr,\] where \(J=(0,1-a_{0})\). Then for \(r\in J\), \[\gamma([0,r])\leq\frac{1}{\tilde{\beta}}\int_{0}^{r}\frac{1}{s\big{(}-\ln s \big{)}^{\frac{q}{2}+1}}ds=\frac{2}{\tilde{\beta}q}\big{(}-\ln r\big{)}^{- \frac{q}{2}},\] and for \(r\geq 1-a_{0}\) \[\gamma([0,r])=\gamma([0,1-a_{0}])\leq\frac{2}{\tilde{\beta}q}\Big{(}-\ln(1-a _{0})\Big{)}^{-\frac{q}{2}}.\] On the other hand, for \(r\in J\) \[\int_{r}^{+\infty}\Big{(}\frac{d\nu^{*}}{ds}\Big{)}^{-1}ds=\int_{r}^{1-a_{0}} \frac{1}{s}ds=-\ln r+\ln(1-a_{0})\leq-\ln r,\] and for \(r\geq 1-a_{0}\), \[\int_{r}^{+\infty}\Big{(}\frac{d\nu^{*}}{ds}\Big{)}^{-1}ds=0.\] So we have that \[\mathcal{B}=\sup_{r>0}\,\gamma([0,r])^{\frac{1}{q}}\left(\int_{r}^{+\infty} \Big{(}\frac{d\nu^{*}}{ds}\Big{)}^{-1}ds\right)^{\frac{1}{2}}\leq\Big{(}\frac{ 2}{\tilde{\beta}_{1}q}\Big{)}^{\frac{1}{q}}.\] Now we can apply Proposition 2.1 with \(q\geq 2=p\) and \(f(r)=\chi_{J}(r)w^{\prime}(r)\), \(w(r)=v(1-r)\), to obtain some \(\tilde{c}_{1}>0\), \[\int_{0}^{1-a_{0}}|\omega^{\prime}(s)|^{2}sds\geq\tilde{c}_{1}\Big{(}\int_{0} ^{1-a_{0}}\frac{|\omega(s)|^{q}}{\tilde{b}(1-s)}ds\Big{)}^{\frac{2}{q}},\] i.e. \[\int_{a_{0}}^{1}|\nu^{\prime}(s)|^{2}(1-s)ds\geq\tilde{c}_{1}\Big{(}\int_{a_{0 }}^{1}\frac{|v(s)|^{q}}{\tilde{b}(s)}ds\Big{)}^{\frac{2}{q}}.\] Since \(-\ln s\sim 1-s\) as \(s\to 1^{-}\), there exists \(\tilde{c}_{2}>0\) such that \[\int_{a_{0}}^{1}|\nu^{\prime}(s)|^{2}\tilde{a}(s)ds \geq \tilde{\beta}_{1}\int_{a_{0}}^{1}|\nu^{\prime}(s)|^{2}(-\ln s)ds \geq\tilde{c}_{2}\int_{a_{0}}^{1}|\nu^{\prime}(s)|^{2}(1-s)ds\] \[\geq \tilde{c}_{2}\tilde{c}_{1}\Big{(}\int_{a_{0}}^{1}\frac{|v(s)|^{q} }{\tilde{b}(s)}ds\Big{)}^{\frac{2}{q}}.\] Thus (2.2) holds. \(\Box\) **Proof of Theorem 1.1.** Let \[\mathcal{I}(w)=\int_{B_{1}}|\nabla w|^{2}dx-\frac{1}{4}\int_{B_{1}}\frac{w^{2 }}{|x|^{2}(-\ln|x|)^{2}}dx,\] which is nonnegative by Leray's inequality (1.1). _Part 1._ Let \(v_{0}\) be a radially symmetric function in \(C^{1}_{0}(B_{1})\), \(\alpha\geq 0\) a constant, and \[w_{0}(r)=(\alpha-\ln r)^{-\frac{1}{2}}v_{0}(r)\quad\text{for}\,\,\,r\in(0,1).\] Then \(w_{0}(0)=w_{0}(1)=0\), and we have \[\mathcal{I}_{\alpha}(v_{0}): = 2\pi\int_{0}^{1}\Big{(}v_{0}^{\prime}(r)^{2}-\frac{1}{4}\frac{v_ {0}(r)^{2}}{r^{2}(\alpha-\ln r)^{2}}\Big{)}rdr\] \[= 2\pi\int_{0}^{1}\Big{(}w_{0}^{\prime}(r)^{2}-\frac{w_{0}^{\prime }(r)w_{0}(r)}{r(\alpha-\ln r)}\Big{)}r(\alpha-\ln r)dr\] \[= 2\pi\int_{0}^{1}w_{0}^{\prime}(r)^{2}r(\alpha-\ln r)dr-2\pi\int_ {0}^{1}w_{0}^{\prime}(r)w_{0}(r)dr\] \[= 2\pi\int_{0}^{1}w_{0}^{\prime}(r)^{2}r(\alpha-\ln r)dr+\pi(w_{0} ^{2}(0)-w_{0}(1)^{2})\] \[= 2\pi\int_{0}^{1}w_{0}^{\prime}(r)^{2}r(\alpha-\ln r)dr. 
\tag{2.3}\] From Lemma 2.2 we obtain that for \(q\geq 2\), \[\int_{0}^{a_{0}}|w_{0}|^{2}r(-\ln r)dr \geq c_{1}\Big{(}\int_{0}^{a_{0}}\frac{|w_{0}|^{q}}{r(\ln\frac{1}{r})( \ln\ln\frac{1}{r})^{\frac{q}{2}+1}}dr\Big{)}^{\frac{2}{q}},\] and by Lemma 2.3 \[\int_{a_{0}}^{1}|u^{\prime}_{0}|^{2}r(-\ln r)dr \geq a_{0}c_{2}\Big{(}\int_{a_{0}}^{1}\frac{\big{|}w_{0}\big{|}^{q}}{(1- r)\big{(}\ln\frac{1}{1-r}\big{)}^{\frac{q}{2}+1}}dr\Big{)}^{\frac{2}{q}}.\] As a consequence, \[\mathcal{I}(v_{0})=\mathcal{I}_{0}(v_{0}) = 2\pi\int_{0}^{1}w^{\prime}_{0}(r)^{2}r(-\ln r)dr\] \[\geq c_{3}\Big{(}\int_{0}^{1}\Big{[}\frac{\chi_{[0,a_{0}]}(r)}{r(\ln \frac{1}{r})(\ln\ln\frac{1}{r})^{\frac{q}{2}+1}}+\frac{\chi_{[a_{0},1]}(r)}{(1- r)(\ln\frac{1}{1-r})^{\frac{q}{2}+1}}\Big{]}|w_{0}|^{q}dr\Big{)}^{\frac{2}{q}}.\] Since \[\lim_{r\to 1^{-}}\frac{r\ln\frac{1}{r}}{1-r}=\lim_{r\to 1^{-}}\frac{-\ln\ln \frac{1}{r}}{\ln\frac{1}{1-r}}=1,\lim_{r\to 0^{+}}\frac{1+\ln\ln\frac{1}{r}}{\ln \ln\frac{1}{r}}=\lim_{r\to 1^{-}}\frac{1-\ln\ln\frac{1}{r}}{-\ln\ln\frac{1}{r}}=1,\] we further obtain \[\mathcal{I}(v_{0}) \geq c_{4}\Big{(}\int_{0}^{1}\Big{[}\frac{\chi_{[0,a_{0}]}(r)}{r(\ln \frac{1}{r})(1+\ln\ln\frac{1}{r})^{\frac{q}{2}+1}}+\frac{\chi_{[a_{0},1]}(r)}{ r(\ln\frac{1}{r})\big{(}1+|\ln\ln\frac{1}{r}\big{)}^{\frac{q}{2}+1}}\Big{]}|w_{0}|^{ q}dr\Big{)}^{\frac{2}{q}}\] \[=c_{4}\Big{(}\int_{B_{1}}|x|^{-2}\Big{[}\big{(}\ln\frac{1}{|x|} \big{)}\big{(}1+\big{|}\ln\ln\frac{1}{|x|}\big{|}\big{)}\Big{]}^{-1-\frac{q}{ 2}}|v_{0}|^{q}dx\Big{)}^{\frac{2}{q}}. \tag{2.4}\] _Part 2. General functions for \(q=2\) in \(B_{1}\)._ Motivated by [5, 15], we prove the desired inequality for \(q=2\) by making use of the spherical harmonic functions. Any \(u\in C_{c}^{\infty}(B_{1})\) has a decomposition into spherical harmonics in the following form: \[u(x)=\sum_{m=0}^{\infty}u_{m}(r)h_{m}(\sigma),\] where \(\{h_{m}\}\) consists of the orthonormal eigenfunctions of the Laplacian-Beltrami operator on the sphere with corresponding eigenvalues \(\lambda_{m}=m^{2}\) for integer \(m\geq 0\). In particular, \(u_{0}\) is the radial part of \(u\) and \(h_{0}=1\). 
Note that \[\int_{B_{1}}|\nabla u|^{2}dx=\sum_{m=0}^{\infty}\int_{B_{1}}\Big{(}|\nabla u_{ m}|^{2}+\lambda_{m}\frac{u_{m}^{2}}{|x|^{2}}\Big{)}dx\] and by (2.4) \[\mathcal{I}(u) = \sum_{i=0}^{\infty}\mathcal{I}(u_{m})+2\pi\sum_{m=1}^{\infty} \Big{(}\lambda_{m}\int_{0}^{1}\frac{u_{m}^{2}}{|x|^{2}}dx\Big{)}\] \[\geq c_{7}\sum_{m=0}^{\infty}\Big{(}\int_{B_{1}}|x|^{-2}\big{(}\ln \frac{1}{|x|}\big{)}^{-1-\frac{q}{2}}(1+\big{|}\ln\ln\frac{1}{|x|}\big{|})^{-q} u_{m}^{q}dx\Big{)}^{\frac{2}{q}}+2\pi\sum_{m=1}^{\infty}\Big{(}\lambda_{m}\int_{0}^{1} \frac{u_{m}^{2}}{r}dr\Big{)}.\] Note that \[\int_{B_{1}}|x|^{-2}\big{(}\ln\frac{1}{|x|}\big{)}^{-2}\big{(}1+ \big{|}\ln\ln\frac{1}{|x|}\big{|}\big{)}^{-2}|u|^{2}dx\] \[= \int_{0}^{1}r^{-2}\big{(}\ln\frac{1}{r}\big{)}^{-2}\big{(}1+\big{|} \ln\ln\frac{1}{r}\big{|}\big{)}^{-2}\int_{\mathbb{S}^{1}}\Big{|}\sum_{m=0}^{ \infty}u_{m}(r)h_{m}(\sigma)\Big{|}^{2}d\sigma dr\] \[= \int_{0}^{1}r^{-2}\big{(}\ln\frac{1}{r}\big{)}^{-2}\big{(}1+\big{|} \ln\ln\frac{1}{r}\big{|}\big{)}^{-2}\Big{|}\sum_{m=0,n=0}^{\infty}u_{m}(r)u_{n} (r)\int_{\mathbb{S}^{1}}h_{m}(\sigma)h_{n}(\sigma)d\sigma dr\] \[= \sum_{m=0}^{\infty}\int_{B_{1}}|x|^{-2}\big{(}\ln\frac{1}{|x|} \big{)}^{-2}\big{(}1+\big{|}\ln\ln\frac{1}{|x|}\big{|}\big{)}^{-2}|u_{m}|^{2}dx,\] where we used the facts that \(\int_{\mathbb{S}^{1}}h_{i}(\sigma)h_{j}(\sigma)d\sigma=0\) for \(i\neq j\) and \(\int_{\mathbb{S}^{1}}|h_{i}(\sigma)|^{2}d\sigma=1\) for \(i=0,1,\cdots\). Now we apply (2.5) with \(q=2\) and obtain \[\mathcal{I}(u) \geq c_{7}\sum_{i=0}^{\infty}\int_{B_{1}}|x|^{-2}\big{(}\ln\frac{1}{|x |}\big{)}^{-2}\big{(}1+\big{|}\ln\ln\frac{1}{|x|}\big{|}\big{)}^{-2}u_{m}^{2}dx\] \[= c_{7}\int_{B_{1}}|x|^{-2}\big{(}\ln\frac{1}{|x|}\big{)}^{-2} \big{(}1+\big{|}\ln\ln\frac{1}{|x|}\big{|}\big{)}^{-2}u^{2}dx.\] However, this method fails for \(q>2\). _Part 3. General functions for \(q>2\) in \(B_{r_{0}}\)._ Let \(r_{q}\in(0,1)\) be chosen such that the function \(r^{-2}\big{[}(-\ln r)\big{(}1+|\ln\ln\frac{1}{r}|\big{)}\big{]}^{-1-\frac{q}{2}}\) is decreasing in \((0,r_{q})\). We will make use of the rearrangement argument. Let \(u^{*}\) denote the symmetric decreasing rearrangement of \(u\) (extended by \(0\) over \(\mathbb{R}^{2}\setminus B_{r_{0}}\)), and let us denote \[V_{1}(r):=\frac{1}{r^{2}(-\ln r)^{2}},\quad V_{2}(r):=\frac{1}{r^{2}\Big{[}(- \ln r)\big{(}1+|\ln\ln\frac{1}{r}|\big{)}\big{]}^{1+\frac{q}{2}}},\] which are decreasing in \((0,r_{0})\) and in \((0,r_{q})\), respectively. By the Polya-Szego inequality we have \[\int_{B_{r_{0}}}|\nabla u^{*}|^{2}dx\leq\int_{B_{r_{0}}}|\nabla u|^{2}dx.\] Combining this with (2.4), using also the Hardy-Littlewood inequality and \(V_{1}^{*}=V_{1}\) in \(B_{r_{0}}\), we obtain \[\mathcal{I}(u) = \int_{B_{r_{0}}}|\nabla u|^{2}dx-\frac{1}{4}\int_{B_{r_{0}}}u^{2 }|x|^{-2}\big{(}\ln\frac{1}{|x|}\big{)}^{-2}dx\] \[\geq \int_{B_{r_{0}}}|\nabla u^{*}|^{2}dx-\frac{1}{4}\int_{B_{r_{0}}}u ^{2}V_{1}dx\] \[\geq \int_{B_{r_{0}}}|\nabla u^{*}|^{2}dx-\frac{1}{4}\int_{B_{r_{0}}}(u ^{*})^{2}V_{1}dx\] \[\geq c\Big{(}\int_{B_{r_{0}}}(u^{*})^{q}V_{2}dx\Big{)}^{\frac{2}{q}}.\] If \(r_{q}\geq r_{0}\), then \[\int_{B_{r_{0}}}(u^{*})^{q}V_{2}dx=\int_{B_{r_{0}}}(u^{*})^{q}V_{2}^{*}dx\geq \int_{B_{r_{0}}}u^{q}V_{2}dx,\] which ends the proof. If \(r_{q}<r_{0}\), then due to \(V_{2}(0^{+})=+\infty\) there exists \(\hat{r}_{0}\in(0,r_{q})\) such that \(V_{2}(\hat{r}_{0})\geq\max_{r\in[r_{q},R]}V_{2}(r)\). 
We now define \[\bar{V}_{2}(r):=\begin{cases}V_{2}(r),&r\in[0,\hat{r}_{0}),\\ V_{2}(\hat{r}_{0}),&r\geq\hat{r}_{0},\end{cases}\quad\underline{V_{2}}(r):= \begin{cases}V_{2}(r),&r\in[0,\hat{r}_{0}),\\ \min_{r\in[r_{q},R]}V_{2}(r),&r\geq\hat{r}_{0}.\end{cases}\] Then it is clear that \[\begin{cases}\underline{V_{2}}\leq V_{2}\leq\bar{V}_{2},\\ \underline{V_{2}}\geq c_{q}\bar{V}_{2}&\text{for some constant $c_{q}\in(0,1)$},\\ \text{both $\underline{V_{2}}$ and $\bar{V}_{2}$ are non-increasing.}\end{cases}\] Now \[\int_{B_{R}}(u^{*})^{q}V_{2}dx\geq\int_{B_{R}}(u^{*})^{q}\underline{V_{2}}dx =\int_{B_{R}}(u^{*})^{q}\underline{V_{2}}^{*}dx\geq\int_{B_{R}}u^{q}\underline {V_{2}}dx\geq c_{q}\int_{B_{R}}u^{q}V_{2}dx,\] and thus \[\mathcal{I}(u) \geq c\Big{(}\int_{B_{r_{0}}}V_{2}(u^{*})^{q}dx\Big{)}^{\frac{2}{q}} \geq cc_{q}^{2/q}\Big{(}\int_{B_{r_{0}}}V_{2}u^{q}dx\Big{)}^{\frac{2}{q}}.\] The proof is complete. ## 3. Embedding results for \(\hat{\mathcal{H}}^{1}_{0}(B_{1})\) and related spaces ### Embedding and compactness We prove Theorems 1.2 and 1.3 here. **Proof of Theorem 1.2.**\((i)\) Recall that the Sobolev space \(W^{1,q}_{0}(B_{1})\) is the completion of \(C^{\infty}_{c}(B_{1})\) under the norm \[\|u\|_{1,q}=\Big{(}\int_{B_{1}}|\nabla u|^{q}dx\Big{)}^{\frac{1}{q}}.\] We first show that \[\hat{\mathcal{H}}^{1}_{0}(B_{1})\hookrightarrow W^{1,q}_{0}(B_{1})\quad\text{ for any $q\in[1,2)$}.\] For any given radially symmetry function \(u_{0}\in C^{\infty}_{c}(B_{1})\), we set \[w_{0}(x)=(-\ln|x|)^{-\frac{1}{2}}u_{0}(x).\] By direct computations, \[\int_{B_{1}}\Big{(}|\nabla u_{0}|^{2}-\frac{1}{4}\frac{u_{0}^{2}}{|x|^{2}(- \ln|x|)^{2}}\Big{)}dx=\int_{B_{1}}|\nabla w_{0}|^{2}(-\ln|x|)dx\] and \[\int_{B_{1}}|\nabla u_{0}|^{q}dx = \int_{B_{1}}\Big{|}(-\ln|x|)^{\frac{1}{2}}\nabla w_{0}-\frac{1}{ 2}(-\ln|x|)^{-\frac{1}{2}}\frac{x}{|x|^{2}}w_{0}\Big{|}^{q}dx\] \[\leq 2^{q}\int_{B_{1}}(-\ln|x|)^{\frac{q}{2}}|\nabla w_{0}|^{q}dx+ \int_{B_{1}}(-\ln|x|)^{-\frac{q}{2}}|x|^{-q}|w_{0}|^{q}dx.\] Since \[\int_{B_{1}}(-\ln|x|)^{\frac{q}{2}}|\nabla w_{0}|^{q}dx\leq\Big{(}\int_{B_{1}} (-\ln|x|)|\nabla w_{0}|^{2}dx\Big{)}^{\frac{q}{2}}|B_{1}|^{1-\frac{q}{2}},\] by the Holder inequality and (1.10) \[\int_{B_{1}}(-\ln|x|)^{-\frac{q}{2}}|x|^{-q}|w_{0}|^{q}dx\] \[\leq \Big{(}\int_{B_{1}}|x|^{-2}(-\ln|x|)^{-1}\big{(}1+|\ln\ln\frac{1} {|x|}|\big{)}^{-2}w_{0}^{2}dx\Big{)}^{\frac{q}{2}}\Big{(}\int_{B_{1}}\big{(}1+| \ln\ln\frac{1}{|x|}|\big{)}^{\frac{2q}{2-q}}dx\Big{)}^{1-\frac{q}{2}}\] \[\leq c\Big{(}\int_{B_{1}}(-\ln|x|)|\nabla w_{0}|^{2}dx\Big{)}^{\frac{q }{2}},\] where we have used \(\int_{B_{1}}\big{(}1+|\ln\ln\frac{1}{|x|}|\big{)}^{\frac{2q}{2-q}}dx<+\infty\). As a consequence, we obtain that for \(q\in[1,2)\), there exists \(c=c(q)\) such that \[\int_{B_{1}}|\nabla u_{0}|^{q}dx\leq c\Big{(}\int_{B_{1}}(-\ln|x|)|\nabla w_{0} |^{2}dx\Big{)}^{\frac{q}{2}}=c\Big{(}\int_{B_{1}}\Big{(}|\nabla u_{0}|^{2}- \frac{1}{4}\frac{u_{0}^{2}}{|x|^{2}(-\ln|x|)^{2}}\Big{)}dx\Big{)}^{\frac{q}{2}}.\] Hence the embedding \(\hat{\mathcal{H}}^{1}_{0}(B_{1})\hookrightarrow W^{1,q}_{0}(B_{1})\) is continuous for \(q\in[1,2)\). 
\((ii)\) Since \(W^{1,q}_{0}(B_{1})\hookrightarrow L^{p}(B_{1})\) is compact for \(p<\frac{2q}{2-q}\) (see [6, Theorem 7.10]), and \(\frac{2q}{2-q}\rightarrow+\infty\) as \(q\to 2^{-}\), for any \(p\in[1,+\infty)\), we may take \(q<2\) close to \(2\) such that \(p<\frac{2q}{2-q}\), and then the embedding \(\hat{\mathcal{H}}^{1}_{0}(B_{1})\hookrightarrow W^{1,q}_{0}(B_{1})\) is continuous, and the embedding \(W^{1,q}_{0}(B_{1})\hookrightarrow L^{p}(B_{1})\) is compact. It follows that \(\hat{\mathcal{H}}^{1}_{0}(B_{1})\hookrightarrow L^{p}(B_{1})\) is compact. This proves the conclusion in \((ii)\) with \(\alpha=0\). When \(\alpha\in(0,2)\), let \[\theta=\frac{2+\alpha}{2\alpha}>1,\qquad\theta^{\prime}=\frac{\theta}{\theta- 1}=\frac{2+\alpha}{2-\alpha}.\] Then by the Holder inequality \[\big{(}\int_{B_{1}}u^{p}|x|^{-\alpha}dx\big{)}^{\frac{1}{p}} \leq \big{(}\int_{B_{1}}u^{p\theta^{\prime}}dx\big{)}^{\frac{1}{p \theta^{\prime}}}\big{(}\int_{B_{1}}|x|^{-\alpha\theta}dx\big{)}^{\frac{1}{q}},\] where \(\alpha\theta\in(1,2)\). Now the compactness of embedding follows from the fact that \(W^{1,q}_{0}(B_{1})\hookrightarrow L^{p\theta^{\prime}}(B_{1})\) is compact for \[p\theta^{\prime}<\frac{2q}{2-q},\] which is satisfied for any given \(p\geq 1\) if we choose \(q\in(0,2)\) close enough to \(2\). **Proof of Theorem 1.3.** Let \(u_{n}\rightharpoonup 0\) weakly in \(\hat{\mathcal{H}}^{1}_{0}(B_{1})\) as \(n\rightarrow+\infty\). Then \(\{u_{n}\}\) is uniformly bounded in \(\hat{\mathcal{H}}^{1}_{0}(B_{1})\). For given \(\epsilon>0\), there exists \(\sigma_{0}\in(0,\frac{1}{8})\) such that for \(x\in B_{\sigma_{0}}\cup\big{(}B_{1}\setminus B_{1-\sigma_{0}}\big{)}\) there holds \[V(x)|x|^{2}(-\ln|x|)^{2}\big{(}1+|\ln\ln\frac{1}{|x|}|\big{)}^{2}\leq 2V(x)|x|^{ 2}(-\ln|x|)^{2}\big{|}\ln\frac{1}{|x|}\big{|}^{2}\leq\epsilon.\] Thus, by the improved Leray inequality (1.6) we obtain \[\int_{B_{\sigma_{0}}\cup\big{(}B_{1}\setminus B_{1-\sigma_{0}} \big{)}}|u_{n}(x)|^{2}Vdx\] \[\leq \epsilon\int_{B_{\sigma_{0}}\cup\big{(}B_{1}\setminus B_{1-\sigma _{0}}\big{)}}|u_{n}(x)|^{2}|x|^{-2}(-\ln|x|)^{-2}\big{(}1+|\ln\ln\frac{1}{|x|} \big{|}\big{)}^{-2}dx\] \[\leq \epsilon\int_{B_{1}}|u_{n}(x)|^{2}|x|^{-2}(-\ln|x|)^{-2}\big{(}1 +|\ln\ln\frac{1}{|x|}|\big{)}^{-2}dx\] \[\leq \epsilon\|u_{n}\|_{\hat{\mathcal{H}}^{1}_{0}}^{2}.\] Moreover, by the compactness embedding \(\hat{\mathcal{H}}^{1}_{0}(B_{1})\hookrightarrow L^{2}(B_{1})\), \[\int_{B_{1-\sigma_{0}}\setminus B_{\sigma_{0}}}|u_{n}(x)|^{2}Vdx \leq \Big{(}\max_{\frac{2}{B_{1-\sigma_{0}}\setminus B_{\sigma_{0}}}}V \Big{)}\int_{B_{1}}|u_{n}(x)|^{2}dx\leq\epsilon\mbox{ for all large }n.\] Therefore we obtain \[\int_{B_{1}}|u_{n}(x)|^{2}Vdx\to 0\quad\mbox{as }n\rightarrow+\infty.\] It follows that \(\hat{\mathcal{H}}^{1}_{0}(B_{1})\) is compactly embedded in \(L^{2}(B_{1},Vdx)\). To obtain the compactness of the embedding \[\hat{\mathcal{H}}^{1}_{0}(B_{r_{q}})\hookrightarrow L^{q}\big{(}B_{r_{q}},Vdx \big{)},\] we use (1.7) instead of (1.6), and repeat the above analysis with obvious modifications. The proof is completed. 
### General weighted spaces Recall that \(\mathcal{H}^{1}_{\mu,0}(B_{1})\) is the completion of \(C^{\infty}_{c}(B_{1})\) under the norm \[\|u\|_{\mu}=\sqrt{\int_{B_{1}}\Big{(}|\nabla u|^{2}+\mu\frac{u^{2}}{|x|^{2}(-\ln |x|)^{2}}\Big{)}dx}\] and it is a Hilbert space with the inner product \[\langle u,v\rangle_{\mu}=\int_{B_{1}}\Big{(}\nabla u\cdot\nabla v\,+\mu\frac{ uv}{|x|^{2}(-\ln|x|)^{2}}\Big{)}dx.\] Recall also that \[\mathcal{H}^{1}_{0}(B_{1})=\mathcal{H}^{1}_{0,0}(B_{1})\quad\text{and}\quad \hat{\mathcal{H}}^{1}_{0}(B_{1})=\mathcal{H}^{1}_{-\frac{1}{4},0}(B_{1}).\] We have the following result. **Theorem 3.1**.: _The following conclusions hold:_ \((i)\) _For \(\mu>-\frac{1}{4}\), \(\mathcal{H}^{1}_{\mu,0}(B_{1})=\mathcal{H}^{1}_{0}(B_{1})\)._ \((ii)\)_\(\mathcal{H}^{1}_{0}(B_{1})\subsetneq\hat{\mathcal{H}}^{1}_{0}(B_{1})\)._ **Proof.**\((i)\) By (1.1), for \(u\in\mathcal{H}^{1}_{\mu,0}(B_{1})\), we have \[\|u\|_{\mu}^{2}=\int_{B_{1}}|\nabla u|^{2}dx+\mu\int_{B_{1}}\frac{u^{2}}{|x|^ {2}(-\ln|x|)^{2}}dx\geq(1+4\min\{0,\mu\})\int_{B_{1}}|\nabla u|^{2}dx,\] \[\|u\|_{\mu}^{2}\leq\big{(}1+4\max\{0,\mu\}\big{)}\int_{B_{1}}|\nabla u|^{2}dx.\] The desired conclusion then follow immediately. \((ii)\) If \(u\in\mathcal{H}^{1}_{0}(B_{1})\), then by Leray's inequality, \[\int_{B_{1}}\frac{u^{2}}{|x|^{2}(-\ln|x|)^{2}}dx<+\infty.\] Hence \(u\in\hat{\mathcal{H}}^{1}_{0}(B_{1})\), which proves \(\mathcal{H}^{1}_{0}(B_{1})\subset\hat{\mathcal{H}}^{1}_{0}(B_{1})\). On the other hand, let \(h_{0}(x)=(-\ln|x|)^{\frac{1}{2}}\eta_{0}(2|x|)\), where \(\eta_{0}:\mathbb{R}\to[0,1]\) is a smooth cutoff function such that \[\eta_{0}(t)=\begin{cases}1&\text{ for }\,|t|<\frac{1}{2},\\ 0&\text{ for }\,|t|>1.\end{cases}\] Direct computation shows that \(h_{0}\in\hat{\mathcal{H}}^{1}_{0}(B_{1})\), but it is not in \(\mathcal{H}^{1}_{0}(B_{1})\). \(\square\) Recall that \(\mathcal{H}^{1}_{{}_{V},\mu,0}(B_{1})\) with \(\mu\geq-\frac{1}{4}\) is the completion of \(C^{\infty}_{c}(B_{1}\setminus\{0\})\) under the norm \(\|\cdot\|_{{}_{V},\mu}\) defined in (1.13). **Corollary 3.2**.: _Let \(\Omega\) be a domain in \(B_{1}\) containing the origin, \(V:B_{1}\setminus\{0\}\to[0,+\infty)\) be a continuous function satisfying_ \[\begin{cases}\limsup_{|x|\to 0^{+}}V(x)|x|^{2}(-\ln|x|)^{2}<+\infty,\\ \limsup_{|x|\to 1^{-}}V(x)(1-|x|)^{2}<+\infty.\end{cases} \tag{3.6}\] _Then for any \(\mu>0\), \(\mathcal{H}^{1}_{{}_{V},\mu,0}(\Omega)=\mathcal{H}^{1}_{0}(\Omega)\)._ **Proof.** It follows from (3.6) that \[V(x)\leq c|x|^{-2}(-\ln|x|)^{-2}\ \text{ for some }c>0\text{ and all }x\in B_{1}\setminus\{0\},\] which implies, in view of \(\mu>0\), \[\mathcal{H}^{1}_{{}_{V},\mu,0}(\Omega)\subset\mathcal{H}^{1}_{c\mu,0}(\Omega)= \mathcal{H}^{1}_{0}(\Omega).\] On the other hand, Leray's inequality implies \(\mathcal{H}^{1}_{0}(\Omega)\subset\mathcal{H}^{1}_{{}_{V},\mu,0}(\Omega)\). \(\square\) **Corollary 3.3**.: _Let \(\Omega\) be a domain in \(B_{1}\) containing the origin, \(V:B_{1}\setminus\{0\}\to[0,+\infty)\) be continuous and verify_ \[V(x)\leq\frac{1}{|x|^{2}(-\ln|x|)^{2}}. 
\tag{3.7}\] _Then the following conclusions hold:_ \((i)\) _For \(\mu\in(-\frac{1}{4},0)\), \(\mathcal{H}^{1}_{{}_{V},\mu,0}(\Omega)=\mathcal{H}^{1}_{0}(\Omega)\)._ \((ii)\) _For \(\mu=-\frac{1}{4}\), assume additionally that (1.16) holds, then \(\mathcal{H}^{1}_{0}(\Omega)\subsetneq\mathcal{H}^{1}_{{}_{V},-1/4,0}(\Omega)\)._ **Proof.** For \(\mu\in(-\frac{1}{4},0)\), by (3.7) and the Leray inequality, we easily see that \(\mathcal{H}^{1}_{{}_{V},\mu,0}(\Omega)=\mathcal{H}^{1}_{0}(\Omega)\) and so \((i)\) holds true. \((ii)\) Let \(r_{0}\in(0,1)\) be such that \(B_{r_{0}}(0)\subset\Omega\), and then take \(h_{0}(x):=(-\ln|x|)^{\frac{1}{2}}\eta_{0}(\frac{2}{r_{0}}|x|)\); then \(h_{0}\not\in\mathcal{H}^{1}_{0}(B_{r_{0}})\) by a direct computation. However, \(h_{0}\in\mathcal{H}^{1}_{{}_{V},-1/4,0}(B_{r_{0}})\) since \(h_{0}\in\hat{\mathcal{H}}^{1}_{0}(B_{r_{0}})\) and \[\int_{B_{r_{0}}}\Big{|}V-\frac{1}{|x|^{2}(-\ln|x|)^{2}}\Big{|}\,h_{0}^{2}\,dx<+\infty,\] which is guaranteed by (1.16). \(\square\) **Proof of Theorem 1.5.** \((i)\) Let \(w_{0}\in C^{1}_{c}(B_{1})\) and \[u_{0}(x)=(-\ln|x|)^{\frac{1}{2}}w_{0}(x).\] Then \(u_{0}\in C^{1}(B_{1}\setminus\{0\})\cap\hat{\mathcal{H}}^{1}_{0}(B_{1})\). Direct computation shows that \[\mathcal{I}(u_{0}) = \int_{B_{1}}\Big{(}|\nabla u_{0}|^{2}-\frac{1}{4}\frac{u_{0}^{2}}{|x|^{2}(-\ln|x|)^{2}}\Big{)}dx\] \[= \int_{B_{1}}\Big{(}\Big{|}-\frac{1}{2}(-\ln|x|)^{-\frac{1}{2}}\frac{x}{|x|^{2}}w_{0}+(-\ln|x|)^{\frac{1}{2}}\nabla w_{0}\Big{|}^{2}-\frac{1}{4}\frac{w_{0}^{2}}{|x|^{2}(-\ln|x|)}\Big{)}dx\] \[= \int_{B_{1}}|\nabla w_{0}|^{2}(-\ln|x|)dx-\frac{1}{2}\int_{B_{1}}\frac{x\cdot\nabla(w_{0}^{2})}{|x|^{2}}dx\] \[= \int_{B_{1}}|\nabla w_{0}|^{2}(-\ln|x|)dx,\] where \[\int_{B_{1}}\frac{x\cdot\nabla(w_{0}^{2})}{|x|^{2}}dx=\int_{B_{1}}\text{div}(\frac{xw_{0}^{2}}{|x|^{2}})dx=0.\] Then (1.10) and (1.11) follow from the embedding inequality (1.6) in Remark 1.4 by setting \(w_{0}(x)=(-\ln|x|)^{-\frac{1}{2}}u_{0}\). Parts \((ii)\) (a) and (b) follow from the proofs of Theorems 1.2 and 1.3, respectively. \(\square\) ## 4. Trudinger-Moser type inequalities for radial functions Define \[\mathcal{H}^{1}_{\text{rad},\mu,0}(B_{1}):=\left\{w\in\mathcal{H}^{1}_{\mu,0}(B_{1}):\,\text{$w$ is radially symmetric}\right\}.\] We will prove the following two theorems in this section. **Theorem 4.1**.: _Let \(\mu>-\frac{1}{4}\). Then_ \[\sup_{u\in\mathcal{H}^{1}_{\text{rad},\mu,0}(B_{1}),\,\,\|u\|_{\mu}\leq 1}\int_{B_{1}}e^{4\pi\sqrt{1+4\mu}\,u^{2}}dx<\infty,\] _and the result fails when \(4\pi\sqrt{1+4\mu}\) is replaced by any \(\alpha>4\pi\sqrt{1+4\mu}\)._ Note that in the above result \(4\pi\sqrt{1+4\mu}=m_{\mu}\to 0\) as \(\mu\to(-\frac{1}{4})^{+}\), which suggests that the inequality should be different in the case \(\mu=-\frac{1}{4}\). Our Trudinger-Moser type inequality for \(\mu=-\frac{1}{4}\) is the following. **Theorem 4.2**.: \((i)\) _For any \(p\in(0,1)\) and any \(\alpha>0\), there exists \(c=c_{p,\alpha}\) depending on \(p\) and \(\alpha\) such that for every \(u\in\mathcal{H}^{1}_{\mathrm{rad},-1/4,0}(B_{1})\) with \(\|u\|_{-1/4}\leq 1\), there holds_ \[\int_{B_{1}}e^{\alpha|u|^{p}}dx\leq c_{p,\alpha}.\] \((ii)\) _For any \(p\geq 1\) and any \(\alpha>0\), there exists a sequence \(\{u_{n}\}\) such that \(\|u_{n}\|_{-1/4}\leq 1\),_ \[\int_{B_{1}}e^{\alpha|u_{n}|^{p}}dx\to+\infty\quad\text{as}\,\,\,n\to+\infty.\] The following simple lemma will be very useful later.
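The computation \(\mathcal{I}(u_{0})=\int_{B_{1}}|\nabla w_{0}|^{2}(-\ln|x|)dx\) in the proof of Theorem 1.5 \((i)\) is easy to test by quadrature. The sketch below is our own addition; the radial test function \(w_{0}(r)=r^{2}(1-r)^{2}\) is an arbitrary choice, taken to vanish at both the origin and the boundary so that the divergence term drops.

```python
# Quadrature check (ours) of I(u0) = \int_{B_1} |grad w0|^2 (-ln|x|) dx
# for u0 = (-ln r)^{1/2} w0(r) with the test function w0(r) = r^2 (1-r)^2.
import numpy as np
from scipy.integrate import quad

w0  = lambda r: r**2 * (1 - r)**2
dw0 = lambda r: 2 * r * (1 - r) * (1 - 2 * r)

def du0(r):                                   # u0'(r)
    L = -np.log(r)
    return -0.5 * w0(r) / (r * np.sqrt(L)) + np.sqrt(L) * dw0(r)

lhs = 2 * np.pi * quad(lambda r: (du0(r)**2 - w0(r)**2 / (4 * r**2 * (-np.log(r)))) * r,
                       1e-9, 1 - 1e-9, limit=200)[0]
rhs = 2 * np.pi * quad(lambda r: dw0(r)**2 * (-np.log(r)) * r,
                       1e-9, 1 - 1e-9, limit=200)[0]
print(lhs, rhs)                               # the two values agree to quadrature accuracy
```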
**Lemma 4.3**.: _Let \(\mu>-\frac{1}{4}\), \(t_{1}>2\), \(\tau_{0}=\frac{1-\sqrt{1+4\mu}}{2}\), and_ \[\mathcal{J}_{t_{1}}(w):=\frac{1}{1-2\tau_{0}}\int_{2}^{+\infty}[w^{\prime}(t)]^{2}t^{2\tau_{0}}dt\,\,\,\,\text{for}\,\,\,\,w\in\mathbb{X}_{t_{1}}\] _with_ \[\begin{cases}\mathbb{X}_{t_{1}}:=\Big{\{}v\in C^{0,1}([2,\infty)):v(2)=0,\,\,v(t_{1})=C_{t_{1}}(t_{1}^{1-2\tau_{0}}-2^{1-2\tau_{0}})\Big{\}},\\ C_{t_{1}}:=\big{(}t_{1}^{1-2\tau_{0}}-2^{1-2\tau_{0}}\big{)}^{-1/2}.\end{cases}\] _Then \(\mathcal{J}_{t_{1}}(w)\geq 1\), with equality holding only if \(w=\zeta_{t_{1}}\), where_ \[\zeta_{t_{1}}(t):=\begin{cases}C_{t_{1}}(t^{1-2\tau_{0}}-2^{1-2\tau_{0}})&\text{ for}\,\,\,t\in[2,t_{1}],\\ C_{t_{1}}(t_{1}^{1-2\tau_{0}}-2^{1-2\tau_{0}})&\text{ for}\,\,\,t\in(t_{1},+\infty).\end{cases} \tag{4.1}\] **Proof.** Direct computation shows that \[\mathcal{J}_{t_{1}}(\zeta_{t_{1}}) = \frac{1}{1-2\tau_{0}}\int_{2}^{t_{1}}[\zeta_{t_{1}}^{\prime}(t)]^{2}t^{2\tau_{0}}dt = \big{(}t_{1}^{1-2\tau_{0}}-2^{1-2\tau_{0}}\big{)}^{-1}(1-2\tau_{0})\int_{2}^{t_{1}}t^{-2\tau_{0}}dt\,=1.\] Given \(w\in\mathbb{X}_{t_{1}}\), let \(v:=w-\zeta_{t_{1}}\); then \(v(2)=v(t_{1})=0\) and we have \[\mathcal{J}_{t_{1}}(w) = \frac{1}{1-2\tau_{0}}\int_{2}^{+\infty}\big{(}v^{\prime}+\zeta_{t_{1}}^{\prime}\big{)}^{2}t^{2\tau_{0}}dt = \frac{1}{1-2\tau_{0}}\int_{2}^{+\infty}\big{[}v^{\prime}(t)\big{]}^{2}t^{2\tau_{0}}dt+2C_{t_{1}}\int_{2}^{t_{1}}v^{\prime}(t)dt+1 = \frac{1}{1-2\tau_{0}}\int_{2}^{+\infty}\big{[}v^{\prime}(t)\big{]}^{2}t^{2\tau_{0}}dt+1 \geq 1,\] where the last inequality becomes an equality only if \(\int_{2}^{+\infty}v^{\prime}(t)^{2}t^{2\tau_{0}}dt=0\), which implies \(v^{\prime}(t)=0\) in \((2,+\infty)\), and we thus obtain, in view of \(v(2)=0\), \(v(t)=0\) in \((2,+\infty)\). \(\Box\) ### Proof of Theorem 4.1 Let \[C^{\infty}_{\mathrm{rad},c}(B_{1}):=\big{\{}w\in C^{\infty}_{c}(B_{1})\colon w\text{ is radially symmetric}\big{\}}.\] Note that \(C^{\infty}_{\mathrm{rad},c}(B_{1})\) is dense in \(\mathcal{H}^{1}_{\mathrm{rad},\mu,0}(B_{1})\). **Lemma 4.4**.: _Suppose \(\mu>-\frac{1}{4}\) and \(m_{\mu}=4\pi\sqrt{1+4\mu}\). Then_ \[\sup_{u\in\mathcal{H}^{1}_{\mathrm{rad},\mu,0}(B_{1}),\,\|u\|_{\mu}\leq 1}\int_{B_{1}}e^{m_{\mu}u^{2}}dx<\infty.\] **Proof.** Since \(C^{\infty}_{\mathrm{rad},c}(B_{1}\setminus\{0\})\) is dense in \(\mathcal{H}^{1}_{\mathrm{rad},\mu,0}(B_{1})\), to prove the desired inequality, we start with estimating \(\int_{B_{1}}e^{m_{\mu}u_{0}^{2}}dx\) for an arbitrary function \(u_{0}\in C^{\infty}_{\mathrm{rad},c}(B_{1}\setminus\{0\})\) with \(\|u_{0}\|_{\mu}\leq 1\). Denote \[\tau_{0}=\tau_{0}(\mu):=\frac{1-\sqrt{1+4\mu}}{2},\] and write \[u_{0}(r)=(1-\ln r)^{\tau_{0}}v_{0}(r)\quad\mbox{for}\;\;r\in(0,1).\] Then \(v_{0}(r)\) has compact support in \((0,1)\). Direct computations give \[\mathcal{I}(u_{0}) = 2\pi\int_{0}^{1}\Big{(}u_{0}^{\prime}(r)^{2}+\mu\frac{u_{0}(r)^{2}}{r^{2}(1-\ln r)^{2}}\Big{)}rdr\] \[= 2\pi\int_{0}^{1}v_{0}^{\prime}(r)^{2}(1-\ln r)^{2\tau_{0}}rdr-4\pi\tau_{0}\int_{0}^{1}v_{0}^{\prime}(r)v_{0}(r)(1-\ln r)^{2\tau_{0}-1}dr\] \[+2\pi(\tau_{0}^{2}+\mu)\int_{0}^{1}v_{0}^{2}(r)(1-\ln r)^{2\tau_{0}-2}r^{-1}dr\] \[= 2\pi\int_{0}^{1}v_{0}^{\prime}(r)^{2}(1-\ln r)^{2\tau_{0}}rdr,\] where we have used \(\tau_{0}^{2}-\tau_{0}-\mu=0\). Note also that \(\tau_{0}\in(0,\frac{1}{2})\) for \(\mu\in(-\frac{1}{4},0)\) and \(\tau_{0}<0\) for \(\mu>0\). Let \[r=e^{1-\frac{1}{2}t},\qquad v_{0}(r)=w_{0}(t).\] Then \(w_{0}(t)=0\) for \(t>2\) close to \(2\), and \(w_{0}(\infty)=0\).
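As an aside, Lemma 4.3 itself admits a quick numerical check; the sketch below is our own, with the arbitrary sample values \(\mu=1\) and \(t_{1}=7\): \(\mathcal{J}_{t_{1}}(\zeta_{t_{1}})=1\), while adding an admissible perturbation \(v\) with \(v(2)=v(t_{1})=0\) strictly increases the functional.

```python
# Quadrature check (ours) of Lemma 4.3 for the sample values mu = 1, t1 = 7.
import numpy as np
from scipy.integrate import quad

mu, t1 = 1.0, 7.0
tau0 = (1 - np.sqrt(1 + 4 * mu)) / 2
C = (t1**(1 - 2 * tau0) - 2**(1 - 2 * tau0))**-0.5

dzeta = lambda t: C * (1 - 2 * tau0) * t**(-2 * tau0)   # zeta'_{t1} on (2, t1)
# both test functions are constant beyond t1, so (2, t1) carries the whole integral
J = lambda dw: quad(lambda t: dw(t)**2 * t**(2 * tau0), 2, t1, limit=200)[0] / (1 - 2 * tau0)

print(J(dzeta))                                         # = 1, the minimum
dv = lambda t: np.pi / (t1 - 2) * np.cos(np.pi * (t - 2) / (t1 - 2))  # v(2) = v(t1) = 0
print(J(lambda t: dzeta(t) + 0.3 * dv(t)))              # > 1, as Lemma 4.3 asserts
```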
Moreover, \[\mathcal{I}(u_{0})=2\pi\int_{0}^{1}v_{0}^{\prime}(r)^{2}(1-\ln r)^{2\tau_{0}}rdr=4^{1-\tau_{0}}\pi\int_{2}^{\infty}w_{0}^{\prime}(t)^{2}t^{2\tau_{0}}dt.\] Thus, for given \(t>2\), \[w_{0}^{2}(t)=\Big{(}\int_{2}^{t}w_{0}^{\prime}(s)ds\Big{)}^{2} \leq \int_{2}^{t}w_{0}^{\prime}(s)^{2}s^{2\tau_{0}}ds\int_{2}^{t}s^{-2\tau_{0}}ds = (1-2\tau_{0})^{-1}\int_{2}^{t}w_{0}^{\prime}(s)^{2}s^{2\tau_{0}}ds\,(t^{1-2\tau_{0}}-2^{1-2\tau_{0}})\] and \[1\geq\mathcal{I}(u_{0})=4^{1-\tau_{0}}\pi\int_{2}^{\infty}w_{0}^{\prime}(s)^{2}s^{2\tau_{0}}ds\geq 4^{1-\tau_{0}}\pi(1-2\tau_{0})w_{0}^{2}(t)(t^{1-2\tau_{0}}-2^{1-2\tau_{0}})^{-1},\] which implies that \[w_{0}^{2}(t)\leq\frac{t^{1-2\tau_{0}}-2^{1-2\tau_{0}}}{4^{1-\tau_{0}}\pi(1-2\tau_{0})}\quad\mbox{ for }t>2. \tag{4.2}\] We now have, by simple calculations, \[\mathcal{N}(u_{0}):=\int_{B_{1}}e^{m_{\mu}u_{0}^{2}(|x|)}dx = 2\pi\int_{0}^{1}e^{m_{\mu}u_{0}^{2}(r)}rdr = \pi e^{2}\int_{2}^{+\infty}e^{m_{\mu}4^{-\tau_{0}}s^{2\tau_{0}}w_{0}^{2}(s)-s}ds = \pi e^{2}\int_{2}^{+\infty}e^{s^{2\tau_{0}}\tilde{w}_{0}^{2}(s)-s}ds,\] where \[\tilde{w}_{0}(t):=2^{-\tau_{0}}\sqrt{m_{\mu}}\,w_{0}(t).\] We note that \[4^{1-\tau_{0}}\pi(1-2\tau_{0})=4^{-\tau_{0}}m_{\mu}\] and hence, by (4.2), \[\tilde{w}_{0}^{2}(t)\leq t^{1-2\tau_{0}}-2^{1-2\tau_{0}},\quad t^{2\tau_{0}}\tilde{w}_{0}^{2}(t)-t\leq-2^{1-2\tau_{0}}t^{2\tau_{0}}.\] Therefore, when \(\mu\in(-1/4,0)\) and hence \(\tau_{0}\in(0,1/2)\), \[\mathcal{N}(u_{0})=\pi e^{2}\int_{2}^{+\infty}e^{s^{2\tau_{0}}\tilde{w}_{0}^{2}(s)-s}ds\leq\pi e^{2}\int_{2}^{\infty}e^{-2^{1-2\tau_{0}}s^{2\tau_{0}}}ds<\infty.\] This completes the proof for the case \(\mu\in(-1/4,0)\). It remains to consider the case \(\mu>0\) and hence \(\tau_{0}<0\). Since \[\tilde{w}_{0}^{2}(t)\leq t^{1-2\tau_{0}}-2^{1-2\tau_{0}}=\left[C_{t}\left(t^{1-2\tau_{0}}-2^{1-2\tau_{0}}\right)\right]^{2}\ \text{for $t>2$},\] we have \[\sigma_{0}:=1-\sup_{t>2}\frac{\tilde{w}_{0}(t)}{C_{t}\left(t^{1-2\tau_{0}}-2^{1-2\tau_{0}}\right)}\geq 0.\] Since \(w_{0}(t)\), and hence \(\tilde{w}_{0}(t)\), vanishes for \(t>2\) close to \(2\), while \[w_{0}(\infty)=\tilde{w}_{0}(\infty)=0\quad\text{and}\quad\lim_{t\to\infty}C_{t}(t^{1-2\tau_{0}}-2^{1-2\tau_{0}})=\infty,\] there exists \(t_{1}\in(2,+\infty)\) such that \[\sigma_{0}=1-\frac{\tilde{w}_{0}(t_{1})}{C_{t_{1}}(t_{1}^{1-2\tau_{0}}-2^{1-2\tau_{0}})}.\] Let us also note that \[\mathcal{J}_{t_{1}}(\tilde{w}_{0}) = \frac{1}{1-2\tau_{0}}\int_{2}^{+\infty}[\tilde{w}_{0}^{\prime}(t)]^{2}t^{2\tau_{0}}dt = 4^{1-\tau_{0}}\pi\int_{2}^{\infty}[w_{0}^{\prime}(t)]^{2}t^{2\tau_{0}}\,dt\ \ =\mathcal{I}(u_{0})\leq 1.\] We first show that \(\sigma_{0}=0\) cannot happen. Indeed, if \(\sigma_{0}=0\), then \[\tilde{w}_{0}(t_{1})=C_{t_{1}}(t_{1}^{1-2\tau_{0}}-2^{1-2\tau_{0}}),\] and it follows from Lemma 4.3 that \[\tilde{w}_{0}\equiv\zeta_{t_{1}}\quad\text{in}\ \,(2,\infty),\] where \(\zeta_{t_{1}}\) is defined in (4.1). Thus \(\tilde{w}_{0}(t)>0\) for all \(t>2\), which contradicts the fact that \(\tilde{w}_{0}(t)=0\) for \(t>2\) close to \(2\). Thus we must have \(\sigma_{0}\in(0,1)\).
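The two algebraic identities used above can be confirmed symbolically; the following sketch is our own verification.

```python
# Symbolic check (ours) of two identities from the proof of Lemma 4.4.
import sympy as sp

mu = sp.symbols('mu', positive=True)
tau0 = (1 - sp.sqrt(1 + 4 * mu)) / 2
m_mu = 4 * sp.pi * sp.sqrt(1 + 4 * mu)

# (a)  4^{1 - tau0} * pi * (1 - 2*tau0) = 4^{-tau0} * m_mu
lhs_a = sp.expand_power_exp(4**(1 - tau0) * sp.pi * (1 - 2 * tau0) - 4**(-tau0) * m_mu)
print(lhs_a.equals(0))    # True

# (b)  t^{2 tau0} (t^{1-2 tau0} - 2^{1-2 tau0}) - t = -2^{1-2 tau0} t^{2 tau0}
t = sp.symbols('t', positive=True)
expr = t**(2 * tau0) * (t**(1 - 2 * tau0) - 2**(1 - 2 * tau0)) - t \
       + 2**(1 - 2 * tau0) * t**(2 * tau0)
print(sp.expand(expr))    # 0
```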
We now have \[s^{2\tau_{0}}\tilde{w}_{0}^{2}(s)-s\leq(1-\sigma_{0})^{2}C_{s}^{2}s^{2-2\tau_{0}}-s=\left[(1-\sigma_{0})^{2}C_{s}^{2}s^{1-2\tau_{0}}-1\right]s\quad\text{for $s>2$},\] and also, by (4.2), \[s^{2\tau_{0}}\tilde{w}_{0}^{2}(s)-s\leq-(1-2^{2\tau_{0}-2}\pi^{-1})s\ \text{ for $s>2$}.\] Therefore, \[\mathcal{N}(u_{0}) = \pi e^{2}\int_{2}^{3}e^{s^{2\tau_{0}}\tilde{w}_{0}^{2}(s)-s}ds+\pi e^{2}\int_{3}^{\infty}e^{s^{2\tau_{0}}\tilde{w}_{0}^{2}(s)-s}ds \leq \pi e^{2}\int_{2}^{3}e^{-(1-2^{2\tau_{0}-2}\pi^{-1})s}ds+\pi e^{2}\int_{3}^{\infty}e^{s^{2\tau_{0}}\tilde{w}_{0}^{2}(s)-s}ds.\] Define \[G(\sigma_{0}):=\int_{3}^{\infty}e^{s^{2\tau_{0}}\tilde{w}_{0}^{2}(s)-s}ds.\] Clearly, \[(1-\sigma_{0})^{2}C_{s}^{2}s^{1-2\tau_{0}}-1=(1-\sigma_{0})^{2}\frac{s^{1-2\tau_{0}}}{s^{1-2\tau_{0}}-2^{1-2\tau_{0}}}-1<-\sigma_{0}\ \text{ for all large $s$}.\] It follows that \[G(\sigma_{0})\leq\tilde{G}(\sigma_{0}):=\int_{3}^{\infty}e^{[(1-\sigma_{0})^{2}C_{s}^{2}s^{1-2\tau_{0}}-1]s}ds<\infty.\] Since \(\tilde{G}(\sigma_{0})\) is a continuous decreasing function of \(\sigma_{0}\) for \(\sigma_{0}\in(0,1)\), it is now clear that to obtain our desired result, it suffices to show that \[\limsup_{\sigma_{0}\to 0^{+}}G(\sigma_{0})<\infty.\] Thanks to \[\mathcal{J}_{t_{1}}(\zeta_{t_{1}})=\frac{1}{1-2\tau_{0}}\int_{2}^{t_{1}}\zeta_{t_{1}}^{\prime}(s)^{2}s^{2\tau_{0}}ds=1,\] \[\mathcal{J}_{t_{1}}(\tilde{w}_{0})=\frac{1}{1-2\tau_{0}}\int_{2}^{\infty}[\tilde{w}_{0}^{\prime}(t)]^{2}t^{2\tau_{0}}dt=\mathcal{I}(u_{0})\leq 1\] and \[\int_{2}^{t_{1}}\big{[}\tilde{w}_{0}^{\prime}(s)-(1-\sigma_{0})\zeta_{t_{1}}^{\prime}(s)\big{]}\zeta_{t_{1}}^{\prime}(s)s^{2\tau_{0}}ds=C_{t_{1}}(1-2\tau_{0})\int_{2}^{t_{1}}\big{[}\tilde{w}_{0}^{\prime}(s)-(1-\sigma_{0})\zeta_{t_{1}}^{\prime}(s)\big{]}ds=0,\] we obtain \[\int_{2}^{t_{1}}\big{(}\tilde{w}_{0}^{\prime}-(1-\sigma_{0})\zeta_{t_{1}}^{\prime}\big{)}^{2}s^{2\tau_{0}}ds+\int_{t_{1}}^{\infty}(\tilde{w}_{0}^{\prime})^{2}s^{2\tau_{0}}ds\] \[= \int_{2}^{t_{1}}\big{[}\tilde{w}_{0}^{\prime}-(1-\sigma_{0})\zeta_{t_{1}}^{\prime}+(1-\sigma_{0})\zeta_{t_{1}}^{\prime}\big{]}^{2}s^{2\tau_{0}}ds-\int_{2}^{t_{1}}\big{[}(1-\sigma_{0})\zeta_{t_{1}}^{\prime}\big{]}^{2}s^{2\tau_{0}}ds+\int_{t_{1}}^{\infty}(\tilde{w}_{0}^{\prime})^{2}s^{2\tau_{0}}ds\] \[\leq (1-2\tau_{0})[1-(1-\sigma_{0})^{2}]\] \[= (1-2\tau_{0})\sigma_{0}(2-\sigma_{0}).\] Therefore, for \(s>t_{1}\), \[\tilde{w}_{0}(s) \leq \tilde{w}_{0}(t_{1})+\left(\int_{t_{1}}^{s}\tilde{w}_{0}^{\prime}(t)^{2}t^{2\tau_{0}}dt\right)^{\frac{1}{2}}\Big{(}\int_{t_{1}}^{s}t^{-2\tau_{0}}dt\Big{)}^{\frac{1}{2}} \leq \min\Big{\{}t_{1}^{\frac{1-2\tau_{0}}{2}},(1-\sigma_{0})C_{t_{1}}t_{1}^{1-2\tau_{0}}\Big{\}}+\sqrt{\sigma_{0}(2-\sigma_{0})}\Big{(}s^{\frac{1-2\tau_{0}}{2}}-t_{1}^{\frac{1-2\tau_{0}}{2}}\Big{)},\] and for \(s\in(2,t_{1}]\), \[\tilde{w}_{0}(s)-(1-\sigma_{0})\zeta_{t_{1}}(s) \leq \left(\int_{2}^{s}\big{[}\tilde{w}_{0}^{\prime}-(1-\sigma_{0})\zeta_{t_{1}}^{\prime}\big{]}^{2}t^{2\tau_{0}}dt\right)^{\frac{1}{2}}\Big{(}\int_{2}^{s}t^{-2\tau_{0}}dt\Big{)}^{\frac{1}{2}} \leq \sqrt{\sigma_{0}(2-\sigma_{0})}s^{\frac{1-2\tau_{0}}{2}},\] \[\tilde{w}_{0}(s)-(1-\sigma_{0})\zeta_{t_{1}}(s) \leq \left(\int_{s}^{t_{1}}\big{[}\tilde{w}_{0}^{\prime}-(1-\sigma_{0})\zeta_{t_{1}}^{\prime}\big{]}^{2}t^{2\tau_{0}}dt\right)^{\frac{1}{2}}\Big{(}\int_{s}^{t_{1}}t^{-2\tau_{0}}dt\Big{)}^{\frac{1}{2}} \leq \sqrt{\sigma_{0}(2-\sigma_{0})}\Big{(}t_{1}^{\frac{1-2\tau_{0}}{2}}-s^{\frac{1-2\tau_{0}}{2}}\Big{)}.\] Thus
\[s^{2\tau_{0}}\tilde{w}_{0}(s)^{2}-s\!\leq\!\begin{cases}\big{[}\min\Big{\{}t_{1}^{\frac{1-2\tau_{0}}{2}},(1\!-\!\sigma_{0})C_{t_{1}}t_{1}^{1-2\tau_{0}}\Big{\}}s^{\tau_{0}}\!+\!\sqrt{\sigma_{0}(2\!-\!\sigma_{0})}(s^{\frac{1}{2}}\!-\!s^{\tau_{0}}t_{1}^{\frac{1}{2}-\tau_{0}})\big{]}^{2}\!-\!s,&s\!>\!t_{1},\\ \big{[}(1\!-\!\sigma_{0})s^{\tau_{0}}\zeta_{t_{1}}(s)\!+\!\sqrt{\sigma_{0}(2-\sigma_{0})}s^{\tau_{0}}\min\big{\{}s^{\frac{1-2\tau_{0}}{2}},t_{1}^{\frac{1-2\tau_{0}}{2}}-s^{\frac{1-2\tau_{0}}{2}}\big{\}}\big{]}^{2}\!-\!s,&s\!\in\!(2,t_{1}].\end{cases}\] If \(t_{1}\leq 3\), then \[s^{2\tau_{0}}\tilde{w}_{0}(s)^{2}-s\leq\big{[}\sqrt{3}+\sqrt{\sigma_{0}(2-\sigma_{0})s}\big{]}^{2}-s\text{ for }s>3,\] and hence \[\limsup_{\sigma_{0}\to 0^{+}}G(\sigma_{0})\leq\lim_{\sigma_{0}\to 0^{+}}\int_{3}^{\infty}e^{\big{[}\sqrt{3}+\sqrt{\sigma_{0}(2-\sigma_{0})s}\big{]}^{2}-s}ds<\infty.\] If \(t_{1}\in(3,M]\) for some \(M>0\), then \[G(\sigma_{0}) \leq\int_{3}^{t_{1}}e^{\big{[}(1-\sigma_{0})s^{\tau_{0}}\zeta_{t_{1}}(s)+\sqrt{\sigma_{0}(2-\sigma_{0})s}\big{]}^{2}-s}ds+\int_{t_{1}}^{\infty}e^{\big{[}(1-\sigma_{0})C_{t_{1}}t_{1}^{1-2\tau_{0}}s^{\tau_{0}}+\sqrt{\sigma_{0}(2-\sigma_{0})s}\big{]}^{2}-s}ds\] \[\leq\int_{3}^{M}e^{\big{[}(1-\sigma_{0})s^{\tau_{0}}\zeta_{M}(s)+\sqrt{\sigma_{0}(2-\sigma_{0})s}\big{]}^{2}-s}ds+\int_{3}^{\infty}e^{\big{[}(1-\sigma_{0})C_{M}M^{1-2\tau_{0}}s^{\tau_{0}}+\sqrt{\sigma_{0}(2-\sigma_{0})s}\big{]}^{2}-s}ds\] \[\to\int_{3}^{M}e^{s^{2\tau_{0}}\zeta_{M}(s)^{2}-s}ds+\int_{3}^{\infty}e^{\big{[}C_{M}M^{1-2\tau_{0}}s^{\tau_{0}}\big{]}^{2}-s}ds\] \[<+\infty\ \ \ \text{as}\ \sigma_{0}\to 0^{+}.\] Therefore, if the desired result in the lemma does not hold, then we can find a positive sequence \(\{\sigma_{n}\}\) decreasing to \(0\) as \(n\to\infty\), and a positive sequence \(\{s_{n}\}\) increasing to \(\infty\) as \(n\to\infty\) such that \[\int_{3}^{s_{n}}e^{A_{n}(s)^{2}-s}ds+\int_{s_{n}}^{\infty}e^{B_{n}(s)^{2}-s}ds\to\infty\ \text{as}\ n\to\infty, \tag{4.3}\] where \[\begin{cases}A_{n}(s)=(1-\sigma_{n})s^{\tau_{0}}\zeta_{s_{n}}(s)+\sqrt{\sigma_{n}(2-\sigma_{n})}\min\big{\{}s^{\frac{1}{2}},s^{\tau_{0}}s_{n}^{\frac{1}{2}-\tau_{0}}-s^{\frac{1}{2}}\big{\}},\\ B_{n}(s)=(1-\sigma_{n})C_{s_{n}}s_{n}^{1-2\tau_{0}}s^{\tau_{0}}+\sqrt{\sigma_{n}(2-\sigma_{n})}(s^{\frac{1}{2}}-s^{\tau_{0}}s_{n}^{\frac{1}{2}-\tau_{0}}).\end{cases}\] **Step 1.** Estimate of \(\int_{3}^{s_{n}}e^{A_{n}(s)^{2}-s}ds\). Fix \(\delta\in(0,1)\) small.
For \(3\leq s\leq(1-\delta)s_{n}\), we have \[A_{n}(s) \leq(1-\sigma_{n})s^{\tau_{0}}\zeta_{s_{n}}(s)+\sqrt{\sigma_{n}( 2-\sigma_{n})}s^{\frac{1}{2}}\] \[=\Big{[}(1-\sigma_{n})\sqrt{\frac{s^{1-2\tau_{0}}}{s_{n}^{1-2\tau _{0}}-2^{1-2\tau_{0}}}}+\sqrt{\sigma_{n}(2-\sigma_{n})}\Big{]}s^{\frac{1}{2}}\] \[\leq\Big{[}(1-\sigma_{n})(1-\delta)^{\frac{1-2\tau_{0}}{2}}\sqrt{ \frac{s_{n}^{1-2\tau_{0}}}{s_{n}^{1-2\tau_{0}}-2^{1-2\tau_{0}}}}+\sqrt{\sigma _{n}(2-\sigma_{n})}\Big{]}s^{\frac{1}{2}}\] \[=[(1-\delta)^{\frac{1-2\tau_{0}}{2}}+o(1)]s^{\frac{1}{2}}.\] It follows that \[A_{n}(s)^{2}-s=[(1-\delta)^{1-2\tau_{0}}-1+o(1)]s\leq-\delta_{0}s\ \text{with}\ \delta_{0}=\tfrac{1-(1-\delta)^{1-2\tau_{0}}}{2}>0\ \text{for all large}\ n.\] Therefore \[\int_{3}^{(1-\delta)s_{n}}e^{A_{n}(s)^{2}-s}ds\leq\int_{3}^{(1-\delta)s_{n}}e^ {-\delta_{0}s}ds\leq e^{-3\delta_{0}}/\delta_{0}\ \text{for all large}\ n.\] Next we fix \(r>0\) small so that \[r\frac{1-2\tau_{0}}{\sqrt{2}}<1.\] Then for \((1-\delta)s_{n}\leq s\leq(1-r\sqrt{\sigma_{n}})s_{n}\), we have \[A_{n}(s) \leq(1-\sigma_{n})s^{\tau_{0}}\zeta_{s_{n}}(s)+\sqrt{\sigma_{n}( 2-\sigma_{n})}\Big{[}(1-\delta)^{\tau_{0}-\frac{1}{2}}-1\Big{]}s^{\frac{1}{2}}\] \[=\Big{\{}(1-\sigma_{n})\sqrt{\frac{s^{1-2\tau_{0}}}{s_{n}^{1-2 \tau_{0}}-2^{1-2\tau_{0}}}}+\sqrt{\sigma_{n}(2-\sigma_{n})}\Big{[}(1-\delta)^{ \tau_{0}-\frac{1}{2}}-1\Big{]}\Big{\}}s^{\frac{1}{2}}\] \[\leq\Big{\{}(1-\sigma_{n})\Big{(}1-r\sqrt{\sigma_{n}}\Big{)}^{ \frac{1-2\tau_{0}}{2}}\sqrt{\frac{s_{n}^{1-2\tau_{0}}}{s_{n}^{1-2\tau_{0}}-2^{1 -2\tau_{0}}}}+\sqrt{\sigma_{n}(2-\sigma_{n})}\Big{[}(1-\delta)^{\tau_{0}- \frac{1}{2}}-1\Big{]}\Big{\}}s^{\frac{1}{2}}\] \[=\bigg{\{}1-\Big{(}\frac{1-2\tau_{0}}{2}r-\sqrt{2}\Big{[}(1-\delta )^{\tau_{0}-\frac{1}{2}}-1\Big{]}+o(1)\Big{)}\sqrt{\sigma_{n}}+[2^{-2\tau_{0}} +o(1)]s_{n}^{2\tau_{0}-1}\bigg{\}}\ s^{\frac{1}{2}}.\] By choosing \(\delta>0\) small enough, we have \[\sigma^{*}:=\frac{1-2\tau_{0}}{2}r-\sqrt{2}\Big{[}(1-\delta)^{\tau_{0}-\frac{ 1}{2}}-1\Big{]}>0,\] and hence, for \((1-\delta)s_{n}\leq s\leq(1-r\sqrt{\sigma_{n}})s_{n}\), \[A_{n}(s)^{2}-s\leq-\sigma^{*}\sqrt{\sigma_{n}}s+4^{1-\tau_{0}}s_{n}^{2\tau_{0}-1 }s\leq-\sigma^{*}\sqrt{\sigma_{n}}s_{n}+4^{1-\tau_{0}}s_{n}^{2\tau_{0}}.\] It follows that \[\int_{(1-\delta)s_{n}}^{(1-r\sqrt{\sigma_{n}})s_{n}}e^{A_{n}(s)^{2}-s}ds\leq \delta s_{n}e^{-\sigma^{*}\sqrt{\sigma_{n}}s_{n}+4^{1-\tau_{0}}s_{n}^{2\tau_{0 }}}\to 0\text{ as }n\to\infty,\] provided that \[\sqrt{\sigma_{n}}s_{n}\geq s_{n}^{\epsilon}\text{ for some small }\epsilon>0\text{ and all large }n. 
\tag{4.4}\] For \((1-r\sqrt{\sigma_{n}})s_{n}\leq s\leq s_{n}\), we have \[A_{n}(s) \leq(1-\sigma_{n})s^{\tau_{0}}\zeta_{s_{n}}(s)\!+\!\sqrt{\sigma_ {n}(2-\sigma_{n})}\Big{[}\Big{(}1-r\sqrt{\sigma_{n}}\Big{)}^{\tau_{0}-\frac{1 }{2}}-1\Big{]}s^{\frac{1}{2}}\] \[=(1-\sigma_{n})C_{s_{n}}s^{1-\tau_{0}}+\sqrt{\sigma_{n}(2-\sigma_ {n})}\Big{[}\Big{(}1-r\sqrt{\sigma_{n}}\Big{)}^{\tau_{0}-\frac{1}{2}}-1\Big{]} s^{\frac{1}{2}}=:\tilde{A}_{n}(s),\] and so \[\tilde{A}_{n}(s) \geq(1-\sigma_{n})(1-r\sqrt{\sigma_{n}})^{\frac{1-2\tau_{0}}{2}} \frac{s_{n}^{\frac{1-2\tau_{0}}{2}}}{\sqrt{s_{n}^{1-2\tau_{0}}-2^{1-2\tau_{0}} }}s^{\frac{1}{2}}+o(1)s^{\frac{1}{2}}=[1+o(1)]s^{\frac{1}{2}}.\] \[\tilde{A}_{n}^{\prime}(s) =(1-\sigma_{n})C_{s_{n}}(1-\tau_{0})s^{-\tau_{0}}+\sqrt{\sigma_{n }(2-\sigma_{n})}\Big{[}\Big{(}1-r\sqrt{\sigma_{n}}\Big{)}^{\tau_{0}-\frac{1}{2 }}-1\Big{]}\frac{1}{2}s^{-\frac{1}{2}}\] \[\geq(1-\sigma_{n})(1-\tau_{0})(1-r\sqrt{\sigma_{n}})^{-\tau_{0}} \frac{s_{n}^{\frac{1-2\tau_{0}}{2}}}{\sqrt{s_{n}^{1-2\tau_{0}}-2^{1-2\tau_{0} }}}s^{-\frac{1}{2}}+o(1)s^{-\frac{1}{2}}\] \[=[1-\tau_{0}+o(1)]s^{-\frac{1}{2}},\] \[\Big{(}e^{\tilde{A}_{n}(s)^{2}-s}\Big{)}^{\prime} =e^{\tilde{A}_{n}(s)^{2}-s}[2\tilde{A}_{n}(s)\tilde{A}_{n}^{\prime} (s)-1]\] \[\geq\ [1-2\tau_{0}+o(1)]e^{\tilde{A}_{n}(s)^{2}-s}\ \geq e^{\tilde{A}_{n}(s)^{2}-s}\ \text{ for all large }n.\] We thus have \[\int_{(1-r\sqrt{\sigma_{n}})s_{n}}^{s_{n}}e^{A_{n}(s)^{2}-s}ds \leq\int_{(1-r\sqrt{\sigma_{n}})s_{n}}^{s_{n}}e^{\tilde{A}_{n}(s) ^{2}-s}ds\] \[\leq\int_{(1-r\sqrt{\sigma_{n}})s_{n}}^{s_{n}}\Big{(}e^{\tilde{A} _{n}(s)^{2}-s}\Big{)}^{\prime}\,ds\leq e^{\tilde{A}_{n}(s_{n})^{2}-s_{n}}.\] Since \[\tilde{A}_{n}(s_{n}) \leq(1-\sigma_{n})s_{n}^{\tau_{0}}\zeta_{s_{n}}(s_{n})\!+\!\sqrt{ \sigma_{n}(2-\sigma_{n})}\Big{[}\Big{(}1-r\sqrt{\sigma_{n}}\Big{)}^{\tau_{0}- \frac{1}{2}}-1\Big{]}s_{n}^{\frac{1}{2}}\] \[=(1-\sigma_{n})\frac{s_{n}^{\frac{1-2\tau_{0}}{2}}}{\sqrt{s_{n}^ {1-2\tau_{0}}-2^{1-2\tau_{0}}}}s_{n}^{\frac{1}{2}}\!+\!\sqrt{\sigma_{n}(2- \sigma_{n})}\Big{[}\Big{(}1-r\sqrt{\sigma_{n}}\Big{)}^{\tau_{0}-\frac{1}{2}}-1 \Big{]}s_{n}^{\frac{1}{2}}\] \[=\bigg{\{}(1-\sigma_{n})\Big{[}1+\Big{(}\frac{1}{2}+o(1)\Big{)}2^ {1-2\tau_{0}}s_{n}^{2\tau_{0}-1}\Big{]}+\Big{(}\frac{1-2\tau_{0}}{\sqrt{2}}r+o( 1)\Big{)}\sigma_{n}\bigg{\}}\,s_{n}^{\frac{1}{2}}\] \[=\bigg{\{}1-\sigma_{n}\Big{[}1-\frac{1-2\tau_{0}}{\sqrt{2}}r+o( 1)\Big{]}+\Big{[}2^{-2\tau_{0}}+o(1)\Big{]}s_{n}^{2\tau_{0}-1}\bigg{\}}\,s_{n} ^{\frac{1}{2}}\] \[\leq\left(1+2^{1-2\tau_{0}}s_{n}^{2\tau_{0}-1}\right)s_{n}^{\frac {1}{2}}\ \text{ for all large }n\text{ due to the choice of }r,\] we obtain \[\tilde{A}_{n}(s_{n})^{2}-s_{n}\leq 2^{3-2\tau_{0}}s_{n}^{2\tau_{0}}\text{ for all large }n,\] and \[\int_{(1-r\sqrt{\sigma_{n}})s_{n}}^{s_{n}}e^{A_{n}(s)^{2}-s}ds\leq e^{\tilde{A}_ {n}(s_{n})^{2}-s_{n}}\leq e^{2^{3-2\tau_{0}}s_{n}^{2\tau_{0}}}\to 1\ \ \text{ as }n\to\infty.\] If (4.4) does not hold, then for any small \(\epsilon>0\), there is a subsequence of \(\{\sigma_{n}\}\), still denoted by itself for simplicity of notation, such that \[\sqrt{\sigma_{n}}\leq s_{n}^{\epsilon-1}\text{ for all }n\geq 1.\] Therefore, for fixed \(\sigma\in(1-\epsilon,1)\) and \(s\in[(1-\delta)s_{n},(1-s_{n}^{-\sigma})s_{n}]\), \[A_{n}(s) =s^{\tau_{0}}\zeta_{s_{n}}(s)+O(s_{n}^{\epsilon-1})s^{\frac{1}{2}}\] \[\leq(1-s_{n}^{-\sigma})^{\frac{1-2\tau_{0}}{2}}\frac{s_{n}^{\frac {1-2\tau_{0}}{2}}}{\sqrt{s_{n}^{1-2\tau_{0}}-2^{1-2\tau_{0}}}}s^{\frac{1}{2}}+ O(s_{n}^{\epsilon-1})s^{\frac{1}{2}}\] 
\[=\big{[}1-\frac{1-2\tau_{0}}{2}s_{n}^{-\sigma}+o(s_{n}^{-\sigma})\big{]}s^{\frac{1}{2}}.\] Hence \[A_{n}(s)^{2}-s\leq\big{[}-(1-2\tau_{0})s_{n}^{-\sigma}+o(s_{n}^{-\sigma})\big{]}s\leq-s_{n}^{-\sigma}s\leq-(1-\delta)s_{n}^{1-\sigma}\text{ for all large }n.\] It follows that \[\int_{(1-\delta)s_{n}}^{(1-s_{n}^{-\sigma})s_{n}}e^{A_{n}(s)^{2}-s}ds\leq\delta s_{n}e^{-(1-\delta)s_{n}^{1-\sigma}}\to 0\text{ as }n\to\infty.\] For \(s\in[(1-s_{n}^{-\sigma})s_{n},s_{n}]\), we have \[A_{n}(s)\leq(1-\sigma_{n})C_{s_{n}}s^{1-\tau_{0}}+\sqrt{\sigma_{n}(2-\sigma_{n})}\Big{[}\Big{(}1-s_{n}^{-\sigma}\Big{)}^{\tau_{0}-\frac{1}{2}}-1\Big{]}s^{\frac{1}{2}}=:A_{n}^{*}(s),\] and \[A_{n}^{*}(s)=[1+o(1)]s^{1/2},\ A_{n}^{*}(s)^{\prime}=[1-\tau_{0}+o(1)]s^{-1/2}.\] Therefore \[\Big{(}e^{A_{n}^{*}(s)^{2}-s}\Big{)}^{\prime}=e^{A_{n}^{*}(s)^{2}-s}[2A_{n}^{*}(s)A_{n}^{*}(s)^{\prime}-1]=e^{A_{n}^{*}(s)^{2}-s}[1-2\tau_{0}+o(1)]\geq e^{A_{n}(s)^{2}-s}\] for all large \(n\). It follows that \[\int_{(1-s_{n}^{-\sigma})s_{n}}^{s_{n}}e^{A_{n}(s)^{2}-s}ds\leq\int_{(1-s_{n}^{-\sigma})s_{n}}^{s_{n}}e^{A_{n}^{*}(s)^{2}-s}\leq\int_{(1-s_{n}^{-\sigma})s_{n}}^{s_{n}}\Big{(}e^{A_{n}^{*}(s)^{2}-s}\Big{)}^{\prime}\,ds\leq e^{A_{n}^{*}(s_{n})^{2}-s_{n}}\] for all large \(n\). We now calculate \[A_{n}^{*}(s_{n}) =(1-\sigma_{n})C_{s_{n}}s_{n}^{1-\tau_{0}}+\sqrt{\sigma_{n}(2-\sigma_{n})}\Big{[}\Big{(}1-s_{n}^{-\sigma}\Big{)}^{\tau_{0}-\frac{1}{2}}-1\Big{]}s_{n}^{\frac{1}{2}}\] \[=\Big{\{}(1-\sigma_{n})\frac{s_{n}^{\frac{1-2\tau_{0}}{2}}}{\sqrt{s_{n}^{1-2\tau_{0}}-2^{1-2\tau_{0}}}}+\big{[}\frac{2\tau_{0}-1}{\sqrt{2}}+o(1)\big{]}\sqrt{\sigma_{n}}s_{n}^{-\sigma}\Big{\}}s_{n}^{1/2}\] \[=\Big{\{}1+O(s_{n}^{2\epsilon-2})+O(s_{n}^{2\tau_{0}-1})+O(s_{n}^{\epsilon-1-\sigma})\Big{\}}s_{n}^{1/2}.\] Therefore \[A_{n}^{*}(s_{n})^{2}-s_{n}=\Big{\{}O(s_{n}^{2\epsilon-2})+O(s_{n}^{2\tau_{0}-1})+O(s_{n}^{\epsilon-1-\sigma})\Big{\}}s_{n}=o(1)\] provided that \(\epsilon>0\) is sufficiently small, which gives \[\int_{(1-s_{n}^{-\sigma})s_{n}}^{s_{n}}e^{A_{n}(s)^{2}-s}ds\leq e^{A_{n}^{*}(s_{n})^{2}-s_{n}}=e^{o(1)}\to 1\text{ as }n\to\infty.\] Summarising, we have proved \[\liminf_{n\to\infty}\int_{3}^{s_{n}}e^{A_{n}(s)^{2}-s}ds<\infty.\] **Step 2.** Estimate of \(\int_{s_{n}}^{\infty}e^{B_{n}(s)^{2}-s}ds\). Clearly, for \(s\geq s_{n}\), \[B_{n}(s)\leq(1-\sigma_{n})C_{s_{n}}s_{n}^{1-2\tau_{0}}s^{\tau_{0}}+\sqrt{2\sigma_{n}}s_{n}^{\tau_{0}}(s^{\frac{1}{2}-\tau_{0}}-s_{n}^{\frac{1}{2}-\tau_{0}})=:\tilde{B}_{n}(s).\] Fix \(\epsilon_{0}>0\) small.
For \(s_{n}\leq s\leq(1+\epsilon_{0})s_{n}\), we have \[\tilde{B}_{n}(s)\geq(1-\sigma_{n})(1+\epsilon_{0})^{\tau_{0}}\frac{s_{n}^{\frac{1-2\tau_{0}}{2}}}{\sqrt{s_{n}^{1-2\tau_{0}}-2^{1-2\tau_{0}}}}s_{n}^{1/2}=[(1+\epsilon_{0})^{\tau_{0}}+o(1)]s_{n}^{1/2},\] and \[-\tilde{B}_{n}^{\prime}(s) =-\tau_{0}(1-\sigma_{n})\frac{s_{n}^{1-2\tau_{0}}}{\sqrt{s_{n}^{1-2\tau_{0}}-2^{1-2\tau_{0}}}}s^{\tau_{0}-1}+\sqrt{2\sigma_{n}}s_{n}^{\tau_{0}}s^{-\frac{1}{2}-\tau_{0}}(\frac{1}{2}-\tau_{0})\] \[\geq-\tau_{0}(1-\sigma_{n})(1+\epsilon_{0})^{\tau_{0}-1}\frac{s_{n}^{\frac{1-2\tau_{0}}{2}}}{\sqrt{s_{n}^{1-2\tau_{0}}-2^{1-2\tau_{0}}}}s_{n}^{-1/2}+o(1)s_{n}^{-1/2}\] \[=[-\tau_{0}(1+\epsilon_{0})^{\tau_{0}-1}+o(1)]s_{n}^{-1/2}.\] It follows that \[-\tilde{B}_{n}(s)\tilde{B}_{n}^{\prime}(s)\geq-\tau_{0}(1+\epsilon_{0})^{2\tau_{0}-1}+o(1)\geq-\tau_{0}(1+2\epsilon_{0})^{2\tau_{0}-1}\text{ for all large }n,\] and therefore \[\left(e^{\tilde{B}_{n}(s)^{2}-s}\right)^{\prime}=e^{\tilde{B}_{n}(s)^{2}-s}[2\tilde{B}_{n}(s)\tilde{B}_{n}^{\prime}(s)-1]\leq-\xi_{0}e^{\tilde{B}_{n}(s)^{2}-s}\] with \[\xi_{0}:=-2\tau_{0}(1+2\epsilon_{0})^{2\tau_{0}-1}+1>0.\] We thus obtain \[\int_{s_{n}}^{(1+\epsilon_{0})s_{n}}e^{B_{n}(s)^{2}-s}ds\leq\int_{s_{n}}^{(1+\epsilon_{0})s_{n}}e^{\tilde{B}_{n}(s)^{2}-s}ds\leq-\xi_{0}^{-1}\int_{s_{n}}^{(1+\epsilon_{0})s_{n}}\left(e^{\tilde{B}_{n}(s)^{2}-s}\right)^{\prime}ds\leq\xi_{0}^{-1}e^{\tilde{B}_{n}(s_{n})^{2}-s_{n}}\] for all large \(n\). Clearly \[\tilde{B}_{n}(s_{n})=(1-\sigma_{n})C_{s_{n}}s_{n}^{1-2\tau_{0}}s_{n}^{\tau_{0}}\leq\frac{s_{n}^{\frac{1-2\tau_{0}}{2}}}{\sqrt{s_{n}^{1-2\tau_{0}}-2^{1-2\tau_{0}}}}s_{n}^{1/2}=\big{[}1+O(s_{n}^{2\tau_{0}-1})\big{]}s_{n}^{1/2}.\] Therefore \[\tilde{B}_{n}(s_{n})^{2}-s_{n}=O(s_{n}^{2\tau_{0}-1})s_{n}=o(1),\] and thus \[\int_{s_{n}}^{(1+\epsilon_{0})s_{n}}e^{B_{n}(s)^{2}-s}ds\leq\xi_{0}^{-1}e^{\tilde{B}_{n}(s_{n})^{2}-s_{n}}\leq\xi_{0}^{-1}e^{o(1)}\to\xi_{0}^{-1}\text{ as }n\to\infty.\] For \(s\geq(1+\epsilon_{0})s_{n}\), we have \[B_{n}(s) \leq(1-\sigma_{n})C_{s_{n}}s_{n}^{1-2\tau_{0}}s^{\tau_{0}}+\sqrt{2\sigma_{n}}s^{\frac{1}{2}}\] \[\leq(1-\sigma_{n})(1+\epsilon_{0})^{\tau_{0}}C_{s_{n}}s_{n}^{1-2\tau_{0}}s_{n}^{\tau_{0}}+\sqrt{2\sigma_{n}}s^{\frac{1}{2}}\] \[=\big{[}(1+\epsilon_{0})^{\tau_{0}}+o(1)\big{]}\frac{s_{n}^{\frac{1-2\tau_{0}}{2}}}{\sqrt{s_{n}^{1-2\tau_{0}}-2^{1-2\tau_{0}}}}s_{n}^{1/2}+o(1)s^{1/2}\] \[=\big{[}(1+\epsilon_{0})^{\tau_{0}}+o(1)\big{]}s_{n}^{1/2}+o(1)s^{1/2}.\] It follows that \[B_{n}(s)^{2}-s\leq\big{[}(1+\epsilon_{0})^{2\tau_{0}}+o(1)\big{]}s_{n}-\big{[}1+o(1)\big{]}s\leq(1+\epsilon_{0}/2)^{2\tau_{0}}(s_{n}-s)\text{ for all large }n.\] Therefore, \[\int_{(1+\epsilon_{0})s_{n}}^{\infty}e^{B_{n}(s)^{2}-s}ds \leq\int_{(1+\epsilon_{0})s_{n}}^{\infty}e^{(1+\epsilon_{0}/2)^{2\tau_{0}}(s_{n}-s)}ds =(1+\epsilon_{0}/2)^{-2\tau_{0}}e^{-(1+\epsilon_{0}/2)^{2\tau_{0}}\epsilon_{0}s_{n}}\to 0\quad\text{as }\,n\to\infty.\] We thus obtain \[\limsup_{n\to\infty}\int_{s_{n}}^{\infty}e^{B_{n}(s)^{2}-s}ds\leq\xi_{0}^{-1}.\] Combining this with the conclusion proved in Step 1 we obtain \[\liminf_{n\to\infty}\left[\int_{3}^{s_{n}}e^{A_{n}(s)^{2}-s}ds+\int_{s_{n}}^{\infty}e^{B_{n}(s)^{2}-s}ds\right]<\infty.\] But this clearly contradicts (4.3). The proof of the lemma is now complete. **Lemma 4.5**.: _Let \(\mu>-\frac{1}{4}\).
Then there exists a sequence of radially symmetric functions \(u_{n}\in\mathcal{H}^{1}_{\mu,0}(B_{1})\) such that_ \[\|u_{n}\|_{\mu}^{2}=\int_{B_{1}}\Big{(}|\nabla u_{n}(x)|^{2}+\mu\frac{u_{n}(x)^{2}}{|x|^{2}(-\ln|x|)^{2}}\Big{)}dx=1\] _and_ \[\lim_{n\to\infty}\int_{B_{1}}e^{\alpha|u_{n}|^{2}}dx\to\infty\ \ \mbox{for any}\ \alpha>m_{\mu}=4\pi\sqrt{1+4\mu}.\] **Proof.** It is convenient to do the construction for a transformed form of \(u_{n}\). Let \[u_{n}(r)=(-\ln r)^{\tau_{0}}v_{n}(r)\ \ \ \mbox{for}\ \ r\in(0,1);\] then \(v_{n}(0)=0\) and \(v_{n}(r)\) also vanishes in a small neighbourhood of \(r=1\). Moreover, \[\|u_{n}\|_{\mu}^{2} = 2\pi\int_{0}^{1}\Big{(}u_{n}^{\prime}(r)^{2}+\mu\frac{u_{n}(r)^{2}}{r^{2}(-\ln r)^{2}}\Big{)}rdr = 2\pi\int_{0}^{1}v_{n}^{\prime}(r)^{2}(-\ln r)^{2\tau_{0}}rdr.\] Let \[r=e^{-\frac{1}{2}t}\ \ \ \mbox{and}\ \ \ v_{n}(r)=w_{n}(t);\] then \[\|u_{n}\|_{\mu}^{2}=2^{2-2\tau_{0}}\pi\int_{0}^{\infty}w_{n}^{\prime}(t)^{2}t^{2\tau_{0}}dt=:\mathcal{J}(w_{n})\] and \[\int_{B_{1}}e^{\alpha|u_{n}|^{2}}dx=2\pi\int_{0}^{1}e^{\alpha u_{n}^{2}}rdr=\pi\int_{0}^{\infty}\exp(\alpha 2^{-2\tau_{0}}t^{2\tau_{0}}w_{n}^{2}-t)dt=:\mathcal{N}(w_{n}).\] We now construct functions \(\{w_{n}\}\) such that \(\mathcal{J}(w_{n})=1\) and \(\mathcal{N}(w_{n})\to\infty\). Let \[\zeta_{\sigma}(t):=\left\{\begin{aligned} t^{\sigma}&\ \ \mbox{for}\ \ t\in[0,1),\\ 1&\ \ \mbox{for}\ \ t\in[1,+\infty),\end{aligned}\right. \tag{4.5}\] with \(\sigma>\frac{1}{2}-\tau_{0}\) to be determined later. Then define, for every large integer \(n\) and some constant \(\nu>0\) to be specified below, \[w_{n}(t):=\nu n^{\frac{1}{2}-\tau_{0}}\zeta_{\sigma}(n^{-1}t).\] Direct computation gives \[\int_{0}^{\infty}w_{n}^{\prime}(t)^{2}t^{2\tau_{0}}dt = \nu^{2}\sigma^{2}\int_{0}^{n}n^{1-2\tau_{0}-2\sigma}t^{2\tau_{0}+2\sigma-2}dt = \frac{\nu^{2}\sigma^{2}}{2\tau_{0}+2\sigma-1}.\] Therefore \[\mathcal{J}(w_{n})=2^{2-2\tau_{0}}\pi\frac{\nu^{2}\sigma^{2}}{2\tau_{0}+2\sigma-1}=1 \tag{4.6}\] provided that \[\nu=\big{(}2^{2-2\tau_{0}}\pi f_{\mu}(\sigma)\big{)}^{-1/2},\] where \[f_{\mu}(\sigma):=\frac{\sigma^{2}}{2\sigma+2\tau_{0}-1}\ \ \ \mbox{for}\ \sigma>\frac{1}{2}-\tau_{0}.\] We next estimate \(\mathcal{N}(w_{n})\). To this end, we first choose an optimal \(\sigma\). It is easily seen that \(f_{\mu}(\sigma)\) achieves its minimum at \[\sigma=\sigma_{\mu}:=1-2\tau_{0}=\sqrt{1+4\mu}\] with \[f_{\mu}(\sigma_{\mu})=1-2\tau_{0}=\sqrt{1+4\mu}.\] Then (4.6) holds with \(\sigma=\sigma_{\mu}\) and \[\nu=\nu_{\mu}:=\left(2^{2-2\tau_{0}}\pi f_{\mu}(\sigma_{\mu})\right)^{-1/2}.\] With \((\sigma,\nu)=(\sigma_{\mu},\nu_{\mu})\) in the definition of \(w_{n}\), by direct computation, we have \[\mathcal{N}(w_{n}) > \pi\int_{0}^{n}\exp(\alpha\nu_{\mu}^{2}2^{-2\tau_{0}}n^{1-2\tau_{0}-2\sigma_{\mu}}t^{2\sigma_{\mu}+2\tau_{0}}-t)dt\] \[> \pi\int_{(1-\epsilon_{0})n}^{n}\exp(\alpha\nu_{\mu}^{2}2^{-2\tau_{0}}n^{1-2\tau_{0}-2\sigma_{\mu}}(1-\epsilon_{0})^{2\sigma_{\mu}+2\tau_{0}}n^{2\sigma_{\mu}+2\tau_{0}}-n)dt\] \[= \pi e^{(\alpha\nu_{\mu}^{2}2^{-2\tau_{0}}(1-\epsilon_{0})^{2\sigma_{\mu}+2\tau_{0}}-1)n}\int_{(1-\epsilon_{0})n}^{n}dt\] \[\geq \pi\epsilon_{0}n\to\infty\quad\text{as}\,\,\,n\to\infty\] if \[\alpha\nu_{\mu}^{2}2^{-2\tau_{0}}(1-\epsilon_{0})^{2\sigma_{\mu}+2\tau_{0}}-1\geq 0,\] which holds when \[\alpha>2^{2\tau_{0}}\nu_{\mu}^{-2}=4\pi\sqrt{1+4\mu}\,\,\text{and}\,\,0<\epsilon_{0}\ll 1. \tag{4.7}\] Theorem 4.1 clearly follows directly from Lemmas 4.4 and 4.5.
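The threshold \(m_{\mu}=4\pi\sqrt{1+4\mu}\) can also be observed numerically. The sketch below is our own illustration with the sample value \(\mu=3/4\) (so \(\tau_{0}=-1/2\) and \(\sigma_{\mu}=2\)): along the extremal sequence of Lemma 4.5, \(\mathcal{N}(w_{n})\) stays bounded for \(\alpha=0.9\,m_{\mu}\) and grows without bound for \(\alpha=1.1\,m_{\mu}\).

```python
# Numerical illustration (ours) of the sharp constant in Theorem 4.1.
import numpy as np
from scipy.integrate import quad

mu = 0.75
tau0 = (1 - np.sqrt(1 + 4 * mu)) / 2
sigma = 1 - 2 * tau0                                  # optimal exponent sigma_mu
nu = (2**(2 - 2 * tau0) * np.pi * sigma)**-0.5        # normalises J(w_n) = 1
m_mu = 4 * np.pi * np.sqrt(1 + 4 * mu)

def N(alpha, n):
    w = lambda t: nu * n**(0.5 - tau0) * min((t / n)**sigma, 1.0)
    f = lambda t: np.exp(alpha * 2**(-2 * tau0) * t**(2 * tau0) * w(t)**2 - t)
    return np.pi * (quad(f, 1e-12, n, limit=200)[0] + quad(f, n, np.inf, limit=200)[0])

for n in (5, 20, 80):
    print(n, N(0.9 * m_mu, n), N(1.1 * m_mu, n))      # bounded vs. growing
```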
### Proof of Theorem 4.2 For the case \(\mu=-\frac{1}{4}\), the transformation \(r=e^{-\frac{1}{2}t}\), \(w_{0}(t)=v_{0}(r)=(-\ln r)^{-\tau_{0}}u_{0}(r)\) fails to provide a useful estimate of the form (4.2), since \(1-2\tau_{0}=0\) in this case. Instead, we will use a new transformation. **Lemma 4.6**.: _Let \(u_{0}\in\mathcal{H}^{1}_{\mathrm{rad},-1/4,0}(B_{1})\) satisfy_ \[\|u_{0}\|_{-1/4}^{2}=\int_{B_{1}}\Big{(}|\nabla u_{0}(x)|^{2}-\frac{u_{0}(x)^{2}}{4|x|^{2}(-\ln|x|)^{2}}\Big{)}dx\leq 1.\] _Then for \(p\in(0,1)\) and \(\alpha>0\), there exists a constant \(C_{p,\alpha}>0\) independent of \(u_{0}\) such that_ \[\int_{B_{1}}e^{\alpha|u_{0}|^{p}}dx\leq C_{p,\alpha}.\] **Proof.** Replacing \(u_{0}\) by \(|u_{0}|\), we may assume that \(u_{0}\in\mathcal{H}^{1}_{\mathrm{rad},-1/4,0}(B_{1})\) is nonnegative. By the embedding theorem we see that \(u_{0}\) is continuous in \([0,1]\). Moreover, since the space \(C^{\infty}_{\mathrm{rad},c}(B_{1})\) is dense in \(\mathcal{H}^{1}_{\mathrm{rad},-1/4,0}(B_{1})\), we may further assume that \(u_{0}\in C^{\infty}_{\mathrm{rad},c}(B_{1})\). By Theorem 1.2, \[\|u_{0}\|_{L^{2}(B_{1})}\leq C\|u_{0}\|_{-1/4}\leq C,\] with \(C\) independent of \(u_{0}\). Fix \(r_{1}<r_{2}\) such that \(0<r_{1}\leq 1/4,\,\,3/4\leq r_{2}<1\). Then \[C\geq\|u_{0}\|_{L^{2}(B_{1})}\geq\|u_{0}\|_{L^{2}(B_{r_{2}}\setminus B_{r_{1}})}\geq|B_{r_{2}}\setminus B_{r_{1}}|^{1/2}\inf_{B_{r_{2}}\setminus B_{r_{1}}}u_{0}.\] Therefore there exists \(r_{0}\in[r_{1},r_{2}]\) such that \[u_{0}(r_{0})\leq\tilde{C}:=C|B_{r_{2}}\setminus B_{r_{1}}|^{-1/2}.\] Let \[u_{0}(r)=(-\ln r)^{\frac{1}{2}}v_{0}(r)\quad\text{for}\,\,\,r\in(0,1),\] \[v_{0}(r)=w_{0}(t),\,\,r=e^{\frac{1}{2}(1-e^{t})},\,\,r_{0}=e^{\frac{1}{2}(1-e^{t_{0}})}.\] Then \(dr=-\frac{1}{2}e^{\frac{1}{2}(1-e^{t})}e^{t}dt\), \[w_{0}(0)=\lim_{t\to 0^{+}}w_{0}(t)=\lim_{r\to 1^{-}}v_{0}(r)=0,\qquad w_{0}(t_{0})\leq\frac{2\tilde{C}}{e^{t_{0}}-1},\] and direct computation shows that \[\begin{split} 1\geq\|u_{0}\|_{-1/4}^{2}&=2\pi\int_{0}^{1}v_{0}^{\prime}(r)^{2}r(-\ln r)dr\\ &=2\pi\int_{0}^{+\infty}w_{0}^{\prime}(t)^{2}\,\frac{e^{t}-1}{e^{t}}dt\\ &\geq 2\pi\big{(}1-e^{-t_{0}}\big{)}\int_{t_{0}}^{+\infty}w_{0}^{\prime}(t)^{2}\,dt+2\pi e^{-t_{0}}\int_{0}^{t_{0}}w_{0}^{\prime}(t)^{2}tdt,\end{split} \tag{4.8}\] since \(1-e^{-t}\geq e^{-t_{0}}t\) for \(t\in(0,t_{0})\). Moreover, \[2\pi\int_{0}^{1}e^{\alpha|u_{0}|^{p}}rdr=\pi\int_{0}^{+\infty}\Phi_{0}(t)dt, \tag{4.9}\] where \[\Phi_{0}(t):=e^{\alpha\frac{(e^{t}-1)^{p}}{2^{p}}|w_{0}|^{p}+1-e^{t}+t}.\] By (4.8) we have \[\int_{t_{0}}^{+\infty}w_{0}^{\prime}(t)^{2}\,dt\leq b_{0}^{2}:=\big{[}2\pi\big{(}1-e^{-t_{0}}\big{)}\big{]}^{-1}.\] It follows that, for \(t>t_{0}\), \[w_{0}(t)\leq\int_{t_{0}}^{t}w_{0}^{\prime}(s)ds+w_{0}(t_{0}) \leq \Big{(}\int_{t_{0}}^{t}w_{0}^{\prime}(s)^{2}ds\Big{)}^{\frac{1}{2}}\Big{(}\int_{t_{0}}^{t}ds\Big{)}^{\frac{1}{2}}+w_{0}(t_{0}) \leq b_{0}t^{\frac{1}{2}}+\tilde{C},\] and hence, for \(p\in(0,1)\), \[\pi\int_{t_{0}}^{+\infty}\Phi_{0}(t)dt = \pi\int_{t_{0}}^{+\infty}e^{\alpha\frac{(e^{t}-1)^{p}}{2^{p}}|w_{0}|^{p}+1-e^{t}+t}dt \leq \pi\int_{t_{0}}^{+\infty}e^{\alpha 2^{-p}e^{pt}\big{[}b_{0}t^{\frac{1}{2}}+\tilde{C}\big{]}^{p}+1-e^{t}+t}dt \leq C_{p,\alpha},\] where we have used the fact that \(t_{0}\) can be bounded (from above and below) by positive numbers depending on \(r_{1},r_{2}\) only, via \(r_{1}\leq r_{0}\leq r_{2}\).
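The role of the restriction \(p\in(0,1)\) in the last estimate can be illustrated numerically. With the growth bound \(w_{0}(t)\leq b_{0}t^{1/2}+\tilde{C}\), the tail integral is finite for every \(p<1\), while at \(p=1\) the exponent itself diverges; the sketch below is our own, with arbitrary sample values \(\alpha=b_{0}=\tilde{C}=1\) and \(t_{0}=1\).

```python
# Quadrature sketch (ours): the tail integral from the proof of Lemma 4.6
# converges for p < 1, while its exponent diverges at p = 1.
import numpy as np
from scipy.integrate import quad

alpha, b0, C, t0 = 1.0, 1.0, 1.0, 1.0
expo = lambda t, p: alpha * 2**-p * np.exp(p * t) * (b0 * np.sqrt(t) + C)**p \
                    + 1 - np.exp(t) + t

for p in (0.3, 0.6, 0.9):
    val = quad(lambda t: np.exp(expo(t, p)), t0, 50.0, limit=300)[0]  # 50 ~ +infinity here
    print(p, val)                                                     # finite for each p < 1
print(expo(30.0, 1.0))   # at p = 1 the exponent is already large and positive
```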
By (4.8) we also have \[\int_{0}^{t_{0}}w_{0}^{\prime}(t)^{2}t\,dt\leq b_{1}^{2}:=\big{[}2\pi(1-e^{-t_{0}})\big{]}^{-1}.\] Thus, for \(t\in(0,t_{0})\), \[w_{0}(t) \leq w_{0}(t_{0})-\int_{t}^{t_{0}}w_{0}^{\prime}(s)ds \leq w_{0}(t_{0})+\Big{(}\int_{t}^{t_{0}}w_{0}^{\prime}(s)^{2}s\,ds\Big{)}^{\frac{1}{2}}\Big{(}\int_{t}^{t_{0}}s^{-1}ds\Big{)}^{\frac{1}{2}}\leq\tilde{C}+b_{1}\sqrt{\ln\frac{t_{0}}{t}}.\] Since \(p\in(0,1)\), for any \(\epsilon>0\), there exists \(t_{1}\in(0,t_{0})\) such that \[\Big{(}\tilde{C}+b_{1}\sqrt{\ln\frac{t_{0}}{t}}\Big{)}^{p}\leq\epsilon\ln\frac{t_{0}}{t}\quad\text{for }\,t\in(0,t_{1}).\] Now we take \(\epsilon=\frac{1}{2\alpha 2^{-p}e^{pt_{0}}}\) and obtain \[\pi\int_{0}^{t_{0}}\Phi_{0}(t)dt \leq \pi\int_{0}^{t_{0}}e^{\alpha 2^{-p}e^{pt}\left[b_{1}\sqrt{\ln\frac{t_{0}}{t}}+\tilde{C}\right]^{p}+1-e^{t}+t}dt \leq \pi\int_{0}^{t_{1}}e^{\frac{1}{2}\ln\frac{t_{0}}{t}}dt+\pi\int_{t_{1}}^{t_{0}}e^{\alpha 2^{-p}e^{pt}\left[b_{1}\sqrt{\ln\frac{t_{0}}{t}}+\tilde{C}\right]^{p}+1-e^{t}+t}dt = \pi\int_{0}^{t_{1}}\frac{\sqrt{t_{0}}}{\sqrt{t}}dt+\pi\int_{t_{1}}^{t_{0}}e^{\alpha 2^{-p}e^{pt}\left[b_{1}\sqrt{\ln\frac{t_{0}}{t}}+\tilde{C}\right]^{p}+1-e^{t}+t}dt \leq C_{p,\alpha},\] where we have used the fact that \(t_{0}\) and \(t_{1}\) can be bounded (from above and below) by positive numbers depending on \(r_{1},r_{2}\) and \(p\) only. Therefore \[2\pi\int_{0}^{1}e^{\alpha|u_{0}|^{p}}rdr=\pi\int_{0}^{+\infty}\Phi_{0}(t)dt\leq 2C_{p,\alpha},\] and the proof is complete. **Lemma 4.7**.: _There exists a sequence of functions \(u_{n}\in\mathcal{H}^{1}_{\mathrm{rad},-1/4,0}(B_{1})\) such that_ \[\int_{B_{1}}\Big{(}|\nabla u_{n}(x)|^{2}-\frac{u_{n}(x)^{2}}{4|x|^{2}(-\ln|x|)^{2}}\Big{)}dx\leq 1\] _and_ \[\lim_{n\to\infty}\int_{B_{1}}e^{\alpha|u_{n}|^{p}}dx=\infty\quad\text{ for any }\alpha>0,\ p\geq 1.\] **Proof.** We will construct a sequence of radially symmetric functions \(\{u_{n}\}\subset C^{\infty}_{c}(B_{1})\), which stays inside the unit ball of \(\mathcal{H}^{1}_{-1/4,0}(B_{1})\), but \(\int_{B_{1}}e^{\alpha|u_{n}|^{p}}dx\to\infty\) for any \(p\geq 1\) and any \(\alpha>0\). Using the same transformations as in the proof of Lemma 4.6, we have \[\|u_{n}\|_{-1/4}^{2}=2\pi\int_{0}^{+\infty}w_{n}^{\prime}(t)^{2}\,\frac{e^{t}-1}{e^{t}}dt\] and \[\int_{B_{1}}e^{\alpha|u_{n}|^{p}}dx=\pi\int_{0}^{+\infty}e^{\alpha\left[\frac{e^{t}-1}{2}\right]^{p}|w_{n}|^{p}+1-e^{t}+t}dt.\] Given \(\kappa>4\), we let \(\eta_{\kappa}:(-\infty,+\infty)\to[0,1]\) be a smooth function such that \[\eta_{\kappa}(t):=\begin{cases}0&\text{for }\,t\in(-\infty,1),\\ 1&\text{for }\,t\in(2,\kappa-1),\\ 0&\text{for }\,t\in(\kappa,+\infty).\end{cases} \tag{4.10}\] Denote \[b_{1}(\kappa):=\int_{1}^{\kappa}\eta_{\kappa}(s)ds\quad\text{and}\quad b_{2}(\kappa):=\sqrt{4\pi\int_{1}^{\kappa}\eta_{\kappa}(s)^{2}ds};\] then \[\frac{b_{1}(\kappa)}{b_{2}(\kappa)} \geq \frac{\int_{2}^{\kappa-1}1ds}{\sqrt{4\pi\int_{1}^{\kappa}1ds}}=\frac{\kappa-3}{\sqrt{4\pi(\kappa-1)}}\to+\infty\quad\text{as }\ \kappa\to+\infty.\] Define \[\phi_{\kappa}(t):=\eta_{\kappa}(t)-\eta_{\kappa}(t-2\kappa)\quad\text{for }\,t\in\mathbb{R}, \tag{4.11}\] and \[w_{\kappa}(t):=\frac{1}{b_{2}(\kappa)}\int_{0}^{t}\phi_{\kappa}(s)ds\quad\text{for }\,t\geq 0.\] It is easily seen that \(w_{\kappa}\) is nonnegative and \(w_{\kappa}\in C^{\infty}_{c}((0,\infty))\).
Moreover, \[2\pi\int_{0}^{+\infty}w^{\prime}_{\kappa}(t)^{2}\,\frac{e^{t}-1}{e^{t}}dt = \frac{2\pi}{b_{2}(\kappa)^{2}}\int_{0}^{+\infty}\phi_{\kappa}(t)^{2}\,\frac{e^{t}-1}{e^{t}}dt \leq \frac{2\pi}{b_{2}(\kappa)^{2}}\int_{0}^{+\infty}\phi_{\kappa}(t)^{2}dt=1\] and for \(t\in[\kappa,2\kappa]\), \[w_{\kappa}(t)=\frac{1}{b_{2}(\kappa)}\int_{0}^{\kappa}\phi_{\kappa}(s)ds=\frac{b_{1}(\kappa)}{b_{2}(\kappa)},\] \[\pi\int_{0}^{+\infty}e^{\alpha\left[\frac{e^{t}-1}{2}\right]^{p}|w_{\kappa}|^{p}+1-e^{t}+t}dt > \pi\int_{\kappa}^{2\kappa}e^{\alpha\left[\frac{e^{t}-1}{2}\right]^{p}|w_{\kappa}|^{p}+1-e^{t}+t}dt = \pi\int_{\kappa}^{2\kappa}e^{\alpha\left[\frac{e^{t}-1}{2}\right]^{p}\left[\frac{b_{1}(\kappa)}{b_{2}(\kappa)}\right]^{p}+1-e^{t}+t}dt \to +\infty\ \ \ \mbox{as}\ \ \kappa\to+\infty,\] which ends the proof upon taking \(\kappa=n\). Clearly, Theorem 4.2 follows directly from Lemmas 4.6 and 4.7. ## 5. Trudinger-Moser type inequalities for general functions ### The case \(\mu>-\frac{1}{4}\) **Lemma 5.1**.: _Let \(\mu\in(-\frac{1}{4},0)\), \(V:(0,1)\to[0,+\infty)\) be a continuous function verifying (1.14). Then there exists a sequence \(\{u_{n}\}\subset\mathcal{H}^{1}_{V,\mu,0}(B_{1})\) such that \(\|u_{n}\|_{V,\mu}=1\) and for any \(\alpha>m_{\mu}\), \(r_{0}\in(0,1)\),_ \[\lim_{n\to\infty}\int_{B_{r_{0}}}e^{\alpha|u_{n}|^{2}}dx=\infty.\] **Proof.** As before, for \(u_{n}\in C^{0,1}_{\mathrm{rad},c}(B_{1})\), under the transformation \[u_{n}(r)=(-\ln r)^{\tau_{0}}v_{n}(r)\ \ \ \mbox{for}\ \ r\in(0,1),\] we have \[\|u_{n}\|_{V,\mu}^{2} = 2\pi\int_{0}^{1}\Big{(}u^{\prime}_{n}(r)^{2}+\mu Vu_{n}(r)^{2}\Big{)}rdr = 2\pi\int_{0}^{1}v^{\prime}_{n}(r)^{2}(-\ln r)^{2\tau_{0}}rdr+2\pi\mu\int_{0}^{1}\Big{(}V(r)-\frac{1}{r^{2}(-\ln r)^{2}}\Big{)}v_{n}(r)^{2}(-\ln r)^{2\tau_{0}}rdr.\] By (1.14), given small \(\epsilon>0\), there exists \(r_{\epsilon}\in(0,1)\) such that \[|V(r)r^{2}(-\ln r)^{2}-1|\leq\epsilon\ \ \ \mbox{for}\ r\in(0,r_{\epsilon}]. \tag{5.1}\] Let \[r=e^{-\frac{1}{2}(t+t_{\epsilon})}\ \ \ \mbox{and}\ \ \ v_{n}(r)=w_{n}(t),\] where \(t_{\epsilon}=-2\ln r_{\epsilon}\) so that \(w_{n}(-t_{\epsilon})=v_{n}(1)=0\). Then \[2\pi\int_{0}^{r_{0}}e^{\alpha u_{n}^{2}}rdr=\pi\int_{-t_{\epsilon}-2\ln r_{0}}^{\infty}e^{\alpha 2^{-2\tau_{0}}(t+t_{\epsilon})^{2\tau_{0}}w_{n}^{2}-(t+t_{\epsilon})}dt=:\tilde{\mathcal{N}}(w_{n})\] and \[2\pi\int_{0}^{1}u^{\prime}_{n}(r)^{2}rdr=2\pi\int_{0}^{1}v^{\prime}_{n}(r)^{2}(-\ln r)^{2\tau_{0}}rdr=2^{2-2\tau_{0}}\pi\int_{-t_{\epsilon}}^{\infty}w^{\prime}_{n}(t)^{2}(t+t_{\epsilon})^{2\tau_{0}}dt.\] Next we construct the functions \(\{w_{n}\}\). Let \[\zeta_{0}(t):=\begin{cases}0,&t\leq 0,\\ t^{\sqrt{1+4\mu}},&t\in[0,1),\\ 1&t\in[1,+\infty).\end{cases} \tag{5.2}\] Then we define \[w_{n}(t):=\begin{cases}\nu_{0}n^{\frac{1}{2}-\tau_{0}}\zeta_{0}(t/n),&t\geq 1,\\ \nu_{0}n^{\frac{1}{2}-\tau_{0}}\zeta_{0}(1/n)t,&t\leq 1,\end{cases}\ \ \text{with}\ \ \nu_{0}=(2^{2-2\tau_{0}}\pi)^{-1/2}(1-2\tau_{0})^{-1/2}.\] Let us note that \(\mu\in(-1/4,0)\) implies \(2\tau_{0}\in(0,1)\).
Therefore \[2^{2-2\tau_{0}}\pi\int_{-t_{\epsilon}}^{\infty}w_{n}^{\prime}(t)^{2}(t+t_{\epsilon})^{2\tau_{0}}dt = (1-2\tau_{0})n^{-1+2\tau_{0}}\Big{[}\int_{\epsilon}^{1}(t+t_{\epsilon})^{2\tau_{0}}dt+\int_{1}^{n}t^{-4\tau_{0}}(t+t_{\epsilon})^{2\tau_{0}}dt\Big{]} \leq (1-2\tau_{0})n^{-1+2\tau_{0}}\Big{[}(1+t_{\epsilon})^{2\tau_{0}}+\int_{1}^{n}(t+t_{\epsilon})^{-2\tau_{0}}dt\Big{]} \leq (1-2\tau_{0})n^{-1+2\tau_{0}}(1+t_{\epsilon})^{2\tau_{0}}+\big{(}1+t_{\epsilon}/n\big{)}^{1-2\tau_{0}} = 1+O(n^{-1+2\tau_{0}}).\] By (5.1) \[\Big{|}2\pi\mu\int_{0}^{1}\Big{(}V(r)r^{2}(-\ln r)^{2}-1\Big{)}\frac{(-\ln r)^{2\tau_{0}}}{r^{2}(-\ln r)^{2}}v_{n}(r)^{2}rdr\Big{|} \leq 2\pi|\mu|\epsilon\nu_{0}^{2}n^{1-2\tau_{0}}\Big{[}\int_{\epsilon}^{1}[\zeta_{0}(1/n)]^{2}(t+t_{\epsilon})^{-2+2\tau_{0}}dt+\int_{1}^{\infty}[\zeta_{0}(t/n)]^{2}(t+t_{\epsilon})^{-2+2\tau_{0}}dt\Big{]} \leq 2\pi|\mu|\epsilon\nu_{0}^{2}n^{1-2\tau_{0}}\Big{(}\int_{0}^{n}n^{-2\sqrt{1+4\mu}}t^{-2\tau_{0}}dt+\int_{n}^{+\infty}t^{-2+2\tau_{0}}dt\Big{)} = \frac{4\pi|\mu|\nu_{0}^{2}}{1-2\tau_{0}}\epsilon.\] Thus \[\|u_{n}\|_{V,\mu}^{2}\leq 1+C(\epsilon+n^{-1+2\tau_{0}}).\] Set \(\hat{u}_{n}:=u_{n}/\|u_{n}\|_{V,\mu}\) and \(\hat{w}_{n}=w_{n}/\|u_{n}\|_{V,\mu}\). Then, for all large \(n\), \[\tilde{\mathcal{N}}(\hat{w}_{n}) = \pi\int_{-t_{\epsilon}-2\ln r_{0}}^{\infty}e^{\alpha 2^{-2\tau_{0}}(t+t_{\epsilon})^{2\tau_{0}}\hat{w}_{n}^{2}-(t+t_{\epsilon})}dt > \pi\int_{(1-\epsilon_{0})n}^{n}e^{\frac{\alpha 2^{-2\tau_{0}}(t+t_{\epsilon})^{2\tau_{0}}w_{n}^{2}}{1+C(\epsilon+n^{-1+2\tau_{0}})}-(t+t_{\epsilon})}dt \geq \pi e^{n\big{[}\frac{\alpha 2^{-2\tau_{0}}\nu_{0}^{2}(1-\epsilon_{0})^{2-2\tau_{0}}}{1+C(\epsilon+n^{-1+2\tau_{0}})}-(1+\frac{t_{\epsilon}}{n})\big{]}}\int_{(1-\epsilon_{0})n}^{n}1dt \geq \pi\epsilon_{0}n\rightarrow+\infty\ \ \ \text{as}\ \ n\rightarrow+\infty\] provided that \[\frac{\alpha 2^{-2\tau_{0}}\nu_{0}^{2}(1-\epsilon_{0})^{2-2\tau_{0}}}{1+C(\epsilon+n^{-1+2\tau_{0}})}-(1+\frac{t_{\epsilon}}{n})\geq 0,\] which holds for all large \(n\) if \[\alpha>2^{2\tau_{0}}\nu_{0}^{-2}=4\pi\sqrt{1+4\mu}\] and \(\epsilon>0\), \(\epsilon_{0}>0\) are chosen small enough. **Lemma 5.2**.: _Let \(\mu>0\), \(V:(0,1)\to[0,+\infty)\) be a continuous function. Then for any \(x_{0}\in B_{1}\) and \(r_{0}\in(0,1)\) such that \(\overline{B_{r_{0}}(x_{0})}\subset B_{1}\setminus\{0\}\), there exists a sequence of functions \(\{u_{n}\}\) with support in \(B_{r_{0}}(x_{0})\) and radially symmetric about \(x_{0}\), such that \(\|u_{n}\|_{V,\mu}=1\) and_ \[\lim_{n\to\infty}\int_{B_{1}}e^{\alpha|u_{n}|^{2}}dx=\infty\mbox{ for any }\alpha>4\pi.\] **Proof.** Let \(u_{n}\) be a sequence of nonnegative functions of \(r=|x-x_{0}|\) with supports contained in \(B_{r_{0}}(x_{0})\), and extended by \(0\) outside the supporting sets. Then \[\|u_{n}\|_{V,\mu}^{2}\leq\mathcal{I}(u_{n}):=2\pi\int_{0}^{r_{0}}u_{n}^{\prime}(r)^{2}rdr+2\pi\mu\|V\|_{L^{\infty}(B_{r_{0}}(x_{0}))}\int_{0}^{r_{0}}u_{n}^{2}(r)rdr.\] Let \[r=e^{-\frac{1}{2}t}\quad\mbox{and}\quad u_{n}(r)=w_{n}(t),\quad t\in[-2\ln r_{0},+\infty).\] Then \[\int_{B_{r_{0}}}e^{\alpha|u_{n}|^{2}}dx=2\pi\int_{0}^{r_{0}}e^{\alpha u_{n}^{2}}rdr=\pi\int_{-2\ln r_{0}}^{\infty}e^{\alpha w_{n}^{2}-t}dt:=\mathcal{N}(w_{n})\] and \[\mathcal{I}(u_{n})=4\pi\int_{-2\ln r_{0}}^{\infty}w_{n}^{\prime}(t)^{2}dt+2\pi\mu\|V\|_{L^{\infty}(B_{r_{0}}(x_{0}))}\int_{-2\ln r_{0}}^{\infty}w_{n}^{2}(t)e^{-t}dt.\] Next we construct the functions \(\{w_{n}\}\).
Let \[\zeta_{0}(t):=\begin{cases}0&\mbox{for}\ \ t\in[0,1/4),\\ 2t-\frac{1}{2}&\mbox{for}\ \ t\in[1/4,1/2),\\ t&\mbox{for}\ \ t\in[1/2,1),\\ 1&\mbox{for}\ \ t\in[1,+\infty).\end{cases} \tag{5.3}\] We then define, for all large \(n\), \[w_{n}(t):=\sqrt{\frac{n}{4\pi}}\,\zeta_{0}\big{(}\frac{t}{n}\big{)}.\] Clearly, for all large \(n\), \[4\pi\int_{-2\ln r_{0}}^{\infty}w_{n}^{\prime}(t)^{2}dt=n^{-1}\Big{(}\int_{n/4}^{n/2}2dt+\int_{n/2}^{n}1dt\Big{)}=1\] and \[2\pi\int_{-2\ln r_{0}}^{\infty}w_{n}^{2}(t)e^{-t}dt = 2\pi\int_{n/4}^{\infty}w_{n}^{2}(t)e^{-t}dt \leq \frac{2}{n}\int_{n/4}^{n}t^{2}e^{-t}dt+\frac{n}{2}\int_{n}^{+\infty}e^{-t}dt \leq e^{-n/2}.\] Thus \[\|u_{n}\|_{V,\mu}^{2}\leq\mathcal{I}(u_{n})\leq 1+2\pi\mu\|V\|_{L^{\infty}(B_{r_{0}}(x_{0}))}e^{-n/2}=:1+Ce^{-n/2}.\] Set \(\hat{u}_{n}:=u_{n}/\|u_{n}\|_{V,\mu}\) and \(\hat{w}_{n}=w_{n}/\|u_{n}\|_{V,\mu}\). Then, for all large \(n\) and \(\epsilon_{0}\in(0,1/2)\), \[\mathcal{N}(\hat{w}_{n}) = \pi\int_{0}^{+\infty}e^{\alpha\hat{w}_{n}^{2}(t)-t}dt > \pi\int_{(1-\epsilon_{0})n}^{n}e^{\frac{\alpha}{1+Ce^{-n/2}}\frac{n}{4\pi}(\frac{t}{n})^{2}-t}dt \geq \pi e^{n\left[\frac{\alpha(1-\epsilon_{0})^{2}}{4\pi(1+Ce^{-n/2})}-1\right]}\int_{(1-\epsilon_{0})n}^{n}1dt\,\geq\pi\epsilon_{0}n\to\infty\quad\text{as}\,\,\,n\to\infty\] provided that \[\frac{\alpha(1-\epsilon_{0})^{2}}{4\pi(1+Ce^{-n/2})}-1\geq 0,\] which holds for all large \(n\) if \(\alpha>4\pi\) and \(\epsilon_{0}>0\) is chosen sufficiently small. Proof of Theorem 1.6.: _Part \((i)\)-\((ii)\):_ In the radially symmetric case, the bound follows from Lemma 4.4 and in the nonradial case, the bound follows from [9, Theorem 1]. Parts \((iii)\)-\((iv)\) follow from Lemma 5.1 for \(\alpha>m_{\mu}\) in the radial case and from Lemma 5.2 for \(\alpha>4\pi\) in the nonradial case. Proof of Theorem 1.7.: For \(u\in\mathcal{H}^{1}_{V,\mu,0}(B_{1})\) such that \(\|u\|_{V,\mu}\leq 1\), using (1.15) and \(\mu<0\), we obtain that \(u\in\mathcal{H}^{1}_{\mu,0}(B_{1})\) and \(\|u\|_{\mu}\leq\|u\|_{V,\mu}\leq 1\). _Part \((i)\)._ If \(u\) is radially symmetric, it follows from Lemma 4.4 that \[\int_{B_{1}}e^{m_{\mu}|u|^{2}}dx<\infty.\] _Part \((ii)\)._ Without loss of generality we assume \(u\in C_{c}^{0,1}(B_{1})\), and let \(u^{*}\) denote its associated radially decreasing rearrangement. By the Pólya–Szegő inequality and rearrangement inequalities we have, since \(r\mapsto V(r)\) is decreasing on \((0,1)\), \[\int_{B_{1}}|\nabla u^{*}|^{2}dx\leq\int_{B_{1}}|\nabla u|^{2}dx,\] \[\int_{B_{1}}u^{2}V(|x|)dx\leq\int_{B_{1}}(u^{*})^{2}V^{*}(|x|)dx=\int_{B_{1}}(u^{*})^{2}V(|x|)dx\] and \[\int_{B_{1}}e^{\alpha u^{p}}dx\leq\int_{B_{1}}e^{\alpha(u^{*})^{p}}dx.\] Hence, for \(\mu\in[-\frac{1}{4},0)\), we have \[1\geq\int_{B_{1}}\Big{(}|\nabla u|^{2}+\mu u(x)^{2}V(|x|)\Big{)}dx\geq\int_{B_{1}}\Big{(}|\nabla u^{*}|^{2}+\mu u^{*}(x)^{2}V(|x|)\Big{)}dx.\] Now we can apply Lemma 4.4 to \(u^{*}\) to obtain the desired conclusion. Part \((iii)\) follows from Lemma 5.1. ### The case \(\mu=-\frac{1}{4}\) We give the proof of Theorem 1.9 here.
For any function \(u\in\mathcal{H}^{1}_{V,-\frac{1}{4},0}(B_{1})\) such that \(\|u\|_{{}_{V,-\frac{1}{4}}}\leq 1\), by (1.15), \[u\in\hat{\mathcal{H}}^{1}_{0}(B_{1})\quad\text{and}\quad\|u\|_{-\frac{1}{4}}\leq\|u\|_{{}_{V,-\frac{1}{4}}}\leq 1.\] _Part \((i)\)._ If \(u\) is radial, it follows from Lemma 4.6 that for any \(\alpha>0\), \(p\in(0,1)\), there exists \(C=C_{\alpha,p}>0\) independent of \(u\) such that \[\int_{B_{1}}e^{\alpha|u|^{p}}dx\leq C.\] _Part \((ii)\)._ As before we may assume \(u\in C^{1}_{c}(B_{1})\), and let \(u^{*}\) denote its associated radially decreasing rearrangement. By the arguments in the proof of Theorem 1.7 \((ii)\), we have \[\int_{B_{1}}\Big{(}|\nabla u|^{2}-\frac{1}{4}u(x)^{2}V(|x|)\Big{)}dx\geq\int_{B_{1}}\Big{(}|\nabla u^{*}|^{2}-\frac{1}{4}u^{*}(x)^{2}V(|x|)\Big{)}dx.\] Now we can apply Lemma 4.6 to \(u^{*}\) to obtain the desired bound. _Part \((iii)\)._ By (1.16), there exist \(C>0\) and \(\theta>0\) such that \[|V(r)r^{2}(-\ln r)^{2}-1|\leq C(-\ln r)^{-\theta}\quad\text{for}\ r\in(0,\frac{1}{4}).\] Let \[\begin{cases}u(r)=(-\ln r)^{\frac{1}{2}}v(r)\quad\text{for}\ \,r\in(0,1),\\ \frac{1}{4}=e^{\frac{1}{2}(1-e^{t_{0}})},\ r=e^{\frac{1}{2}(1-e^{t+t_{0}})},\ v(r)=w(t);\end{cases}\] then \[dr=-\frac{1}{2}e^{\frac{1}{2}(1-e^{t+t_{0}})}e^{t+t_{0}}dt,\ \lim_{t\to-t_{0}^{+}}w(t)=\lim_{r\to 1^{-}}v(r)=0\] and \[\|u\|_{-1/4}^{2}=2\pi\int_{0}^{1}v^{\prime}(r)^{2}r(-\ln r)dr=2\pi\int_{-t_{0}}^{+\infty}w^{\prime}(t)^{2}\,\frac{e^{t+t_{0}}-1}{e^{t+t_{0}}}dt,\] \[\Big{|}\int_{0}^{1/4}\Big{(}V(r)r^{2}(-\ln r)^{2}-1\Big{)}\frac{-\ln r}{r^{2}(-\ln r)^{2}}v(r)^{2}rdr\Big{|} \leq C\int_{0}^{1/4}\frac{1}{r(-\ln r)^{1+\theta}}v(r)^{2}dr = C\int_{0}^{+\infty}w(t)^{2}\,\frac{e^{t+t_{0}}}{(e^{t+t_{0}}-1)^{1+\theta}}dt,\] \[\int_{B_{1}}e^{\alpha|u|^{p}}dx=2\pi\int_{0}^{1}e^{\alpha|u|^{p}}rdr=\pi\int_{-t_{0}}^{+\infty}e^{\alpha\frac{(e^{t+t_{0}}-1)^{p}}{2^{p}}|w|^{p}+1-e^{t+t_{0}}+(t+t_{0})}dt.\] We now construct a sequence of radially symmetric functions \(\{\hat{u}_{n}\}\subset C^{\infty}_{c}(B_{1})\), whose supporting sets shrink to the origin as \(n\to\infty\), and which satisfies \[\|\hat{u}_{n}\|_{V,-1/4}=1,\ \lim_{n\to\infty}\int_{B_{1}}e^{\alpha|\hat{u}_{n}|^{p}}dx=\infty\] for any \(p\geq 1\) and any \(\alpha>0\). Given \(n>4\), we let \(\eta_{n}:(-\infty,+\infty)\to[0,1]\) be a smooth function such that \[\eta_{n}(t)=\begin{cases}0&\text{for}\ \,t\in(-\infty,0),\\ 1&\text{for}\ \,t\in(1,n-1),\\ 0&\text{for}\ \,t\in(n,+\infty).\end{cases} \tag{5.4}\] Define \[b_{1}(n):=\int_{1}^{n}\eta_{n}(s)ds\quad\text{and}\quad b_{2}(n):=\sqrt{4\pi\int_{0}^{n}\eta_{n}(s)^{2}ds};\] then \[\frac{b_{1}(n)}{b_{2}(n)}\geq\frac{n-2}{\sqrt{4\pi n}}\to+\infty\quad\text{as }\,n\to+\infty.\] Set \[\phi_{n}(t):=\eta_{n}(t)-\eta_{n}(t-2n)\quad\text{for}\ \,t\in\mathbb{R}, \tag{5.5}\] and \[w_{n}(t):=\frac{1}{b_{2}(n)+1}\int_{0}^{t}\phi_{n}(s)ds\quad\text{for}\ \,t\geq 0;\] then \(w_{n}\) is nonnegative and \(w_{n}\in C^{\infty}_{c}((0,\infty))\).
Moreover, \[\|u_{n}\|_{-1/4}^{2}=2\pi\int_{-t_{0}}^{+\infty}w_{n}^{\prime}(t)^{2}\,\frac{e^{t+t_{0}}-1}{e^{t+t_{0}}}dt \leq \frac{2\pi}{[b_{2}(n)+1]^{2}}\int_{0}^{+\infty}\phi_{n}^{2}(t)dt = \frac{b_{2}(n)^{2}}{[b_{2}(n)+1]^{2}}<1,\] and due to \(w_{n}(t)=0\) for \(t\leq 0\), \[\|u_{n}\|_{V,-1/4}^{2}-\|u_{n}\|_{-1/4}^{2} = \int_{0}^{1}\Big{(}V(r)r^{2}(-\ln r)^{2}-1\Big{)}\frac{-\ln r}{r^{2}(-\ln r)^{2}}v_{n}(r)^{2}rdr = \int_{0}^{1/4}\Big{(}V(r)r^{2}(-\ln r)^{2}-1\Big{)}\frac{1}{r(-\ln r)}v_{n}(r)^{2}dr \leq C\int_{0}^{+\infty}w_{n}(t)^{2}\,\frac{e^{t+t_{0}}}{(e^{t+t_{0}}-1)^{1+\theta}}dt \leq C\int_{0}^{+\infty}\frac{t^{2}e^{t+t_{0}}}{(e^{t+t_{0}}-1)^{1+\theta}}dt =: C_{0}.\] Therefore \[\|u_{n}\|_{V,-1/4}^{2}\leq 1+C_{0}.\] Note that for \(t\in[n,2n]\), \[w_{n}(t)=\frac{1}{b_{2}(n)+1}\int_{0}^{n}\phi_{n}(s)ds=\frac{b_{1}(n)}{b_{2}(n)+1}\] and with \[\hat{u}_{n}:=u_{n}/\|u_{n}\|_{V,-1/4},\ \hat{w}_{n}:=w_{n}/\|u_{n}\|_{V,-1/4},\] we have, for all large \(n\), \[\int_{B_{1}}e^{\alpha|\hat{u}_{n}|^{p}}dx = \pi\int_{-t_{0}}^{+\infty}e^{\alpha\left(\frac{e^{t+t_{0}}-1}{2}\right)^{p}|\hat{w}_{n}|^{p}+1-e^{t+t_{0}}+(t+t_{0})}dt > \pi\int_{n}^{2n}e^{\alpha\left(\frac{e^{t+t_{0}}-1}{2}\right)^{p}|\hat{w}_{n}|^{p}+1-e^{t+t_{0}}+(t+t_{0})}dt \geq \pi\int_{n}^{2n}e^{\alpha\left[\frac{b_{1}(n)}{b_{2}(n)+1}\frac{1}{4(1+C_{0})^{1/2}}\right]^{p}e^{p(t+t_{0})}-e^{t+t_{0}}}dt \rightarrow +\infty\ \ \mbox{ as }\ n\rightarrow+\infty,\] since \(\frac{b_{1}(n)}{b_{2}(n)+1}\rightarrow\infty\) as \(n\rightarrow\infty\). This completes the proof.
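As a final numerical sanity check (ours; \(\alpha=2\) and \(t_{0}=1\) are arbitrary sample values, and the harmless factor \((1+C_{0})^{-1/2}\) is dropped), the blow-up mechanism above can be seen by comparing logarithms, which avoids floating-point overflow: on \([n,2n]\) the positive part of the exponent eventually dominates \(e^{t+t_{0}}\) for every \(p\geq 1\).

```python
# Log-domain check (ours) that the exponent in the last display diverges on [n, 2n]
# for every p >= 1, using the bound c_n = b1(n)/(b2(n)+1) >= (n-2)/sqrt(4*pi*n).
import numpy as np

t0, alpha = 1.0, 2.0
for p in (1.0, 2.0):
    for n in (10, 100, 1000, 10000):
        c_n = (n - 2) / np.sqrt(4 * np.pi * n)
        # log of the positive term minus log of the negative term, at t = n:
        diff = np.log(alpha) + p * np.log(c_n / 4) + p * (n + t0) - (n + t0)
        print(p, n, diff)   # eventually positive and increasing in n => blow-up
```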
2310.09032
Cell-Free Massive MIMO for ISAC: Access Point Operation Mode Selection and Power Control
This paper considers a cell-free massive multiple-input multiple-output (MIMO) integrated sensing and communication (ISAC) system, where distributed MIMO access points (APs) are used to jointly serve the communication users and detect the presence of a single target. We investigate the problem of AP operation mode selection, wherein some APs are dedicated for downlink communication, while the remaining APs are used for sensing purposes. Closed-form expressions for the individual spectral efficiency (SE) and mainlobe-to-average-sidelobe ratio (MASR) are derived, which are respectively utilized to assess the communication and sensing performances. Accordingly, a max-min fairness problem is formulated and solved, where the minimum SE of the users is maximized, subject to the per-AP power constraints as well as sensing MASR constraint. Our numerical results show that the proposed AP operation mode selection with power control can significantly improve the communication performance for given sensing requirements.
Mohamed Elfiatoure, Mohammadali Mohammadi, Hien Quoc Ngo, Michail Matthaiou
2023-10-13T11:49:45Z
http://arxiv.org/abs/2310.09032v1
# Cell-Free Massive MIMO for ISAC: Access Point Operation Mode Selection and Power Control ###### Abstract This paper considers a cell-free massive multiple-input multiple-output (MIMO) integrated sensing and communication (ISAC) system, where distributed MIMO access points (APs) are used to jointly serve the communication users and detect the presence of a single target. We investigate the problem of AP operation mode selection, wherein some APs are dedicated for downlink communication, while the remaining APs are used for sensing purposes. Closed-form expressions for the individual spectral efficiency (SE) and mainlobe-to-average-sidelobe ratio (MASR) are derived, which are respectively utilized to assess the communication and sensing performances. Accordingly, a max-min fairness problem is formulated and solved, where the minimum SE of the users is maximized, subject to the per-AP power constraints as well as sensing MASR constraint. Our numerical results show that the proposed AP operation mode selection with power control can significantly improve the communication performance for given sensing requirements. ## I Introduction ISAC has recently been envisioned as a key enabling technology for future wireless networks, aiming to efficiently utilize the congested resources for both communication and sensing [1, 2]. The radar bands set aside for sensing can be harnessed for wireless communication operation, enabling the implementation of high data-rate applications. To unify the radar and communication operations, two well-known designs, namely separated and co-located systems, were introduced in [3, 4, 5] and [6, 7], respectively. The former utilizes different devices, operating over the same frequency band, for communication and sensing, while in the latter a single device acts as radar and communication base station (BS) by simultaneously communicating with multiple downlink users and detecting radar targets. The main driving force behind the transition from the separated design to a co-located design was to reduce the complexity induced by side-information exchange among the radar and communication devices [7]. However, co-located design with a MIMO BS often suffers from a fairness problem, since the cell-boundary users are subject to inter-cell interference and significant power decay over long distances. The key feature of massive MIMO technology, i.e., inter/intra-cell interference suppression, revitalizes the interest in the separated design with multiple communication and radar devices to implement distributed ISAC architectures. In this context, cell-free massive MIMO with distributed MIMO APs can be exploited to support ISAC. In cell-free massive MIMO, all users are coherently served by all APs over the same time-frequency band. Each AP is connected to a central processing unit (CPU) via fronthaul links, and the CPU is responsible for coordination [8, 9]. The integration of ISAC into cell-free massive MIMO networks has been recently investigated in [10, 11]. Specifically, Behdad _et al._[10] studied a cell-free massive MIMO ISAC system, consisting of a fixed number of transmit and receive APs. Users are served by the transmit APs and, at the same time, the transmitted signals are used for sensing to detect the presence of a target in a certain location. The reflected signals are received at the receive APs and then processed at the CPU.
The authors proposed a power allocation algorithm to maximize the sensing signal-to-noise ratio (SNR) under signal-to-interference-plus-noise ratio (SINR) constraints at the users. Demirhan _et al._[11] studied the sensing and communication beamforming design problem in cell-free massive MIMO ISAC systems, where a joint beamforming design was proposed to maximize the sensing SNR, while satisfying the communication SINR constraints. Different from the above-mentioned works [10, 11], where the AP operation modes are fixed, we consider a novel cell-free massive MIMO ISAC network with dynamic AP operation mode selection. The APs' operation mode is designed to maximize the minimum SE of the downlink users, while satisfying the sensing requirement to detect the presence of a single target in a certain location. Relying on the long-term channel state information (CSI), the APs are divided into communication APs (C-APs) and sensing APs (S-APs) to support downlink communication and sensing operations simultaneously. The main contributions of our paper can be summarized as follows: * By leveraging the use-and-then-forget strategy, we derive closed-form expressions for the downlink SE and MASR to evaluate the performance of the communication and sensing operation, respectively. Then, we formulate the problem of joint AP operation mode selection and power control, considering per-AP power constraints and a MASR constraint for target detection. * We propose a greedy algorithm for AP operation mode selection. Accordingly, an alternating optimization (AO) algorithm is developed to handle the coupling between the C-AP and S-AP power control coefficients' design. * Numerical results show that our proposed greedy AP operation mode selection, combined with power control, can significantly improve the communication performance for given sensing requirements.
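The paper's own algorithm details are not reproduced here, so the following is only a minimal NumPy sketch of how a greedy AP operation mode selection loop of this flavor could be organized. The scoring helpers `min_user_se` and `masr`, and the threshold `masr_min`, are hypothetical toy proxies for the closed-form SE and MASR expressions derived in the paper, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
M, K = 16, 4                       # number of APs and users (toy sizes)

# Hypothetical per-(AP, user) SE contributions and per-AP sensing gains.
se_gain = rng.uniform(0.1, 1.0, size=(M, K))
sense_gain = rng.uniform(0.1, 1.0, size=M)

def min_user_se(c_aps):
    """Minimum per-user SE when the APs in c_aps transmit (toy proxy)."""
    if not c_aps:
        return 0.0
    return np.log2(1.0 + se_gain[list(c_aps)].sum(axis=0)).min()

def masr(s_aps):
    """Toy proxy for the mainlobe-to-average-sidelobe ratio of the sensing APs."""
    return sense_gain[list(s_aps)].sum()

masr_min = 1.5                     # assumed sensing requirement threshold

# Greedy selection: start with all APs sensing, then move APs to communication
# one at a time, always picking the move that most improves the minimum SE,
# while keeping the sensing MASR constraint satisfied.
c_aps, s_aps = set(), set(range(M))
improved = True
while improved:
    improved = False
    candidates = [m for m in s_aps if masr(s_aps - {m}) >= masr_min]
    if candidates:
        best = max(candidates, key=lambda m: min_user_se(c_aps | {m}))
        if min_user_se(c_aps | {best}) > min_user_se(c_aps):
            c_aps.add(best)
            s_aps.remove(best)
            improved = True

print(f"C-APs: {sorted(c_aps)}, min SE: {min_user_se(c_aps):.2f}, MASR: {masr(s_aps):.2f}")
```

In the paper, a power control step (the AO algorithm) would be interleaved with this selection; the sketch fixes the powers for brevity.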
2303.11745
Poisoning Attacks in Federated Edge Learning for Digital Twin 6G-enabled IoTs: An Anticipatory Study
Federated edge learning can be essential in supporting privacy-preserving, artificial intelligence (AI)-enabled activities in digital twin 6G-enabled Internet of Things (IoT) environments. However, we also need to consider the potential of attacks targeting the underlying AI systems (e.g., adversaries seek to corrupt data on the IoT devices during local updates or corrupt the model updates); hence, in this article, we propose an anticipatory study for poisoning attacks in federated edge learning for digital twin 6G-enabled IoT environments. Specifically, we study the influence of adversaries on the training and development of federated learning models in digital twin 6G-enabled IoT environments. We demonstrate that attackers can carry out poisoning attacks in two different learning settings, namely: centralized learning and federated learning, and successful attacks can severely reduce the model's accuracy. We comprehensively evaluate the attacks on a new cyber security dataset designed for IoT applications with three deep neural networks under the non-independent and identically distributed (Non-IID) data and the independent and identically distributed (IID) data. The poisoning attacks, on an attack classification problem, can lead to a decrease in accuracy from 94.93% to 85.98% with IID data and from 94.18% to 30.04% with Non-IID data.
Mohamed Amine Ferrag, Burak Kantarci, Lucas C. Cordeiro, Merouane Debbah, Kim-Kwang Raymond Choo
2023-03-21T11:12:17Z
http://arxiv.org/abs/2303.11745v1
# Poisoning Attacks in Federated Edge Learning for Digital Twin 6G-enabled IoTs: An Anticipatory Study ###### Abstract Federated edge learning can be essential in supporting privacy-preserving, artificial intelligence (AI)-enabled activities in digital twin 6G-enabled Internet of Things (IoT) environments. However, we also need to consider the potential of attacks targeting the underlying AI systems (e.g., adversaries seek to corrupt data on the IoT devices during local updates or corrupt the model updates); hence, in this article, we propose an anticipatory study for poisoning attacks in federated edge learning for digital twin 6G-enabled IoT environments. Specifically, we study the influence of adversaries on the training and development of federated learning models in digital twin 6G-enabled IoT environments. We demonstrate that attackers can carry out poisoning attacks in two different learning settings, namely: centralized learning and federated learning, and successful attacks can severely reduce the model's accuracy. We comprehensively evaluate the attacks on a new cyber security dataset designed for IoT applications with three deep neural networks under the non-independent and identically distributed (Non-IID) data and the independent and identically distributed (IID) data. The poisoning attacks, on an attack classification problem, can lead to a decrease in accuracy from \(94.93\)% to \(85.98\)% with IID data and from \(94.18\)% to \(30.04\)% with Non-IID data. Poisoning attack, Federated Learning, IoT, 6G, Security, Digital Twin. ## I Introduction The emergence of the digital twin (DT) paradigm with 6G wireless communication networks is envisioned to transform applications and customer services through the Internet of Things (IoT) into fully autonomous and smart systems [1]. The main idea behind DT is to build a digital replica of wireless networks' physical devices and features to achieve low-latency and highly reliable connectivity while providing high performance and energy efficiency in IoT networks. The architecture of digital twin 6G-enabled IoT can be organized into four layers, namely, Physical, Networking, Services, and Applications. The physical layer refers to IoT devices equipped with sensors and actuators for collecting data from the environment. The networking layer refers to DT services, telecommunication networks (6G), IEEE 802.15.4, long-range WiFi, and communication protocols to connect IoT devices with other operators and system components. Eclipse Ditto1 can be used as IoT middleware, providing an IoT abstraction level for connecting IoT solutions with physical devices via the digital twin model. The services layer provides AI, storage, computation, and services to IoT devices and Edge servers [2]. The applications layer refers to IoT applications, including the Internet of Vehicles, the Internet of Sensing, and the Internet of Energy, etc. Footnote 1: [https://www.eclipse.org/ditto/](https://www.eclipse.org/ditto/) With distributed edge learning, poisoning attacks pose an even more severe challenge than in the conventional machine learning environment (i.e., centralized learning). The threat of poisoning attacks can be challenging to overcome, as such attacks can be difficult to identify. Indeed, poisoning attacks have been shown to be effective against distributed edge learning in the following recent works: [3, 4, 5, 6, 7, 8]. Zhang et al.
[4] proposed a defense mechanism at the model level, which is mainly based on the detection of a poisoned model. More precisely, the proposed mechanism selects the model's important parameters based on gradients, in order to obtain a low-dimensional yet efficient representation of the downloaded local model parameters. Venkatesan et al. [5] considered the poisoning availability attack framework, where an attacker can introduce a set of poisonous samples during training to degrade the deployed model's accuracy. Aiken et al. [6] proposed an anomaly-based IDS. The proposed system uses an adversarial testing tool named Hydra, which measures the influence of adversarial evasion attacks that aim to reduce the detection rate of malicious network traffic by the network intrusion detection system. Motivated by the facts mentioned above, in this article, we propose an anticipatory study for poisoning attacks in federated edge learning for digital twin 6G-enabled IoTs. Specifically, we demonstrate that attackers who conduct poisoning attacks in two different learning modes, namely, centralized learning and federated learning, can severely reduce the model's accuracy and the detection rate of each intrusion. We comprehensively evaluate our attacks on a new cyber security dataset designed for IoT applications, the Edge-IIoTset dataset. We use different deep learning models for cyber security intrusion detection. Furthermore, federated edge learning is evaluated under two data distribution types: IID and Non-IID data. The study demonstrates that attackers who conduct poisoning attacks in two different learning modes, namely, centralized learning and federated learning, can lead to a decrease in accuracy from 94.93% to 85.98% with IID data and from 94.18% to 30.04% with Non-IID data.

## II The Anticipated Poisoning Attack on the Federated Edge Learning

### _Description of the Anticipated Poisoning Attack_

We systemize the poisoning threat models in federated edge learning based on three dimensions: Adversarial goal, Attack strategies, and Malicious client selection.

```
Data: \(\eta\), \(Epo\), \(Batch\)
/* Client identification based on the average loss change and a backdoor attack */
ClientIdentification()
/* Insert poison data into the local dataset */
InsertPoisonData()
/* Assign wrong labels to the poison data */
AssignWrongLabel()
/* Start the targeted client dropping attack using DDoS attacks */
DropHonestClients()
/* Split the local dataset \(\mathcal{P}_{m}\) into \(Batch\) local data batches */
\(B \leftarrow \text{Split}(\mathcal{P}_{m}, Batch)\)
for \(i = 1, \ldots, Epo\) do
  for \(b \in B\) do
    \(f \leftarrow f - \eta \nabla f_{c}(x, b)\)  // Local malicious client training
  end for
end for
Return \(f\) to the Edge server
```
**Algorithm 1** Poisoning Attack

```
Data: \(\eta\), \(Epo\), \(Batch\)
/* Split the local dataset \(\mathcal{P}_{h}\) into \(Batch\) local data batches */
\(B \leftarrow \text{Split}(\mathcal{P}_{h}, Batch)\)
for \(i = 1, \ldots, Epo\) do
  for \(b \in B\) do
    \(f \leftarrow f - \eta \nabla f_{c}(x, b)\)  // Local honest client training
  end for
end for
Return \(f\) to the Edge server
```
**Algorithm 2** Local updates of honest clients
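To make the local updates of Algorithms 1 and 2 concrete, here is a minimal PyTorch sketch of a client's local training round; it assumes each client's data is a `TensorDataset` of (features, integer labels), and the poisoning branch is a simplified label-flipping stand-in for the `ClientIdentification`/`InsertPoisonData`/`AssignWrongLabel`/`DropHonestClients` steps, not the authors' implementation:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def local_update(model, dataset, lr=0.01, epochs=3, batch_size=100,
                 poison=False, target_label=0, flip_label=1, alpha=0.6):
    """One client's local training round. An honest client (poison=False) just
    fits its local data; a malicious client first flips a fraction alpha of the
    target-class labels before training (simplified poisoning)."""
    x, y = dataset.tensors
    if poison:
        idx = (y == target_label).nonzero(as_tuple=True)[0]
        y = y.clone()
        y[idx[: int(alpha * len(idx))]] = flip_label   # assign wrong labels
    loader = DataLoader(TensorDataset(x, y), batch_size=batch_size, shuffle=True)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
    return model.state_dict()   # the update returned to the edge server
```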
#### II-A1 Adversarial goal

An attacker can focus on two types of poisoning attacks: a machine learning attack against availability (i.e., targeting all classes) and a machine learning attack against a particular class. Since targeted attacks are considerably more challenging to detect, we consider a poisoning attack as a targeted attack against cyber security intrusion detection based on federated learning. Specifically, the Normal class is considered as the targeted class, which is affected by the attack rate \(\alpha=[0\%,40\%,50\%,60\%,80\%]\). An attack rate of \(\alpha=60\%\) means that 60% of Normal traffic is labeled as an attack.

#### II-A2 Attack strategies

The attacker can conduct poisoning attacks (i.e., the label flipping attack, the targeted dropping attack, and the clean-label attack) in two types of learning methods: centralized learning and federated learning. In centralized learning mode, an attacker is considered to have access to the training dataset of the server through a backdoor attack. Then, it injects maliciously crafted data points containing normal IoT traffic labeled as an attack class. In the federated learning mode, some malicious clients contain maliciously crafted data points and send malicious updates to the server to change the model's prediction. The generation of poisoning attacks in federated edge learning is presented in Algorithms 1, 2, and 3 (where Algorithm 3, the local update of malicious clients, is the malicious ClientUpdate(\(f\), \(k_{m}\)) procedure with the body of Algorithm 1), and can be defined by the following steps. Table I presents the notations used.

\begin{table} \begin{tabular}{|c|c|} \hline **Notation** & **Description** \\ \hline \(\eta\) & The learning rate \\ \hline \(Epo\) & The number of local epochs \\ \hline \(Batch\) & The local batch size \\ \hline \(K_{H}\) & The set of honest clients \\ \hline \(K_{M}\) & The set of malicious clients \\ \hline \(f_{t+1}^{k_{h}}\) & The new local set of weights by honest client \(k_{h}\) \\ \hline \(f_{t+1}^{k_{m}}\) & The new local set of weights by malicious client \(k_{m}\) \\ \hline \end{tabular} \end{table} TABLE I: Notations used in Algorithms 1-3.

* Step 1: The Edge server initializes the first model \(f_{1}\).
* Step 2: The Edge server distributes the overall model \(f_{t+1}\) to the honest clients \(K_{H}\) and the malicious clients \(K_{M}\).
* Step 3: The malicious clients \(K_{M}\) identify the best clients based on the average loss change through a backdoor attack.
* Step 4: The malicious clients \(K_{M}\) generate the targeted class samples and insert poison data into the local dataset.
* Step 5: The malicious clients \(K_{M}\) assign wrong labels to the poisoned data.
* Step 6: The malicious clients \(K_{M}\) start the targeted client dropping attack using DDoS attacks.
* Step 7: The malicious clients \(K_{M}\) calculate the poisoned update and send it to the Edge server.

#### II-A3 Malicious client selection

We consider an intelligent attacker that selectively turns honest clients malicious in order to attack federated edge learning, ensuring maximum attack strength while simultaneously reducing the probability of detection. Based on the backdoor attack, this attacker can access the machine learning models located at the Edge server. The attacker has the following choices to consider:

* When to start the attack during the FL process? We found that the poisoning attack is most severe when the attackers control more than 51% of the clients.
* What is the attacker's knowledge? The threat model is a white-box attack if the attacker has full knowledge of parameters, algorithms, data, and features. If the attacker has only query access to the model, the threat model is a black-box attack. We found that the poisoning attack is most severe under the white-box attack.
* Which honest clients should be selected to inject maliciously crafted data points containing normal IoT traffic labeled as an attack class? We found that the earlier the attackers select the best clients (i.e., those that provide the best learning model), the more severe the poisoning attack.
* How much local data should be dropped or injected for each client to maximize the corruption of the entire global model at the edge server? We found that this depends on the analysis of the local data of each client.
* What is the best strategy for assigning wrong labels? The assignment strategy can be implemented in various ways, such as dropping, shuffling, swapping, and sliding. After several experiments, we found that the swapping-based wrong-label assignment strategy affects distributed edge learning more than the other strategies.

We use three different deep learning models for intrusion detection, namely, DNN, RNN, and CNN. The architectures of the three deep neural models adopted for intrusion detection are illustrated in Fig. 1.

Fig. 1: Structure of three deep neural models adopted by intrusion detection.
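Viewed from the edge server, Steps 1-7 amount to federated averaging over a mixture of honest and malicious clients. The following is a minimal sketch reusing the `local_update` helper from the earlier code block; equal-weight averaging over clients and a model without integer buffers (e.g., no BatchNorm counters) are assumed for simplicity:

```python
import copy

def fedavg_round(global_model, client_datasets, malicious_ids, alpha=0.6):
    """One federated round: the edge server broadcasts the model (Step 2),
    each client trains locally (honest or poisoned, Steps 3-6), and the
    server averages the returned weights (Step 7)."""
    states = []
    for cid, ds in enumerate(client_datasets):
        local = copy.deepcopy(global_model)          # Step 2: distribute f_{t+1}
        states.append(local_update(local, ds,
                                   poison=(cid in malicious_ids), alpha=alpha))
    # Equal-weight average of the client updates (simplified FedAvg).
    avg = {k: sum(s[k].float() for s in states) / len(states) for k in states[0]}
    global_model.load_state_dict(avg)
    return global_model
```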
## III Experimental Evaluation

### _Experimental setup_

We conducted an experimental analysis of poisoning attacks against intrusion detection systems based on deep learning approaches in centralized and federated learning. We chose three deep learning approaches: DNN, CNN, and RNN. The classification tasks are conducted in two modes: binary classification and multi-class classification. Binary classification includes two classes (i.e., Normal or Attack). Multi-class classification includes 15 classes (i.e., Normal or one of 14 attack types). We use Google Colab with Python libraries to analyze and visualize data. To build, train, and evaluate the poisoned models, we use both open-source frameworks, namely, Keras and PyTorch.
We adopt both the IID and Non-IID data distributions in federated learning. Table II presents the details of the settings for the experimental evaluation.

TABLE II: Settings for experimental evaluation.

### _Dataset description and pre-processing_

We use the Edge-IIoTset dataset [9], a new comprehensive, realistic cyber security dataset. The Edge-IIoTset dataset is generated using a purpose-built IoT/IIoT testbed. The testbed consists of seven interconnected layers: IoT/IIoT perception layer, edge layer, SDN layer, fog layer, Blockchain layer, NFV layer, and cloud computing layer. It contains more than 20 million total instances of normal and attack traffic in CSV and PCAP files, with over 63 features. The data pre-processing phase is organized into the following seven steps (a condensed code sketch follows the list):

1. Clean corrupted and duplicated rows.
2. Clean unnecessary columns (features), especially for avoiding overfitting.
3. Encode categorical features as a one-hot numeric array using the \(OneHotEncoder()\) function.
4. Split the dataset into random train (80%) and test (20%) subsets.
5. Standardize features using the \(StandardScaler()\) function.
6. Perform oversampling using SMOTE2.

Footnote 2: To oversample data in minority classes while avoiding overfitting, we use the Synthetic Minority Over-sampling Technique (SMOTE).

7. Give a new shape to the arrays without changing their data, which is needed for the RNN and CNN models3.

Footnote 3: The reason for reshaping is to provide the correct input shape to the CNN and RNN models using the numpy.reshape() function.
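As a rough illustration of steps 1-7, here is a minimal scikit-learn/imbalanced-learn sketch; the file name and the `Attack_type` label column are placeholders rather than the actual Edge-IIoTset schema, and `pd.get_dummies` stands in for the `OneHotEncoder()` call:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from imblearn.over_sampling import SMOTE

df = pd.read_csv("edge_iiot.csv").drop_duplicates().dropna()   # steps 1-2: clean rows
X = df.drop(columns=["Attack_type"])                           # placeholder label column
y = df["Attack_type"]

cat_cols = X.select_dtypes(include="object").columns
X = pd.get_dummies(X, columns=list(cat_cols))                  # step 3: one-hot encoding

X_train, X_test, y_train, y_test = train_test_split(           # step 4: 80/20 split
    X, y, test_size=0.2, random_state=42)

scaler = StandardScaler()                                      # step 5: standardization
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

X_train, y_train = SMOTE().fit_resample(X_train, y_train)      # step 6: oversampling

X_train = X_train.reshape(-1, X_train.shape[1], 1)             # step 7: reshape for CNN/RNN
X_test = X_test.reshape(-1, X_test.shape[1], 1)
```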
### _Performance Metrics_

In order to evaluate the performance of the machine learning models, we use the following performance metrics:

* _True Positive (TP)_: correctly classified attack samples.
* _False Negative (FN)_: wrongly classified attack samples.
* _True Negative (TN)_: correctly classified benign samples.
* _False Positive (FP)_: wrongly classified benign samples.
* _Accuracy_, given by: \(\frac{TP+TN}{TP+TN+FP+FN}\)
* _Precision_, given by: \(\frac{TP}{TP+FP}\)
* _Recall_, given by: \(\frac{TP}{TP+FN}\)
* \(F_{1}\)_-Score_, given by: \(2\cdot\frac{Precision\cdot Recall}{Precision+Recall}\)
* _Poisoning attack rate_: used to measure the success of the poisoning attack for each label, given by: \(1-\frac{Recall_{wp}}{Recall_{wop}}\), where \(Recall_{wp}\) is the detection rate of the intrusion attack after the poisoning attack and \(Recall_{wop}\) is the detection rate of the intrusion attack before the poisoning attack.
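Spelled out on the binary confusion matrix, these metrics reduce to the standard ratios; a minimal sketch, including the poisoning attack rate:

```python
def binary_metrics(tp, tn, fp, fn):
    """Standard confusion-matrix metrics for the binary (Normal/Attack) case."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

def poisoning_attack_rate(recall_with_poison, recall_without_poison):
    """Success of the attack on one label: 1 - Recall_wp / Recall_wop."""
    return 1.0 - recall_with_poison / recall_without_poison

print(binary_metrics(tp=90, tn=85, fp=15, fn=10))
print(poisoning_attack_rate(0.30, 0.94))   # e.g. recall drops from 94% to 30%
```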
### _Experimental Results_

#### IV-D1 Centralized model performance

Table IV presents the classification report of the accuracy of deep learning for binary classification and multi-classification under the different deep learning approaches, namely, DNN, RNN, and CNN, in centralized model performance with attack rates \(\alpha=[0\%,40\%,50\%,60\%]\) (i.e., poisoning attack). With binary classification, the accuracy of the DNN classifier decreases from 100% to 95.50%, while with multi-classification, the accuracy decreases from 93.01% to 21.76%. We observe that intrusion detection models based on the deep learning classifiers (i.e., CNN, RNN, DNN) are more affected by the poisoning attack in multi-classification than in binary classification.
Table III presents the classification report for multi-class classification of the different deep learning approaches, namely, DNN, RNN, and CNN, in centralized model performance with the attack rate \(\alpha=60\%\) (i.e., poisoning attack). We observe that the DNN gives the highest precision rate without a poisoning attack (i.e., \(\alpha=0\%\)) for Normal traffic and three types of attacks, namely, the MITM attack (100%), the Password attack (69%), and the XSS attack (100%). In addition, we observe that the CNN gives the highest precision rate for Normal traffic and three types of attacks: the SQL injection attack (64%), the Vulnerability scanner attack (94%), and the XSS attack (100%).

TABLE III: Per-class precision, recall, and F1-score of DNN, RNN, and CNN in centralized model performance, without and with the poisoning attack.

Figure 2 illustrates the confusion matrix of the DNN for binary classification in centralized model performance without a poisoning attack (\(\alpha=0\%\)), with a poisoning attack of \(\alpha=60\%\) and \(\alpha=80\%\), and learning rates LR = [0.1, 0.01, 0.001]. Without a poisoning attack (i.e., \(\alpha=0\%\)), we observe satisfactory results under the three values of the learning rate LR = [0.1, 0.01, 0.001].
When attackers launch a poisoning attack with \(\alpha=60\%\) and \(\alpha=80\%\), we observe that the DNN classifier is affected and gives degraded results under the three values of the learning rate LR = [0.1, 0.01, 0.001]. In addition, we observe that when the learning rate is fixed at 0.1, the DNN classifier can resist the poisoning attack better than with learning rates of 0.01 and 0.001.

#### IV-D2 Federated model performance

Table IV presents the accuracy results of the federated deep learning approach (CNN) for binary classification and multi-classification in federated model performance with IID data and different numbers of honest clients \(K_{h}\) and malicious clients \(K_{m}\). When the number of honest clients is higher than the number of malicious clients (i.e., [\(K_{h}=10\) and \(K_{m}=0\)], [\(K_{h}=7\) and \(K_{m}=3\)]), we can observe that in each round the accuracy increases, until it reaches 94.93% and 100% with multi-classification and binary classification, respectively. Conversely, when the number of honest clients is less than the number of malicious clients (i.e., \(K_{h}=3\) and \(K_{m}=7\)), we can observe that the accuracy is degraded in each round, until it drops to 85.98% with multi-classification.

TABLE IV: Accuracy of the federated deep learning approach (CNN) for binary classification and multi-classification in federated model performance with IID data and different numbers of honest clients \(K_{h}\) and malicious clients \(K_{m}\).

Table V presents the accuracy results of the federated deep learning approach (CNN) for binary classification and multi-classification in federated model performance with Non-IID data and different numbers of honest clients \(K_{h}\) and malicious clients \(K_{m}\). When the number of honest clients is less than the number of malicious clients (i.e., \(K_{h}=3\) and \(K_{m}=7\)), we can observe that the accuracy is degraded in each round, until it drops to 30.04%.

TABLE V: Accuracy of the federated deep learning approach (CNN) for binary classification and multi-classification in federated model performance with Non-IID data.

Table VI presents the evaluation results of the federated deep learning approach (CNN) for multi-classification in federated model performance with IID and Non-IID data and different numbers of honest clients \(K_{h}\) and malicious clients \(K_{m}\). With a poisoning attack (i.e., \(\alpha=60\%\)) and the number of honest clients less than the number of malicious clients (i.e., \(K_{h}=3\) and \(K_{m}=7\)), we observe that the CNN classifier is affected and gives degraded results on the three performance metrics, namely, precision, recall, and F1-score.

TABLE VI: Precision, recall, and F1-score of the federated deep learning approach (CNN) for multi-classification with IID and Non-IID data and different numbers of honest clients \(K_{h}\) and malicious clients \(K_{m}\).

Figure 3 presents the evaluation results of the poisoning attack rate (%) in two different learning modes, namely, (a) federated model performance under two data distribution types, namely, IID and Non-IID data; and (b) centralized model performance with the DNN, RNN, and CNN models. The numbers of honest clients and malicious clients are \(K_{h}=3\) and \(K_{m}=7\). In the federated edge learning setting, we observe that the success of the poisoning attack reaches up to 100% for some classes, which means that the malicious IoT devices have disturbed the privacy-preserving federated learning in both IID and Non-IID modes. The results we observe for the centralized learning setting are, as we would expect, a poisoning attack success of up to 100% for the Normal class.

## IV Conclusions

We have proposed an anticipatory study for poisoning attacks in federated edge learning for digital twin 6G-enabled IoTs. We examined the influence of adversaries on the training and development of federated learning models for digital twin 6G-enabled IoTs.
We demonstrated that attackers who conduct poisoning attacks in two different learning modes, namely, centralized learning and federated learning, can severely reduce the model's accuracy. We comprehensively evaluated our attacks on a new cyber security dataset designed for IoT applications. The study demonstrates that attackers who conduct poisoning attacks can lead to a decrease in accuracy from 94.93% to 85.98% with IID data and from 94.18% to 30.04% with Non-IID data. Since we envision the possibility that attackers will use generative AI to create adversarial samples, our ongoing research agenda includes building efficient approaches to defend against such attacks. Additionally, the protection of the integrity of the data as well as the integrity of the AI model in the federated edge learning models for digital twin 6G-enabled IoTs is also on our radar.
2309.04475
Crystal Structure Prediction by Joint Equivariant Diffusion
Crystal Structure Prediction (CSP) is crucial in various scientific disciplines. While CSP can be addressed by employing currently-prevailing generative models (e.g. diffusion models), this task encounters unique challenges owing to the symmetric geometry of crystal structures -- the invariance of translation, rotation, and periodicity. To incorporate the above symmetries, this paper proposes DiffCSP, a novel diffusion model to learn the structure distribution from stable crystals. To be specific, DiffCSP jointly generates the lattice and atom coordinates for each crystal by employing a periodic-E(3)-equivariant denoising model, to better model the crystal geometry. Notably, different from related equivariant generative approaches, DiffCSP leverages fractional coordinates rather than Cartesian coordinates to represent crystals, remarkably promoting the diffusion and the generation process of atom positions. Extensive experiments verify that our DiffCSP significantly outperforms existing CSP methods, with a much lower computation cost in contrast to DFT-based methods. Moreover, the superiority of DiffCSP is also observed when it is extended for ab initio crystal generation.
Rui Jiao, Wenbing Huang, Peijia Lin, Jiaqi Han, Pin Chen, Yutong Lu, Yang Liu
2023-07-30T15:46:33Z
http://arxiv.org/abs/2309.04475v2
# Crystal Structure Prediction by Joint Equivariant Diffusion ###### Abstract Crystal Structure Prediction (CSP) is crucial in various scientific disciplines. While CSP can be addressed by employing currently-prevailing generative models (_e.g._ diffusion models), this task encounters unique challenges owing to the symmetric geometry of crystal structures--the invariance of translation, rotation, and periodicity. To incorporate the above symmetries, this paper proposes DiffCSP, a novel diffusion model to learn the structure distribution from stable crystals. To be specific, DiffCSP jointly generates the lattice and atom coordinates for each crystal by employing a periodic-E(3)-equivariant denoising model, to better model the crystal geometry. Notably, different from related equivariant generative approaches, DiffCSP leverages fractional coordinates rather than Cartesian coordinates to represent crystals, remarkably promoting the diffusion and the generation process of atom positions. Extensive experiments verify that our DiffCSP significantly outperforms existing CSP methods, with a much lower computation cost in contrast to DFT-based methods. Moreover, the superiority of DiffCSP is also observed when it is extended for ab initio crystal generation. ## 1 Introduction Crystal Structure Prediction (CSP), which returns the stable 3D structure of a compound based solely on its composition, has been a goal in physical sciences since the 1950s [1]. As crystals are the foundation of various materials, estimating their structures in 3D space determines the physical and chemical properties that greatly influence their application in various academic and industrial sciences, such as the design of drugs, batteries, and catalysts [2]. Conventional methods for CSP mostly apply the Density Functional Theory (DFT) [3] to compute the energy at each iteration, guided by optimization algorithms (such as random search [4], Bayesian optimization [5], etc.) to iteratively search for the stable state corresponding to the local minima of the energy surface [6]. The DFT-based approaches are computationally intensive. Recent attention has been paid to deep generative models, which directly learn the distribution from the training data consisting of stable structures [7; 8]. More recently, diffusion models, a special kind of deep generative model, have been employed for crystal generation [9], encouraged by their better physical interpretability and enhanced performance compared to other generative models. Intuitively, by conducting diffusion on stable structures, the denoising process in diffusion models acts like a force field that drives the atom coordinates towards the local energy minimum and thus is able to increase stability. Indeed, the success of diffusion models is observed in broad scientific domains, including molecular conformation generation [10], protein structure prediction [11] and protein docking [12]. However, designing diffusion models for CSP is challenging. From the perspective of physics, any E(3) transformation, including translation, rotation, and reflection, of the crystal coordinates does not change the physical law and thus keeps the crystal distribution invariant. In other words, the generation process we design should yield E(3) invariant samples. Moreover, in contrast to other types of structures such as small molecules [13] and proteins [14], CSP exhibits unique challenges, mainly incurred by the periodicity of the atom arrangement in crystals.
Figure 1 displays a crystal where the atoms in a unit cell are repeated infinitely in space. We identify this unique symmetry, jointly consisting of E(3) invariance and periodicity, as _periodic E(3) invariance_ in this paper. Generating such structures requires not only modeling the distribution of the atom coordinates within every cell, but also inferring how their bases (_a.k.a._ lattice vectors) are placed in 3D space. Interestingly, as we will show in § 4.1, this view offers a natural disentanglement for fulfilling the periodic E(3) invariance by separately enforcing constraints on fractional coordinates and lattice vectors, which permits a feasible implementation to encode the crystal symmetry. In this work, we introduce DiffCSP, an equivariant diffusion method to address CSP. Considering the specificity of the crystal geometry, our DiffCSP jointly generates the lattice vectors and the fractional coordinates of all atoms, by employing a proposed denoising model that is theoretically proved to generate periodic-E(3)-invariant samples. A preferable characteristic of DiffCSP is that it leverages the fractional coordinate system (defined in § 3) rather than the Cartesian system used in previous methods to represent crystals [9; 15], which encodes periodicity intrinsically. In particular, the fractional representation not only allows us to consider the Wrapped Normal (WN) distribution [16] to better model the periodicity, but also facilitates the design of the denoising model via the Fourier transformation, compared to the traditional multi-graph encoder in crystal modeling [15]. CDVAE [9] is closely related to our paper. It adopts an equivariant Variational Auto-Encoder (VAE) based framework to learn the data distribution and then generates crystals in a score-matching-based diffusion process. However, CDVAE focuses mainly on ab initio crystal generation where the composition is also randomly sampled, which is distinct from the CSP task in this paper. Moreover, while CDVAE first predicts the lattice and then updates the coordinates with the fixed lattice, we jointly update the lattice and coordinates to better model the crystal geometry. Besides, CDVAE represents crystals by Cartesian coordinates upon multi-graph modeling, whereas our DiffCSP applies fractional coordinates without multi-graph modeling as mentioned above. To sum up, our contributions are as follows: * To the best of our knowledge, we are the first to apply equivariant diffusion-based methods to address CSP. The proposed DiffCSP is more insightful than current learning-based approaches as the periodic E(3) invariance has been delicately considered. * DiffCSP conducts joint diffusion on lattices and fractional coordinates, which is capable of capturing the crystal geometry as a whole. Besides, the usage of fractional coordinates in place of Cartesian coordinates used in previous methods (_e.g._ CDVAE [9]) remarkably promotes the diffusion and the generation process of atom positions. * We verify the efficacy of DiffCSP on the CSP task against learning-based and DFT-based methods, and sufficiently ablate each proposed component in DiffCSP. We further extend DiffCSP into ab initio generation and show its effectiveness against related methods. ## 2 Related Works **Crystal Structure Prediction** Traditional computation methods [4; 5; 17; 18] combine DFT [3] with optimization algorithms to search for local minima in the potential energy surface.
However, DFT is computationally intensive, making it difficult to balance efficiency and accuracy. With the growth of crystal databases, machine-learning methods are applied as alternative energy predictors to DFT, followed by optimization steps [19; 20; 21]. Apart from the predict-optimize paradigm, another line of approaches directly learns stable structures from data by deep generative models, which represent crystals by 3D voxels [7; 22; 23], distance matrices [8; 24; 25] or 3D coordinates [26; 27; 28]. Unfortunately, these methods are unaware of the full symmetries of the crystal structure. CDVAE [9] has taken the required symmetries into account. However, as mentioned above, the initial version of CDVAE is designed for a different task and utilizes a different generation process. **Equivariant Graph Neural Networks** Geometrically equivariant Graph Neural Networks (GNNs) that ensure E(3) symmetry are powerful tools to represent physical objects [29; 30; 31; 32; 33], and have showcased the superiority in modeling 3D structures [34; 35]. To further model the periodic materials, Xie and Grossman [15] propose the multi-graph edge construction to capture the periodicity by connecting the edges between adjacent lattices. Yan et al. [36] further introduce periodic pattern encoding into a Transformer-based backbone. In this work, we achieve the periodic invariance by introducing the Fourier transform on fractional coordinates. **Diffusion Generative Models** Motivated by the non-equilibrium thermodynamics [37], diffusion models connect the data distribution with the prior distribution via forward and backward Markov chains [38], and have made remarkable progress in the field of image generation [39; 40]. Equipped with equivariant GNNs, diffusion models are capable of generating samples from the invariant distribution, which is desirable in conformation generation [10; 13], ab initio molecule design [41], protein generation [42], and so on. Recent works extend the diffusion models onto Riemann manifolds [43; 44], and enable the generation of periodic features like torsion angles [12; 16]. ## 3 Preliminaries **Representation of crystal structures** A 3D crystal can be represented as the infinite periodic arrangement of atoms in 3D space, and the smallest repeating unit is called a _unit cell_, as shown in Figure 1. A unit cell can be defined by a triplet \(\mathcal{M}=(\mathbf{A},\mathbf{X},\mathbf{L})\), where \(\mathbf{A}=[\mathbf{a}_{1},\mathbf{a}_{2},...,\mathbf{a}_{N}]\in\mathbb{R}^{h\times N}\) denotes the list of the one-hot representations of atom types, \(\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{N}]\in\mathbb{R}^{3\times N}\) consists of Cartesian coordinates of the atoms, and \(\mathbf{L}=[\mathbf{l}_{1},\mathbf{l}_{2},\mathbf{l}_{3}]\in\mathbb{R}^{3\times 3}\) represents the lattice matrix containing three basic vectors to describe the periodicity of the crystal. The infinite periodic crystal structure is represented by \[\{(\mathbf{a}^{\prime}_{i},\mathbf{x}^{\prime}_{i})|\mathbf{a}^{\prime}_{i}=\mathbf{a}_{i}, \mathbf{x}^{\prime}_{i}=\mathbf{x}_{i}+\mathbf{L}\mathbf{k},\;\forall\mathbf{k}\in\mathbb{Z}^{3 \times 1}\}, \tag{1}\] where the \(j\)-th element of the integral vector \(\mathbf{k}\) denotes the integral 3D translation in units of \(\mathbf{l}_{j}\). **Fractional coordinate system** The Cartesian coordinate system \(\mathbf{X}\) leverages three standard orthogonal bases as the coordinate axes.
In crystallography, the fractional coordinate system is usually applied to reflect the periodicity of the crystal structure [26; 27; 28; 45], which utilizes the lattice vectors \((\mathbf{l}_{1},\mathbf{l}_{2},\mathbf{l}_{3})\) as the bases. In this way, a point represented by the fractional coordinate vector \(\mathbf{f}=[f_{1},f_{2},f_{3}]^{\top}\in[0,1)^{3}\) corresponds to the Cartesian vector \(\mathbf{x}=\sum_{i=1}^{3}f_{i}\mathbf{l}_{i}\). This paper employs the fractional coordinate system, and denotes the crystal by \(\mathcal{M}=(\mathbf{A},\mathbf{F},\mathbf{L})\), where the fractional coordinates of all atoms in a cell compose the matrix \(\mathbf{F}\in[0,1)^{3\times N}\). **Task definition** CSP predicts for each unit cell the lattice matrix \(\mathbf{L}\) and the fractional matrix \(\mathbf{F}\) given its chemical composition \(\mathbf{A}\), namely, learning the conditional distribution \(p(\mathbf{L},\mathbf{F}\mid\mathbf{A})\). ## 4 The Proposed Method: DiffCSP This section first presents the symmetries of the crystal geometry, and then introduces the joint equivariant diffusion process on \(\mathbf{L}\) and \(\mathbf{F}\), followed by the architecture of the denoising function. ### Symmetries of Crystal Structure Distribution While various generative models can be utilized to address CSP, this task encounters particular challenges, including constraints arising from the symmetries of the crystal structure distribution. Here, we consider three types of symmetries in \(p(\mathbf{L},\mathbf{F}\mid\mathbf{A})\): permutation invariance, \(O(3)\) invariance, and periodic translation invariance. Their detailed definitions are provided as follows. **Definition 1** (Permutation Invariance).: _For any permutation \(\mathbf{P}\in\mathrm{S}_{N}\), \(p(\mathbf{L},\mathbf{F}\mid\mathbf{A})=p(\mathbf{L},\mathbf{F}\mathbf{P}\mid\mathbf{A}\mathbf{P})\), i.e., changing the order of atoms will not change the distribution._ **Definition 2** (O(3) Invariance).: _For any orthogonal transformation \(\mathbf{Q}\in\mathbb{R}^{3\times 3}\) satisfying \(\mathbf{Q}^{\top}\mathbf{Q}=\mathbf{I}\), \(p(\mathbf{Q}\mathbf{L},\mathbf{F}\mid\mathbf{A})=p(\mathbf{L},\mathbf{F}\mid\mathbf{A})\), namely, any rotation/reflection of \(\mathbf{L}\) keeps the distribution unchanged._ **Definition 3** (Periodic Translation Invariance).: _For any translation \(\mathbf{t}\in\mathbb{R}^{3\times 1}\), \(p(\mathbf{L},w(\mathbf{F}+\mathbf{t}\mathbf{1}^{\top})\mid\mathbf{A})=p(\mathbf{L},\mathbf{F}\mid\mathbf{A})\), where the function \(w(\mathbf{F})=\mathbf{F}-\lfloor\mathbf{F}\rfloor\in[0,1)^{3\times N}\) returns the fractional part of each element in \(\mathbf{F}\), and \(\mathbf{1}\in\mathbb{R}^{3\times 1}\) is a vector with all elements set to one. It states that any periodic translation of \(\mathbf{F}\) will not change the distribution2._ Footnote 2: Previous works (_e.g._[36]) further discuss the scaling invariance of a unit cell formed by periodic boundaries, allowing \(\mathbf{L}\to\alpha\mathbf{L},\forall\alpha\in\mathbb{N}_{+}^{3}\). In this paper, the scaling invariance is unnecessary since we apply the Niggli reduction [46] on the primitive cell as a canonical scale representation of the lattice vectors, where we fix \(\alpha=(1,1,1)^{\top}\). Additionally, periodic translation invariance in our paper is equivalent to the invariance of shifting periodic boundaries in [36]. We provide more discussions in Appendix A.4. The permutation invariance is tractably encapsulated by using GNNs as the backbone for generation [47].
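For concreteness, here is a minimal NumPy sketch (not from the paper) of the fractional representation and of Definition 3: Cartesian positions are recovered as \(\mathbf{X}=\mathbf{L}\mathbf{F}\), and a periodic translation of \(\mathbf{F}\) changes relative fractional coordinates only by integer offsets, so the structure is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.normal(size=(3, 3))      # lattice matrix (columns are lattice vectors)
F = rng.uniform(size=(3, 5))     # fractional coordinates of N = 5 atoms

X = L @ F                        # Cartesian coordinates: x_i = sum_j F[j, i] * l_j

def w(F):
    """Wrap fractional coordinates back into [0, 1)^{3 x N} (Definition 3)."""
    return F - np.floor(F)

t = rng.normal(size=(3, 1))      # arbitrary translation
F_shift = w(F + t)               # periodic translation of the whole cell

# Relative fractional differences agree modulo 1; checking through a periodic
# function makes the comparison robust to wrap-around at cell boundaries.
d, d_shift = F[:, :1] - F, F_shift[:, :1] - F_shift
assert np.allclose(np.exp(2j * np.pi * d), np.exp(2j * np.pi * d_shift))
```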
We mainly focus on the other two kinds of invariance (see Figure 1), since GNNs are our default choice. For simplicity, we compactly term the \(O(3)\) invariance and periodic translation invariance as _periodic E(3) invariance_ henceforth. Previous approaches (_e.g._[9; 36]) adopt Cartesian coordinates \(\mathbf{X}\) rather than fractional coordinates \(\mathbf{F}\), hence their derived forms of the symmetry are different. Particularly, in Definition 2, the orthogonal transformation additionally acts on \(\mathbf{X}\); in Definition 3, the periodic translation \(w(\mathbf{F}+\mathbf{t}\mathbf{1}^{\top})\) becomes the translation along the lattice bases \(\mathbf{X}+\mathbf{L}\mathbf{t}\mathbf{1}^{\top}\); besides, \(\mathbf{X}\) should also maintain E(3) translation invariance, that is, \(p(\mathbf{L},\mathbf{X}+\mathbf{t}\mathbf{1}^{\top}|\mathbf{A})=p(\mathbf{L},\mathbf{X}|\mathbf{A})\). With the help of the fractional system, the periodic E(3) invariance is made tractable by fulfilling O(3) invariance _w.r.t._ the orthogonal transformations on \(\mathbf{L}\) and periodic translation invariance _w.r.t._ the periodic translations on \(\mathbf{F}\), respectively. In this way, this approach, as detailed in the next section, facilitates the application of diffusion methods to the challenging task of CSP. ### Joint Equivariant Diffusion Our method DiffCSP addresses CSP by simultaneously diffusing the lattice \(\mathbf{L}\) and the fractional coordinate matrix \(\mathbf{F}\). Given the atom composition \(\mathbf{A}\), \(\mathcal{M}_{t}\) denotes the intermediate state of \(\mathbf{L}\) and \(\mathbf{F}\) at time step \(t\)\((0\leq t\leq T)\). DiffCSP defines two Markov processes: the forward diffusion process gradually adds noise to \(\mathcal{M}_{0}\), and the backward generation process iteratively samples from the prior distribution \(\mathcal{M}_{T}\) to recover the original data \(\mathcal{M}_{0}\). Combining the statements in § 4.1, the recovered distribution from \(\mathcal{M}_{T}\) should meet periodic E(3) invariance. This requirement is satisfied if the prior distribution \(p(\mathcal{M}_{T})\) is invariant and the Markov transition \(p(\mathcal{M}_{t-1}\mid\mathcal{M}_{t})\) is equivariant, according to the diffusion-based generation literature [10]. Here, an equivariant transition is specified as \(p(g\cdot\mathcal{M}_{t-1}\mid g\cdot\mathcal{M}_{t})=p(\mathcal{M}_{t-1}\mid\mathcal{M}_{t})\), where \(g\cdot\mathcal{M}\) refers to any orthogonal/translational transformation \(g\) acting on \(\mathcal{M}\) in the way presented in Definitions 2-3. We separately explain the derivation details for \(\mathbf{L}\) and \(\mathbf{F}\) below. The detailed flowcharts are summarized in Algorithms 1 and 2 in Appendix B.3.

Figure 1: (a)\(\to\)(b): The orthogonal transformation of the lattice vectors. (c)\(\to\)(d): The periodic translation of the fractional coordinates. Neither case changes the structure.

Figure 2: Overview of DiffCSP. Given the composition \(\mathbf{A}\), we denote the crystal, its lattice, and its fractional coordinate matrix at time \(t\) as \(\mathcal{M}_{t}\), \(\mathbf{L}_{t}\), and \(\mathbf{F}_{t}\), respectively. The terms \(\mathbf{\epsilon_{L}}\) and \(\mathbf{\epsilon_{F}}\) are Gaussian noises; \(\hat{\mathbf{\epsilon}_{L}}\) and \(\hat{\mathbf{\epsilon}_{F}}\) are predicted by the denoising model \(\phi\).
**Diffusion on \(\mathbf{L}\)** Given that \(\mathbf{L}\) is a continuous variable, we exploit the Denoising Diffusion Probabilistic Model (DDPM) [38] to accomplish the generation. We define the forward process that progressively diffuses \(\mathbf{L}_{0}\) towards the Normal prior \(p(\mathbf{L}_{T})=\mathcal{N}(0,\mathbf{I})\) by \(q(\mathbf{L}_{t}|\mathbf{L}_{t-1})\), which can be devised as the probability conditional on the initial distribution: \[q(\mathbf{L}_{t}|\mathbf{L}_{0})=\mathcal{N}\Big{(}\mathbf{L}_{t}|\sqrt{\bar{\alpha}_{t}} \mathbf{L}_{0},(1-\bar{\alpha}_{t})\mathbf{I}\Big{)}, \tag{2}\] where \(\beta_{t}\in(0,1)\) controls the variance, and \(\bar{\alpha}_{t}=\prod_{s=1}^{t}\alpha_{s}=\prod_{s=1}^{t}(1-\beta_{s})\) is valued in accordance with the cosine scheduler [48]. The backward generation process is given by: \[p(\mathbf{L}_{t-1}|\mathcal{M}_{t})=\mathcal{N}(\mathbf{L}_{t-1}|\mu(\mathcal{M}_{t}), \sigma^{2}(\mathcal{M}_{t})\mathbf{I}), \tag{3}\] where \(\mu(\mathcal{M}_{t})=\frac{1}{\sqrt{\alpha_{t}}}\Big{(}\mathbf{L}_{t}-\frac{\beta_ {t}}{\sqrt{1-\bar{\alpha}_{t}}}\hat{\mathbf{\epsilon}}_{\mathbf{L}}(\mathcal{M}_{t},t) \Big{)}\) and \(\sigma^{2}(\mathcal{M}_{t})=\beta_{t}\frac{1-\bar{\alpha}_{t-1}}{1-\bar{\alpha}_{t}}\). The denoising term \(\hat{\mathbf{\epsilon}}_{\mathbf{L}}(\mathcal{M}_{t},t)\in\mathbb{R}^{3\times 3}\) is predicted by the model \(\phi(\mathbf{L}_{t},\mathbf{F}_{t},\mathbf{A},t)\). As the prior distribution \(p(\mathbf{L}_{T})=\mathcal{N}(0,\mathbf{I})\) is already O(3)-invariant, we require the generation process in Eq. (3) to be O(3)-equivariant, which is formally stated below. **Proposition 1**.: _The marginal distribution \(p(\mathbf{L}_{0})\) by Eq. (3) is O(3)-invariant if \(\hat{\mathbf{\epsilon}}_{\mathbf{L}}(\mathcal{M}_{t},t)\) is O(3)-equivariant, namely \(\hat{\mathbf{\epsilon}}_{\mathbf{L}}(\mathbf{Q}\mathbf{L}_{t},\mathbf{F}_{t},\mathbf{A},t)=\mathbf{Q}\hat {\mathbf{\epsilon}}_{\mathbf{L}}(\mathbf{L}_{t},\mathbf{F}_{t},\mathbf{A},t),\forall\mathbf{Q}^{ \top}\mathbf{Q}=\mathbf{I}\)._ To train the denoising model \(\phi\), we first sample \(\mathbf{\epsilon}_{\mathbf{L}}\sim\mathcal{N}(0,\mathbf{I})\) and reparameterize \(\mathbf{L}_{t}=\sqrt{\bar{\alpha}_{t}}\mathbf{L}_{0}+\sqrt{1-\bar{\alpha}_{t}}\mathbf{ \epsilon}_{\mathbf{L}}\) based on Eq. (2). The training objective is defined as the \(\ell_{2}\) loss between \(\mathbf{\epsilon}_{\mathbf{L}}\) and \(\hat{\mathbf{\epsilon}}_{\mathbf{L}}\): \[\mathcal{L}_{\mathbf{L}}=\mathbb{E}_{\mathbf{\epsilon}_{\mathbf{L}}\sim\mathcal{N}(0,\bm {I}),t\sim\mathcal{U}(1,T)}[\|\mathbf{\epsilon}_{\mathbf{L}}-\hat{\mathbf{\epsilon}}_{\bm {L}}(\mathcal{M}_{t},t)\|_{2}^{2}]. \tag{4}\]
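A minimal PyTorch sketch of the forward noising of Eq. (2) with a cosine \(\bar{\alpha}_t\) schedule and the \(\ell_2\) target of Eq. (4); here `eps_hat` is just a placeholder for the output of the denoising model \(\phi\), which the sketch does not implement:

```python
import torch

T = 1000
s = 0.008                                    # cosine-schedule offset [48]
steps = torch.arange(T + 1, dtype=torch.float64)
f = torch.cos((steps / T + s) / (1 + s) * torch.pi / 2) ** 2
alpha_bar = (f / f[0])[1:]                   # \bar{alpha}_t for t = 1..T

def forward_L(L0, t):
    """Sample L_t ~ q(L_t | L_0) from Eq. (2) and return the target noise."""
    eps = torch.randn_like(L0)
    Lt = alpha_bar[t].sqrt() * L0 + (1 - alpha_bar[t]).sqrt() * eps
    return Lt, eps

L0 = torch.randn(3, 3, dtype=torch.float64)  # a (Niggli-reduced) lattice matrix
t = torch.randint(0, T, ())
Lt, eps = forward_L(L0, t)

eps_hat = torch.zeros_like(Lt)               # placeholder for phi(L_t, F_t, A, t)
loss_L = ((eps - eps_hat) ** 2).mean()       # the l2 objective of Eq. (4)
```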
**Diffusion on \(\mathbf{F}\)** The domain of fractional coordinates \([0,1)^{3\times N}\) forms a quotient space \(\mathbb{R}^{3\times N}/\mathbb{Z}^{3\times N}\) induced by the crystal periodicity. It is not suitable to apply the above DDPM machinery to generate \(\mathbf{F}\), as the normal distribution used in DDPM is unable to model the cyclical and bounded domain of \(\mathbf{F}\). Instead, we leverage a Score-Matching (SM) based framework [49, 50] along with the Wrapped Normal (WN) distribution [43] to fit this specificity. Note that the WN distribution has been explored in generative models, for instance in molecular conformation generation [16].

During the forward process, we first sample each column of \(\mathbf{\epsilon}_{\mathbf{F}}\in\mathbb{R}^{3\times N}\) from \(\mathcal{N}(0,\mathbf{I})\), and then acquire \(\mathbf{F}_{t}=w(\mathbf{F}_{0}+\sigma_{t}\mathbf{\epsilon}_{\mathbf{F}})\), where the truncation function \(w(\cdot)\) is already defined in Definition 3. This truncated sampling implies the WN transition:
\[q(\mathbf{F}_{t}|\mathbf{F}_{0})\propto\sum_{\mathbf{Z}\in\mathbb{Z}^{3\times N}}\exp\Big{(}-\frac{\|\mathbf{F}_{t}-\mathbf{F}_{0}+\mathbf{Z}\|_{F}^{2}}{2\sigma_{t}^{2}}\Big{)}. \tag{5}\]
Basically, this process ensures that the probability distribution over \([z,z+1)^{3\times N}\) is the same for any integer \(z\), which keeps the crystal periodicity. Here, the noise scale \(\sigma_{t}\) obeys the exponential scheduler: \(\sigma_{0}=0\) and \(\sigma_{t}=\sigma_{1}\big{(}\frac{\sigma_{T}}{\sigma_{1}}\big{)}^{\frac{t-1}{T-1}}\) if \(t>0\). Desirably, \(q(\mathbf{F}_{t}|\mathbf{F}_{0})\) is periodic translation equivariant, and approaches a uniform distribution \(\mathcal{U}(0,1)\) if \(\sigma_{T}\) is sufficiently large. For the backward process, we first initialize \(\mathbf{F}_{T}\) from the uniform distribution \(\mathcal{U}(0,1)\), which is periodic translation invariant. With the denoising term \(\hat{\mathbf{\epsilon}}_{\mathbf{F}}\) predicted by \(\phi(\mathbf{L}_{t},\mathbf{F}_{t},\mathbf{A},t)\), we combine the ancestral predictor [38, 50] with the Langevin corrector [49] to sample \(\mathbf{F}_{0}\). We immediately have:

**Proposition 2**.: _The marginal distribution \(p(\mathbf{F}_{0})\) is periodic translation invariant if \(\hat{\mathbf{\epsilon}}_{\mathbf{F}}(\mathcal{M}_{t},t)\) is periodic translation invariant, namely \(\hat{\mathbf{\epsilon}}_{\mathbf{F}}(\mathbf{L}_{t},\mathbf{F}_{t},\mathbf{A},t)=\hat{\mathbf{\epsilon}}_{\mathbf{F}}(\mathbf{L}_{t},w(\mathbf{F}_{t}+\mathbf{t}\mathbf{1}^{\top}),\mathbf{A},t),\forall\mathbf{t}\in\mathbb{R}^{3}\)._

The training objective for score matching is:
\[\mathcal{L}_{\mathbf{F}}=\mathbb{E}_{\mathbf{F}_{t}\sim q(\mathbf{F}_{t}|\mathbf{F}_{0}),t\sim\mathcal{U}(1,T)}\big{[}\lambda_{t}\|\nabla_{\mathbf{F}_{t}}\log q(\mathbf{F}_{t}|\mathbf{F}_{0})-\hat{\mathbf{\epsilon}}_{\mathbf{F}}(\mathcal{M}_{t},t)\|_{2}^{2}\big{]},\]
where \(\lambda_{t}=\mathbb{E}_{\mathbf{F}_{t}}^{-1}\big{[}\|\nabla_{\mathbf{F}_{t}}\log q(\mathbf{F}_{t}|\mathbf{F}_{0})\|_{2}^{2}\big{]}\) is approximated via Monte-Carlo sampling. More details are deferred to Appendix B.1.
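A toy sketch (ours) of the forward WN sampling and its score target, the regression target inside \(\mathcal{L}_{\mathbf{F}}\), is given below; truncating the lattice sum in Eq. (5) to a small window of integer offsets is our simplification, which suffices numerically for moderate \(\sigma_{t}\):

```python
import numpy as np

def wrap(F):
    return F - np.floor(F)  # w(.): back into [0, 1)

def wn_score(Ft, F0, sigma, K=5):
    """Score grad_{F_t} log q(F_t | F_0) of the wrapped normal in Eq. (5),
    with the sum over Z truncated to offsets in {-K, ..., K} per coordinate."""
    d = Ft - F0
    ks = np.arange(-K, K + 1).reshape(-1, 1, 1)        # candidate integer offsets
    w = np.exp(-((d + ks) ** 2) / (2 * sigma ** 2))    # unnormalized Gaussian weights
    return -(w * (d + ks)).sum(0) / (w.sum(0) * sigma ** 2)

rng = np.random.default_rng(0)
F0 = rng.uniform(size=(3, 5))
sigma = 0.3
Ft = wrap(F0 + sigma * rng.normal(size=F0.shape))      # forward sampling
score = wn_score(Ft, F0, sigma)                        # target for eps_hat_F
```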
**Extension to ab initio crystal generation** Although our method is proposed to address CSP, where the composition \(\mathbf{A}\) is fixed, it can be extended to the ab initio generation task by further generating \(\mathbf{A}\). We achieve this by additionally optimizing the one-hot representation \(\mathbf{A}\) with a DDPM-based approach. We provide more details in Appendix E.

### The Architecture of the Denoising Model

This subsection designs the denoising model \(\phi(\mathbf{L},\mathbf{F},\mathbf{A},t)\) that outputs \(\hat{\mathbf{\epsilon}}_{\mathbf{L}}\) and \(\hat{\mathbf{\epsilon}}_{\mathbf{F}}\) satisfying the properties stated in Propositions 1 and 2. Let \(\mathbf{H}^{(s)}=[\mathbf{h}_{1}^{(s)},\cdots,\mathbf{h}_{N}^{(s)}]\) denote the node representations of the \(s\)-th layer. The input feature is given by \(\mathbf{h}_{i}^{(0)}=\rho(f_{\text{atom}}(\mathbf{a}_{i}),\,f_{\text{pos}}(t))\), where \(f_{\text{atom}}\) and \(f_{\text{pos}}\) are the atomic embedding and sinusoidal positional encoding [38; 51], respectively; \(\rho\) is a multi-layer perceptron (MLP). Built upon EGNN [32], the \(s\)-th layer message-passing is unfolded as follows:
\[\mathbf{m}_{ij}^{(s)} =\varphi_{m}(\mathbf{h}_{i}^{(s-1)},\mathbf{h}_{j}^{(s-1)},\mathbf{L}^{\top}\mathbf{L},\psi_{\text{FT}}(\mathbf{f}_{j}-\mathbf{f}_{i})), \tag{6}\]
\[\mathbf{m}_{i}^{(s)} =\sum_{j=1}^{N}\mathbf{m}_{ij}^{(s)}, \tag{7}\]
\[\mathbf{h}_{i}^{(s)} =\mathbf{h}_{i}^{(s-1)}+\varphi_{h}(\mathbf{h}_{i}^{(s-1)},\mathbf{m}_{i}^{(s)}). \tag{8}\]
Here \(\psi_{\text{FT}}(\mathbf{f})[c,k]=\sin(2\pi mf_{c})\) if \(k=2m\) (even), and \(\psi_{\text{FT}}(\mathbf{f})[c,k]=\cos(2\pi mf_{c})\) if \(k=2m+1\) (odd). \(\psi_{\text{FT}}\) extracts various frequencies of all relative fractional distances that are helpful for crystal structure modeling; more importantly, \(\psi_{\text{FT}}\) is periodic translation invariant, namely \(\psi_{\text{FT}}(w(\mathbf{f}_{j}+\mathbf{t})-w(\mathbf{f}_{i}+\mathbf{t}))=\psi_{\text{FT}}(\mathbf{f}_{j}-\mathbf{f}_{i})\) for any translation \(\mathbf{t}\), which is proved in Appendix A.3. After \(S\) layers of message passing conducted on the fully connected graph, the lattice noise \(\hat{\mathbf{\epsilon}}_{\mathbf{L}}\) is acquired by a linear combination of \(\mathbf{L}\), with the weights given by the final layer:
\[\hat{\mathbf{\epsilon}}_{\mathbf{L}}=\mathbf{L}\varphi_{L}\Big{(}\frac{1}{N}\sum_{i=1}^{N}\mathbf{h}_{i}^{(S)}\Big{)}, \tag{9}\]
where \(\varphi_{L}\) is an MLP with output shape \(3\times 3\). The fractional coordinate score \(\hat{\mathbf{\epsilon}}_{\mathbf{F}}\) is output by:
\[\hat{\mathbf{\epsilon}}_{\mathbf{F}}[:,i]=\varphi_{F}(\mathbf{h}_{i}^{(S)}), \tag{10}\]
where \(\hat{\mathbf{\epsilon}}_{\mathbf{F}}[:,i]\) denotes the \(i\)-th column of \(\hat{\mathbf{\epsilon}}_{\mathbf{F}}\), and \(\varphi_{F}\) is an MLP on the final representation. We apply the inner product term \(\mathbf{L}^{\top}\mathbf{L}\) in Eq. (6) to achieve O(3)-invariance, as \((\mathbf{QL})^{\top}(\mathbf{QL})=\mathbf{L}^{\top}\mathbf{L}\) for any orthogonal matrix \(\mathbf{Q}\in\mathbb{R}^{3\times 3}\). This leads to the O(3)-invariance of \(\varphi_{L}\) in Eq. (9), and we further left-multiply \(\varphi_{L}\) by \(\mathbf{L}\) to ensure the O(3)-equivariance of \(\hat{\mathbf{\epsilon}}_{\mathbf{L}}\). Therefore, the above formulation of the denoising model \(\phi(\mathbf{L},\mathbf{F},\mathbf{A},t)\) ensures the following property.

**Proposition 3**.: _The score \(\hat{\mathbf{\epsilon}}_{\mathbf{L}}\) by Eq. (9) is O(3)-equivariant, and the score \(\hat{\mathbf{\epsilon}}_{\mathbf{F}}\) from Eq. (10) is periodic translation invariant. Hence, the generated distribution by DiffCSP is periodic E(3) invariant._

**Comparison with multi-graph representation** Previous methods [9; 15; 29; 52] utilize Cartesian coordinates, and usually describe crystals with a multi-graph representation to encode the periodic structures. They create multiple edges to connect each pair of nodes, where different edges refer to different integral cell translations. Here, we no longer require the multi-graph representation, since we employ fractional coordinates that naturally encode periodicity, and the Fourier transform \(\psi_{\text{FT}}\) in our message passing is already periodic translation invariant; a numerical check of this invariance is sketched below. We will ablate the benefit in Table 3.
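As referenced above, the following sketch (ours) spells out one plausible layout of \(\psi_{\text{FT}}\) with \(K\) frequencies per coordinate and checks its periodic translation invariance numerically; the exact feature ordering in the paper may differ:

```python
import numpy as np

def wrap(f):
    return f - np.floor(f)

def psi_ft(f, K=8):
    """Fourier features of a relative fractional coordinate, as in Eq. (6):
    for each coordinate f_c, emit sin(2*pi*m*f_c) and cos(2*pi*m*f_c), m = 1..K."""
    m = np.arange(1, K + 1)
    ang = 2 * np.pi * m[None, :] * f[:, None]     # shape (3, K)
    return np.stack([np.sin(ang), np.cos(ang)], -1).reshape(-1)

rng = np.random.default_rng(0)
fi, fj, t = rng.uniform(size=3), rng.uniform(size=3), rng.uniform(size=3)

# Periodic translation invariance: shifting both atoms by t (then wrapping)
# leaves the features of their relative position unchanged, since the shift
# only changes the relative coordinate by an integer vector.
a = psi_ft(wrap(fj + t) - wrap(fi + t))
b = psi_ft(fj - fi)
assert np.allclose(a, b)
```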
## 5 Experiments

In this section, we evaluate the efficacy of DiffCSP on a diverse range of tasks, showing that it generates high-quality structures of different crystals in § 5.1, with a lower time cost than a DFT-based optimization method in § 5.2. Ablations in § 5.3 exhibit the necessity of each designed component. We further showcase the capability of DiffCSP on the ab initio generation task in § 5.4.

### Stable Structure Prediction

**Dataset** We conduct experiments on four datasets with distinct levels of difficulty. **Perov-5** [54; 55] contains 18,928 perovskite materials with similar structures. Each structure has 5 atoms in a unit cell. **Carbon-24** [56] includes 10,153 carbon materials with 6\(\sim\)24 atoms in a cell. **MP-20** [57] selects 45,231 stable inorganic materials from the Materials Project [57], which includes the majority of experimentally-generated materials with at most 20 atoms in a unit cell. **MPTS-52** is a more challenging extension of MP-20, consisting of 40,476 structures with up to 52 atoms per cell, sorted according to the earliest published year in the literature. For Perov-5, Carbon-24 and MP-20, we apply the 60-20-20 split in line with Xie et al. [9]. For MPTS-52, we split 27,380/5,000/8,096 for training/validation/testing in chronological order.

**Baselines** We compare against two types of previous works. The first type follows the predict-optimize paradigm, which first trains a predictor of the target property and then utilizes certain optimization algorithms to search for optimal structures. Following Cheng et al. [21], we apply MEGNet [52] as the predictor of the formation energy. For the optimization algorithms, we choose Random Search (**RS**), Bayesian Optimization (**BO**), and Particle Swarm Optimization (**PSO**), all iterated over 5,000 steps. The second type is based on deep generative models. We follow the modification in Xie et al. [9], and leverage cG-SchNet [53], which utilizes SchNet [29] as the backbone and additionally considers the ground-truth lattice initialization for encoding periodicity, yielding a final model named **P-cG-SchNet**. Another baseline, **CDVAE** [9], is a VAE-based framework for pure crystal generation, which first predicts the lattice and the initial composition and then optimizes the atom types and coordinates via annealed Langevin dynamics [49]. To adapt CDVAE to the CSP task, we replace the original normal prior for generation with a parametric prior conditional on the encoding of the given composition. More details are provided in Appendix B.2.
\begin{table}
\begin{tabular}{l c c c c c c c c c}
\hline\hline
 & \# of & \multicolumn{2}{c}{Perov-5} & \multicolumn{2}{c}{Carbon-24} & \multicolumn{2}{c}{MP-20} & \multicolumn{2}{c}{MPTS-52} \\
 & samples & Match rate\(\uparrow\) & RMSE\(\downarrow\) & Match rate\(\uparrow\) & RMSE\(\downarrow\) & Match rate\(\uparrow\) & RMSE\(\downarrow\) & Match rate\(\uparrow\) & RMSE\(\downarrow\) \\
\hline
\multirow{2}{*}{RS [21]} & 20 & 29.22 & 0.2924 & 14.63 & 0.4041 & 8.73 & 0.2501 & 2.05 & 0.3329 \\
 & 5,000 & 36.56 & 0.0886 & 14.63 & 0.4041 & 11.49 & 0.2822 & 2.68 & 0.3444 \\
\hline
\multirow{2}{*}{BO [21]} & 20 & 21.03 & 0.2830 & 0.44 & 0.3653 & 8.11 & 0.2402 & 2.05 & 0.3024 \\
 & 5,000 & 55.09 & 0.2037 & 12.17 & 0.4089 & 12.68 & 0.2816 & 6.69 & 0.3444 \\
\hline
\multirow{2}{*}{PSO [21]} & 20 & 20.90 & 0.0836 & 6.40 & 0.4204 & 4.05 & 0.1567 & 1.06 & 0.2339 \\
 & 5,000 & 21.88 & 0.0844 & 6.50 & 0.4211 & 4.35 & 0.1670 & 1.09 & 0.2390 \\
\hline
\multirow{2}{*}{P-cG-SchNet [53]} & 1 & 48.22 & 0.4179 & 17.29 & 0.3846 & 15.39 & 0.3762 & 3.67 & 0.4115 \\
 & 20 & 97.94 & 0.3463 & 55.91 & 0.3551 & 32.64 & 0.3018 & 12.96 & 0.3942 \\
\hline
\multirow{2}{*}{CDVAE [9]} & 1 & 45.31 & 0.1138 & 17.09 & 0.2969 & 33.90 & 0.1045 & 5.34 & 0.2106 \\
 & 20 & 88.51 & 0.0464 & 88.37 & 0.2286 & 66.95 & 0.1026 & 20.79 & 0.2085 \\
\hline
\multirow{2}{*}{DiffCSP} & 1 & 52.02 & 0.0760 & 17.54 & 0.2759 & 51.49 & 0.0631 & 12.19 & 0.1786 \\
 & 20 & **98.60** & **0.0128** & **88.47** & **0.2192** & **77.93** & **0.0492** & **34.02** & **0.1749** \\
\hline\hline
\end{tabular}
\end{table} Table 1: Results on the stable structure prediction task.

Figure 3: Visualization of the predicted structures from different methods (rows: Ground Truth, DiffCSP, P-cG-SchNet; columns: Perov-5, Carbon-24, MP-20, MPTS-52). We select the structure of the lowest RMSE over 20 candidates. We translate the same predicted atom by all methods to the origin for better comparison. Our DiffCSP accurately delivers high quality structure predictions.

**Evaluation metrics** Following the common practice [9], we evaluate by matching the predicted candidates with the ground-truth structure. Specifically, for each structure in the test set, we first generate \(k\) samples of the same composition and then identify a match if at least one of the samples matches the ground truth structure, under the metric given by the StructureMatcher class in pymatgen [58] with thresholds stol=0.5, angle_tol=10, ltol=0.3. The **Match rate** is the proportion of the matched structures over the test set. **RMSE** is calculated between the ground truth and the best matching candidate, normalized by \(\sqrt[3]{V/N}\) where \(V\) is the volume of the lattice, and averaged over the matched structures. For the optimization methods, we select the 20 structures of the lowest energy or all 5,000 structures from all iterations during testing as candidates. For the generative baselines and our DiffCSP, we let \(k=1\) and \(k=20\) for evaluation.
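For reference, the matching and RMSE computation can be reproduced with pymatgen roughly as follows; the two toy `Structure` objects are placeholders, standing in for a test-set structure and a generated candidate:

```python
from pymatgen.core import Structure, Lattice
from pymatgen.analysis.structure_matcher import StructureMatcher

# Toy placeholder structures with the same composition.
gt = Structure(Lattice.cubic(4.2), ["Na", "Cl"], [[0, 0, 0], [0.5, 0.5, 0.5]])
cand = Structure(Lattice.cubic(4.25), ["Na", "Cl"], [[0, 0, 0], [0.49, 0.5, 0.5]])

matcher = StructureMatcher(stol=0.5, angle_tol=10, ltol=0.3)
if matcher.fit(gt, cand):                    # does the candidate match?
    rms, _ = matcher.get_rms_dist(gt, cand)  # RMS displacement, normalized by (V/N)^(1/3)
    print(f"matched, normalized RMSE = {rms:.4f}")
```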
**Results** Table 1 conveys the following observations. **1.** The optimization methods encounter low Match rates, signifying the difficulty of locating the optimal structures within the vast search space. **2.** In comparison to other generative methods that construct structures atom by atom or predict the lattice and atom coordinates in two stages, our method demonstrates superior performance, highlighting the effectiveness of jointly refining the lattice and coordinates during generation. **3.** All methods suffer performance degradation as the number of atoms per cell increases, on the datasets from Perov-5 to MPTS-52. For example, the Match rates of the optimization methods are less than 10% on MPTS-52. Even so, our method consistently outperforms all other methods.

**Visualization** Figure 3 provides qualitative comparisons. DiffCSP clearly makes the best predictions.

### Comparison with DFT-based Methods

We further select 10 binary and 5 ternary compounds from the MP-20 testing set and compare our model with USPEX [59], a DFT-based software package equipped with an evolutionary algorithm to search for stable structures. For our method, we sample 20 candidates for each compound following the setting in Table 1. For USPEX, we apply 20 generations and 20 populations for each compound, and select the best sample in each generation, leading to 20 candidates as well. We summarize the **Match rate** over the 15 compounds, the **Averaged RMSD** over the matched structures, and the **Averaged Inference Time** to generate 20 candidates for each compound in Table 2. The detailed results for each compound are listed in Appendix D. DiffCSP correctly predicts more structures, achieving a higher match rate, and, more importantly, its time cost is far lower than that of USPEX, allowing more potential for real applications.

\begin{table}
\begin{tabular}{l c c c}
\hline\hline
 & Match rate (\%)\(\uparrow\) & Avg. RMSD\(\downarrow\) & Avg. Time\(\downarrow\) \\
\hline
USPEX [59] & 53.33 & **0.0159** & 12.5h \\
DiffCSP & **73.33** & 0.0172 & **10s** \\
\hline\hline
\end{tabular}
\end{table} Table 2: Overall results over the 15 selected compounds.

### Ablation Studies

We ablate each component of DiffCSP in Table 3, and probe the following aspects. **1.** To verify the necessity of jointly updating the lattice \(\mathbf{L}\) and fractional coordinates \(\mathbf{F}\), we construct two variants that separate the joint optimization into two stages, denoted as \(\mathbf{L}\rightarrow\mathbf{F}\) and \(\mathbf{F}\rightarrow\mathbf{L}\). Particularly, \(\mathbf{L}\rightarrow\mathbf{F}\) applies two networks to learn the reverse processes \(p_{\theta_{1}}(\mathbf{L}_{0:T-1}|\mathbf{A},\mathbf{F}_{T},\mathbf{L}_{T})\) and \(p_{\theta_{2}}(\mathbf{F}_{0:T-1}|\mathbf{A},\mathbf{F}_{T},\mathbf{L}_{0})\). During inference, we first sample \(\mathbf{L}_{T},\mathbf{F}_{T}\) from their prior distributions, acquire \(\mathbf{L}_{0}\) via \(p_{\theta_{1}}\), and then \(\mathbf{F}_{0}\) by \(p_{\theta_{2}}\) based on \(\mathbf{L}_{0}\). \(\mathbf{F}\rightarrow\mathbf{L}\) is similarly executed but with the generation order of \(\mathbf{L}_{0}\) and \(\mathbf{F}_{0}\) exchanged. Results indicate that \(\mathbf{L}\rightarrow\mathbf{F}\) performs better than \(\mathbf{F}\rightarrow\mathbf{L}\), but both are inferior to the joint update in DiffCSP, which endorses our design. We conjecture that the joint diffusion enables \(\mathbf{L}\) and \(\mathbf{F}\) to update synergistically, which makes the generation process more tractable to learn and thus leads to better performance. **2.** We explore the necessity of preserving the O(3) invariance when generating \(\mathbf{L}\), which is ensured
by the inner product \(\mathbf{L}^{\top}\mathbf{L}\) in Eq. (6). When we replace it with \(\mathbf{L}\) and change the final output to \(\hat{\mathbf{\epsilon}}_{\mathbf{L}}=\varphi_{L}\big{(}\frac{1}{N}\sum_{i=1}^{N}\mathbf{h}_{i}^{(S)}\big{)}\) in Eq. (9) to break the equivariance, the model suffers severe performance degradation. Only 1.66% of the structures are successfully matched, which clearly implies the importance of incorporating O(3) equivariance. **3.** We further assess the importance of periodic translation invariance from two perspectives. For the generation process, we generate \(\mathbf{F}\) via the score-based model with the Wrapped Normal (WN) distribution. We replace this module with DDPM under a standard Gaussian, i.e. \(q(\mathbf{F}_{t}|\mathbf{F}_{0})=\mathcal{N}\big{(}\mathbf{F}_{t}|\sqrt{\bar{\alpha}_{t}}\mathbf{F}_{0},(1-\bar{\alpha}_{t})\mathbf{I}\big{)}\) defined similarly to Eq. (2). A lower match rate and higher RMSE are observed for this variant. For the model architecture, we adopt the Fourier Transformation (FT) in Eq. (6) to capture periodicity. To investigate its effect, we replace \(\psi_{\text{FT}}(\mathbf{f}_{j}-\mathbf{f}_{i})\) with \(\mathbf{f}_{j}-\mathbf{f}_{i}\), and the match rate drops from 51.49% to 29.15%. Both observations verify the importance of retaining the periodic translation invariance. **4.** We further change the fully-connected graph into the multi-graph approach adopted in Xie and Grossman [15]. The multi-graph approach decreases the match rate, since the multi-graphs constructed under different intermediate structures may vary drastically during generation, leading to substantially higher training difficulty and lower sampling stability. We provide more discussions in Appendix C.

\begin{table}
\begin{tabular}{l c c}
\hline\hline
 & Match rate (\%)\(\uparrow\) & RMSE\(\downarrow\) \\
\hline
DiffCSP & **51.49** & **0.0631** \\
\hline
\multicolumn{3}{c}{_w/o Joint Diffusion_} \\
\hline
\(\mathbf{L}\rightarrow\mathbf{F}\) & 50.03 & 0.0921 \\
\(\mathbf{F}\rightarrow\mathbf{L}\) & 36.73 & 0.0838 \\
\hline
\multicolumn{3}{c}{_w/o \(O(3)\) Equivariance_} \\
\hline
_w/o_ inner product & 1.66 & 0.4002 \\
\hline
\multicolumn{3}{c}{_w/o Periodic Translation Invariance_} \\
\hline
_w/o_ WN & 34.09 & 0.2350 \\
_w/o_ FT & 29.15 & 0.0926 \\
\hline
\multicolumn{3}{c}{_MG Edge Construction_} \\
\hline
MG _w/_ FT & 25.85 & 0.1079 \\
MG _w/o_ FT & 28.05 & 0.1314 \\
\hline\hline
\end{tabular}
\end{table} Table 3: Ablation studies on MP-20. MG: **M**ulti-**G**raph edge construction [15], FT: **F**ourier **T**ransformation.

### Ab Initio Crystal Generation

DiffCSP is extendable to ab initio crystal generation by further conducting discrete diffusion on the atom types \(\mathbf{A}\). We contrast DiffCSP against five generative methods following [9]: **FTCP** [28], **Cond-DFC-VAE** [7], **G-SchNet** [60] and its periodic variant **P-G-SchNet**, and the original version of **CDVAE** [9]. Specifically for our DiffCSP, we gather the statistics of the atom numbers from the training set, then sample the number based on the pre-computed distribution, similar to Hoogeboom et al. [41], which allows DiffCSP to generate structures of variable size.
Following [9], we evaluate the generation performance in terms of three metrics: **Validity**, **Coverage**, and **Property statistics**, which respectively measure the validity of the predicted crystals, the similarity between the test set and the generated samples, and the property statistics regarding density, formation energy, and the number of elements. The detailed definitions of the above metrics are provided in Appendix E.

**Results** Table 4 shows that our method achieves comparable validity and coverage rates with previous methods, and significantly outperforms the baselines on the similarity of property statistics, which indicates the high reliability of the generated samples.

## 6 Discussions

**Limitations** **1.** Composition generation. Our model yields slightly lower compositional validity in Table 4. We provide more discussion in Appendix E, and it is promising to propose more powerful generation methods for atom types. **2.** Experimental evaluation. Further wet-lab experiments could better verify the effectiveness of the model in real applications.

\begin{table}
\begin{tabular}{l l|c c c c c c c}
\hline\hline
\multirow{2}{*}{**Data**} & \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**Validity (\%) \(\uparrow\)**} & \multicolumn{2}{c}{**Coverage (\%) \(\uparrow\)**} & \multicolumn{3}{c}{**Property \(\downarrow\)**} \\
 & & Struc. & Comp. & COV-R & COV-P & \(d_{\rho}\) & \(d_{E}\) & \(d_{\text{chem}}\) \\
\hline
\multirow{6}{*}{Perov-5} & FTCP [28] & 0.24 & 54.24 & 0.00 & 0.00 & 10.27 & 156.0 & 0.6297 \\
 & Cond-DFC-VAE [7] & 73.60 & 82.95 & 73.92 & 10.13 & 2.268 & 4.111 & 0.8373 \\
 & G-SchNet [60] & 99.92 & 98.79 & 0.18 & 0.23 & 1.625 & 4.746 & 0.0368 \\
 & P-G-SchNet [60] & 79.63 & **99.13** & 0.37 & 0.25 & 0.2755 & 1.388 & 0.4552 \\
 & CDVAE [9] & **100.0** & 98.59 & 99.45 & **98.46** & 0.1258 & 0.0264 & 0.0628 \\
 & DiffCSP & **100.0** & 98.85 & **99.74** & 98.27 & **0.1110** & **0.0263** & **0.0128** \\
\hline
\multirow{5}{*}{Carbon-24} & FTCP [28] & 0.08 & – & 0.00 & 0.00 & 5.206 & 19.05 & – \\
 & G-SchNet [60] & 99.94 & – & 0.00 & 0.00 & 0.9427 & 1.320 & – \\
 & P-G-SchNet [60] & 48.39 & – & 0.00 & 0.00 & 1.533 & 134.7 & – \\
 & CDVAE [9] & **100.0** & – & 99.80 & 83.08 & 0.1407 & 0.2850 & – \\
 & DiffCSP & **100.0** & – & **99.90** & **97.27** & **0.0805** & **0.0820** & – \\
\hline
\multirow{5}{*}{MP-20} & FTCP [28] & 1.55 & 48.37 & 4.72 & 0.09 & 23.71 & 160.9 & 0.7363 \\
 & G-SchNet [60] & 99.65 & 75.96 & 38.33 & 99.57 & 3.034 & 42.09 & 0.6411 \\
 & P-G-SchNet [60] & 77.51 & 76.40 & 41.93 & 99.74 & 4.04 & 2.448 & 0.6234 \\
 & CDVAE [9] & **100.0** & **86.70** & 99.15 & 99.49 & 0.6875 & 0.2778 & 1.432 \\
 & DiffCSP & **100.0** & 83.25 & **99.71** & **99.76** & **0.3502** & **0.1247** & **0.3398** \\
\hline\hline
\end{tabular}
\end{table} Table 4: Results on the ab initio generation task. The results of baseline methods are from Xie et al. [9].

**Conclusion** In this work, we present DiffCSP, a diffusion-based learning framework for crystal structure prediction, which is particularly curated to respect the vital symmetries existing in crystals. The diffusion is highly flexible, jointly optimizing the lattice and fractional coordinates, where the intermediate distributions are guaranteed to be invariant under the necessary transformations. We demonstrate the efficacy of our approach on a wide range of crystal datasets, verifying the strong applicability of DiffCSP towards predicting high-quality crystal structures.
2307.12963
On the asymptotic expansions of various quantum invariants I: the colored Jones polynomial of twist knots at the root of unity $e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}}$
This is the first article in a series devoted to the study of the asymptotic expansions of various quantum invariants related to the twist knots. In this paper, by using the saddle point method developed by Ohtsuki, we obtain an asymptotic expansion formula for the colored Jones polynomial of twist knots $\mathcal{K}_p$ with $p\geq 6$ at the root of unity $e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}}$.
Qingtao Chen, Shengmao Zhu
2023-07-24T17:43:00Z
http://arxiv.org/abs/2307.12963v1
On the asymptotic expansions of various quantum invariants I: the colored Jones polynomial of twist knots at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}}\)

###### Abstract.

This is the first article in a series devoted to the study of the asymptotic expansions of various quantum invariants related to the twist knots. In this paper, by using the saddle point method developed by Ohtsuki, we obtain an asymptotic expansion formula for the colored Jones polynomial of twist knots \(\mathcal{K}_{p}\) with \(p\geq 6\) at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}}\).

###### Contents

* 1 Introduction
* 2 Preliminaries
* 2.1 Colored Jones polynomials
* 2.2 Dilogarithm and Lobachevsky functions
* 2.3 Quantum dilogarithm functions
* 2.4 Saddle point method
* 3 Computation of the potential function
* 4 Poisson summation formula
* 5 Asymptotic expansions
* 5.1 Some preparations
* 5.2 Fourier coefficients \(\hat{h}_{N}(m,n)\) that can be neglected
* 5.3 Fourier coefficients \(\hat{h}_{N}(-1,n)\) with \(-p\leq n\leq p-2\)
* 5.4 Fourier coefficients \(\hat{h}_{N}(0,n)\) with \(0\leq n\leq p-2\)
* 5.5 Final proof
* 6 Appendices
* 6.1 Proof of Lemma 4.1
* 6.2 Proof of Lemma 5.10
* 6.3 Proof of Proposition 5.3
* 6.4 Proof of Lemma 5.4
* 6.5 Proof of Lemma 5.5
* 6.6 Proof of Proposition 5.16
* 6.7 Proof of Proposition 5.14

## 1. Introduction

In this series of articles, we study the asymptotic expansions of various quantum invariants at different roots of unity and make a connection between them. This work is motivated by the volume conjectures; let us briefly review the background. In [20], by using the quantum dilogarithm function, R. Kashaev defined a link invariant \(\langle\mathcal{L}\rangle_{N}\) for a link \(\mathcal{L}\), which depends on a positive integer \(N\). Furthermore, in [21], he conjectured that for any hyperbolic link \(\mathcal{L}\), the asymptotics of \(|\langle\mathcal{L}\rangle_{N}|\) as \(N\to\infty\) gives its volume, i.e.
\[2\pi\lim_{N\to\infty}\frac{\log|\langle\mathcal{L}\rangle_{N}|}{N}=vol(S^{3}\setminus\mathcal{L}), \tag{1.1}\]
where \(vol(S^{3}\setminus\mathcal{L})\) denotes the hyperbolic volume of the complement of \(\mathcal{L}\) in \(S^{3}\), and gave evidence for the conjecture. In [27], H. Murakami and J. Murakami proved that for any link \(\mathcal{L}\), Kashaev's invariant \(\langle\mathcal{L}\rangle_{N}\) is equal to the \(N\)-th normalized colored Jones polynomial evaluated at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N}}\), which is written as \(J_{N}(\mathcal{L};e^{\frac{2\pi\sqrt{-1}}{N}})\), and they extended Kashaev's conjecture as follows:
\[2\pi\lim_{N\to\infty}\frac{\log|J_{N}(\mathcal{L};e^{\frac{2\pi\sqrt{-1}}{N}})|}{N}=vol(S^{3}\setminus\mathcal{L}), \tag{1.2}\]
where \(vol(S^{3}\setminus\mathcal{L})\) denotes the simplicial volume of the complement of \(\mathcal{L}\) in \(S^{3}\). This is usually called the (Kashaev-Murakami-Murakami) volume conjecture. Furthermore, as a complexification of the volume conjecture, it is conjectured in [29] that, for a hyperbolic link \(\mathcal{L}\),
\[2\pi\lim_{N\to\infty}\frac{\log J_{N}(\mathcal{L};e^{\frac{2\pi\sqrt{-1}}{N}})}{N}=vol(S^{3}\setminus\mathcal{L})+\sqrt{-1}cs(S^{3}\setminus\mathcal{L}), \tag{1.3}\]
for an appropriate choice of a branch of the logarithm, where \(cs\) denotes the Chern-Simons invariant [26]. From the viewpoint of the \(SL(2,\mathbb{C})\) Chern-Simons theory, S.
Gukov conjectured in [16] that the asymptotic expansion of \(J_{N}(\mathcal{K};e^{\frac{2\pi\sqrt{-1}}{k}})\) of a hyperbolic knot as \(N,k\to\infty\) with \(u=\frac{N}{k}\) fixed is presented by the following form,
\[J_{N}(\mathcal{K};e^{\frac{2\pi\sqrt{-1}}{k}})\sim e^{N\zeta}N^{\frac{3}{2}}\omega\left(1+\sum_{i=1}^{\infty}\kappa_{i}\left(\frac{2\pi\sqrt{-1}}{N}\right)^{i}\right) \tag{1.4}\]
for some scalars \(\zeta,\omega,\kappa_{i}\) depending on \(\mathcal{K}\) and \(u\); see also [10, 17]. Moreover, T. Ohtsuki showed that when \(\mathcal{K}\) is a hyperbolic knot with up to \(7\) crossings [31, 34, 32], the asymptotic expansion of the Kashaev invariant is presented by the following form
\[\langle\mathcal{K}\rangle_{N}=e^{N\zeta}N^{\frac{3}{2}}\omega(\mathcal{K})\left(1+\sum_{i=1}^{d}\kappa_{i}(\mathcal{K})\left(\frac{2\pi\sqrt{-1}}{N}\right)^{i}+O\left(\frac{1}{N^{d+1}}\right)\right), \tag{1.5}\]
for any \(d\), where \(\omega(\mathcal{K})\) and the \(\kappa_{i}(\mathcal{K})\)'s are some scalars. The volume conjecture has been rigorously proved for some particular knots and links such as torus knots [22, 11], the figure-eight knot [1], Whitehead doubles of \((2,p)\)-torus knots [44], positive iterated torus knots [37], the \(5_{2}\) knot [31], the knots with \(6\) crossings [34], the knots with \(7\) crossings [32] and some links [15, 37, 38, 43, 44]; see also [28] for a review. On the other hand, it is known that the quantum invariant of a closed \(3\)-manifold at \(q=e^{\frac{2\pi\sqrt{-1}}{N}}\) is of polynomial order as \(N\to\infty\). However, the first author and T. Yang [6] observed that Reshetikhin-Turaev invariants and Turaev-Viro invariants at \(q=e^{\frac{4\pi\sqrt{-1}}{r}}\), for odd \(r\geq 3\), are of exponential order as \(r\to\infty\). Furthermore, they proposed the volume conjecture for Reshetikhin-Turaev invariants and Turaev-Viro invariants. In [12], Detcherry, Kalfagianni and Yang gave a formula relating the Turaev-Viro invariants of the complement of a link \(\mathcal{L}\) in \(S^{3}\) to the values of the colored Jones polynomials of \(\mathcal{L}\). By using this formula, they proved Chen-Yang's volume conjecture [6] for the figure-eight knot and the Borromean rings. In addition, they proposed the following

**Conjecture 1.1** ([12], Question 1.7).: _For a hyperbolic link \(\mathcal{L}\) in \(S^{3}\), we have_
\[\lim_{N\to\infty}\frac{2\pi}{N}\log|J_{N}(\mathcal{L};e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}})|=vol(S^{3}\setminus\mathcal{L}). \tag{1.6}\]

The asymptotic behavior of \(J_{N}(\mathcal{L};e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}})\) is not predicted either by the original volume conjecture (1.2) or by its generalizations (1.4). Moreover, Conjecture 1.1 seems somewhat surprising, since a result in [14, 4] states that for any positive integer \(k\), \(J_{N}(\mathcal{L};e^{\frac{2\pi\sqrt{-1}}{N+k}})\) grows only polynomially in \(N\). Conjecture 1.1 has been proved for the figure-eight knot and the Borromean rings in [12]; we also refer to [39] for an extended version of Conjecture 1.1. The purpose of this paper is to study Conjecture 1.1 for the twist knot \(\mathcal{K}_{p}\). We investigate the asymptotic expansion for the normalized \(N\)-th colored Jones polynomial \(J_{N}(\mathcal{K}_{p};e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}})\) instead.
Furthermore, in a subsequent paper [7], we present two asymptotic expansions for the normalized \(N\)-th colored Jones polynomials of twist knots, at the more general root of unity \(e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{M}}}\) with \(M\geq 2\), and at the root of unity \(e^{\frac{2\pi\sqrt{-1}}{N}}\), respectively. Moreover, the asymptotic expansion for the Reshetikhin-Turaev invariants of closed hyperbolic \(3\)-manifolds obtained by integral surgery along the twist knot at the root of unity \(e^{\frac{4\pi\sqrt{-1}}{r}}\) will be given in [8]. Finally, the last article [8] in this series is devoted to the asymptotic expansion for the Turaev-Viro invariants of the complements of the twist knots in \(S^{3}\).

Let \(V(p,t,s)\) be the potential function of the colored Jones polynomial for the twist knot \(\mathcal{K}_{p}\) given by formula (3.18). By Proposition 5.3, there exists a unique critical point \((t_{0},s_{0})\) of \(V(p,t,s)\). Let \(x_{0}=e^{2\pi\sqrt{-1}t_{0}}\) and \(y_{0}=e^{2\pi\sqrt{-1}s_{0}}\). We put
\[\zeta(p)=V(p,t_{0},s_{0})=\pi\sqrt{-1}\left((2p+1)s_{0}^{2}-(2p+3)s_{0}-2t_{0}\right)+\frac{1}{2\pi\sqrt{-1}}\left(\text{Li}_{2}(x_{0}y_{0})+\text{Li}_{2}(x_{0}/y_{0})-3\text{Li}_{2}(x_{0})+\frac{\pi^{2}}{6}\right) \tag{1.7}\]
and
\[\omega(p)=\frac{\sin(2\pi s_{0})e^{2\pi\sqrt{-1}t_{0}}}{(1-e^{2\pi\sqrt{-1}t_{0}})^{\frac{3}{2}}\sqrt{\det\operatorname{Hess}(V)(t_{0},s_{0})}}=\frac{(y_{0}-y_{0}^{-1})x_{0}}{-4\pi(1-x_{0})^{\frac{3}{2}}\sqrt{H(p,x_{0},y_{0})}} \tag{1.8}\]
with
\[H(p,x_{0},y_{0})=\frac{-3(2p+1)}{\frac{1}{x_{0}}-1}+\frac{2p+1}{\frac{1}{x_{0}y_{0}}-1}+\frac{2p+1}{\frac{1}{x_{0}/y_{0}}-1}-\frac{3}{(\frac{1}{x_{0}}-1)(\frac{1}{x_{0}y_{0}}-1)}-\frac{3}{(\frac{1}{x_{0}}-1)(\frac{1}{x_{0}/y_{0}}-1)}+\frac{4}{(\frac{1}{x_{0}y_{0}}-1)(\frac{1}{x_{0}/y_{0}}-1)}. \tag{1.9}\]
Then, we have

**Theorem 1.2**.: _For \(p\geq 6\), the asymptotic expansion of the colored Jones polynomial \(J_{N}(\mathcal{K}_{p};e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}})\) is given by the following form_
\[J_{N}(\mathcal{K}_{p};e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}})=(-1)^{p+1}\frac{4\pi e^{\frac{1}{4}\pi\sqrt{-1}}(N+\frac{1}{2})^{\frac{1}{2}}}{\sin\frac{\pi}{2N+1}}\omega(p)e^{(N+\frac{1}{2})\zeta(p)}\cdot\left(1+\sum_{i=1}^{d}\kappa_{i}(p)\left(\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}\right)^{i}+O\left(\frac{1}{(N+\frac{1}{2})^{d+1}}\right)\right), \tag{1.10}\]
_for \(d\geq 1\), where \(\omega(p)\) and \(\kappa_{i}(p)\) are constants determined by \(\mathcal{K}_{p}\)._

By Lemma 5.4, we know that
\[2\pi\zeta(p)=vol(S^{3}\setminus\mathcal{K}_{p})+cs(S^{3}\setminus\mathcal{K}_{p})\mod\pi^{2}\sqrt{-1}\mathbb{Z}. \tag{1.11}\]

**Corollary 1.3**.: _For \(p\geq 6\), we have_
\[\lim_{N\to\infty}\frac{2\pi}{N}\log J_{N}(\mathcal{K}_{p};e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}})=vol(S^{3}\setminus\mathcal{K}_{p})+cs(S^{3}\setminus\mathcal{K}_{p})\mod\pi^{2}\sqrt{-1}\mathbb{Z}. \tag{1.12}\]

Hence we prove Conjecture 1.1 for the twist knots \(\mathcal{K}_{p}\) with \(p\geq 6\).

**Example 1.4**.: For \(p=100\), we compute that
\[t_{0}=0.8237997818-0.1280592525\sqrt{-1},\]
\[s_{0}=0.5050124998-0.00001256317546\sqrt{-1},\]
\[x_{0}=1.000001243-1.999752031\sqrt{-1},\]
\[y_{0}=-0.9995829910-0.03149174478\sqrt{-1},\]
\[2\pi\zeta(100)=3.6636144-1043.809608\sqrt{-1}. \tag{1.13}\]
These values can be reproduced numerically; see the sketch after Remark 1.5 below.

**Remark 1.5**.: We need the condition \(p\geq 6\) in Theorem 1.2, which ensures that the volume \(vol(S^{3}\setminus\mathcal{K}_{p})\) is not too small, so that we can construct the required homotopy and verify the assumptions of the saddle point method successfully. We remark that our method can also work for the cases of \(p\leq-1\), with some exceptions.
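The critical point and the value \(2\pi\zeta(p)\) above can be recomputed with a short numerical sketch (ours, not from the paper): define \(V(p,t,s)\) by formula (3.18) below via the principal-branch polylogarithm and solve \(\nabla V=0\) by Newton's method, starting near the values of Example 1.4; branch choices may shift the answer mod \(\pi^{2}\sqrt{-1}\mathbb{Z}\), as in (1.11).

```python
from mpmath import mp, mpc, matrix, lu_solve, exp, pi, polylog

mp.dps = 30
I = mpc(0, 1)

def V(p, t, s):
    # potential function of formula (3.18), principal branches
    x, y = exp(2*pi*I*t), exp(2*pi*I*s)
    return (pi*I*((2*p+1)*s**2 - (2*p+3)*s - 2*t)
            + (polylog(2, x*y) + polylog(2, x/y) - 3*polylog(2, x)
               + pi**2/6) / (2*pi*I))

def grad(p, t, s, h=mpc('1e-10')):
    # V is holomorphic, so central differences in a real direction
    # give the complex partial derivatives dV/dt and dV/ds
    return matrix([(V(p, t+h, s) - V(p, t-h, s)) / (2*h),
                   (V(p, t, s+h) - V(p, t, s-h)) / (2*h)])

def critical_point(p, t, s, steps=30, h=mpc('1e-10')):
    for _ in range(steps):
        g = grad(p, t, s)
        J = matrix(2, 2)                    # numerical Jacobian of grad
        for j, (dt, ds) in enumerate([(h, 0), (0, h)]):
            gp, gm = grad(p, t+dt, s+ds), grad(p, t-dt, s-ds)
            J[0, j], J[1, j] = (gp[0]-gm[0])/(2*h), (gp[1]-gm[1])/(2*h)
        d = lu_solve(J, g)                  # Newton step
        t, s = t - d[0], s - d[1]
    return t, s

t0, s0 = critical_point(100, mpc('0.82', '-0.13'), mpc('0.505', '-0.0000126'))
print(t0, s0, 2*pi*V(100, t0, s0))          # cf. Example 1.4
```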
We use the saddle point method developed by Ohtsuki in a series of papers [31, 32, 33, 34] to prove Theorem 1.2. An outline of the proof is as follows. First, we write the colored Jones polynomial of the twist knot, \(J_{N}(\mathcal{K}_{p};e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}})\), as a summation of Fourier coefficients with the help of the quantum dilogarithm function and the Poisson summation formula. Next, we show that infinitely many of these Fourier coefficients can be neglected, in the sense that they are of sufficiently small order as \(N\to\infty\); we obtain formula (5.69). Then we estimate the remaining finitely many Fourier coefficients by using the saddle point method. Finally, we find that only two main Fourier coefficients contribute to the asymptotic expansion formula. Hence we finish the proof of Theorem 1.2.

The paper is organized as follows. In Section 2, we fix the notations and review the related materials that will be used in this paper. In Section 3, we compute the potential function for the colored Jones polynomials of the twist knot \(\mathcal{K}_{p}\) and obtain Proposition 3.1. In Section 4, we prove Proposition 4.4, which expresses the colored Jones polynomial of the twist knot \(J_{N}(\mathcal{K}_{p};e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}})\) as a summation of Fourier coefficients by the Poisson summation formula. In Section 5, we first show that infinitely many of the Fourier coefficients can be neglected. Then we estimate the remaining finitely many Fourier coefficients by using the saddle point method, and we obtain that only two main Fourier coefficients contribute to the final form of the asymptotic expansion. Hence we finish the proof of Theorem 1.2. Section 6 is devoted to the proofs of several lemmas used in the previous sections.

**Acknowledgements.** The first author would like to thank Nicolai Reshetikhin, Kefeng Liu and Weiping Zhang for bringing him to this area and for a lot of discussions during his career, to thank Francis Bonahon, Giovanni Felder and Shing-Tung Yau for their continuous encouragement, support and discussions, and to thank Jun Murakami and Tomotada Ohtsuki for their helpful discussions and support. He also wants to thank Jorgen Ellegaard Andersen, Sergei Gukov, Thang Le, Gregor Masbaum, Rinat Kashaev, Vladimir Turaev and Hiraku Nakajima for their support, discussions and interest, and to thank Yunlong Yao, who built him a solid analysis foundation twenty years ago. The second author would like to thank Kefeng Liu and Hao Xu for bringing him to this area when he was a graduate student at CMS of Zhejiang University, and for their constant encouragement and helpful discussions since then.

## 2. Preliminaries

### Colored Jones polynomials

In this subsection, we review the definition of the colored Jones polynomials and fix the notations. Let \(M\) be an oriented \(3\)-manifold. The Kauffman bracket skein module \(\mathcal{K}(M)\) is the free \(\mathbb{Z}[A^{\pm 1}]\)-module generated by isotopy classes of framed links in \(M\), modulo the submodule generated by the following two local relations, each given by a diagram which we omit here: (1) the Kauffman bracket skein relation (2.1), and (2) the framing relation (2.2).

The Kauffman bracket \(\langle\mathcal{L}\rangle\) of a framed link \(\mathcal{L}\) in \(S^{3}\) gives a map from \(\mathcal{K}(S^{3})\) to \(\mathbb{Z}[A^{\pm 1}]\). We use the normalization that the bracket of the empty link is \(1\).
The Kauffman bracket skein module of the solid torus \(S^{1}\times D^{2}\) is given by \(\mathbb{Z}[A^{\pm 1}][z]\). Usually, we denote this skein module by \(\mathcal{B}\). Here \(z\) is given by the framed link \(S^{1}\times J\), where \(J\) is a small arc lying in the interior of \(D^{2}\), and \(z^{n}\) means \(n\) parallel copies of \(z\). The twist map \(t:\mathcal{B}\to\mathcal{B}\) is the map induced by a full right-handed twist on the solid torus. There exists a basis \(\{e_{i}\}_{i\geq 0}\) of \(\mathcal{B}\) whose elements are eigenvectors of the twist map \(t\) (see e.g. [3]). The \(e_{i}\) can be defined recursively by
\[e_{0}=1,\ e_{1}=z,\ e_{i}=ze_{i-1}-e_{i-2}. \tag{2.3}\]
Moreover, the \(e_{i}\) satisfy
\[t(e_{i})=\mu_{i}e_{i}, \tag{2.4}\]
\[\langle e_{i}\rangle=(-1)^{i}\frac{A^{2(i+1)}-A^{-2(i+1)}}{A^{2}-A^{-2}}, \tag{2.5}\]
where \(\mu_{i}=(-1)^{i}A^{i^{2}+2i}\) is also called the framing factor. Throughout this paper, we make the convention
\[q=A^{4},\ \{n\}=q^{\frac{n}{2}}-q^{-\frac{n}{2}}\ \text{for an integer}\ n. \tag{2.6}\]

**Definition 2.1**.: Given a knot \(\mathcal{K}\) with zero framing, the \(N\)_-th colored Jones polynomial_ \(\bar{J}_{N}(\mathcal{K};q)\) of \(\mathcal{K}\) is defined to be the Kauffman bracket of \(\mathcal{K}\) cabled by \((-1)^{N-1}e_{N-1}\), i.e.
\[\bar{J}_{N}(\mathcal{K};q)=(-1)^{N-1}\langle\mathcal{K}(e_{N-1})\rangle, \tag{2.7}\]
where the factor \((-1)^{N-1}\) is included such that, for the unknot \(U\), \(\bar{J}_{N}(U;q)=[N]\). Furthermore, the _normalized \(N\)-th colored Jones polynomial_ of \(\mathcal{K}\) is defined as
\[J_{N}(\mathcal{K};q)=\frac{\langle\mathcal{K}(e_{N-1})\rangle}{\langle e_{N-1}\rangle}. \tag{2.8}\]

We consider the twist knot \(\mathcal{K}_{p}\) illustrated in Figure 1, where the index \(2p\) represents \(2p\) crossings (half-twists). For example, \(\mathcal{K}_{-1}=4_{1}\), \(\mathcal{K}_{1}=3_{1}\), \(\mathcal{K}_{2}=5_{2}\). By using the Kauffman bracket skein theory [3, 24], Masbaum [25] rederived the cyclotomic expansion formula for the colored Jones polynomial of the twist knot \(\mathcal{K}_{p}\) due to Habiro [18].

**Proposition 2.2**.: _The normalized \(N\)-th colored Jones polynomial of the twist knot \(\mathcal{K}_{p}\) is given by_
\[J_{N}(\mathcal{K}_{p};q)=\sum_{k=0}^{N-1}\sum_{l=0}^{k}(-1)^{l}q^{\frac{k(k+3)}{4}+pl(l+1)}\frac{\{k\}!\{2l+1\}}{\{k+l+1\}!\{k-l\}!}\prod_{i=1}^{k}(\{N+i\}\{N-i\}). \tag{2.10}\]
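For concreteness, the sum (2.10) can be evaluated directly at \(q=\xi_{N}=e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}}\); the following brute-force sketch (ours, with no optimization, using \(q^{a}=e^{2\pi\sqrt{-1}a/(N+\frac{1}{2})}\) for fractional exponents \(a\)) is convenient for checking the exponential growth rate predicted by Theorem 1.2:

```python
import cmath

def jones_twist(p, N):
    """Evaluate J_N(K_p; q) from the cyclotomic expansion (2.10)
    at the root of unity q = exp(2*pi*i / (N + 1/2))."""
    w = 2j * cmath.pi / (N + 0.5)
    qp = lambda a: cmath.exp(w * a)           # q**a for (possibly fractional) a
    br = lambda n: qp(n / 2) - qp(-n / 2)     # {n} = q^{n/2} - q^{-n/2}

    def brfact(n):                            # {n}! = {1}{2}...{n}
        out = 1.0
        for i in range(1, n + 1):
            out *= br(i)
        return out

    total = 0.0
    for k in range(N):
        prod = 1.0
        for i in range(1, k + 1):
            prod *= br(N + i) * br(N - i)     # prod_{i=1}^{k} {N+i}{N-i}
        for l in range(k + 1):
            total += ((-1) ** l * qp(k * (k + 3) / 4 + p * l * (l + 1))
                      * brfact(k) * br(2 * l + 1)
                      / (brfact(k + l + 1) * brfact(k - l)) * prod)
    return total

# e.g. compare the growth of |J_N| with e^{(N+1/2) Re(zeta(p))} from Theorem 1.2:
print(abs(jones_twist(6, 30)))
```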
### Dilogarithm and Lobachevsky functions

Let \(\log:\mathbb{C}\setminus(-\infty,0]\to\mathbb{C}\) be the standard logarithm function defined by
\[\log z=\log|z|+\sqrt{-1}\arg z \tag{2.11}\]
with \(-\pi<\arg z<\pi\). The dilogarithm function \(\operatorname{Li}_{2}:\mathbb{C}\setminus(1,\infty)\to\mathbb{C}\) is defined by
\[\operatorname{Li}_{2}(z)=-\int_{0}^{z}\frac{\log(1-x)}{x}dx, \tag{2.12}\]
where the integral is along any path in \(\mathbb{C}\setminus(1,\infty)\) connecting \(0\) and \(z\); it is holomorphic in \(\mathbb{C}\setminus[1,\infty)\) and continuous in \(\mathbb{C}\setminus(1,\infty)\). The dilogarithm function satisfies the inversion formula
\[\operatorname{Li}_{2}\left(\frac{1}{z}\right)=-\operatorname{Li}_{2}(z)-\frac{\pi^{2}}{6}-\frac{1}{2}(\log(-z))^{2}. \tag{2.13}\]
In the unit disk \(\{z\in\mathbb{C}\,|\,|z|<1\}\), \(\operatorname{Li}_{2}(z)=\sum_{n=1}^{\infty}\frac{z^{n}}{n^{2}}\), and on the unit circle
\[\{z=e^{2\pi\sqrt{-1}t}\,|\,0\leq t\leq 1\}, \tag{2.14}\]
we have
\[\operatorname{Li}_{2}(e^{2\pi\sqrt{-1}t})=\frac{\pi^{2}}{6}+\pi^{2}t(t-1)+2\pi\sqrt{-1}\Lambda(t), \tag{2.15}\]
where
\[\Lambda(t)=\operatorname{Re}\left(\frac{\operatorname{Li}_{2}(e^{2\pi\sqrt{-1}t})}{2\pi\sqrt{-1}}\right)=-\int_{0}^{t}\log|2\sin\pi u|\,du \tag{2.16}\]
for \(t\in\mathbb{R}\). The function \(\Lambda(t)\) is an odd function with period \(1\), and it satisfies \(\Lambda(1)=\Lambda(\frac{1}{2})=0\). Furthermore, we have the following estimate for the function \(\operatorname{Re}\left(\frac{1}{2\pi\sqrt{-1}}\operatorname{Li}_{2}\left(e^{2\pi\sqrt{-1}(t+X\sqrt{-1})}\right)\right)\) with \(t,X\in\mathbb{R}\).

**Lemma 2.3**.: _(see Lemma 2.2 in [34]) Let \(t\) be a real number with \(0<t<1\). Then there exists a constant \(C>0\) such that_
\[\begin{cases}0&(\text{if }X\geq 0)\\ 2\pi\left(t-\frac{1}{2}\right)X&(\text{if }X<0)\end{cases}-C<\operatorname{Re}\left(\frac{1}{2\pi\sqrt{-1}}\operatorname{Li}_{2}\left(e^{2\pi\sqrt{-1}(t+X\sqrt{-1})}\right)\right) \tag{2.17}\]
\[<\begin{cases}0&(\text{if }X\geq 0)\\ 2\pi\left(t-\frac{1}{2}\right)X&(\text{if }X<0)\end{cases}+C.\]
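The identities (2.15)-(2.16) are easy to test numerically; the following mpmath sketch (ours) compares the two expressions for \(\Lambda(t)\) at \(t=\frac{1}{6}\), where \(\Lambda\) attains its maximum \(\approx 0.16153\):

```python
from mpmath import mp, mpf, mpc, pi, exp, log, sin, polylog, quad

mp.dps = 25
I = mpc(0, 1)

def Lambda_series(t):
    # Re( Li_2(e^{2 pi i t}) / (2 pi i) ), first expression in (2.16)
    return (polylog(2, exp(2*pi*I*t)) / (2*pi*I)).real

def Lambda_integral(t):
    # - int_0^t log|2 sin(pi u)| du, second expression in (2.16);
    # the endpoint singularity at u = 0 is integrable
    return -quad(lambda u: log(abs(2*sin(pi*u))), [0, t])

t = mpf(1)/6
print(Lambda_series(t), Lambda_integral(t))   # both ~ 0.16153
```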
### Quantum dilogarithm functions

For a positive integer \(N\), we set \(\xi_{N}=e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}}\). We introduce the holomorphic function \(\varphi_{N}(t)\) for \(\{t\in\mathbb{C}\,|\,0<\operatorname{Re}t<1\}\) by the following integral:
\[\varphi_{N}(t)=\int_{-\infty}^{+\infty}\frac{e^{(2t-1)x}dx}{4x\sinh x\sinh\frac{x}{N+\frac{1}{2}}}. \tag{2.18}\]
Note that the integrand has poles at \(n\pi\sqrt{-1}\) \((n\in\mathbb{Z})\); to avoid the pole at \(0\), we choose the following contour of integration:
\[\gamma=(-\infty,-1]\cup\{z\in\mathbb{C}\,|\,|z|=1,\operatorname{Im}z\geq 0\}\cup[1,\infty). \tag{2.19}\]

**Lemma 2.4**.: _The function \(\varphi_{N}(t)\) satisfies_
\[(\xi_{N})_{n}=\exp\left(\varphi_{N}\left(\frac{1}{2N+1}\right)-\varphi_{N}\left(\frac{2n+1}{2N+1}\right)\right)\quad(0\leq n\leq N), \tag{2.20}\]
\[(\xi_{N})_{n}=\exp\left(\varphi_{N}\left(\frac{1}{2N+1}\right)-\varphi_{N}\left(\frac{2n+1}{2N+1}-1\right)+\log 2\right)\quad(N<n\leq 2N). \tag{2.21}\]

**Lemma 2.5**.: _We have the following identities:_
\[\varphi_{N}(t)+\varphi_{N}(1-t)=2\pi\sqrt{-1}\left(-\frac{2N+1}{4}\left(t^{2}-t+\frac{1}{6}\right)+\frac{1}{12(2N+1)}\right), \tag{2.22}\]
\[\varphi_{N}\left(\frac{1}{2N+1}\right)=\frac{2N+1}{4\pi\sqrt{-1}}\frac{\pi^{2}}{6}+\frac{1}{2}\log\left(\frac{2N+1}{2}\right)+\frac{\pi\sqrt{-1}}{4}-\frac{\pi\sqrt{-1}}{6(2N+1)}, \tag{2.23}\]
\[\varphi_{N}\left(1-\frac{1}{2N+1}\right)=\frac{2N+1}{4\pi\sqrt{-1}}\frac{\pi^{2}}{6}-\frac{1}{2}\log\left(\frac{2N+1}{2}\right)+\frac{\pi\sqrt{-1}}{4}-\frac{\pi\sqrt{-1}}{6(2N+1)}. \tag{2.24}\]

The function \(\varphi_{N}(t)\) is closely related to the dilogarithm function as follows.

**Lemma 2.6**.: _(1) For every \(t\) with \(0<\operatorname{Re}t<1\),_
\[\varphi_{N}(t)=\frac{N+\frac{1}{2}}{2\pi\sqrt{-1}}\text{Li}_{2}(e^{2\pi\sqrt{-1}t})-\frac{\pi\sqrt{-1}e^{2\pi\sqrt{-1}t}}{6(1-e^{2\pi\sqrt{-1}t})}\frac{1}{2N+1}+O\left(\frac{1}{(N+\frac{1}{2})^{3}}\right). \tag{2.25}\]
_(2) For every \(t\) with \(0<\operatorname{Re}t<1\),_
\[\varphi_{N}^{\prime}(t)=-\frac{2N+1}{2}\log(1-e^{2\pi\sqrt{-1}t})+O\left(\frac{1}{N+\frac{1}{2}}\right). \tag{2.26}\]
_(3) As \(N\to\infty\), \(\frac{1}{N+\frac{1}{2}}\varphi_{N}(t)\) uniformly converges to \(\frac{1}{2\pi\sqrt{-1}}\text{Li}_{2}(e^{2\pi\sqrt{-1}t})\) and \(\frac{1}{N+\frac{1}{2}}\varphi_{N}^{\prime}(t)\) uniformly converges to \(-\log(1-e^{2\pi\sqrt{-1}t})\) on any compact subset of \(\{t\in\mathbb{C}\,|\,0<\operatorname{Re}t<1\}\)._

See the literature, such as [31, 5, 40], for the proofs of Lemmas 2.4, 2.5 and 2.6.

### Saddle point method

We need to use the following version of the saddle point method, as formulated in [33].

**Proposition 2.7** ([33], Proposition 3.1).: _Let \(A\) be a non-singular symmetric complex \(2\times 2\) matrix, and let \(\Psi(z_{1},z_{2})\) and \(r(z_{1},z_{2})\) be holomorphic functions of the forms_
\[\Psi(z_{1},z_{2})=\mathbf{z}^{T}A\mathbf{z}+r(z_{1},z_{2}),\]
\[r(z_{1},z_{2})=\sum_{i,j,k}b_{ijk}z_{i}z_{j}z_{k}+\sum_{i,j,k,l}c_{ijkl}z_{i}z_{j}z_{k}z_{l}+\cdots \tag{2.27}\]
_defined in a neighborhood of \(\mathbf{0}\in\mathbb{C}^{2}\). The restriction of the domain_
\[\{(z_{1},z_{2})\in\mathbb{C}^{2}\,|\,\operatorname{Re}\Psi(z_{1},z_{2})<0\} \tag{2.28}\]
_to a neighborhood of \(\mathbf{0}\in\mathbb{C}^{2}\) is homotopy equivalent to \(S^{1}\). Let \(D\) be an oriented disk embedded in \(\mathbb{C}^{2}\) such that \(\partial D\) is included in the domain (2.28) and its inclusion is homotopic to a homotopy equivalence to the above \(S^{1}\) in the domain (2.28). Then we have the following asymptotic expansion_
\[\int_{D}e^{N\Psi(z_{1},z_{2})}dz_{1}dz_{2}=\frac{\pi}{N\sqrt{\det(-A)}}\left(1+\sum_{i=1}^{d}\frac{\lambda_{i}}{N^{i}}+O\Big{(}\frac{1}{N^{d+1}}\Big{)}\right), \tag{2.29}\]
_for any \(d\), where we choose the sign of \(\sqrt{\det(-A)}\) as explained in [31], and the \(\lambda_{i}\)'s are constants presented by using the coefficients of the expansion of \(\Psi(z_{1},z_{2})\); such presentations are obtained by formally expanding the following formula,_
\[1+\sum_{i=1}^{\infty}\frac{\lambda_{i}}{N^{i}}=\exp\left(Nr\left(\frac{\partial}{\partial w_{1}},\frac{\partial}{\partial w_{2}}\right)\right)\exp\left(-\frac{1}{4N}(w_{1},w_{2})A^{-1}\binom{w_{1}}{w_{2}}\right)\Big{|}_{w_{1}=w_{2}=0}. \tag{2.30}\]

For a proof of Proposition 2.7, see [31].

**Remark 2.8** ([33], Remark 3.2).: As mentioned in Remark 3.6 of [31], we can extend Proposition 2.7 to the case where \(\Psi(z_{1},z_{2})\) depends on \(N\) in such a way that \(\Psi(z_{1},z_{2})\) is of the form
\[\Psi(z_{1},z_{2})=\Psi_{0}(z_{1},z_{2})+\Psi_{1}(z_{1},z_{2})\frac{1}{N}+R(z_{1},z_{2})\frac{1}{N^{2}}, \tag{2.31}\]
where the \(\Psi_{i}(z_{1},z_{2})\)'s are holomorphic functions independent of \(N\), we assume that \(\Psi_{0}(z_{1},z_{2})\) satisfies the assumptions of the Proposition, and \(|R(z_{1},z_{2})|\) is bounded by a constant which is independent of \(N\).

## 3. Computation of the potential function

This section is devoted to the computation of the potential function for the colored Jones polynomial \(J_{N}(\mathcal{K}_{p};q)\) at the root of unity \(\xi_{N}\). We introduce the following \(q\)-Pochhammer symbol:
\[(q)_{n}=\prod_{i=1}^{n}(1-q^{i}). \tag{3.1}\]
Then we have
\[\{n\}!=(-1)^{n}q^{\frac{-n(n+1)}{4}}(q)_{n}. \tag{3.2}\]
By formula (2.10), we obtain
\[J_{N}(\mathcal{K}_{p};q)=\sum_{k=0}^{N-1}\sum_{l=0}^{k}(-1)^{k+l}q^{pl(l+1)+\frac{l(l-1)}{2}-Nk+\frac{k(k+1)}{2}+k}\cdot\frac{(1-q^{2l+1})}{(1-q^{N})}\frac{(q)_{k}(q)_{N+k}}{(q)_{k+l+1}(q)_{k-l}(q)_{N-k-1}}. \tag{3.3}\]
Computing at the root of unity \(\xi_{N}\), we get
\[J_{N}(\mathcal{K}_{p};\xi_{N})=\sum_{k=0}^{N-1}\sum_{l=0}^{k}\frac{(-1)^{k+l+1}\sin\frac{2\pi(2l+1)}{2N+1}}{\sin\frac{\pi}{2N+1}}\cdot q^{(p+\frac{1}{2})l^{2}+(p+\frac{1}{2})l+\frac{k^{2}}{2}+2k+\frac{3}{4}}\frac{(q)_{k}(q)_{N+k}}{(q)_{k+l+1}(q)_{k-l}(q)_{N-k-1}}\Big{|}_{q=\xi_{N}}. \tag{3.4}\]
Now, we study the following term:
\[(-1)^{l-k-1}q^{(p+\frac{1}{2})l^{2}+(p+\frac{1}{2})l+\frac{k^{2}}{2}+2k+\frac{3}{4}}\frac{(q)_{k}(q)_{N+k}}{(q)_{k+l+1}(q)_{k-l}(q)_{N-k-1}}\Big{|}_{q=\xi_{N}}. \tag{3.5}\]
By using Lemma 2.4, we obtain
\[\frac{(\xi_{N})_{k}(\xi_{N})_{N+k}}{(\xi_{N})_{k+l+1}(\xi_{N})_{k-l}(\xi_{N})_{N-k-1}}\]
\[=\exp\left(\varphi_{N}\left(\frac{2(k+l+1)+1}{2N+1}\right)+\varphi_{N}\left(\frac{2(k-l)+1}{2N+1}\right)+\varphi_{N}\left(1-\frac{2(k+1)}{2N+1}\right)\right.\]
\[\left.-\varphi_{N}\left(\frac{2k+1}{2N+1}\right)-\varphi_{N}\left(\frac{2k}{2N+1}\right)-\varphi_{N}\left(\frac{1}{2N+1}\right)+\log 2\right)\]
\[=\exp\left(\varphi_{N}\left(\frac{2k+2l+3}{2N+1}\right)+\varphi_{N}\left(\frac{2k-2l+1}{2N+1}\right)-\varphi_{N}\left(\frac{2k+1}{2N+1}\right)\right.\]
\[\left.-\varphi_{N}\left(\frac{2k}{2N+1}\right)-\varphi_{N}\left(\frac{2k+2}{2N+1}\right)+\frac{(2N+1)\pi\sqrt{-1}}{24}-\frac{\pi\sqrt{-1}}{4}-\frac{1}{2}\log\frac{2N+1}{2}\right.\]
\[\left.+\pi\sqrt{-1}\left(\frac{1}{3(2N+1)}-\frac{2(k+1)^{2}}{2N+1}+(k+1)-\frac{2N+1}{12}\right)+\log 2\right), \tag{3.6}\]
for \(0<k+l+1\leq N\), and
\[\frac{(\xi_{N})_{k}(\xi_{N})_{N+k}}{(\xi_{N})_{k+l+1}(\xi_{N})_{k-l}(\xi_{N})_{N-k-1}}\]
\[=\exp\left(\varphi_{N}\left(\frac{2(k+l+1)+1}{2N+1}-1\right)+\varphi_{N}\left(\frac{2(k-l)+1}{2N+1}\right)\right.\]
\[\left.+\varphi_{N}\left(1-\frac{2(k+1)}{2N+1}\right)-\varphi_{N}\left(\frac{2k+1}{2N+1}\right)-\varphi_{N}\left(\frac{2k}{2N+1}\right)-\varphi_{N}\left(\frac{1}{2N+1}\right)\right)\]
\[=\exp\left(\varphi_{N}\left(\frac{2k+2l+3}{2N+1}-1\right)+\varphi_{N}\left(\frac{2k-2l+1}{2N+1}\right)-\varphi_{N}\left(\frac{2k+1}{2N+1}\right)\right.\]
\[\left.-\varphi_{N}\left(\frac{2k}{2N+1}\right)-\varphi_{N}\left(\frac{2k+2}{2N+1}\right)+\frac{(2N+1)\pi\sqrt{-1}}{24}-\frac{\pi\sqrt{-1}}{4}-\frac{1}{2}\log\frac{2N+1}{2}\right.\]
\[\left.+\pi\sqrt{-1}\left(\frac{1}{3(2N+1)}-\frac{2(k+1)^{2}}{2N+1}+(k+1)-\frac{2N+1}{12}\right)\right), \tag{3.7}\]
for \(N<k+l+1\leq 2N\).
Therefore, we obtain
\[(-1)^{l-k-1}\xi_{N}^{(p+\frac{1}{2})(l^{2}+l)+\frac{k^{2}}{2}+2k+\frac{3}{4}}\frac{(\xi_{N})_{k}(\xi_{N})_{N+k}}{(\xi_{N})_{k+l+1}(\xi_{N})_{k-l}(\xi_{N})_{N-k-1}}\]
\[=2\exp\bigg((N+\frac{1}{2})\Big(\pi\sqrt{-1}\Big((2p+1)\Big(\frac{2l+1}{2N+1}\Big)^{2}+\frac{4(2k+1)}{(2N+1)^{2}}-\frac{6p+7}{3(2N+1)^{2}}+\frac{2l+1}{2N+1}-\frac{3}{2(2N+1)}\Big)\]
\[+\frac{1}{N+\frac{1}{2}}\varphi_{N}\Big(\frac{2k+2l+3}{2N+1}\Big)+\frac{1}{N+\frac{1}{2}}\varphi_{N}\Big(\frac{2k-2l+1}{2N+1}\Big)-\frac{1}{N+\frac{1}{2}}\varphi_{N}\Big(\frac{2k+1}{2N+1}\Big)\]
\[-\frac{1}{N+\frac{1}{2}}\varphi_{N}\Big(\frac{2k}{2N+1}\Big)-\frac{1}{N+\frac{1}{2}}\varphi_{N}\Big(\frac{2k+2}{2N+1}\Big)-\frac{\pi\sqrt{-1}}{12}-\frac{1}{2N+1}\log\frac{2N+1}{2}\Big)\bigg) \tag{3.8}\]
for \(0<k+l+1\leq N\), and
\[(-1)^{l-k-1}\xi_{N}^{(p+\frac{1}{2})(l^{2}+l)+\frac{k^{2}}{2}+2k+\frac{3}{4}}\frac{(\xi_{N})_{k}(\xi_{N})_{N+k}}{(\xi_{N})_{k+l+1}(\xi_{N})_{k-l}(\xi_{N})_{N-k-1}}\]
\[=\exp\bigg((N+\frac{1}{2})\Big(\pi\sqrt{-1}\Big((2p+1)\Big(\frac{2l+1}{2N+1}\Big)^{2}+\frac{4(2k+1)}{(2N+1)^{2}}-\frac{6p+7}{3(2N+1)^{2}}+\frac{2l+1}{2N+1}-\frac{3}{2(2N+1)}\Big)\]
\[+\frac{1}{N+\frac{1}{2}}\varphi_{N}\Big(\frac{2k+2l+3}{2N+1}-1\Big)+\frac{1}{N+\frac{1}{2}}\varphi_{N}\Big(\frac{2k-2l+1}{2N+1}\Big)-\frac{1}{N+\frac{1}{2}}\varphi_{N}\Big(\frac{2k+1}{2N+1}\Big)\]
\[-\frac{1}{N+\frac{1}{2}}\varphi_{N}\Big(\frac{2k}{2N+1}\Big)-\frac{1}{N+\frac{1}{2}}\varphi_{N}\Big(\frac{2k+2}{2N+1}\Big)-\frac{\pi\sqrt{-1}}{12}-\frac{1}{2N+1}\log\frac{2N+1}{2}\Big)\bigg) \tag{3.9}\]
for \(N<k+l+1\leq 2N\).

Now we set
\[t=\frac{2k+1}{2N+1},\quad s=\frac{2l+1}{2N+1}, \tag{3.10}\]
and define the functions \(\tilde{V}_{N}(p,t,s)\) and \(\delta(t,s)\) as follows.

(1) If \(0<s<1\), \(0<t\pm s<1\), then
\[\tilde{V}_{N}(p,t,s)=\pi\sqrt{-1}\left((2p+1)s^{2}+s+\frac{4}{2N+1}t-\frac{6p+7}{3(2N+1)^{2}}-\frac{3}{2(2N+1)}\right)+\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t+s+\frac{1}{2N+1}\right)+\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t-s+\frac{1}{2N+1}\right)-\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t\right)-\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t-\frac{1}{2N+1}\right)-\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t+\frac{1}{2N+1}\right)-\frac{\pi\sqrt{-1}}{12}-\frac{1}{2N+1}\log\frac{2N+1}{2},\]
and \(\delta(t,s)=2\).

(2) If \(0<t<1\), \(0<t-s<1\) and \(1<t+s<2\), then
\[\tilde{V}_{N}(p,t,s)=\pi\sqrt{-1}\left((2p+1)s^{2}+s+\frac{4}{2N+1}t-\frac{6p+7}{3(2N+1)^{2}}-\frac{3}{2(2N+1)}\right)+\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t+s+\frac{1}{2N+1}-1\right)+\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t-s+\frac{1}{2N+1}\right)-\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t\right)-\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t-\frac{1}{2N+1}\right)-\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t+\frac{1}{2N+1}\right)-\frac{\pi\sqrt{-1}}{12}-\frac{1}{2N+1}\log\frac{2N+1}{2},\]
and \(\delta(t,s)=1\).
Based on the above calculations, we obtain
\[J_{N}(\mathcal{K}_{p};\xi_{N})=\sum_{k=0}^{N-1}\sum_{l=0}^{k}\frac{\sin\frac{2\pi(2l+1)}{2N+1}}{\sin\frac{\pi}{2N+1}}\delta\left(\frac{2k+1}{2N+1},\frac{2l+1}{2N+1}\right)e^{(N+\frac{1}{2})\tilde{V}_{N}\left(p,\frac{2k+1}{2N+1},\frac{2l+1}{2N+1}\right)}\]
\[=\sum_{k=0}^{N-1}\sum_{l=0}^{k}\frac{\sin\frac{2\pi(2l+1)}{2N+1}}{\sin\frac{\pi}{2N+1}}\delta\left(\frac{2k+1}{2N+1},\frac{2l+1}{2N+1}\right)e^{(N+\frac{1}{2})\left(\tilde{V}_{N}\left(p,\frac{2k+1}{2N+1},\frac{2l+1}{2N+1}\right)-2\pi\sqrt{-1}\frac{2k}{2N+1}-2(p+2)\pi\sqrt{-1}\frac{2l}{2N+1}\right)}. \tag{3.11}\]
For convenience, we introduce the function \(V_{N}(p,t,s)\), which is determined by the following formula:
\[\tilde{V}_{N}(p,t,s)-2\pi\sqrt{-1}\left(t-\frac{1}{2N+1}\right)-2(p+2)\pi\sqrt{-1}\left(s-\frac{1}{2N+1}\right)=V_{N}(p,t,s)+\pi\sqrt{-1}\frac{4p+9}{2(2N+1)}-\frac{1}{2N+1}\log\frac{2N+1}{2}. \tag{3.12}\]
Note that the functions \(\tilde{V}_{N}(p,t,s)\) and \(V_{N}(p,t,s)\) are defined on the region
\[D=\{(t,s)\in\mathbb{R}^{2}\,|\,0<t<1,0<s<1,0<t-s<1\}. \tag{3.13}\]
From formula (3.11), we finally obtain

**Proposition 3.1**.: _The normalized \(N\)-th colored Jones polynomial of the twist knot \(\mathcal{K}_{p}\) at the root of unity \(\xi_{N}\) can be computed as_
\[J_{N}(\mathcal{K}_{p};\xi_{N})=\sum_{k=0}^{N-1}\sum_{l=0}^{k}g_{N}(k,l) \tag{3.14}\]
_with_
\[g_{N}(k,l)=(-1)^{p}e^{\frac{\pi\sqrt{-1}}{4}}\frac{1}{\sqrt{N+\frac{1}{2}}\,\sin\frac{\pi}{2N+1}}\sin\frac{2\pi(2l+1)}{2N+1}\cdot\delta\left(\frac{2k+1}{2N+1},\frac{2l+1}{2N+1}\right)e^{(N+\frac{1}{2})V_{N}\left(p,\frac{2k+1}{2N+1},\frac{2l+1}{2N+1}\right)}, \tag{3.15}\]
_where the functions \(\delta(t,s)\) and \(V_{N}(p,t,s)\) are given as follows:_

_(1) If \(0<t<1\), \(0<t\pm s<1\), then \(\delta(t,s)=2\) and_
\[V_{N}(p,t,s)=\pi\sqrt{-1}\left((2p+1)s^{2}-(2p+3)s+\left(\frac{4}{2N+1}-2\right)t-\frac{6p+7}{3(2N+1)^{2}}\right)+\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t+s+\frac{1}{2N+1}\right)+\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t-s+\frac{1}{2N+1}\right)-\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t\right)-\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t-\frac{1}{2N+1}\right)-\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t+\frac{1}{2N+1}\right)-\frac{\pi\sqrt{-1}}{12}. \tag{3.16}\]
_(2) If \(0<t<1\), \(0<t-s<1\) and \(1<t+s<2\), then \(\delta(t,s)=1\) and_
\[V_{N}(p,t,s)=\pi\sqrt{-1}\left((2p+1)s^{2}-(2p+3)s+\left(\frac{4}{2N+1}-2\right)t-\frac{6p+7}{3(2N+1)^{2}}\right)+\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t+s+\frac{1}{2N+1}-1\right)+\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t-s+\frac{1}{2N+1}\right)-\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t\right)-\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t-\frac{1}{2N+1}\right)-\frac{1}{N+\frac{1}{2}}\varphi_{N}\left(t+\frac{1}{2N+1}\right)-\frac{\pi\sqrt{-1}}{12}. \tag{3.17}\]

We introduce the potential function for the twist knot \(\mathcal{K}_{p}\) as
\[V(p,t,s)=\lim_{N\to\infty}V_{N}(p,t,s)=\pi\sqrt{-1}\left((2p+1)s^{2}-(2p+3)s-2t\right)+\frac{1}{2\pi\sqrt{-1}}\left(\text{Li}_{2}(e^{2\pi\sqrt{-1}(t+s)})+\text{Li}_{2}(e^{2\pi\sqrt{-1}(t-s)})-3\text{Li}_{2}(e^{2\pi\sqrt{-1}t})+\frac{\pi^{2}}{6}\right). \tag{3.18}\]

## 4. Poisson summation formula

In this section, with the help of the Poisson summation formula, we write formula (3.14) as a sum of integrals.
First, according to formulas (2.10) and (3.14), we have \[g_{N}(k,l) =(-1)^{l}q^{\frac{k(k+3)}{4}+pl(l+1)}\frac{\{k\}!\{2l+1\}}{\{k+l+1 \}!\{k-l\}!}\prod_{i=1}^{k}(\{N+i\}\{N-i\})\] \[=(-1)^{l}q^{\frac{k(k+3)}{4}+pl(l+1)}\frac{\{2l+1\}}{\{N\}}\frac{\{ k\}!\{N+k\}!}{\{k+l+1\}!\{k-l\}!\{N-k-1\}!}. \tag{4.1}\] By Lemmas 2.4, 2.5, 2.6 and formula (2.16), we obtain \[\log|\{n\}!|=-(N+\frac{1}{2})\Lambda\left(\frac{2n+1}{2N+1}\right)+O(\log(2N+ 1)) \tag{4.2}\] for any integer \(0<n<2N+1\) and at \(q=\xi_{N}=e^{\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}}\). So we have \[\log|g_{N}(k,l)|\] \[=-(N+\frac{1}{2})\Lambda\left(\frac{2k+1}{2N+1}\right)-(N+\frac{ 1}{2})\Lambda\left(\frac{2N+2k+1}{2N+1}\right)+(N+\frac{1}{2})\Lambda\left( \frac{2(k+l+1)+1}{2N+1}\right)\] \[+(N+\frac{1}{2})\Lambda\left(\frac{2(k-l)+1}{2N+1}\right)+(N+ \frac{1}{2})\Lambda\left(\frac{2(N-k-1)+1}{2N+1}\right)+O(\log(2N+1))\] \[=-(N+\frac{1}{2})\Lambda\left(\frac{2k+1}{2N+1}\right)-(N+\frac{ 1}{2})\Lambda\left(\frac{2k}{2N+1}\right)+(N+\frac{1}{2})\Lambda\left(\frac{2 k+2l+3}{2N+1}\right)\] \[+(N+\frac{1}{2})\Lambda\left(\frac{2k-2l+1}{2N+1}\right)-(N+ \frac{1}{2})\Lambda\left(\frac{2k+2}{2N+1}\right)+O(\log(2N+1)), \tag{4.3}\] where in the second "\(=\)" we have used the properties of the function \(\Lambda(t)\). We put \[v_{N}(t,s) =\Lambda\left(t+s+\frac{1}{2N+1}\right)+\Lambda\left(t-s+\frac{1} {2N+1}\right)\] \[-\Lambda\left(t-\frac{1}{2N+1}\right)-\Lambda\left(t\right)- \Lambda\left(t+\frac{1}{2N+1}\right), \tag{4.4}\] then we obtain \[|g_{N}(k,l)|=e^{(N+\frac{1}{2})v_{N}\left(\frac{2k+1}{2N+1},\frac{2l+1}{2N+1} \right)+O(\log(2N+1))}. \tag{4.5}\] We define the function \[v(t,s)=\Lambda(t+s)+\Lambda(t-s)-3\Lambda\left(t\right). \tag{4.6}\] Note that \(\left(\frac{2k+1}{2N+1},\frac{2l+1}{2N+1}\right)\in D=\{(t,s)\in\mathbb{R}^{2} |1<t+s<2,0<t-s<1,\frac{1}{2}<t<1\}\) for \(0\leq k,l\leq N-1\). So we may assume the function \(v(t,s)\) is defined on the region \(D\). We set \[D^{\prime}_{0}=\{0.02\leq t-s\leq 0.7,1.02\leq t+s\leq 1.7,0.2\leq s\leq 0.8,0.5 \leq t\leq 0.909\}. \tag{4.7}\] Let \(\zeta_{\mathbb{R}}(p)\) be the real part of the critical value \(V(p,t_{0},s_{0})\), see formula (5.20) for its precise definition. Then we have **Lemma 4.1**.: _The following domain_ \[\left\{(t,s)\in D|v(t,s)>\frac{3.509}{2\pi}\right\} \tag{4.8}\] _is included in the region \(D^{\prime}_{0}\)._ Proof.: See Appendix 6.1 for a proof. **Remark 4.2**.: We can take \(\varepsilon>0\) small enough (such as \(\varepsilon=0.00001\)), and set \[D^{\prime}_{\varepsilon}=\left\{0.02+\varepsilon\leq t-s\leq 0.7- \varepsilon,1.02+\varepsilon\leq t+s\leq 1.7-\varepsilon,\right.\] \[\left.0.2+\varepsilon\leq s\leq 0.8-\varepsilon,0.5+ \varepsilon\leq t\leq 0.909-\varepsilon\right\}, \tag{4.9}\] then the region (4.8) can also be included in the region \(D^{\prime}_{\varepsilon}\). 
**Proposition 4.3**.: _For \(p\geq 6\) and \((\frac{2k+1}{2N+1},\frac{2l+1}{2N+1})\in D\setminus D^{\prime}_{0}\), we have_ \[|g_{N}(k,l)|<O\left(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}\right) \tag{4.10}\] _for some sufficiently small \(\epsilon>0\)._

Proof.: For \((\frac{2k+1}{2N+1},\frac{2l+1}{2N+1})\in D\setminus D^{\prime}_{0}\), since \(v_{N}(t,s)\) converges uniformly to \(v(t,s)\), by Lemma 4.1, we obtain \[|v_{N}\left(\frac{2k+1}{2N+1},\frac{2l+1}{2N+1}\right)|\leq\frac{3.509}{2\pi}<\frac{1}{2\pi}\left(v_{8}-\frac{49\pi^{2}}{64p^{2}}\right)<\zeta_{\mathbb{R}}(p)-\epsilon \tag{4.11}\] for some small \(\epsilon>0\), where in the second " \(<\) " of (4.11), we have used \(p\geq 7\), and in the third " \(<\) " of (4.11), we have used Lemma 5.5. In particular, for \(p=6\) we have \(|v_{N}\left(\frac{2k+1}{2N+1},\frac{2l+1}{2N+1}\right)|\leq\frac{3.509}{2\pi}<\zeta_{\mathbb{R}}(6)-\epsilon\) for some small \(\epsilon>0\), since by a straightforward computation, \(\zeta_{\mathbb{R}}(6)=\frac{3.5889}{2\pi}\).

For a sufficiently small \(\varepsilon\), we take a smooth bump function \(\psi\) on \(\mathbb{R}^{2}\) such that \(\psi(t,s)=1\) on \((t,s)\in D^{\prime}_{\varepsilon}\), \(0<\psi(t,s)<1\) on \((t,s)\in D^{\prime}_{0}\setminus D^{\prime}_{\varepsilon}\), and \(\psi(t,s)=0\) for \((t,s)\notin D^{\prime}_{0}\). Let \[h_{N}(k,l)=\psi\left(\frac{2k+1}{2N+1},\frac{2l+1}{2N+1}\right)g_{N}(k,l). \tag{4.12}\] Then by Proposition 4.3, for \(p\geq 6\), we have \[J_{N}(\mathcal{K}_{p};\xi_{N})=\sum_{(k,l)\in\mathbb{Z}^{2}}h_{N}(k,l)+O\left(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}\right). \tag{4.13}\] We recall the Poisson summation formula [35] in the 2-dimensional case, which states that for any function \(h\) in the Schwartz space on \(\mathbb{R}^{2}\), we have \[\sum_{(k,l)\in\mathbb{Z}^{2}}h(k,l)=\sum_{(m,n)\in\mathbb{Z}^{2}}\hat{h}(m,n) \tag{4.14}\] where \[\hat{h}(m,n)=\int_{\mathbb{R}^{2}}h(u,v)e^{-2\pi\sqrt{-1}mu-2\pi\sqrt{-1}nv}dudv. \tag{4.15}\] Since \(h_{N}\) is \(C^{\infty}\)-smooth and equals zero outside \(D^{\prime}_{0}\), it lies in the Schwartz space on \(\mathbb{R}^{2}\). Hence the Poisson summation formula (4.14) holds for \(h_{N}\).
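As an aside, the identity (4.14) is easy to test numerically. The following sketch (our own illustration) uses the Gaussian \(h(u,v)=e^{-\pi a(u^{2}+v^{2})}\), whose Fourier transform is \(\hat{h}(m,n)=a^{-1}e^{-\pi(m^{2}+n^{2})/a}\); the parameter `a` is just a test value.

```python
# Illustrative numerical check of the 2-dimensional Poisson summation
# formula (4.14)-(4.15) on a Gaussian test function.
import math

def poisson_check(a, R=30):
    lhs = sum(math.exp(-math.pi * a * (k*k + l*l))
              for k in range(-R, R + 1) for l in range(-R, R + 1))
    rhs = sum(math.exp(-math.pi * (m*m + n*n) / a)
              for m in range(-R, R + 1) for n in range(-R, R + 1)) / a
    return lhs, rhs

print(poisson_check(0.7))   # the two sums agree to machine precision
```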
By using the change of variables \(t=\frac{2k+1}{2N+1},s=\frac{2l+1}{2N+1}\), we compute the Fourier coefficient \(\hat{h}_{N}(m,n)\) as follows \[\int_{\mathbb{R}^{2}}h_{N}(k,l)e^{-2\pi\sqrt{-1}mk-2\pi\sqrt{-1}nl}dkdl\] \[=(-1)^{m+n}\left(N+\frac{1}{2}\right)^{2}\] \[\cdot\int_{\mathbb{R}^{2}}h_{N}\left((N+\frac{1}{2})t-\frac{1}{2},(N+\frac{1}{2})s-\frac{1}{2}\right)e^{-2\pi\sqrt{-1}\frac{(2N+1)mt}{2}-2\pi\sqrt{-1}\frac{(2N+1)ns}{2}}dtds\] \[=(-1)^{m+n}\left(N+\frac{1}{2}\right)^{2}\frac{(-1)^{p}e^{\frac{\pi\sqrt{-1}}{4}}}{\sqrt{N+\frac{1}{2}}\sin\frac{\pi}{2N+1}}\] \[\cdot\int_{\mathbb{R}^{2}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{2})\left(V_{N}(p,t,s)-2\pi\sqrt{-1}mt-2\pi\sqrt{-1}ns\right)}dtds. \tag{4.16}\] Therefore, applying the Poisson summation formula (4.14) to (4.13), we obtain

**Proposition 4.4**.: _For \(p\geq 6\), the normalized \(N\)-th colored Jones polynomial of the twist knot \(\mathcal{K}_{p}\) is given by_ \[J_{N}(\mathcal{K}_{p};\xi_{N})=\sum_{(m,n)\in\mathbb{Z}^{2}}\hat{h}_{N}(m,n)+O\left(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}\right), \tag{4.17}\] _where_ \[\hat{h}_{N}(m,n)= (-1)^{p+m+n}e^{\frac{\pi\sqrt{-1}}{4}}\frac{\left(N+\frac{1}{2}\right)^{\frac{3}{2}}}{\sin\frac{\pi}{2N+1}}\int_{D_{0}^{\prime}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{2})V_{N}(p,t,s;m,n)}dtds \tag{4.18}\] _with_ \[V_{N}\left(p,t,s;m,n\right)=V_{N}\left(p,t,s\right)-2\pi\sqrt{-1}mt-2\pi\sqrt{-1}ns, \tag{4.19}\] _and_ \[V_{N}(p,t,s)\] \[=\pi\sqrt{-1}\left((2p+1)s^{2}-(2p+3)s+\left(\frac{4}{2N+1}-2\right)t-\frac{6p+7}{3(2N+1)^{2}}-\frac{1}{12}\right)\] \[+\frac{1}{2N+1}\varphi_{N}\left(t+s+\frac{1}{2N+1}-1\right)+\frac{1}{2N+1}\varphi_{N}\left(t-s+\frac{1}{2N+1}\right)\] \[-\frac{1}{2N+1}\varphi_{N}\left(t\right)-\frac{1}{2N+1}\varphi_{N}\left(t-\frac{1}{2N+1}\right)-\frac{1}{2N+1}\varphi_{N}\left(t+\frac{1}{2N+1}\right). \tag{4.20}\]

**Lemma 4.5**.: _The following identity holds_ \[V_{N}(p,t,1-s;m,n)=V_{N}(p,t,s;m,-n-2)-2\pi\sqrt{-1}(n+1). \tag{4.21}\]

Proof.: By a straightforward computation, we obtain the following identity \[\pi\sqrt{-1}\left((2p+1)(1-s)^{2}-(2p+2n+3)(1-s)+\left(\frac{4}{2N+1}-2m-2\right)t-\frac{1}{12}\right)\] \[=\pi\sqrt{-1}\left((2p+1)s^{2}-(2p+2(-n-2)+3)s+\left(\frac{4}{2N+1}-2m-2\right)t-\frac{1}{12}\right)\] \[-2\pi\sqrt{-1}(n+1), \tag{4.22}\] which immediately gives the formula (4.21).

**Proposition 4.6**.: _For any \(m,n\in\mathbb{Z}\), we have_ \[\hat{h}_{N}(m,-n-2)=(-1)^{n}\hat{h}_{N}(m,n). \tag{4.23}\]

Proof.: We remark that we can choose the bump function \(\psi(t,s)\) satisfying \(\psi(t,s)=\psi(t,1-s)\), since the region \(D^{\prime}_{0}\) is symmetric with respect to the line \(s=\frac{1}{2}\). Since \[\psi(t,s)\sin(2\pi s)\exp\left((N+\frac{1}{2})V_{N}\left(p,t,1-s;m,n\right)\right)\] \[=\psi(t,s)\sin(2\pi s)\exp\left((N+\frac{1}{2})\left(V_{N}\left(p,t,s;m,-n-2\right)-2\pi\sqrt{-1}(n+1)\right)\right)\] \[=\psi(t,s)\sin(2\pi s)\exp\left((N+\frac{1}{2})V_{N}\left(p,t,s;m,-n-2\right)\right)(-1)^{n+1}, \tag{4.24}\] we have \[\int_{D^{\prime}_{0}}\psi(t,s)\sin(2\pi s)\exp\left((N+\frac{1}{2})V_{N}\left(p,t,s;m,-n-2\right)\right)dtds\] \[=(-1)^{n+1}\int_{D^{\prime}_{0}}\psi(t,s)\sin(2\pi s)\exp\left((N+\frac{1}{2})V_{N}\left(p,t,1-s;m,n\right)\right)dtds\] \[=(-1)^{n}\int_{D^{\prime}_{0}}\psi(t,\tilde{s})\sin(2\pi\tilde{s})\exp\left((N+\frac{1}{2})V_{N}\left(p,t,\tilde{s};m,n\right)\right)dtd\tilde{s}, \tag{4.25}\] where in the second "\(=\)" we have set \(\tilde{s}=1-s\). It follows that \[\hat{h}_{N}(m,-n-2)=(-1)^{n}\hat{h}_{N}(m,n).
\tag{4.26}\]

**Corollary 4.7**.: _We have_ \[\hat{h}_{N}(m,-1)=0, \tag{4.27}\] _and_ \[\hat{h}_{N}(m,-2)=\hat{h}_{N}(m,0). \tag{4.28}\]

**Remark 4.8**.: The case (4.27) is the big cancellation. The first instance of such a "big cancellation" phenomenon in quantum invariants was discovered by Chen-Yang [6] in the volume conjecture for the Turaev-Viro invariants. The hidden reason behind it was found by Chen-Murakami and formulated as a precise statement on the symmetric property of the asymptotics of the quantum \(6j\)-symbol at the level of the Poisson summation formula, which is Conjecture 3 in [5]. To the best of our knowledge, this is the first time that such a big cancellation at the level of the Poisson summation formula is proved in the case of the colored Jones polynomial.

## 5. Asymptotic expansions

The goal of this section is to estimate each Fourier coefficient \(\hat{h}_{N}(m,n)\) appearing in Proposition 4.4. In Section 5.1, we establish some results which will be used in the later subsections. In Section 5.2, we estimate the Fourier coefficients \(\hat{h}_{N}(m,n)\) that can be neglected. We remark that this subsection is essentially equivalent to the verification of the assumptions of the Poisson summation formula carried out in Ohtsuki's original papers such as [31, 33]. The final result of this subsection is formula (5.69). In Sections 5.3 and 5.4, we estimate the remaining Fourier coefficients and find that only two terms contribute to the final form of the asymptotic expansion. Finally, we finish the proof of Theorem 1.2 in Section 5.5.

### Some preparations

The aim of this subsection is to make some preparations that will be used in the later subsections. We consider the following potential function for the twist knot \(\mathcal{K}_{p}\) \[V(p,t,s;m,n)=\pi\sqrt{-1}\left((2p+1)s^{2}-(2p+3+2n)s-(2m+2)t\right)\] \[+\frac{1}{2\pi\sqrt{-1}}\left(\text{Li}_{2}(e^{2\pi\sqrt{-1}(t+s)})+\text{Li}_{2}(e^{2\pi\sqrt{-1}(t-s)})-3\text{Li}_{2}(e^{2\pi\sqrt{-1}t})+\frac{\pi^{2}}{6}\right). \tag{5.1}\] We define the function \[f(p,t,X,s,Y;m,n)=ReV(p,t+X\sqrt{-1},s+Y\sqrt{-1};m,n), \tag{5.2}\] which will also be denoted by \(f(X,Y;m,n)\) for brevity in the following. We have \[\frac{\partial f}{\partial X} =Re\left(\sqrt{-1}\frac{\partial}{\partial t}V(p,t+X\sqrt{-1},s+Y\sqrt{-1};m,n)\right)\] \[=-Im(-(2m+2)\pi\sqrt{-1}+3\log(1-x)-\log(1-xy)-\log(1-x/y))\] \[=-3arg(1-x)+arg(1-xy)+arg(1-x/y)+(2m+2)\pi, \tag{5.3}\] and \[\frac{\partial f}{\partial Y} =Re\left(\sqrt{-1}\frac{\partial}{\partial s}V(p,t+X\sqrt{-1},s+Y\sqrt{-1};m,n)\right)\] \[=-Im(-(2p+3+2n)\pi\sqrt{-1}+(4p+2)\pi\sqrt{-1}s\] \[-\log(1-xy)+\log(1-x/y))\] \[=arg(1-xy)-arg(1-x/y)+(2p+3+2n)\pi-(4p+2)\pi s, \tag{5.4}\] where we put \(x=e^{2\pi\sqrt{-1}(t+X\sqrt{-1})}\) and \(y=e^{2\pi\sqrt{-1}(s+\sqrt{-1}Y)}\). Since \(\frac{dx}{dX}=-2\pi x\), we compute \[\frac{\partial^{2}f}{\partial X^{2}} =-Im\frac{\partial}{\partial X}\left(3\log(1-x)-\log(1-xy)-\log(1-x/y)\right)\] \[=-Im\left(\left(-\frac{3}{1-x}+\frac{y}{1-xy}+\frac{1/y}{1-x/y}\right)\frac{\partial x}{\partial X}\right)\] \[=2\pi Im\left(-\frac{3x}{1-x}+\frac{xy}{1-xy}+\frac{x/y}{1-x/y}\right)\] \[=2\pi Im\left(-\frac{3}{1-x}+\frac{1}{1-xy}+\frac{1}{1-x/y}\right). \tag{5.5}\] Furthermore, we have \[\frac{\partial^{2}f}{\partial X\partial Y}=2\pi Im\left(\frac{1}{1-xy}-\frac{1}{1-x/y}\right) \tag{5.6}\] and \[\frac{\partial^{2}f}{\partial Y^{2}}=2\pi Im\left(\frac{1}{1-xy}+\frac{1}{1-x/y}\right).
\tag{5.7}\] Therefore, the Hessian matrix of \(f\) is given by \[2\pi\begin{pmatrix}3a+b+c&b-c\\ b-c&b+c\end{pmatrix} \tag{5.8}\] where we put \[a=-Im\frac{1}{1-x},\ b=Im\frac{1}{1-xy},\ c=Im\frac{1}{1-x/y}. \tag{5.9}\] More precisely, by direct computations, we obtain \[3a =-\frac{3\sin 2\pi t}{e^{2\pi X}+e^{-2\pi X}-2\cos(2\pi t)},\] \[b =\frac{\sin(2\pi(t+s))}{e^{2\pi(X+Y)}+e^{-2\pi(X+Y)}-2\cos(2\pi(t+s))},\] \[c =\frac{\sin(2\pi(t-s))}{e^{2\pi(X-Y)}+e^{-2\pi(X-Y)}-2\cos(2\pi(t-s))}. \tag{5.10}\] So if \(\frac{1}{2}<t<1\), \(1<t+s<\frac{3}{2}\) and \(0<t-s<\frac{1}{2}\), we have that \[a>0,\ b>0,\ c>0, \tag{5.11}\] which implies that the Hessian matrix of \(f\) with respect to \(X,Y\) is positive definite, i.e. we obtain

**Lemma 5.1**.: _On the region \(D_{H}=\{(t,s)\in\mathbb{R}^{2}|\frac{1}{2}<t<1,1<t+s<\frac{3}{2},0<t-s<\frac{1}{2}\}\), the Hessian matrix of \(f\) is positive definite._

**Lemma 5.2**.: _For any \(L>0\), in the region_ \[\{(t,s)\in\mathbb{C}^{2}|(Re(t),Re(s))\in D^{\prime}_{0},|\text{Im }t|<L,|\text{Im }s|<L\} \tag{5.12}\] _we have_ \[V_{N}(p,t,s;m,n) =V(p,t,s;m,n)-\frac{1}{2N+1}\left(\log(1-e^{2\pi\sqrt{-1}(t+s)})\right.\] \[\left.+\log(1-e^{2\pi\sqrt{-1}(t-s)})-4\pi\sqrt{-1}t\right)+\frac{w_{N}(t,s)}{(2N+1)^{2}}, \tag{5.13}\] _with \(\left|w_{N}(t,s)\right|\) bounded from above by a constant independent of \(N\)._

Proof.: By using Taylor expansion, together with Lemma 2.6, we have \[\varphi_{N}\left(t+s-1+\frac{1}{2N+1}\right)\] \[=\varphi_{N}(t+s-1)+\varphi_{N}^{\prime}(t+s-1)\frac{1}{2N+1}\] \[+\frac{\varphi_{N}^{\prime\prime}(t+s-1)}{2}\frac{1}{(2N+1)^{2}}+O\left(\frac{1}{(2N+1)^{3}}\right)\] \[=\frac{N+\frac{1}{2}}{2\pi\sqrt{-1}}\mathrm{Li}_{2}(e^{2\pi\sqrt{-1}(t+s)})-\frac{\pi\sqrt{-1}}{6(2N+1)}\frac{e^{2\pi\sqrt{-1}(t+s)}}{1-e^{2\pi\sqrt{-1}(t+s)}}\] \[-\frac{1}{2}\log(1-e^{2\pi\sqrt{-1}(t+s)})+\frac{\pi\sqrt{-1}}{2(2N+1)}\frac{e^{2\pi\sqrt{-1}(t+s)}}{1-e^{2\pi\sqrt{-1}(t+s)}}+O\left(\frac{1}{(2N+1)^{2}}\right). \tag{5.14}\] Expanding \(\varphi_{N}\left(t-s+\frac{1}{2N+1}\right)\), \(\varphi_{N}\left(t-\frac{1}{2N+1}\right)\) and \(\varphi_{N}\left(t+\frac{1}{2N+1}\right)\) similarly, we obtain \[V_{N}(p,t,s;m,n)=V(p,t,s;m,n)\] \[-\frac{1}{2N+1}\left(\log(1-e^{2\pi\sqrt{-1}(t+s)})+\log(1-e^{2\pi\sqrt{-1}(t-s)})-4\pi\sqrt{-1}t\right)\] \[-\frac{\pi\sqrt{-1}}{3(2N+1)^{2}}\left(-2\frac{e^{2\pi\sqrt{-1}(t+s)}}{1-e^{2\pi\sqrt{-1}(t+s)}}-2\frac{e^{2\pi\sqrt{-1}(t-s)}}{1-e^{2\pi\sqrt{-1}(t-s)}}+3\frac{e^{2\pi\sqrt{-1}t}}{1-e^{2\pi\sqrt{-1}t}}+6p+7\right)\] \[+O\left(\frac{1}{(2N+1)^{3}}\right). \tag{5.15}\] Finally, we let \[w_{N}(t,s)\] \[=-\frac{\pi\sqrt{-1}}{3}\left(-2\frac{e^{2\pi\sqrt{-1}(t+s)}}{1-e^{2\pi\sqrt{-1}(t+s)}}-2\frac{e^{2\pi\sqrt{-1}(t-s)}}{1-e^{2\pi\sqrt{-1}(t-s)}}+3\frac{e^{2\pi\sqrt{-1}t}}{1-e^{2\pi\sqrt{-1}t}}+6p+7\right)\] \[+O\left(\frac{1}{2N+1}\right), \tag{5.16}\] and we finish the proof of Lemma 5.2.

We consider the critical point of \(V(p,t,s)\), which is given by the solution of the following equations \[\frac{\partial V(p,t,s)}{\partial t} =-2\pi\sqrt{-1}+3\log(1-e^{2\pi\sqrt{-1}t})\] \[-\log(1-e^{2\pi\sqrt{-1}(t+s)})-\log(1-e^{2\pi\sqrt{-1}(t-s)})=0, \tag{5.17}\] \[\frac{\partial V(p,t,s)}{\partial s} =(4p+2)\pi\sqrt{-1}s-(2p+3)\pi\sqrt{-1}\] \[-\log(1-e^{2\pi\sqrt{-1}(t+s)})+\log(1-e^{2\pi\sqrt{-1}(t-s)})=0.
\tag{5.18}\]

**Proposition 5.3**.: _The critical point equations (5.17), (5.18) have a unique solution \((t_{0},s_{0})=(t_{0R}+X_{0}\sqrt{-1},s_{0R}+Y_{0}\sqrt{-1})\) with \((t_{0R},s_{0R})\) lying in the region \(D^{\prime}_{0}\)._

Proof.: See Appendix 6.3 for a proof.

Now we set \(\zeta(p)\) to be the critical value of the potential function \(V(p,t,s)\), i.e. \[\zeta(p)=V(p,t_{0},s_{0}), \tag{5.19}\] and set \[\zeta_{\mathbb{R}}(p)=Re\zeta(p)=ReV(p,t_{0},s_{0}). \tag{5.20}\]

**Lemma 5.4**.: _For \(p\geq 2\), we have the following formula_ \[2\pi\zeta(p)=vol(S^{3}\setminus\mathcal{K}_{p})+\sqrt{-1}cs(S^{3}\setminus\mathcal{K}_{p})-(p+5)\pi^{2}\sqrt{-1}. \tag{5.21}\]

Proof.: See Appendix 6.4 for a proof.

**Lemma 5.5**.: _When \(p\geq 6\), we have the following estimation for \(\zeta_{\mathbb{R}}(p)\)_ \[2\pi\zeta_{\mathbb{R}}(p)\geq v_{8}-\frac{49\pi^{2}}{64}\frac{1}{p^{2}}, \tag{5.22}\] _where \(v_{8}\) denotes the volume of the ideal regular octahedron, i.e. \(v_{8}\approx 3.66386\)._

Proof.: See Appendix 6.5 for a proof.

We compute the Hessian matrix of the potential function \(V(p,t,s)\) as follows. By formulas (5.17) and (5.18), we obtain \[\frac{\partial^{2}V}{\partial t^{2}}=6\pi\sqrt{-1}\frac{-e^{2\pi\sqrt{-1}t}}{1-e^{2\pi\sqrt{-1}t}}+2\pi\sqrt{-1}\frac{e^{2\pi\sqrt{-1}(t+s)}}{1-e^{2\pi\sqrt{-1}(t+s)}}+2\pi\sqrt{-1}\frac{e^{2\pi\sqrt{-1}(t-s)}}{1-e^{2\pi\sqrt{-1}(t-s)}}, \tag{5.23}\] \[\frac{\partial^{2}V}{\partial s^{2}}=(4p+2)\pi\sqrt{-1}+2\pi\sqrt{-1}\frac{e^{2\pi\sqrt{-1}(t+s)}}{1-e^{2\pi\sqrt{-1}(t+s)}}+2\pi\sqrt{-1}\frac{e^{2\pi\sqrt{-1}(t-s)}}{1-e^{2\pi\sqrt{-1}(t-s)}}, \tag{5.24}\] \[\frac{\partial^{2}V}{\partial t\partial s}=2\pi\sqrt{-1}\frac{e^{2\pi\sqrt{-1}(t+s)}}{1-e^{2\pi\sqrt{-1}(t+s)}}-2\pi\sqrt{-1}\frac{e^{2\pi\sqrt{-1}(t-s)}}{1-e^{2\pi\sqrt{-1}(t-s)}}. \tag{5.25}\] Let \(x=e^{2\pi\sqrt{-1}t}\) and \(y=e^{2\pi\sqrt{-1}s}\), then we obtain \[\det(Hess(V))\] \[=\frac{\partial^{2}V}{\partial t^{2}}\frac{\partial^{2}V}{\partial s^{2}}-\left(\frac{\partial^{2}V}{\partial t\partial s}\right)^{2}\] \[=(2\pi\sqrt{-1})^{2}\left(\frac{-3(2p+1)}{\frac{1}{x}-1}+\frac{2p+1}{\frac{1}{xy}-1}+\frac{2p+1}{\frac{1}{x/y}-1}-\frac{3}{(\frac{1}{x}-1)(\frac{1}{xy}-1)}\right.\] \[\left.-\frac{3}{(\frac{1}{x}-1)(\frac{1}{x/y}-1)}+\frac{4}{(\frac{1}{xy}-1)(\frac{1}{x/y}-1)}\right). \tag{5.26}\] For convenience, we let \[H(p,x,y) =\frac{-3(2p+1)}{\frac{1}{x}-1}+\frac{2p+1}{\frac{1}{xy}-1}+\frac{2p+1}{\frac{1}{x/y}-1}-\frac{3}{(\frac{1}{x}-1)(\frac{1}{xy}-1)}\] \[-\frac{3}{(\frac{1}{x}-1)(\frac{1}{x/y}-1)}+\frac{4}{(\frac{1}{xy}-1)(\frac{1}{x/y}-1)}. \tag{5.27}\]

### Fourier coefficients \(\hat{h}_{N}(m,n)\) that can be neglected

Motivated by Lemma 2.3, we introduce the following function for \((t,s)\in D\). \[F(X,Y;m,n)=\begin{cases}0&\text{(if $X+Y\geq 0$)}\\ \left((t+s)-\frac{3}{2}\right)(X+Y)&\text{(if $X+Y<0$)}\end{cases}\] \[+\begin{cases}0&\text{(if $X-Y\geq 0$)}\\ \left((t-s)-\frac{1}{2}\right)(X-Y)&\text{(if $X-Y<0$)}\end{cases}\] \[+\begin{cases}0&\text{(if $X\geq 0$)}\\ \left(\frac{3}{2}-3t\right)X&\text{(if $X<0$)}\end{cases}+\left(p+\frac{3}{2}+n-(2p+1)s\right)Y+(m+1)X \tag{5.28}\] where we have used \(t+s-\frac{3}{2}\) instead of \(t+s-\frac{1}{2}\) in the first summation since in our situation \(1<t+s<2\). Since \(F(X,Y;m,n)\) is a piecewise linear function, we subdivide the plane \(\{(X,Y)\in\mathbb{R}^{2}\}\) into eight regions to discuss its asymptotic behavior.
We study the conditions under which \(F(X,Y;m,n)\) has the following property: \[F(X,Y;m,n)\to\infty\text{ as }X^{2}+Y^{2}\to+\infty. \tag{5.29}\]

(I) \(X\geq Y\geq 0\), then \[F(X,Y;m,n)=\left(\left(p+\frac{3}{2}+n\right)-(2p+1)s\right)Y+(m+1)X. \tag{5.30}\] If \(m\leq-2\), then \(F(X,Y;m,n)\to-\infty\) as \(X\to+\infty\). If \(m=-1\) and \(s<\frac{p+\frac{3}{2}+n}{2p+1}\), then the property (5.29) holds. If \(m\geq 0\) and \(s<\frac{p+\frac{3}{2}+n+(m+1)}{2p+1}\), then the property (5.29) holds.

(II) \(Y\geq X\geq 0\), then \[F(X,Y;m,n)=\left(t-s+\frac{1}{2}+m\right)X+(p+2+n-2ps-t)Y. \tag{5.31}\] When \(t+2ps<p+2+n\), we have \[F(X,Y;m,n) =\left(t-s+\frac{1}{2}+m\right)X+(p+2+n-2ps-t)Y\] \[\geq\left(p+\frac{5}{2}+m+n-(2p+1)s\right)X. \tag{5.32}\] Hence, the property (5.29) holds in this case if \(\left(p+\frac{5}{2}+m+n\right)-(2p+1)s>0\).

(III) \(X+Y\geq 0\) and \(X\leq 0\), then \[F(X,Y;m,n)=\left(2-2t-s+m\right)X+(p+2+n-2ps-t)Y. \tag{5.33}\] When \(t+2ps<p+2+n\), then \[F(X,Y;m,n) \geq\left(2-2t-s+m\right)X+(p+2+n-2ps-t)(-X)\] \[=(p+n-m+t-(2p-1)s)(-X), \tag{5.34}\] and if \(p+n-m+t-(2p-1)s>0\), we obtain that the property (5.29) holds in this case.

(IV) \(X+Y\leq 0\) and \(Y\geq 0\), then \[F(X,Y;m,n)=\left(\frac{1}{2}-t+m\right)X+\left(p+\frac{1}{2}+n-(2p-1)s\right)Y. \tag{5.35}\] Since \(\frac{1}{2}<t<1\), if \(m\geq 1\), then \(\frac{1}{2}-t+m>\frac{1}{2}\); by \(-X\geq Y\geq 0\), we obtain \(F(X,Y;m,n)\to-\infty\) as \(X\to-\infty\). If \(m\leq 0\), then \[F(X,Y;m,n) =\left(-\frac{1}{2}+t-m\right)(-X)+\left(p+\frac{1}{2}+n-(2p-1)s\right)Y\] \[\geq(p+n-m+t-(2p-1)s)Y, \tag{5.36}\] so when \(p+n-m+t-(2p-1)s>0\), the property (5.29) holds in this case.

(V) \(X-Y\leq 0\) and \(Y\leq 0\), then \[F(X,Y;m,n)=\left(\frac{1}{2}-t+m\right)X+\left(p+\frac{1}{2}+n-(2p-1)s\right)Y. \tag{5.37}\] Since \(\frac{1}{2}<t<1\), if \(m\geq 1\), then \(\frac{1}{2}-t+m>\frac{1}{2}\); by \(-X\geq-Y\geq 0\), we obtain \(F(X,Y;m,n)\to-\infty\) as \(X\to-\infty\). If \(m\leq 0\), then \[F(X,Y;m,n) =\left(-\frac{1}{2}+t-m\right)(-X)+\left(p+\frac{1}{2}+n-(2p-1)s\right)Y\] \[\geq\left(-\frac{1}{2}+t-m\right)(-Y)+\left(p+\frac{1}{2}+n-(2p-1)s\right)Y\] \[=(-p-1-m-n+t+(2p-1)s)(-Y), \tag{5.38}\] and if \(t+(2p-1)s-p-1-m-n>0\), it follows that the property (5.29) holds in this case.

(VI) \(X-Y\geq 0\) and \(X\leq 0\), then \[F(X,Y;m,n)=\left(1+m+s-2t\right)X+\left(p+n+t-2ps\right)Y. \tag{5.39}\] When \(p+n+t-2ps<0\), since \(-Y\geq-X\geq 0\), \[F(X,Y;m,n) \geq\left(1+m+s-2t\right)X-\left(p+n-2ps+t\right)(-X)\] \[=(-p-1-m-n+t+(2p-1)s)(-X), \tag{5.40}\] and if \(t+(2p-1)s-p-1-m-n>0\), it follows that the property (5.29) holds in this case.

(VII) \(X+Y\leq 0\) and \(X\geq 0\), then \[F(X,Y;m,n)=\left(t+s-\frac{1}{2}+m\right)X+\left(p+n-2ps+t\right)Y. \tag{5.41}\] When \(m=0\), then \(t+s-\frac{1}{2}>0\): if \(p+n-2ps+t\leq 0\), by \(Y\leq 0\) it follows that the property (5.29) holds in this case; if \(p+n-2ps+t>0\), by \(-Y\geq X\geq 0\) it follows that \(F(X,Y;m,n)\to-\infty\) as \(Y\to-\infty\). When \(m=-1\), \(F(X,Y;-1,n)=\left(t+s-\frac{3}{2}\right)X+\left(p+n-2ps+t\right)Y\); if \(p+n-2ps+t<0\), then \[F(X,Y;-1,n) \geq\left(t+s-\frac{3}{2}\right)X-\left(p+n-2ps+t\right)X\] \[=\left((2p+1)s-p-\frac{3}{2}-n\right)X, \tag{5.42}\] and it follows that if \((2p+1)s-p-\frac{3}{2}-n>0\), then the property (5.29) holds in this case.
(VIII) \(X+Y\geq 0\) and \(Y\leq 0\), then \[F(X,Y;m,n)=(m+1)X+\left(p+\frac{3}{2}+n-(2p+1)s\right)Y. \tag{5.43}\] When \(m\leq-2\), then \(F(X,Y;m,n)\to-\infty\) as \(X\to+\infty\). When \(m=-1\), if \(p+\frac{3}{2}+n-(2p+1)s>0\), then \(F(X,Y;m,n)\to-\infty\) as \(Y\to-\infty\); if \(p+\frac{3}{2}+n-(2p+1)s<0\), then \(F(X,Y;m,n)\to+\infty\) as \(Y\to-\infty\). When \(m\geq 0\), by \(X\geq-Y\), \[F(X,Y;m,n) \geq(m+1)(-Y)+\left(p+\frac{3}{2}+n-(2p+1)s\right)Y\] \[=\left(m-p-\frac{1}{2}-n+(2p+1)s\right)(-Y), \tag{5.44}\] and if \(m-p-\frac{1}{2}-n+(2p+1)s>0\), we obtain that the property (5.29) holds in this case.

We obtain

**Lemma 5.6**.: _For any \((t,s)\in D^{\prime}_{0}\),_

_(i) when \(m\leq-2\), for a fixed \(Y\in\mathbb{R}\), \(f(X,Y;m,n)\) is a decreasing function with respect to \(X\), and_ \[f(X,Y;m,n)\to-\infty\text{ as }X\to+\infty. \tag{5.45}\]

_(ii) when \(m\geq 1\), for a fixed \(Y\in\mathbb{R}\), \(f(X,Y;m,n)\) is an increasing function with respect to \(X\), and_ \[f(X,Y;m,n)\to-\infty\text{ as }X\to-\infty. \tag{5.46}\]

Proof.: From the above discussion, more precisely by case (I), we obtain that for \(m\leq-2\), \(F(X,Y;m,n)\to-\infty\) as \(X\to+\infty\). By Lemma 2.3, we have \[2\pi F(X,Y;m,n)-C<f(X,Y;m,n)<2\pi F(X,Y;m,n)+C \tag{5.47}\] for some constant \(C\). Hence, we obtain (i). Similarly, for \(m\geq 1\), by case (IV), we obtain (ii).

**Proposition 5.7**.: _When \(m\leq-2\) or \(m\geq 1\), then for any \(n\in\mathbb{Z}\), we have_ \[\hat{h}_{N}(m,n) =(-1)^{p+m+n}e^{\frac{\pi\sqrt{-1}}{4}}\frac{\left(N+\frac{1}{2}\right)^{\frac{3}{2}}}{\sin\frac{\pi}{2N+1}}\int_{D^{\prime}_{0}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{2})V_{N}(p,t,s;m,n)}dtds\] \[=O\left(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}\right). \tag{5.48}\]

Proof.: We note that \(V_{N}(p,t,s;m,n)\) uniformly converges to \(V(p,t,s;m,n)\); we show the existence of a homotopy \(D^{\prime}_{\delta}\) \((0\leq\delta\leq\delta_{0})\) between \(D^{\prime}_{0}\) and \(D^{\prime}_{\delta_{0}}\) such that \[D^{\prime}_{\delta_{0}} \subset\{(t,s)\in\mathbb{C}^{2}|ReV(p,t,s;m,n)<\zeta_{\mathbb{R}}(p)-\epsilon\}, \tag{5.49}\] \[\partial D^{\prime}_{\delta} \subset\{(t,s)\in\mathbb{C}^{2}|ReV(p,t,s;m,n)<\zeta_{\mathbb{R}}(p)-\epsilon\}. \tag{5.50}\] For each fixed \((t,s)\in D^{\prime}_{0}\), we move \((X,Y)\) from \((0,0)\) along the flow \((-\frac{\partial f}{\partial X},0)\). Then by Lemma 5.6, the value of \(ReV(p,t+X\sqrt{-1},s+Y\sqrt{-1};m,n)\) monotonically decreases and it goes to \(-\infty\). As for (5.50), since \(\partial D^{\prime}_{0}\subset\{(t,s)\in\mathbb{C}^{2}|ReV(p,t,s)<\zeta_{\mathbb{R}}(p)-\epsilon\}\) and the value of \(ReV\) monotonically decreases, (5.50) holds. As for (5.49), since the value of \(ReV\) uniformly goes to \(-\infty\) by Lemma 5.6, (5.49) holds for sufficiently large \(\delta_{0}\). Therefore, such a required homotopy exists.

**Lemma 5.8**.: _For any \((t,s)\in D^{\prime}_{0}\), when \(m=-1\): if \(s>\frac{p+\frac{3}{2}+n}{2p+1}\), we have_ \[f(X,Y;-1,n)\to-\infty\text{ as }Y\to+\infty; \tag{5.51}\] _if \(s<\frac{p+\frac{3}{2}+n}{2p+1}\), we have_ \[f(X,Y;-1,n)\to-\infty\text{ as }Y\to-\infty. \tag{5.52}\]

Proof.: For \(m=-1\), if \(s>\frac{p+\frac{3}{2}+n}{2p+1}\), by case (I), we obtain \[f(X,Y;-1,n)\to-\infty\text{ as }Y\to+\infty. \tag{5.53}\] If \(s<\frac{p+\frac{3}{2}+n}{2p+1}\), by case (VIII), we obtain \[f(X,Y;-1,n)\to-\infty\text{ as }Y\to-\infty.
\tag{5.54}\]

**Proposition 5.9**.: _When \(m=-1\), then for any \(n\geq p-1\) or \(n\leq-(p+1)\), we have_ \[\hat{h}_{N}(m,n) =(-1)^{p+m+n}e^{\frac{\pi\sqrt{-1}}{4}}\frac{\left(N+\frac{1}{2}\right)^{\frac{3}{2}}}{\sin\frac{\pi}{2N+1}}\int_{D_{0}^{\prime}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{2})V_{N}(p,t,s;m,n)}dtds\] \[=O\left(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}\right). \tag{5.55}\]

Proof.: Since \((t,s)\in D_{0}^{\prime}\), we have \(0.2<s<0.8\). For \(n\geq p-1\), we have that \[\frac{p+\frac{3}{2}+n}{2p+1}>\frac{2p+\frac{1}{2}}{2p+1}>s, \tag{5.56}\] since \(p\geq 6\). By Lemma 5.8, we obtain \[f(X,Y;-1,n)\to-\infty\text{ as }Y\to-\infty. \tag{5.57}\] For \(n\leq-(p+1)\), we have that \(\frac{p+\frac{3}{2}+n}{2p+1}<\frac{\frac{1}{2}}{2p+1}<s\) since \(p\geq 6\). By Lemma 5.8, we obtain \[f(X,Y;-1,n)\to-\infty\text{ as }Y\to+\infty. \tag{5.58}\] Then, for each fixed \((t,s)\in D_{0}^{\prime}\), we move \((X,Y)\) from \((0,0)\) along the flow \((0,-\frac{\partial f}{\partial Y})\). The value of \(ReV(p,t+X\sqrt{-1},s+Y\sqrt{-1};m,n)\) monotonically decreases and it goes to \(-\infty\). So we can construct the homotopy similarly to the proof of Proposition 5.7 and finish the proof of Proposition 5.9.

Now let us consider the Fourier coefficients with \(m=0\). After the discussion of the asymptotic behavior of the function \(F(X,Y;-1,n)\), we introduce the following region for \(n\in\mathbb{Z}\), \[U_{n}=\left\{(t,s)\in D_{0}^{\prime}|\frac{p+n+1-t}{2p-1}<s<\frac{p+n+t}{2p-1},\frac{p+n+t}{2p}<s<\frac{p+2+n-t}{2p}\right\}. \tag{5.59}\] We have

**Lemma 5.10**.: _For any \((t,s)\in D_{0}^{\prime}\),_

_(i) when \((t,s)\in U_{n}\), we have_ \[f(X,Y;n)\to\infty\text{ as }X^{2}+Y^{2}\to+\infty. \tag{5.60}\]

_(ii) when \((t,s)\notin U_{n}\),_ \[f(X,Y;n)\to-\infty\text{ in some directions of }X^{2}+Y^{2}\to+\infty. \tag{5.61}\]

Proof.: See Appendix 6.2 for a proof.

**Lemma 5.11**.: _(i) The top point of \(U_{n}\) is given by \((\frac{3p-2-n}{4p-1},\frac{1}{2}+\frac{5+4n}{2(4p-1)})\), and the bottom point of \(U_{n}\) is given by \((\frac{3p+n}{4p-1},\frac{1}{2}+\frac{3+4n}{2(4p-1)})\)._

_(ii) For \(p\geq 6\), \(U_{0}\subset D_{H}\)._

Proof.: Solving the linear equations \[\begin{cases}s=\frac{p+n+t}{2p-1}\\ s=\frac{p+2+n-t}{2p}\end{cases} \tag{5.62}\] and \[\begin{cases}s=\frac{p+n+1-t}{2p-1}\\ s=\frac{p+n+t}{2p}\end{cases} \tag{5.63}\] respectively, we obtain (i). Solving the equations \[\begin{cases}s=\frac{p+2-t}{2p}\\ \qquad t+s=\frac{3}{2}\end{cases} \tag{5.64}\] we obtain \(t=\frac{2p-2}{2p-1}=1-\frac{1}{2p-1}>0.909\) since \(p\geq 6\). Hence \(U_{0}\subset D_{H}\).

**Proposition 5.12**.: _When \(m=0\), then for any \(n\geq p-1\) or \(n\leq-(p+1)\), we have_ \[\hat{h}_{N}(m,n) =(-1)^{p+m+n}e^{\frac{\pi\sqrt{-1}}{4}}\frac{\left(N+\frac{1}{2}\right)^{\frac{3}{2}}}{\sin\frac{\pi}{2N+1}}\int_{D_{0}^{\prime}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{2})V_{N}(p,t,s;m,n)}dtds\] \[=O\left(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}\right). \tag{5.65}\]

Proof.: Since \((t,s)\in D_{0}^{\prime}\), we have \(0<s<1\). For \(n\geq p-1\), we have \[\frac{1}{2}+\frac{3+4n}{2(4p-1)}\geq 1>s, \tag{5.66}\] and for \(n\leq-(p+1)\), we have \[\frac{1}{2}+\frac{5+4n}{2(4p-1)}\leq 0<s. \tag{5.67}\] So by Lemma 5.11, we know that \(U_{n}=\emptyset\). Therefore, by Lemma 5.10, for \((t,s)\in D_{0}^{\prime}\), we have \[f(X,Y;n)\to-\infty\text{ in some directions of }X^{2}+Y^{2}\to+\infty.
\tag{5.68}\] Now, arguing as in the proof of Proposition 5.7, we can finish the proof of Proposition 5.12.

**Remark 5.13**.: Proposition 5.7, Proposition 5.9 and Proposition 5.12 show that the Fourier coefficients \(\hat{h}_{N}(m,n)\) with \(m,n\) satisfying the conditions in those Propositions can be neglected when we study the asymptotic expansion. We can also prove Proposition 5.7 and Proposition 5.9 by using the method shown in Ohtsuki's original papers [31, 33] (cf. the section verifying the assumption of the Poisson summation formula for \(V\)).

Hence, by formula (4.17), we obtain \[J_{N}(\mathcal{K}_{p};\xi_{N})=\sum_{n=-p}^{p-2}\hat{h}_{N}(-1,n)+\sum_{n=-p}^{p-2}\hat{h}_{N}(0,n)+O\left(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}\right). \tag{5.69}\] So in the following, we only need to investigate the Fourier coefficients \(\hat{h}_{N}(-1,n)\) and \(\hat{h}_{N}(0,n)\) with \(-p\leq n\leq p-2\). Note that, in Ohtsuki's work [31, 32, 34, 33], after verifying the assumption of the Poisson summation formula, only one Fourier coefficient (or two Fourier coefficients in [33]) remains to be considered.

### Fourier coefficients \(\hat{h}_{N}(-1,n)\) with \(-p\leq n\leq p-2\)

For a fixed constant \(c\in\mathbb{R}\), we introduce the subset \[D^{\prime}_{0}(c)=\{(t,s)\in D^{\prime}_{0}|s=c\}. \tag{5.70}\] We have

**Proposition 5.14**.: _For any \(c\in[0.2,0.8]\) and \(n\in\mathbb{Z}\), there is a constant \(C\) independent of \(c\), such that_ \[|\int_{D^{\prime}_{0}(c)}e^{(N+\frac{1}{2})V_{N}(p,t,c;-1,n)}dt|<Ce^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}. \tag{5.71}\]

Proof.: See Appendix 6.7 for a proof.

**Proposition 5.15**.: _When \(m=-1\), for \(-p\leq n\leq p-2\), we have_ \[\int_{D^{\prime}_{0}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{2})V_{N}(p,t,s;-1,n)}dtds=O\left(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}\right), \tag{5.72}\] _for some sufficiently small \(\epsilon>0\)._

Proof.: We note that \(V_{N}(p,t,s;-1,n)\) uniformly converges to \(V(p,t,s;-1,n)\) on \(D^{\prime}_{0}\). So we only need to estimate the integral of \(V(p,t,s;-1,n)\) as follows: \[|\int_{D^{\prime}_{0}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{2})V(p,t,s;-1,n)}dtds|\] \[=|\int_{0.2}^{0.8}\int_{D^{\prime}_{0}(c)}\psi(t,c)\sin(2\pi c)e^{(N+\frac{1}{2})V(p,t,c;-1,n)}dtdc|\] \[\leq\int_{0.2}^{0.8}|\int_{D^{\prime}_{0}(c)}\psi(t,c)\sin(2\pi c)e^{(N+\frac{1}{2})V(p,t,c;-1,n)}dt|dc. \tag{5.73}\] By Proposition 5.14, we obtain \[|\int_{D^{\prime}_{0}(c)}\psi(t,c)\sin(2\pi c)e^{(N+\frac{1}{2})V(p,t,c;-1,n)}dt|<Ce^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)} \tag{5.74}\] where \(C\) is a constant independent of \(c\) and \(n\). Then formula (5.73) implies Proposition 5.15. We remark that the above proof holds for any \(n\in\mathbb{Z}\).

### Fourier coefficients \(\hat{h}_{N}(0,n)\) with \(0\leq n\leq p-2\)

First, we have

**Proposition 5.16**.: _For \(n\in\mathbb{Z}\) and for \(c_{upper}(p)\leq c\leq 1\) or \(0\leq c\leq 1-c_{upper}(p)\), there exists a constant \(C\) independent of \(c\), such that_ \[|\int_{D^{\prime}_{0}(c)}e^{(N+\frac{1}{2})V_{N}(p,t,c;0,n)}dt|<Ce^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}. \tag{5.75}\]

Proof.: See Appendix 6.6 for a proof.

We introduce the quantity \[c_{upper}(p)=\frac{1}{\pi}(v_{8}-2\pi\zeta_{\mathbb{R}}(p))^{\frac{1}{2}}+\frac{1}{2}, \tag{5.76}\] and set \[c_{0}(p)=\frac{7}{8p}+\frac{1}{2}. \tag{5.77}\] By Lemma 5.5, we have \(2\pi\zeta_{\mathbb{R}}(p)>v_{8}-\frac{49\pi^{2}}{64}\frac{1}{p^{2}}\).
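As an illustration, the bound of Lemma 5.5 can also be checked numerically by solving the critical point equations (5.17)-(5.18) directly. The following sketch (our own code, not part of the proof) uses a hand-rolled Newton iteration with a numerical Jacobian; the starting values are chosen inside \(U_{0}\) and may need adjusting for other \(p\). For \(p=6\) the output should match the value \(2\pi\zeta_{\mathbb{R}}(6)=3.5889\) computed in the proof of Proposition 4.3.

```python
# A numerical sketch: solve (5.17)-(5.18), evaluate zeta(p)=V(p,t_0,s_0)
# via (3.18), and compare 2*pi*zeta_R(p) with the Lemma 5.5 lower bound.
import mpmath as mp

mp.mp.dps = 30
I = mp.mpc(0, 1)
e = lambda z: mp.exp(2 * mp.pi * I * z)

def dVdt(p, t, s):  # left-hand side of (5.17)
    return (-2*mp.pi*I + 3*mp.log(1 - e(t))
            - mp.log(1 - e(t + s)) - mp.log(1 - e(t - s)))

def dVds(p, t, s):  # left-hand side of (5.18)
    return ((4*p + 2)*mp.pi*I*s - (2*p + 3)*mp.pi*I
            - mp.log(1 - e(t + s)) + mp.log(1 - e(t - s)))

def V(p, t, s):     # potential function (3.18)
    return (mp.pi*I*((2*p + 1)*s**2 - (2*p + 3)*s - 2*t)
            + (mp.polylog(2, e(t + s)) + mp.polylog(2, e(t - s))
               - 3*mp.polylog(2, e(t)) + mp.pi**2/6) / (2*mp.pi*I))

p = 6
t, s = mp.mpc('0.75', '0.02'), mp.mpc('0.59', '0.01')  # guess inside U_0
for _ in range(50):  # 2x2 Newton iteration with a numerical Jacobian
    f1, f2 = dVdt(p, t, s), dVds(p, t, s)
    a11 = mp.diff(lambda z: dVdt(p, z, s), t)
    a12 = mp.diff(lambda z: dVdt(p, t, z), s)
    a21 = mp.diff(lambda z: dVds(p, z, s), t)
    a22 = mp.diff(lambda z: dVds(p, t, z), s)
    det = a11*a22 - a12*a21
    t, s = t - (a22*f1 - a12*f2)/det, s - (a11*f2 - a21*f1)/det

print(2*mp.pi*mp.re(V(p, t, s)))          # 2*pi*zeta_R(p); ~3.5889 for p=6
print(3.66386 - 49*mp.pi**2/(64*p**2))    # Lemma 5.5 lower bound, ~3.4539
```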
Then, substituting this bound into formula (5.76) and noting that \(\frac{1}{\pi}\left(\frac{49\pi^{2}}{64}\frac{1}{p^{2}}\right)^{\frac{1}{2}}=\frac{7}{8p}\), we obtain \[c_{upper}(p)\leq c_{0}(p). \tag{5.78}\] We introduce the region \[D^{\prime\prime}_{0}=\{(t,s)\in D^{\prime}_{0}|1-c_{0}(p)\leq s\leq c_{0}(p)\}. \tag{5.79}\]

#### 5.4.1. Fourier coefficients \(\hat{h}_{N}(0,n)\) with \(1\leq n\leq p-2\)

**Lemma 5.17**.: _For \(n\geq 1\), we have_ \[U_{n}\cap D^{\prime\prime}_{0}=\emptyset. \tag{5.80}\]

Proof.: By Lemma 5.11, the bottom point of \(U_{n}\) is given by \((\frac{3p+n}{4p-1},\frac{1}{2}+\frac{3+4n}{2(4p-1)})\), and clearly \[\frac{1}{2}+\frac{3+4n}{2(4p-1)}>c_{0}(p)=\frac{1}{2}+\frac{7}{8p}, \tag{5.81}\] for \(n\geq 1\). Hence \(U_{n}\cap D^{\prime\prime}_{0}=\emptyset\) for \(n\geq 1\).

**Proposition 5.18**.: _For \(1\leq n\leq p-2\), we have_ \[\int_{D^{\prime}_{0}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{2})V_{N}(p,t,s;0,n)}dtds=O\left(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}\right), \tag{5.82}\] _for some sufficiently small \(\epsilon>0\)._

Proof.: By formula (5.78) and Proposition 5.16, we have \[\int_{D^{\prime}_{0}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{2})V_{N}(p,t,s;0,n)}dtds\] \[=\int_{D^{\prime\prime}_{0}}\sin(2\pi s)e^{(N+\frac{1}{2})V_{N}(p,t,s;0,n)}dtds+O(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}). \tag{5.83}\] So we only need to estimate the following integral \[\int_{D^{\prime\prime}_{0}}\sin(2\pi s)e^{(N+\frac{1}{2})V_{N}(p,t,s;0,n)}dtds. \tag{5.84}\] Note that \(V_{N}(p,t,s;0,n)\) uniformly converges to \(V(p,t,s;0,n)\) on \(D_{0}^{\prime\prime}\) by Lemma 2.6. We show that there is a homotopy \(D_{\delta}^{\prime\prime}\) (\(0\leq\delta\leq\delta_{0}\)) between \(D_{0}^{\prime\prime}\) and \(D_{\delta_{0}}^{\prime\prime}\) such that \[D_{\delta_{0}}^{\prime\prime}\subset\{(t,s)\in\mathbb{C}^{2}|ReV(p,t,s;0,n)<\zeta_{\mathbb{R}}(p)-\epsilon\}, \tag{5.85}\] \[\int_{\partial D_{\delta}^{\prime\prime}}\sin(2\pi s)e^{(N+\frac{1}{2})V(p,t,s;0,n)}dtds=O(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}), \tag{5.86}\] for some sufficiently small \(\epsilon>0\). In the fiber of the projection \(\mathbb{C}^{2}\to\mathbb{R}^{2}\) at \((t,s)\in D_{0}^{\prime\prime}\), we consider the flow from \((X,Y)=(0,0)\) determined by the vector field \(\left(-\frac{\partial f}{\partial X},-\frac{\partial f}{\partial Y}\right)\). We have seen that for \((t,s)\notin U_{n}\), the flow goes to infinity. Hence by Lemma 5.17, we obtain that, for \(n\geq 1\), the flow goes to infinity. Then, the value of \(ReV(p,t,s;0,n)\) monotonically decreases and goes to \(-\infty\). So we can choose \(\delta_{0}\) large enough to make sure formula (5.85) holds. As for (5.86), note that the boundary \(\partial D_{0}^{\prime\prime}\) consists of \(D_{0}^{\prime}(c_{0}(p))\), \(D_{0}^{\prime}(1-c_{0}(p))\) and the partial boundaries of \(D_{0}^{\prime}\), denoted by \(D_{0b}^{\prime}\). Hence \(\partial D_{\delta}^{\prime\prime}\) consists of three parts denoted by \[\partial D_{\delta}^{\prime\prime}=A_{1}\cup A_{2}\cup B, \tag{5.87}\] where \(A_{1}\) and \(A_{2}\) come from the flows starting at \((t,s)\in D_{0}^{\prime}(c_{0}(p))\) and \((t,s)\in D_{0}^{\prime}(1-c_{0}(p))\) respectively, while \(B\) comes from the flows starting at \((t,s)\in D_{0b}^{\prime}\). By its definition, \(D_{0b}^{\prime}\subset\partial D_{0}^{\prime}\subset\{(t,s)\in\mathbb{C}^{2}|ReV(p,t,s;0,n)<\zeta_{\mathbb{R}}(p)-\epsilon\}\), and the function \(ReV(p,t,s;0,n)\) decreases under the flow, so we have \[\int_{B}\sin(2\pi s)e^{(N+\frac{1}{2})V(p,t,s;0,n)}dtds=O(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}).
\tag{5.88}\] By Proposition 5.16, the integrals on \(D_{0}^{\prime}(c_{0}(p))\) and \(D_{0}^{\prime}(1-c_{0}(p))\) are also of order \(O(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)})\). By applying the saddle point method to the slices of the region \(A_{1}\cup A_{2}\) as shown in Appendix 6.6, we can prove that \[\int_{A_{1}\cup A_{2}}\sin(2\pi s)e^{(N+\frac{1}{2})V(p,t,s;0,n)}dtds=O(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}). \tag{5.89}\] Combining formulas (5.88) and (5.89) together, we prove (5.86). Hence, the required homotopy exists.

#### 5.4.2. Fourier coefficient \(\hat{h}_{N}(0,0)\)

**Lemma 5.19**.: _For \(p\geq 6\), we have_ \[U_{0}\subset D_{0}^{\prime\prime}\subset D_{H}. \tag{5.90}\]

Proof.: By Lemma 5.11, the top and bottom points of the region \(U_{0}\) are given by \((\frac{3p-2}{4p-1},\frac{1}{2}+\frac{5}{2(4p-1)})\) and \((\frac{3p}{4p-1},\frac{1}{2}+\frac{3}{2(4p-1)})\) respectively. Then we have \[c_{0}(p)=\frac{1}{2}+\frac{7}{8p}>\frac{1}{2}+\frac{5}{2(4p-1)}. \tag{5.91}\] Hence \(U_{0}\subset D_{0}^{\prime\prime}\). The intersection point of the lines \(s=c_{0}(p)=\frac{1}{2}+\frac{7}{8p}\) and \(t+s=\frac{3}{2}\) is given by \((1-\frac{7}{8p},\frac{1}{2}+\frac{7}{8p})\); clearly, for \(p\geq 10\), \(1-\frac{7}{8p}>0.909\). Therefore, \(D_{0}^{\prime\prime}\subset D_{H}\) for \(p\geq 10\). In particular, for \(p=6\), by formula (5.76), we have \(c_{upper}(6)=0.5871\). Let \(c_{0}(6)=c_{upper}(6)\); then the formula (5.91) also holds for \(p=6\), hence \(U_{0}\subset D_{0}^{\prime\prime}\). The intersection point of the lines \(s=c_{0}(6)=0.5871\) and \(t+s=\frac{3}{2}\) is given by \((0.9129,0.5871)\), and clearly \(0.9129>0.909\). Therefore, \(D_{0}^{\prime\prime}\subset D_{H}\) also holds for \(p=6\). Similarly, we can show that \(D_{0}^{\prime\prime}\subset D_{H}\) holds for \(p=7,8,9\) as well. Hence we finish the proof of Lemma 5.19.

**Proposition 5.20**.: _For \(p\geq 6\), we have_ \[\hat{h}_{N}(0,0) =\frac{(-1)^{p}e^{\frac{1}{4}\pi\sqrt{-1}}(N+\frac{1}{2})^{\frac{3}{2}}}{\sin\frac{\pi}{2N+1}}\int_{D_{0}^{\prime}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{2})V_{N}(p,t,s)}dtds\] \[=\frac{(-1)^{p+1}2\pi e^{\frac{1}{4}\pi\sqrt{-1}}(N+\frac{1}{2})^{\frac{1}{2}}}{\sin\frac{\pi}{2N+1}}\omega(p)e^{(N+\frac{1}{2})\zeta(p)}\] \[\cdot\left(1+\sum_{i=1}^{d}\kappa_{i}(p)\left(\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}\right)^{i}+O\left(\frac{1}{(N+\frac{1}{2})^{d+1}}\right)\right), \tag{5.92}\] _for \(d\geq 1\), where \(\omega(p)\) and \(\kappa_{i}(p)\) are constants determined by \(\mathcal{K}_{p}\)._

Proof.: By Proposition 5.16, we have \[\hat{h}_{N}(0,0) =\frac{(-1)^{p}e^{\frac{1}{4}\pi\sqrt{-1}}(N+\frac{1}{2})^{\frac{3}{2}}}{\sin\frac{\pi}{2N+1}}\int_{D_{0}^{\prime}}\psi(t,s)\sin(2\pi s)e^{(N+\frac{1}{2})V_{N}(p,t,s)}dtds\] \[=\frac{(-1)^{p}e^{\frac{1}{4}\pi\sqrt{-1}}(N+\frac{1}{2})^{\frac{3}{2}}}{\sin\frac{\pi}{2N+1}}\int_{D_{0}^{\prime\prime}}\sin(2\pi s)e^{(N+\frac{1}{2})V_{N}(p,t,s)}dtds\] \[+O\left(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}\right), \tag{5.93}\] where \(D_{0}^{\prime\prime}\) is given in (5.79). We will verify the conditions of Proposition 2.7 for the saddle point method in Proposition 5.21. By Lemma 5.2 and Remark 2.8, we can apply Proposition 2.7 to the above integral (5.93).
Let \((t_{0},s_{0})\) be the critical point of \(V(p,t,s)\); we obtain that \[\int_{D_{0}^{\prime\prime}}\sin(2\pi s)\exp\left((N+\frac{1}{2})V_{N}(p,t,s)\right)dtds\] \[=\frac{2\pi}{2N+1}\frac{2\alpha(t_{0},s_{0})}{\sqrt{\det Hess(V)(t_{0},s_{0})}}e^{(N+\frac{1}{2})\zeta(p)}\] \[\left(1+\sum_{i=1}^{d}\kappa_{i}(p)\left(\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}\right)^{i}+O\left(\frac{1}{(N+\frac{1}{2})^{d+1}}\right)\right) \tag{5.94}\] where the function \[\alpha(t,s)= \psi(t,s)\sin(2\pi s)\] \[\cdot e^{-\frac{1}{2}\left(\log(1-e^{2\pi\sqrt{-1}(t+s)})+\log(1-e^{2\pi\sqrt{-1}(t-s)})-4\pi\sqrt{-1}t\right)} \tag{5.95}\] and the determinant of the Hessian matrix at \((t_{0},s_{0})\) is given by the formula (5.26) \[\det Hess(V)(t_{0},s_{0})=(2\pi\sqrt{-1})^{2}H(p,x_{0},y_{0}) \tag{5.96}\] where \[H(p,x_{0},y_{0}) =\left(\frac{-3(2p+1)}{\frac{1}{x_{0}}-1}+\frac{2p+1}{\frac{1}{x_{0}y_{0}}-1}+\frac{2p+1}{\frac{1}{x_{0}/y_{0}}-1}-\frac{3}{(\frac{1}{x_{0}}-1)(\frac{1}{x_{0}y_{0}}-1)}\right.\] \[\left.-\frac{3}{(\frac{1}{x_{0}}-1)(\frac{1}{x_{0}/y_{0}}-1)}+\frac{4}{(\frac{1}{x_{0}y_{0}}-1)(\frac{1}{x_{0}/y_{0}}-1)}\right), \tag{5.97}\] with \(x_{0}=e^{2\pi\sqrt{-1}t_{0}}\) and \(y_{0}=e^{2\pi\sqrt{-1}s_{0}}\). Since \((t_{0},s_{0})\) is the critical point of \(V(p,t,s)\), it satisfies the identity \[-\frac{1}{2}\left(\log(1-e^{2\pi\sqrt{-1}(t_{0}+s_{0})})+\log(1-e^{2\pi\sqrt{-1}(t_{0}-s_{0})})-4\pi\sqrt{-1}t_{0}\right)\] \[=\pi\sqrt{-1}-\frac{3}{2}\log(1-e^{2\pi\sqrt{-1}t_{0}})+2\pi\sqrt{-1}t_{0}, \tag{5.98}\] and we obtain \[\alpha(t_{0},s_{0})=\frac{\sin(2\pi s_{0})e^{2\pi\sqrt{-1}t_{0}}}{(1-e^{2\pi\sqrt{-1}t_{0}})^{\frac{3}{2}}}. \tag{5.99}\] Therefore, we have \[\hat{h}_{N}(0,0) =\frac{(-1)^{p+1}2\pi e^{\frac{1}{4}\pi\sqrt{-1}}(N+\frac{1}{2})^{\frac{1}{2}}}{\sin\frac{\pi}{2N+1}}\omega(p)e^{(N+\frac{1}{2})V(p,t_{0},s_{0})}\] \[\left(1+\sum_{i=1}^{d}\kappa_{i}(p)\left(\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}\right)^{i}+O\left(\frac{1}{(N+\frac{1}{2})^{d+1}}\right)\right), \tag{5.100}\] where \[\omega(p) =\frac{\sin(2\pi s_{0})e^{2\pi\sqrt{-1}t_{0}}}{(1-e^{2\pi\sqrt{-1}t_{0}})^{\frac{3}{2}}\sqrt{\det Hess(V)(t_{0},s_{0})}}\] \[=\frac{(y_{0}-y_{0}^{-1})x_{0}}{-4\pi(1-x_{0})^{\frac{3}{2}}\sqrt{H(p,x_{0},y_{0})}}. \tag{5.101}\]

**Proposition 5.21**.: _When we apply Proposition 2.7 (the saddle point method) to the integral (5.93), the assumptions of Proposition 2.7 hold._

Proof.: We note that, by Lemma 2.6, \(V_{N}(p,t,s)\) uniformly converges to \(V(p,t,s)\) on \(D_{0}^{\prime\prime}\) as \(N\to\infty\). Hence, we only need to verify the assumptions of the saddle point method for \(V(p,t,s)\). We show that there exists a homotopy \(D_{\delta}^{\prime\prime}\) (\(0\leq\delta\leq 1\)) between \(D_{0}^{\prime\prime}\) and \(D_{1}^{\prime\prime}\) such that \[(t_{0},s_{0})\in D_{1}^{\prime\prime}, \tag{5.102}\] \[D_{1}^{\prime\prime}-\{(t_{0},s_{0})\}\subset\{(t,s)\in\mathbb{C}^{2}|ReV(p,t,s)<\zeta_{\mathbb{R}}(p)\}, \tag{5.103}\] \[\int_{\partial D_{\delta}^{\prime\prime}}\sin(2\pi s)e^{(N+\frac{1}{2})V(p,t,s)}dtds=O\left(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}\right). \tag{5.104}\] In the fiber of the projection \(\mathbb{C}^{2}\to\mathbb{R}^{2}\) at \((t,s)\in D^{\prime\prime}_{0}\), we consider the flow from \((X,Y)=(0,0)\) determined by the vector field \((-\frac{\partial f}{\partial X},-\frac{\partial f}{\partial Y})\). By Lemma 5.19, together with Lemma 5.10 and Lemma 5.1, the convex neighborhood \(U_{0}\) of \((t_{0},s_{0})\) satisfies the following:
(1) If \((t,s)\in U_{0}\), then \(f\) has a unique minimal point, and the flow goes there. (2) If \((t,s)\in D^{\prime\prime}_{0}\setminus U_{0}\), then the flow goes to infinity.

We write \(\mathbf{g}(t,s)=(g_{1}(t,s),g_{2}(t,s))\) for the minimal point in (1). In particular, \(|\mathbf{g}(t,s)|\to\infty\) as \((t,s)\) goes to \(\partial U_{0}\). Further, for a sufficiently large \(R>0\), we stop the flow when \(|\mathbf{g}(t,s)|=R\). We construct the revised flow \(\hat{\mathbf{g}}(t,s)\) by putting \(\hat{\mathbf{g}}(t,s)=\mathbf{g}(t,s)\) for \((t,s)\in U_{0}\) with \(|\mathbf{g}(t,s)|<R\), and otherwise by letting \(\hat{\mathbf{g}}(t,s)\) be the point at which the flow first reaches \(|\hat{\mathbf{g}}(t,s)|=R\). We define the ending of the homotopy by \[D^{\prime\prime}_{1}=\{(t,s)+\hat{\mathbf{g}}(t,s)\sqrt{-1}|(t,s)\in D^{\prime\prime}_{0}\}. \tag{5.105}\] Further, we define the internal part of the homotopy by setting it along the flow from \((t,s)\) determined by the vector field \(\left(-\frac{\partial f}{\partial X},-\frac{\partial f}{\partial Y}\right)\).

We show (5.102) and (5.103) as follows. We consider the function \[h(t,s)=ReV\left(p,(t,s)+\hat{\mathbf{g}}(t,s)\sqrt{-1}\right). \tag{5.106}\] If \((t,s)\notin U_{0}\), by (2), \(-h(t,s)\) is sufficiently large (because we let \(R\) be sufficiently large), hence (5.103) holds in this case. Otherwise, \((t,s)\in U_{0}\), and in this case \(\hat{\mathbf{g}}(t,s)=\mathbf{g}(t,s)\). It follows from the definition of \(\hat{\mathbf{g}}(t,s)\) that \[\frac{\partial ReV}{\partial X}=\frac{\partial ReV}{\partial Y}=0\text{ at }(X,Y)=\mathbf{g}(t,s), \tag{5.107}\] which implies \[Im\frac{\partial V}{\partial t}=Im\frac{\partial V}{\partial s}=0\text{ at }(t,s)+\mathbf{g}(t,s)\sqrt{-1}. \tag{5.108}\] On the other hand, \[\frac{\partial h}{\partial t}=Re\frac{\partial V}{\partial t},\ \frac{\partial h}{\partial s}=Re\frac{\partial V}{\partial s}\text{ at }(t,s)+\mathbf{g}(t,s)\sqrt{-1}. \tag{5.109}\] Therefore, when \((t,s)+\mathbf{g}(t,s)\sqrt{-1}\) is a critical point of \(V\), \((t,s)\) is a critical point of \(h(t,s)\). Hence by Proposition 5.3, \(h(t,s)\) has a unique maximal point at \((t_{0},s_{0})\). Therefore, (5.102) and (5.103) hold.

We show (5.104) as follows. Note that the boundary \(\partial D^{\prime\prime}_{0}\) consists of \(D^{\prime}_{0}(c_{0}(p))\), \(D^{\prime}_{0}(1-c_{0}(p))\) and the partial boundaries of \(D^{\prime}_{0}\), denoted by \(D^{\prime}_{0b}\). Hence \(\partial D^{\prime\prime}_{\delta}\) consists of three parts denoted by \[\partial D^{\prime\prime}_{\delta}=A_{1}\cup A_{2}\cup B, \tag{5.110}\] where \(A_{1}\) and \(A_{2}\) come from the flows starting at \((t,s)\in D^{\prime}_{0}(c_{0}(p))\) and \((t,s)\in D^{\prime}_{0}(1-c_{0}(p))\) respectively, while \(B\) comes from the flows starting at \((t,s)\in D^{\prime}_{0b}\). By its definition, \(D^{\prime}_{0b}\subset\partial D^{\prime}_{0}\subset\{(t,s)\in\mathbb{C}^{2}|ReV(p,t,s;0,n)<\zeta_{\mathbb{R}}(p)-\epsilon\}\), and the function \(ReV(p,t,s;0,n)\) decreases under the flow, so we have \[\int_{B}\sin(2\pi s)e^{(N+\frac{1}{2})V(p,t,s;0,n)}dtds=O(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}). \tag{5.111}\] By Proposition 5.16, the integrals on \(D_{0}^{\prime}(c_{0}(p))\) and \(D_{0}^{\prime}(1-c_{0}(p))\) are also of order \(O(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)})\). By applying the saddle point method to the slices of the region \(A_{1}\cup A_{2}\) as shown in Appendix 6.6, we can prove that \[\int_{A_{1}\cup A_{2}}\sin(2\pi s)e^{(N+\frac{1}{2})V(p,t,s;0,n)}dtds=O(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}).
\tag{5.112}\] Combining formulas (5.111) and (5.112) together, we prove (5.104). By (5.102), (5.103) and (5.104), the required homotopy exists. Hence the assumptions of Proposition 2.7 hold when we apply the saddle point method to the integral (5.93).

### Final proof

Now we can finish the proof of Theorem 1.2 as follows.

Proof.: By formula (5.69), together with Proposition 5.15, Proposition 5.18 and Proposition 5.20, we obtain \[J_{N}(\mathcal{K}_{p};\xi_{N}) =2\hat{h}_{N}(0,0)+O(e^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)})\] \[=(-1)^{p+1}\frac{4\pi e^{\frac{1}{4}\pi\sqrt{-1}}(N+\frac{1}{2})^{\frac{1}{2}}}{\sin\frac{\pi}{2N+1}}\omega(p)e^{(N+\frac{1}{2})\zeta(p)}\] \[\cdot\left(1+\sum_{i=1}^{d}\kappa_{i}(p)\left(\frac{2\pi\sqrt{-1}}{N+\frac{1}{2}}\right)^{i}+O\left(\frac{1}{(N+\frac{1}{2})^{d+1}}\right)\right), \tag{5.113}\] for \(d\geq 1\), where \(\omega(p)\) and \(\kappa_{i}(p)\) are constants determined by \(\mathcal{K}_{p}\).

## 6. Appendices

### Proof of Lemma 4.1

We define the function \[v(t,s)=\Lambda(t+s)+\Lambda(t-s)-3\Lambda\left(t\right). \tag{6.1}\] We set \[D =\{(t,s)\in\mathbb{R}^{2}|1<t+s<2,0<t-s<1,\frac{1}{2}<t<1\},\] \[D_{0}^{\prime} =\{0.02\leq t-s\leq 0.7,1.02\leq t+s\leq 1.7,0.2\leq s\leq 0.8,0.5\leq t\leq 0.909\}. \tag{6.2}\] Then we have

**Lemma 6.1**.: _The following domain_ \[\left\{(t,s)\in D|v(t,s)>\frac{3.509}{2\pi}\right\} \tag{6.3}\] _is included in the region \(D_{0}^{\prime}\)._

Proof.: Recall the definition of the function \(\Lambda(t)\): \[\Lambda(t)=Re\left(\frac{1}{2\pi\sqrt{-1}}\text{Li}_{2}(e^{2\pi\sqrt{-1}t})\right). \tag{6.4}\] We have \(\Lambda^{\prime}(t)=-\log(2\sin(\pi t))\) and \(\Lambda^{\prime\prime}(t)=-\pi\cot(\pi t)\) for \(t\in[0,1]\). In the following, we only describe how to obtain the upper bound \(0.909\) of \(t\) in the definition of \(D_{0}^{\prime}\); the other bounds can be derived similarly. For a fixed \(t\in(\frac{1}{2},1)\), we regard \(v(t,s)\) as a function of \(s\); it follows that \[v_{s}(t,s)=\Lambda^{\prime}(t+s)-\Lambda^{\prime}(t-s)=\log\frac{\sin(\pi(t-s))}{\sin(\pi(t+s-1))}. \tag{6.5}\] The critical point is given by \(s=\frac{1}{2}\). Furthermore, \(v_{ss}(t,\frac{1}{2})<0\), i.e. for any fixed \(t\in(\frac{1}{2},1)\), as a function of \(s\), \(v(t,s)\) takes its maximal value at \(s=\frac{1}{2}\). As a function of \(t\), \[v(t,\frac{1}{2})=\Lambda(t+\frac{1}{2})+\Lambda(t-\frac{1}{2})-3\Lambda(t)=2\Lambda(t+\frac{1}{2})-3\Lambda(t). \tag{6.6}\] One can compute that there is a point \(t_{0}\approx 0.8\) which is a maximal point of \(v(t,\frac{1}{2})\), and such that \(v(t,\frac{1}{2})\) increases (resp. decreases) on the interval \((\frac{1}{2},t_{0})\) (resp. the interval \((t_{0},1)\)). We compute that \(2\pi v(0.909,\frac{1}{2})=3.4589<3.509\). By the above analysis, if \((t,s)\in D\) satisfies \(v(t,s)>\frac{3.509}{2\pi}\), then \(t<0.909\).
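The two numerical facts used in the proof above are easy to reproduce; the following sketch (our own illustration) locates the maximal point \(t_{0}\approx 0.8\) of \(v(t,\frac{1}{2})\) and evaluates \(2\pi v(0.909,\frac{1}{2})\approx 3.4589\).

```python
# A numerical companion to the proof of Lemma 6.1 (illustrative only).
import mpmath as mp

def Lam(t):
    # Λ(t) = Re( Li_2(e^{2πit}) / (2π√-1) ) = Im Li_2(e^{2πit}) / (2π)
    return mp.im(mp.polylog(2, mp.exp(2j * mp.pi * t))) / (2 * mp.pi)

def v_half(t):
    # v(t, 1/2) = 2Λ(t + 1/2) - 3Λ(t), cf. (6.6)
    return 2 * Lam(t + mp.mpf('0.5')) - 3 * Lam(t)

t0 = mp.findroot(lambda t: mp.diff(v_half, t), 0.8)  # critical point of v(t,1/2)
print(t0)                                            # expect t_0 ≈ 0.8
print(2 * mp.pi * v_half(mp.mpf('0.909')))           # expect ≈ 3.4589
```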
### Proof of Lemma 5.10 Based on Lemma 2.3, in order to study the asymptotic behaviour of the function \[f(X,Y;n)=ReV(p,t+X\sqrt{-1},s+Y\sqrt{-1}), \tag{6.7}\] we introduce the following function \[F(X,Y;n)=\begin{cases}0&\text{(if $X+Y\geq 0$)}\\ \left((t+s)-\frac{3}{2}\right)(X+Y)&\text{(if $X+Y<0$)}\end{cases}\] \[+\begin{cases}0&\text{(if $X-Y\geq 0$)}\\ \left((t-s)-\frac{1}{2}\right)(X-Y)&\text{(if $X-Y<0$)}\end{cases}\] \[+\begin{cases}0&\text{(if $X\geq 0$)}\\ \left(\frac{3}{2}-3t\right)X&\text{(if $X<0$)}\end{cases}+\left(p+\frac{3}{2}+n-(2p +1)s\right)Y+X \tag{6.8}\] where we use \(t+s-\frac{3}{2}\) instead of \(t+s-\frac{1}{2}\) in the first summation since in our situation \(1<t+s<2\). Note that \(F(X,Y;n)\) is a piecewise linear function, we subdivide the plane \(\{(X,Y)\in\mathbb{R}^{2}\}\) into eight regions to discuss this function. We study the conditions such that \(F(X,Y,n)\) has the following property: \[F(X,Y;n)\to\infty\text{ as }X^{2}+Y^{2}\to\infty. \tag{6.9}\] (I) \(X\geq Y\geq 0\), then \[F(X,Y;n) =\left(\left(p+\frac{3}{2}+n\right)-(2p+1)s\right)Y+X\] \[\geq\left(\left(p+\frac{5}{2}+n\right)-(2p+1)s\right)Y. \tag{6.10}\] When \(s<\frac{p+\frac{5}{2}+n}{2p+1}\), then the property (6.9) holds. (II) \(Y\geq X\geq 0\), then \[F(X,Y;n)=\left(t-s+\frac{1}{2}\right)X+(p+2+n-2ps-t)Y. \tag{6.11}\] When \(t+2ps<p+2+n\), we have \[F(X,Y;n) =\left(t-s+\frac{1}{2}\right)X+(p+2+n-2ps-t)Y\] \[\geq\left(p+\frac{5}{2}+n-(2p+1)s\right)X. \tag{6.12}\] Hence, the property (6.9) holds in this case if \(\left(p+\frac{5}{2}+n\right)-(2p+1)s>0\). (III) \(X+Y\geq 0\) and \(X\leq 0\), then \[F(X,Y;n)=\left(2-2t-s\right)X+(p+2+n-2ps-t)Y. \tag{6.13}\] When \(t+2ps<p+2+n\), then \[F(X,Y;n) \geq\left(2-2t-s\right)X+(p+2+n-2ps-t)(-X)\] \[=(p+n+t-(2p-1)s)(-X). \tag{6.14}\] If \(p+n+t-(2p-1)s>0\), we obtain that the property (6.9) holds in this case. (IV) \(X+Y\leq 0\) and \(Y\geq 0\), then \[F(X,Y;n) =\left(\frac{1}{2}-t\right)X+\left(p+\frac{1}{2}+n-(2p-1)s\right)Y\] \[=\left(-\frac{1}{2}+t\right)(-X)+\left(p+\frac{1}{2}+n-(2p-1)s \right)Y\] \[\geq(p+n+t-(2p-1)s)Y. \tag{6.15}\] When \(p+n+t-(2p-1)s>0\), we can see that the property (6.9) holds in this case. (V) \(X-Y\leq 0\) and \(Y\leq 0\), then \[F(X,Y;n)=\left(\frac{1}{2}-t\right)X+\left(p+\frac{1}{2}+n-(2p-1)s\right)Y. \tag{6.16}\] Since \(\frac{1}{2}<t<1\), \[F(X,Y;n) =\left(-\frac{1}{2}+t\right)(-X)+\left(p+\frac{1}{2}+n-(2p-1)s \right)Y\] \[\geq\left(-\frac{1}{2}+t\right)(-Y)+\left(p+\frac{1}{2}+n-(2p-1) s\right)Y\] \[=(-p-1-n+t+(2p-1)s)(-Y), \tag{6.17}\] if \(t+(2p-1)s-p-1-n>0\), it follows that the property (6.9) holds in this case. (VI) \(X-Y\geq 0\) and \(X\leq 0\), then \[F(X,Y;n)=\left(1+s-2t\right)X+\left(p+n+t-2ps\right)Y. \tag{6.18}\] When \(p+n+t-2ps<0\), and since \(-Y\geq-X\geq 0\), \[F(X,Y;n) \geq(1+s-2t)\,X-(p+n-2ps+t)\,(-X)\] \[=(-p-1-n+t+(2p-1)s)(-X), \tag{6.19}\] if \(t+(2p-1)s-p-1-n>0\), it follows that the property (6.9) holds in this case. (VII) \(X+Y\leq 0\) and \(X\geq 0\), then \[F(X,Y;n)=\left(t+s-\frac{1}{2}\right)X+(p+n-2ps+t)\,Y, \tag{6.20}\] since \(t+s-\frac{1}{2}>0\), if \(p+n-2ps+t<0\), by \(Y\leq 0\), it follows that the property (6.9) holds in this case. (VIII) \(X+Y\geq 0\) and \(Y\leq 0\), then \[F(X,Y;n)=X+\left(p+\frac{3}{2}+n-(2p+1)s\right)Y. \tag{6.21}\] By \(X\geq-Y\), \[F(X,Y;n) \geq(-Y)+\left(p+\frac{3}{2}+n-(2p+1)s\right)Y\] \[=\left(-p-\frac{1}{2}-n+(2p+1)s\right)(-Y), \tag{6.22}\] if \(-p-\frac{1}{2}-n+(2p+1)s>0\), we obtain that the property (6.9) holds in this case. 
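The eight cases above admit a quick numerical illustration of the dichotomy they lead to (Lemma 6.3 below): along rays of large radius, \(F(X,Y;n)\) of (6.8) stays positive for \((t,s)\) in the region \(U_{n}\) of (5.59), while it becomes negative in some direction for \((t,s)\notin U_{n}\). The sketch below is our own; the sample points were checked against the defining inequalities of \(U_{0}\) for \(p=6\).

```python
# Illustrative check of the growth dichotomy for F(X,Y;n) of (6.8).
import math

def F(X, Y, t, s, p, n):
    val = (p + 1.5 + n - (2*p + 1)*s) * Y + X   # the two non-piecewise terms
    if X + Y < 0:
        val += (t + s - 1.5) * (X + Y)
    if X - Y < 0:
        val += (t - s - 0.5) * (X - Y)
    if X < 0:
        val += (1.5 - 3*t) * X
    return val

def min_on_circle(t, s, p, n, r=1000.0, steps=720):
    # minimum of F over a large circle of radius r in the (X,Y)-plane
    return min(F(r*math.cos(2*math.pi*k/steps),
                 r*math.sin(2*math.pi*k/steps), t, s, p, n)
               for k in range(steps))

p, n = 6, 0
print(min_on_circle(0.75, 0.59, p, n))   # (0.75,0.59) in U_0: large positive
print(min_on_circle(0.60, 0.45, p, n))   # (0.60,0.45) not in U_0: negative
```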
Given \(n\in\mathbb{Z}\), we introduce the region \(U_{n}\) as follows. \[U_{n}=\left\{(t,s)\in D^{\prime}_{0}\Big{|}\frac{p+n+1-t}{2p-1}<s<\frac{p+n+t}{2p-1},\frac{p+n+t}{2p}<s<\frac{p+2+n-t}{2p}\right\}. \tag{6.23}\]

**Remark 6.2**.: For \(n\geq p-1\) or \(n\leq-(p+1)\), \(U_{n}=\emptyset\). Indeed, for \(n\geq p-1\), we have \(\frac{p+n+1-t}{2p-1}>\frac{p+2+n-t}{2p}\) on \(D^{\prime}_{0}\), and for \(n\leq-(p+1)\), we have \(\frac{p+n+t}{2p}>\frac{p+n+t}{2p-1}\) on \(D^{\prime}_{0}\). Hence \(U_{n}=\emptyset\).

From the above discussion, together with Lemma 2.3, we have

**Lemma 6.3**.: _For any \((t,s)\in D^{\prime}_{0}\),_

_(i) when \((t,s)\in U_{n}\), we have_ \[f(X,Y;n)\to\infty\text{ as }X^{2}+Y^{2}\to+\infty. \tag{6.24}\]

_(ii) when \((t,s)\notin U_{n}\),_ \[f(X,Y;n)\to-\infty\text{ in some directions of }X^{2}+Y^{2}\to+\infty. \tag{6.25}\]

Proof.: When \(-p\leq n\leq p-2\), \(U_{n}\neq\emptyset\). Any \((t,s)\in U_{n}\) satisfies \(\frac{p+\frac{1}{2}+n}{2p+1}<s<\frac{p+\frac{5}{2}+n}{2p+1}\). It follows that if \((t,s)\in U_{n}\), then all the inequalities in the eight regions of the above analysis for the function \(F(X,Y;n)\) are satisfied. Hence, for \((t,s)\in U_{n}\), we have \(F(X,Y;n)\to\infty\) as \(X^{2}+Y^{2}\to+\infty\). So by Lemma 2.3, we obtain (i).

As to (ii), if \((t,s)\notin U_{n}\), then \((t,s)\) does not satisfy at least one of the inequalities in the definition of \(U_{n}\). For example, suppose \(s\leq\frac{p+1+n-t}{2p-1}\); then \(p+1+n-t-(2p-1)s\geq 0\). In the following, we will show there exists a direction of \(X^{2}+Y^{2}\to+\infty\) such that \(f(X,Y;n)\to-\infty\). For \(p+1+n-t-(2p-1)s>0\), by the case (V) in the previous discussion of the function \(F(X,Y;n)\), when \(X-Y\leq 0\) and \(Y\leq 0\), we have \[F(X,Y;n)=\left(\frac{1}{2}-t\right)X+\left(p+\frac{1}{2}+n-(2p-1)s\right)Y. \tag{6.26}\] Let \(X=Y\to-\infty\); then \(F(Y,Y;n)=(p+1+n-t-(2p-1)s)Y\to-\infty\) when \(p+1+n-t-(2p-1)s>0\). So by Lemma 2.3, we obtain \(f(X,Y;n)\to-\infty\) in the direction \(X=Y\to-\infty\). For \(p+1+n-t-(2p-1)s=0\), we consider the case (VI): when \(X-Y\geq 0\) and \(X\leq 0\), we have \[F(X,Y;n) =(1-2t+s)X+(p+n+t-2ps)Y\] \[=(1-2t)X+(p+n+t)Y+s(X-2pY)\] \[=(1-2t)X+(p+n+t)Y+\frac{p+1+n-t}{2p-1}(X-2pY)\] \[=(2t-1-\frac{p+n+1-t}{2p-1})(Y-X)<0. \tag{6.27}\] So we have proved (ii) for the case \(s\leq\frac{p+1+n-t}{2p-1}\). The other cases can be checked similarly, and we finish the proof.

### Proof of Proposition 5.3

In this section, we give the proof of Proposition 5.3. We establish some lemmas first.

**Lemma 6.4**.: _Suppose \((t_{0},s_{0})\) is a critical point of \(V(p,t,s)\) with \((Re(t_{0}),Re(s_{0}))\in D^{\prime}_{0}\); then we have \((Re(t_{0}),Re(s_{0}))\in U_{0}\)._

Proof.: Suppose \(t_{0}=t_{0R}+X_{0}\sqrt{-1}\) and \(s_{0}=s_{0R}+Y_{0}\sqrt{-1}\) is a critical point of \(V(p,t,s)\) with \((t_{0R},s_{0R})\in D^{\prime}_{0}\), i.e. \[\begin{cases}\frac{\partial V}{\partial t}(t_{0},s_{0})=0,\\ \frac{\partial V}{\partial s}(t_{0},s_{0})=0.\end{cases} \tag{6.28}\] We will prove that \((t_{0R},s_{0R})\in U_{0}\). Note that \[\frac{\partial f}{\partial X} =\frac{\partial ReV(t+X\sqrt{-1},s+Y\sqrt{-1})}{\partial X}\] \[=Re\left(\sqrt{-1}\frac{\partial V(t+X\sqrt{-1},s+Y\sqrt{-1})}{\partial t}\right)\] \[=-Im\left(\frac{\partial V}{\partial t}\right), \tag{6.29}\] and similarly, we have \[\frac{\partial f}{\partial Y}=-Im\left(\frac{\partial V}{\partial s}\right).
\tag{6.30}\] According to the equations (6.28), we obtain \[\frac{\partial f}{\partial X}(t_{0R}+X_{0}\sqrt{-1},s_{0R}+Y_{0}\sqrt{-1}) =0,\] \[\frac{\partial f}{\partial Y}(t_{0R}+X_{0}\sqrt{-1},s_{0R}+Y_{0}\sqrt{-1}) =0. \tag{6.31}\] By a straightforward computation, we obtain \[\frac{\partial f}{\partial X} =-3arg(1-e^{2\pi\sqrt{-1}(t+X\sqrt{-1})})+arg(1-e^{2\pi\sqrt{-1}\left(t+s+(X+Y)\sqrt{-1}\right)})\] \[+arg(1-e^{2\pi\sqrt{-1}\left(t-s+(X-Y)\sqrt{-1}\right)})+2\pi, \tag{6.32}\] \[\frac{\partial f}{\partial Y} =arg(1-e^{2\pi\sqrt{-1}\left(t+s+(X+Y)\sqrt{-1}\right)})-arg(1-e^{2\pi\sqrt{-1}\left(t-s+(X-Y)\sqrt{-1}\right)})\] \[+(2p+3)\pi-(4p+2)\pi s. \tag{6.33}\] Since \(\frac{1}{2}<t<1\), we have \[0<arg(1-e^{2\pi\sqrt{-1}(t+X\sqrt{-1})})<2\pi\left(t-\frac{1}{2}\right), \tag{6.34}\] which implies \[2\pi\left(-3t+\frac{3}{2}\right)<-3arg(1-e^{2\pi\sqrt{-1}(t+X\sqrt{-1})})<0. \tag{6.35}\] For \(0<t+s-1<\frac{1}{2}\), we have \[2\pi\left(t+s-\frac{3}{2}\right)<arg(1-e^{2\pi\sqrt{-1}(t+s+\sqrt{-1}(X+Y))})<0, \tag{6.36}\] and for \(\frac{1}{2}<t+s-1<1\), \[0<arg(1-e^{2\pi\sqrt{-1}(t+s+\sqrt{-1}(X+Y))})<2\pi\left(t+s-\frac{3}{2}\right); \tag{6.37}\] in particular, if \(t+s-1=\frac{1}{2}\), then \(arg(1-e^{2\pi\sqrt{-1}(t+s+\sqrt{-1}(X+Y))})=0\). Furthermore, for \(0<t-s<\frac{1}{2}\), we have \[2\pi\left(t-s-\frac{1}{2}\right)<arg(1-e^{2\pi\sqrt{-1}\left(t-s+(X-Y)\sqrt{-1}\right)})<0, \tag{6.38}\] and for \(\frac{1}{2}<t-s<1\), \[0<arg(1-e^{2\pi\sqrt{-1}\left(t-s+(X-Y)\sqrt{-1}\right)})<2\pi\left(t-s-\frac{1}{2}\right); \tag{6.39}\] in particular, if \(t-s=\frac{1}{2}\), then \(arg(1-e^{2\pi\sqrt{-1}\left(t-s+(X-Y)\sqrt{-1}\right)})=0\).

For convenience, we introduce the following three regions:

(i) \(\frac{1}{2}<t<0.909\), \(0<t+s-1<\frac{1}{2}\), \(0<t-s<\frac{1}{2}\),

(ii) \(\frac{3}{4}<t<0.909\), \(\frac{1}{2}\leq t+s-1<1\), \(0<t-s<\frac{1}{2}\),

(iii) \(\frac{3}{4}<t<0.909\), \(0<t+s-1<\frac{1}{2}\), \(\frac{1}{2}\leq t-s<1\).

If \((t_{0R},s_{0R})\) lies in the region (ii), we have \[\frac{\partial f}{\partial X}>2\pi\left(-2t_{0R}-s_{0R}+2\right),\ \frac{\partial f}{\partial X}<2\pi\left(t_{0R}+s_{0R}-\frac{3}{2}\right) \tag{6.40}\] and \[\frac{\partial f}{\partial Y}>2\pi\left(p+\frac{3}{2}-(2p+1)s_{0R}\right),\ \frac{\partial f}{\partial Y}<2\pi\left(p+\frac{1}{2}-(2p-1)s_{0R}\right). \tag{6.41}\] Therefore, if the equations (6.28) have a solution \((t_{0},s_{0})\) with \((t_{0R},s_{0R})\) in region (ii), then \((t_{0R},s_{0R})\) satisfies \[s_{0R}>\frac{p+\frac{3}{2}}{2p+1}\ \text{and}\ s_{0R}<\frac{p+\frac{1}{2}}{2p-1}. \tag{6.42}\] Since on region (ii), \(t_{0R}+s_{0R}\geq\frac{3}{2}\), then \[t_{0R}\geq\frac{3}{2}-s_{0R}>\frac{3}{2}-\frac{p+\frac{1}{2}}{2p-1}=1-\frac{1}{2p-1}>0.909, \tag{6.43}\] since \(p\geq 6\), which contradicts that \((t_{0R},s_{0R})\) lies in region (ii).

If \((t_{0R},s_{0R})\) lies in the region (iii), we have \[\frac{\partial f}{\partial X}>2\pi\left(-2t_{0R}+s_{0R}+1\right),\ \frac{\partial f}{\partial X}<2\pi\left(t_{0R}-s_{0R}-\frac{1}{2}\right) \tag{6.44}\] and \[\frac{\partial f}{\partial Y}>2\pi\left(p+\frac{1}{2}-(2p-1)s_{0R}\right),\ \frac{\partial f}{\partial Y}<2\pi\left(p+\frac{3}{2}-(2p+1)s_{0R}\right). \tag{6.45}\] Hence, if the equations (6.28) have a solution \((t_{0},s_{0})\) with \((t_{0R},s_{0R})\) in region (iii), then \((t_{0R},s_{0R})\) satisfies \[s_{0R}<\frac{p+\frac{3}{2}}{2p+1}\ \text{and}\ s_{0R}>\frac{p+\frac{1}{2}}{2p-1}, \tag{6.46}\] which is impossible since \(\frac{p+\frac{3}{2}}{2p+1}<\frac{p+\frac{1}{2}}{2p-1}\).
Therefore, we obtain that if \((t_{0R},s_{0R})\in D^{\prime}_{0}\), then \((t_{0R},s_{0R})\) lies in region (i), hence lies in \(D_{H}\). Then \((t_{0R},s_{0R})\) must lie in \(U_{0}\). Indeed, if \((t_{0R},s_{0R})\in D_{H}\setminus U_{0}\), then by Lemma 5.1 the Hessian matrix of \(f(X,Y)\) is positive definite at \((t_{0R},s_{0R})\); since \((X_{0},Y_{0})\) is a critical point of \(f(X,Y)\), it is a minimal point of \(f(X,Y)\). On the other hand, by Lemma 5.10, \(f(X,Y)\) goes to \(-\infty\) in some directions of \(X^{2}+Y^{2}\to+\infty\), a contradiction. Hence, we prove that \((t_{0R},s_{0R})\in U_{0}\). **Lemma 6.5**.: _Let \(A\) and \(B\) be two real symmetric matrices. If \(A\) is definite (positive or negative), then we have_ \[\det(A+\sqrt{-1}B)\neq 0. \tag{6.47}\] Proof.: Since \(A\) is definite, \(\det(A)\neq 0\), and we have \[\det(A+\sqrt{-1}B)=\det(A)\cdot\det(I+\sqrt{-1}A^{-1}B). \tag{6.48}\] Moreover, \(A^{-1}B\) is conjugate (up to an overall sign) to the real symmetric matrix \(|A|^{-1/2}B|A|^{-1/2}\), so it has only real eigenvalues \(\lambda_{k}\); it follows that \(\det(I+\sqrt{-1}A^{-1}B)=\prod_{k}(1+\sqrt{-1}\lambda_{k})\neq 0\). **Lemma 6.6**.: _Suppose \(V(z,w)\) is an analytic function on \(z=t+\sqrt{-1}X\) and \(w=s+\sqrt{-1}Y\). Define the function_ \[f(t,X,s,Y)=ReV(z,w). \tag{6.49}\] _Then we have_ \[f_{tt}=ReV_{zz},\ f_{ts}=ReV_{zw},\ f_{st}=ReV_{wz},\ f_{ss}=ReV_{ww}\] \[f_{XX}=-ReV_{zz},\ f_{XY}=-ReV_{zw},\ f_{YX}=-ReV_{wz},\ f_{YY}=-ReV_{ww}\] \[f_{tX}=-ImV_{zz},\ f_{tY}=-ImV_{zw},\ f_{Xt}=-ImV_{zz},\ f_{Yt}=-ImV_{wz}\] \[f_{sX}=-ImV_{zw},\ f_{sY}=-ImV_{ww},\ f_{Xs}=-ImV_{wz},\ f_{Ys}=-ImV_{ww}. \tag{6.50}\] Proof.: Let \(f(t,X,s,Y)=ReV(z,w)\) and \(g(t,X,s,Y)=ImV(z,w)\). Then the Cauchy-Riemann equations give \[\frac{\partial V}{\partial\bar{z}}=0,\ \frac{\partial V}{\partial\bar{w}}=0. \tag{6.51}\] It follows that \[f_{t}=g_{X},\ f_{X}=-g_{t},\ f_{s}=g_{Y},\ f_{Y}=-g_{s}. \tag{6.52}\] Therefore, \[f_{tt}=-f_{XX}=g_{tX},\ f_{ss}=-f_{YY}=g_{sY}\] \[g_{tt}=-g_{XX}=-f_{tX},\ g_{ss}=-g_{YY}=-f_{sY}. \tag{6.53}\] As an example, we compute \[V_{zz} =f_{zz}+\sqrt{-1}g_{zz}\] \[=\frac{1}{4}\left((\partial_{t}-\sqrt{-1}\partial_{X})^{2}f+\sqrt{-1}(\partial_{t}-\sqrt{-1}\partial_{X})^{2}g\right)\] \[=\frac{1}{4}\left((f_{tt}-f_{XX}+2g_{tX})+\sqrt{-1}(g_{tt}-g_{XX}-2f_{tX})\right)\] \[=f_{tt}-\sqrt{-1}f_{tX}. \tag{6.54}\] Hence, we obtain \(ReV_{zz}=f_{tt}\) and \(ImV_{zz}=-f_{tX}\). The other identities can be proved similarly. **Lemma 6.7**.: _Let \(Hess(f)\) be the Hessian matrix of \(f\). If_ \[f_{XX}>0\quad\text{and}\quad\begin{vmatrix}f_{XX}&f_{XY}\\ f_{XY}&f_{YY}\end{vmatrix}>0, \tag{6.55}\] _then we have_ \[\det(Hess(f))=\begin{vmatrix}f_{tt}&f_{ts}&f_{tX}&f_{tY}\\ f_{st}&f_{ss}&f_{sX}&f_{sY}\\ f_{Xt}&f_{Xs}&f_{XX}&f_{XY}\\ f_{Yt}&f_{Ys}&f_{YX}&f_{YY}\end{vmatrix}>0. 
\tag{6.56}\] Proof.: By Lemma 6.6, we have \[Hess(f) =\begin{pmatrix}f_{tt}&f_{ts}&f_{tX}&f_{tY}\\ f_{st}&f_{ss}&f_{sX}&f_{sY}\\ f_{Xt}&f_{Xs}&f_{XX}&f_{XY}\\ f_{Yt}&f_{Ys}&f_{YX}&f_{YY}\end{pmatrix}\] \[=\begin{pmatrix}ReV_{zz}&ReV_{zw}&-ImV_{zz}&-ImV_{zw}\\ ReV_{zw}&ReV_{ww}&-ImV_{zw}&-ImV_{ww}\\ -ImV_{zz}&-ImV_{zw}&-ReV_{zz}&-ReV_{zw}\\ -ImV_{zw}&-ImV_{ww}&-ReV_{zw}&-ReV_{ww}\end{pmatrix}.\] Adding \(\sqrt{-1}\) times the last two rows to the first two rows, and then \(\sqrt{-1}\) times the first two columns to the last two columns (operations which do not change the determinant), the matrix becomes \[\begin{pmatrix}\overline{V_{zz}}&\overline{V_{zw}}&0&0\\ \overline{V_{zw}}&\overline{V_{ww}}&0&0\\ -ImV_{zz}&-ImV_{zw}&-V_{zz}&-V_{zw}\\ -ImV_{zw}&-ImV_{ww}&-V_{zw}&-V_{ww}\end{pmatrix}. \tag{6.57}\] Therefore, we obtain \[\det(Hess(f))=\det\begin{pmatrix}\overline{V_{zz}}&\overline{V_{zw}}\\ \overline{V_{zw}}&\overline{V_{ww}}\end{pmatrix}\cdot\det\begin{pmatrix}V_{zz}&V_{zw}\\ V_{zw}&V_{ww}\end{pmatrix}=|\det\begin{pmatrix}V_{zz}&V_{zw}\\ V_{zw}&V_{ww}\end{pmatrix}|^{2}. \tag{6.58}\] Furthermore, \[\begin{pmatrix}V_{zz}&V_{zw}\\ V_{zw}&V_{ww}\end{pmatrix} =\begin{pmatrix}ReV_{zz}+\sqrt{-1}ImV_{zz}&ReV_{zw}+\sqrt{-1}ImV_{zw}\\ ReV_{zw}+\sqrt{-1}ImV_{zw}&ReV_{ww}+\sqrt{-1}ImV_{ww}\end{pmatrix}\] \[=\begin{pmatrix}ReV_{zz}&ReV_{zw}\\ ReV_{zw}&ReV_{ww}\end{pmatrix}+\sqrt{-1}\begin{pmatrix}ImV_{zz}&ImV_{zw}\\ ImV_{zw}&ImV_{ww}\end{pmatrix}. \tag{6.59}\] Let \[A=-\begin{pmatrix}f_{XX}&f_{XY}\\ f_{XY}&f_{YY}\end{pmatrix}=\begin{pmatrix}ReV_{zz}&ReV_{zw}\\ ReV_{zw}&ReV_{ww}\end{pmatrix},\ B=\begin{pmatrix}ImV_{zz}&ImV_{zw}\\ ImV_{zw}&ImV_{ww}\end{pmatrix}; \tag{6.60}\] then \(A\) is a negative definite real symmetric matrix by the hypothesis (6.55), and \(B\) is real symmetric. Hence by Lemma 6.5, we obtain \[\det\begin{pmatrix}V_{zz}&V_{zw}\\ V_{zw}&V_{ww}\end{pmatrix}\neq 0. \tag{6.61}\] Finally, by formula (6.58), we obtain \[\det(Hess(f))>0. \tag{6.62}\] Suppose \(X(t,s)\) and \(Y(t,s)\) are a solution to the following equations \[\begin{cases}\dfrac{\partial f}{\partial X}(t,X(t,s),s,Y(t,s))=0,\\ \dfrac{\partial f}{\partial Y}(t,X(t,s),s,Y(t,s))=0.\end{cases}\] **Lemma 6.8**.: _Let \(h(t,s)=f(t,X(t,s),s,Y(t,s))\), then we have_ \[h_{tt}=\frac{\begin{vmatrix}f_{tt}&f_{tX}&f_{tY}\\ f_{Xt}&f_{XX}&f_{XY}\\ f_{Yt}&f_{YX}&f_{YY}\end{vmatrix}}{\begin{vmatrix}f_{XX}&f_{XY}\\ f_{YX}&f_{YY}\end{vmatrix}}=\frac{\begin{vmatrix}-f_{XX}&f_{tX}&f_{tY}\\ f_{Xt}&f_{XX}&f_{XY}\\ f_{Yt}&f_{YX}&f_{YY}\end{vmatrix}}{\begin{vmatrix}f_{XX}&f_{XY}\\ f_{YX}&f_{YY}\end{vmatrix}}, \tag{6.63}\] _and_ \[h_{tt}h_{ss}-h_{ts}^{2}=\frac{\begin{vmatrix}f_{tt}&f_{ts}&f_{tX}&f_{tY}\\ f_{st}&f_{ss}&f_{sX}&f_{sY}\\ f_{Xt}&f_{Xs}&f_{XX}&f_{XY}\\ f_{Yt}&f_{Ys}&f_{YX}&f_{YY}\end{vmatrix}}{\begin{vmatrix}f_{XX}&f_{XY}\\ f_{YX}&f_{YY}\end{vmatrix}}, \tag{6.64}\] _where the second equality in (6.63) uses \(f_{tt}=-f_{XX}\) from Lemma 6.6._ Proof.: Since \(f_{X}(t,X(t,s),s,Y(t,s))=0\) and \(f_{Y}(t,X(t,s),s,Y(t,s))=0\), we have \[h_{t}=f_{t}+f_{X}X_{t}+f_{Y}Y_{t}=f_{t},\] \[h_{s}=f_{s}+f_{X}X_{s}+f_{Y}Y_{s}=f_{s}, \tag{6.65}\] and then \[h_{tt}=f_{tt}+f_{tX}X_{t}+f_{tY}Y_{t},\] \[h_{ts}=f_{ts}+f_{tX}X_{s}+f_{tY}Y_{s},\] \[h_{ss}=f_{ss}+f_{sX}X_{s}+f_{sY}Y_{s}. \tag{6.66}\] Moreover, from \(f_{X}(t,X(t,s),s,Y(t,s))=f_{Y}(t,X(t,s),s,Y(t,s))=0\), we obtain \[f_{Xt}+f_{XX}X_{t}+f_{XY}Y_{t}=0,\] \[f_{Yt}+f_{YX}X_{t}+f_{YY}Y_{t}=0. \tag{6.67}\] It follows that \[X_{t}=\frac{f_{XY}f_{tY}-f_{YY}f_{tX}}{\begin{vmatrix}f_{XX}&f_{XY}\\ f_{XY}&f_{YY}\end{vmatrix}},\ Y_{t}=\frac{f_{XY}f_{tX}-f_{XX}f_{tY}}{\begin{vmatrix}f_{XX}&f_{XY}\\ f_{XY}&f_{YY}\end{vmatrix}}. 
\tag{6.68}\] Similarly, \[X_{s}=\frac{f_{XY}f_{sY}-f_{YY}f_{sX}}{\begin{vmatrix}f_{XX}&f_{XY}\\ f_{XY}&f_{YY}\end{vmatrix}},\ Y_{s}=\frac{f_{XY}f_{sX}-f_{XX}f_{sY}}{\begin{vmatrix}f_{XX}&f_{XY}\\ f_{XY}&f_{YY}\end{vmatrix}}. \tag{6.69}\] Combining formulas (6.66), (6.68) and (6.69), we obtain (6.63) and (6.64). **Lemma 6.9**.: _On the region_ \[D_{H}=\{(t,s)\in\mathbb{R}^{2}|\frac{1}{2}<t<1,1<t+s<\frac{3}{2},0<t-s<\frac{1}{2}\}, \tag{6.70}\] _we have_ \[h_{tt}<0. \tag{6.71}\] Proof.: By formula (6.63), we only need to show that \[\begin{vmatrix}-f_{XX}&f_{tX}&f_{tY}\\ f_{Xt}&f_{XX}&f_{XY}\\ f_{Yt}&f_{YX}&f_{YY}\end{vmatrix}<0. \tag{6.72}\] By Lemma 5.1, on the region \(D_{H}\), we have \(f_{XX}>0\) and \(\begin{vmatrix}f_{XX}&f_{XY}\\ f_{YX}&f_{YY}\end{vmatrix}>0\). For convenience, we set \[\Delta=\begin{pmatrix}f_{XX}&f_{XY}\\ f_{YX}&f_{YY}\end{pmatrix},\ I_{2}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\text{ and }\alpha=(f_{tX},f_{tY}). \tag{6.73}\] By using the identity \[\begin{pmatrix}-f_{XX}&\alpha\\ \alpha^{T}&\Delta\end{pmatrix}=\begin{pmatrix}1&0\\ 0&\Delta\end{pmatrix}\cdot\begin{pmatrix}1&\alpha\\ 0&I_{2}\end{pmatrix}\cdot\begin{pmatrix}-f_{XX}-\alpha\Delta^{-1}\alpha^{T}&0\\ \Delta^{-1}\alpha^{T}&I_{2}\end{pmatrix}, \tag{6.74}\] we obtain \[\begin{vmatrix}-f_{XX}&f_{tX}&f_{tY}\\ f_{Xt}&f_{XX}&f_{XY}\\ f_{Yt}&f_{YX}&f_{YY}\end{vmatrix}=\begin{vmatrix}-f_{XX}&\alpha\\ \alpha^{T}&\Delta\end{vmatrix}=|\Delta|\cdot(-f_{XX}-\alpha\Delta^{-1}\alpha^{T}). \tag{6.75}\] Since \(|\Delta|>0\), \(f_{XX}>0\) and \(\alpha\Delta^{-1}\alpha^{T}\geq 0\) by the positive definiteness of the matrix \(\Delta\), it follows that \[\begin{vmatrix}-f_{XX}&f_{tX}&f_{tY}\\ f_{Xt}&f_{XX}&f_{XY}\\ f_{Yt}&f_{YX}&f_{YY}\end{vmatrix}<0. \tag{6.76}\] Now, let us finish the proof of Proposition 5.3. Proof.: For \(p\geq 6\), by Lemma 5.11, we have \(U_{0}\subset D_{H}\). By Proposition 5.1, we get \(f_{XX}>0\) and \[\begin{vmatrix}f_{XX}&f_{XY}\\ f_{XY}&f_{YY}\end{vmatrix}>0 \tag{6.77}\] on \(U_{0}\). Hence, by Lemma 6.9 and Lemma 6.7, we obtain \[h_{tt}<0\text{ and }\det(Hess(h))=h_{tt}h_{ss}-h_{ts}^{2}>0. \tag{6.78}\] On the other hand, by our construction of \(h(t,s)\), we know that as \((t,s)\) goes to the boundary of the region \(U_{0}\), \(h(t,s)\) goes to a value less than \(\frac{3.509}{2\pi}\) by Lemma 4.1. Therefore, \(h(t,s)\) has a unique maximal point, which is a critical point, denoted by \((t_{0R},s_{0R})\), lying in the region \(U_{0}\). Let \((t_{0},s_{0})=(t_{0R}+\sqrt{-1}X(t_{0R},s_{0R}),s_{0R}+\sqrt{-1}Y(t_{0R},s_{0R}))\); then \((t_{0},s_{0})\) is the unique critical point of \(V(p,t,s)\) with \((Re(t_{0}),Re(s_{0}))=(t_{0R},s_{0R})\in U_{0}\). Furthermore, by Lemma 6.4, we know that if \((t_{0}^{\prime},s_{0}^{\prime})\) is another critical point of the potential function \(V(p,t,s)\) with \((t_{0R}^{\prime},s_{0R}^{\prime})\in D_{0}^{\prime}\), then \((t_{0R}^{\prime},s_{0R}^{\prime})\in U_{0}\). Therefore, by the above argument, it follows that \((t_{0}^{\prime},s_{0}^{\prime})=(t_{0},s_{0})\). Hence, there is a unique critical point \((t_{0},s_{0})\) of \(V(p,t,s)\) in the region \(D_{0}^{\prime}\). ### Proof of Lemma 5.4 First, applying the standard works [30, 42] to the hyperbolic gluing equation for the complement of the twist knot in \(S^{3}\), we obtain **Proposition 6.10**.: _The hyperbolic gluing equation for \(\mathcal{K}_{p}\) can be written in the following form_ \[\log(w-1)+2(-1+2p)\log(-\frac{1}{w})+\log(-\frac{1}{w}-1)=(3-2p)\pi\sqrt{-1} \tag{6.79}\] _with_ \[\log w+\log(-\frac{1}{w})=-\pi\sqrt{-1}. 
\tag{6.80}\] _Suppose \(w_{0}\) is a solution of the above equation, then we have_ \[vol(S^{3}\setminus\mathcal{K}_{p})+\sqrt{-1}cs(S^{3}\setminus\mathcal{K}_{p})\] \[=\sqrt{-1}\left(R(w_{0})+R(-\frac{1}{w_{0}})+R(\frac{1}{1-w_{0}})+R(\frac{w_{0}}{w_{0}+1})\right)-\frac{\pi}{2}(2\pi\sqrt{-1}\] \[+\frac{2\pi\sqrt{-1}}{p}+\frac{\log(w_{0}-1)+2\log(-\frac{1}{w_{0}})-\log(-\frac{1}{w_{0}}-1)+\pi\sqrt{-1}}{p})\operatorname{mod}\pi^{2}\sqrt{-1}\mathbb{Z} \tag{6.81}\] _where_ \[R(U)=\frac{1}{2}\log U\log(1-U)+Li_{2}(U). \tag{6.82}\] Now we are going to prove Lemma 5.4. Proof.: Recall that the potential function for \(\mathcal{K}_{p}\) is given by \[V(p,t,s)=\pi\sqrt{-1}\left((2p+1)s^{2}-(2p+3)s-2t\right)\] \[+\frac{1}{2\pi\sqrt{-1}}\left(\operatorname{Li}_{2}(e^{2\pi\sqrt{-1}(t+s)})+\operatorname{Li}_{2}(e^{2\pi\sqrt{-1}(t-s)})-3\operatorname{Li}_{2}(e^{2\pi\sqrt{-1}t})+\frac{\pi^{2}}{6}\right). \tag{6.83}\] Suppose \((t_{0},s_{0})\) is the critical point of \(V(p,t,s)\), i.e. \((t_{0},s_{0})\) satisfies equations (5.17) and (5.18). Set \(x_{0}=e^{2\pi\sqrt{-1}t_{0}}\) and \(y_{0}=e^{2\pi\sqrt{-1}s_{0}}\); then we obtain \[x_{0}=\sqrt{y_{0}}-\frac{1}{\sqrt{y_{0}}}+1. \tag{6.84}\] Furthermore, comparing with the hyperbolic gluing equation (6.79), we have \[w_{0}=\frac{1}{\sqrt{y_{0}}}. \tag{6.85}\] Now we are going to prove the following identity \[2\pi V(p,t_{0},s_{0})=vol(S^{3}\setminus\mathcal{K}_{p})+\sqrt{-1}cs(S^{3}\setminus\mathcal{K}_{p})-(p+5)\pi^{2}\sqrt{-1}, \tag{6.86}\] which is just the statement of Lemma 5.4. First, \[vol(S^{3}\setminus\mathcal{K}_{p})+\sqrt{-1}cs(S^{3}\setminus\mathcal{K}_{p})+2\pi^{2}\sqrt{-1}\] \[=\sqrt{-1}\left(\frac{1}{2}\log w_{0}\log(1-w_{0})+Li_{2}(w_{0})+\frac{1}{2}\log(-\frac{1}{w_{0}})\log(1+\frac{1}{w_{0}})+Li_{2}(-\frac{1}{w_{0}})\right.\] \[\left.+\frac{1}{2}\log\frac{1}{1-w_{0}}\log(1-\frac{1}{1-w_{0}})+Li_{2}(\frac{1}{1-w_{0}})+\frac{1}{2}\log\frac{w_{0}}{w_{0}+1}\log(1-\frac{w_{0}}{w_{0}+1})+Li_{2}(\frac{w_{0}}{w_{0}+1})\right)\] \[-\frac{\pi}{2}(-2\pi\sqrt{-1}+\frac{2\pi\sqrt{-1}}{p}+\frac{\log(w_{0}-1)+2\log(-\frac{1}{w_{0}})-\log(-\frac{1}{w_{0}}-1)+\pi\sqrt{-1}}{p}). \tag{6.87}\] Since \(w_{0}\) satisfies the equation (6.79), we have \[p=\frac{\log(w_{0}-1)+2\log(-\frac{1}{w_{0}})-\log(-\frac{1}{w_{0}}-1)+3\pi\sqrt{-1}}{2(2\log(-\frac{1}{w_{0}})+\pi\sqrt{-1})}. \tag{6.88}\] Then, we obtain \[vol(S^{3}\setminus\mathcal{K}_{p})+\sqrt{-1}cs(S^{3}\setminus\mathcal{K}_{p})+2\pi^{2}\sqrt{-1}\] \[=\sqrt{-1}\left(\frac{1}{2}\log w_{0}\log(1-w_{0})+Li_{2}(w_{0})+\frac{1}{2}\log(-\frac{1}{w_{0}})\log(1+\frac{1}{w_{0}})+Li_{2}(-\frac{1}{w_{0}})\right.\] \[\left.+\frac{1}{2}\log\frac{1}{1-w_{0}}\log(1-\frac{1}{1-w_{0}})+Li_{2}(\frac{1}{1-w_{0}})+\frac{1}{2}\log\frac{w_{0}}{w_{0}+1}\log\frac{1}{w_{0}+1}+Li_{2}(\frac{w_{0}}{w_{0}+1})\right)\] \[-\pi(-\pi\sqrt{-1}+\frac{(2\pi\sqrt{-1}+\log(w_{0}-1)+2\log(-\frac{1}{w_{0}})-\log(-\frac{1}{w_{0}}-1)+\pi\sqrt{-1})(2\log(-1/w_{0})+\pi\sqrt{-1})}{3\pi\sqrt{-1}+\log(w_{0}-1)+2\log(-\frac{1}{w_{0}})-\log(-\frac{1}{w_{0}}-1)}), \tag{6.89}\] which is now a function of \(w_{0}\), denoted by \(F(w_{0})\). On the other hand, since \((t_{0},s_{0})\) satisfies the critical point equations (5.17) and (5.18), we have \[p=\frac{\left(3\pi\sqrt{-1}-2\pi\sqrt{-1}s_{0}+\log(1-e^{2\pi\sqrt{-1}(t_{0}+s_{0})})-\log(1-e^{2\pi\sqrt{-1}(t_{0}-s_{0})})\right)}{2(2s_{0}-1)\pi\sqrt{-1}}. 
\tag{6.90}\] Therefore, \[2\pi(V(p,t_{0},s_{0})+\frac{p+7}{2}\pi\sqrt{-1})\] \[=2\pi^{2}\sqrt{-1}(-2t_{0}-s_{0}+\frac{11}{4})+\frac{(2s_{0}-1)\pi}{2}\left(\log(1-e^{2\pi\sqrt{-1}(t_{0}+s_{0})})-\log(1-e^{2\pi\sqrt{-1}(t_{0}-s_{0})})\right)\] \[+\frac{1}{\sqrt{-1}}\left(\frac{\pi^{2}}{6}-3Li_{2}(e^{2\pi\sqrt{-1}t_{0}})+Li_{2}(e^{2\pi\sqrt{-1}(t_{0}+s_{0})})+Li_{2}(e^{2\pi\sqrt{-1}(t_{0}-s_{0})})\right). \tag{6.91}\] By using the formulas (6.84) and (6.85), we obtain \[2\pi(V(p,t_{0},s_{0})+\frac{p+7}{2}\pi\sqrt{-1})\] \[=\pi(-2\log(w_{0}-\frac{1}{w_{0}}+1)+2\log w_{0}+\frac{3\pi\sqrt{-1}}{2})+\pi(-\frac{\log w_{0}}{\pi\sqrt{-1}}-\frac{1}{2})\] \[\cdot\left(\log(1-(w_{0}-\frac{1}{w_{0}}+1)/w_{0}^{2})-\log(1-(w_{0}-\frac{1}{w_{0}}+1)w_{0}^{2})\right)\] \[+\frac{1}{\sqrt{-1}}\left(\frac{\pi^{2}}{6}-3Li_{2}(w_{0}-\frac{1}{w_{0}}+1)+Li_{2}((w_{0}-\frac{1}{w_{0}}+1)/w_{0}^{2})+Li_{2}((w_{0}-\frac{1}{w_{0}}+1)w_{0}^{2})\right), \tag{6.92}\] which is now a function of \(w_{0}\), denoted by \(G(w_{0})\). Finally, by some tedious calculations of the derivatives, one can prove that \(F(w_{0})=G(w_{0})\), which finishes the proof of Lemma 5.4. ### Proof of Lemma 5.5 In this section, we prove Lemma 5.5, which gives the estimate of the critical value \(\zeta_{\mathbb{R}}(p)\). Proof.: Recall that \(\zeta_{\mathbb{R}}(p)\) is given by \[\zeta_{\mathbb{R}}(p)=ReV(p,t_{0},s_{0}), \tag{6.93}\] where \((t_{0},s_{0})\) is the unique solution of the equations \[\frac{\partial V(p,t,s)}{\partial t} =-2\pi\sqrt{-1}+3\log(1-e^{2\pi\sqrt{-1}t})\] \[-\log(1-e^{2\pi\sqrt{-1}(t+s)})-\log(1-e^{2\pi\sqrt{-1}(t-s)})=0, \tag{6.94}\] \[\frac{\partial V(p,t,s)}{\partial s} =(4p+2)\pi\sqrt{-1}s-(2p+3)\pi\sqrt{-1}\] \[-\log(1-e^{2\pi\sqrt{-1}(t+s)})+\log(1-e^{2\pi\sqrt{-1}(t-s)})=0. \tag{6.95}\] Putting \(\gamma=\frac{1}{p}\), we regard \((t_{0},s_{0})\) as a function of \(\gamma\), and denote it by \((t(\gamma),s(\gamma))\); then \[\zeta_{\mathbb{R}}(p)=ReV(p,t(\gamma),s(\gamma)). \tag{6.96}\] By expanding the above equations, we obtain the expansions of \(t(\gamma)\) and \(s(\gamma)\) as follows. \[t(\gamma) =\frac{\log(1-2\sqrt{-1})}{2\pi\sqrt{-1}}+1+\frac{(1+2\sqrt{-1})\pi}{40}\gamma^{2}+\frac{(3+\sqrt{-1})\pi}{80}\gamma^{3}\] \[+(\frac{180\pi+19\pi^{3}}{9600}-\frac{45\pi-4\pi^{3}}{4800}\sqrt{-1})\gamma^{4}+O(\gamma^{5}),\] \[s(\gamma) =\frac{1}{2}+\frac{1}{2}\gamma+\frac{1-\sqrt{-1}}{8}\gamma^{2}-\frac{\sqrt{-1}}{16}\gamma^{3}\] \[-(\frac{1}{61}+\frac{3+\pi^{2}}{192}\sqrt{-1})\gamma^{4}+O(\gamma^{5}). \tag{6.97}\] For brevity, we write \(t(\gamma)\) and \(s(\gamma)\) as \[t(\gamma) =\frac{\log(1-2\sqrt{-1})}{2\pi\sqrt{-1}}+1+\hat{t}(\gamma)\gamma^{2},\] \[s(\gamma) =\frac{1}{2}+\frac{1}{2}\gamma+\hat{s}(\gamma)\gamma^{2}. \tag{6.98}\] By using the Taylor expansion, we obtain \[2\pi V(p,t(\gamma),s(\gamma)) =v_{8}-(p+\frac{23}{4})\pi^{2}\sqrt{-1}-\pi^{2}\sqrt{-1}\gamma-\pi^{2}\frac{1+\sqrt{-1}}{4}\gamma^{2}\] \[-\pi^{2}\frac{1}{8}\gamma^{3}-\pi^{2}\frac{6+\pi^{2}-6\sqrt{-1}}{192}\gamma^{4}+O(\gamma^{5}). \tag{6.99}\] Therefore \[2\pi\zeta_{\mathbb{R}}(p)=2\pi ReV(p,t(\gamma),s(\gamma))=v_{8}-\pi^{2}\frac{1}{4}\gamma^{2}-\pi^{2}\frac{1}{8}\gamma^{3}-\pi^{2}\frac{6+\pi^{2}}{192}\gamma^{4}+O(\gamma^{5}). \tag{6.100}\] From this estimate, we obtain that Lemma 5.5 holds. Actually, one can numerically verify that Lemma 5.5 holds for \(2\leq p\leq 1000\); as for \(p\geq 1001\), Lemma 5.5 can be proved by basic but tedious calculations.
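For concreteness, the following is a rough numerical sketch (ours, not part of the original argument) of how such a verification can be carried out: it locates the critical point \((t_{0},s_{0})\) by solving (6.94)-(6.95) with `mpmath`, starting from the leading terms of the expansion (6.97), and compares \(2\pi\zeta_{\mathbb{R}}(p)\) with a truncation of (6.100). The numerical value of \(v_{8}\) and the use of principal branches for all logarithms and dilogarithms are assumptions of the sketch.

```python
# A rough numerical sketch (not the authors' code) of the check behind
# Lemma 5.5: find the critical point of V(p,t,s) from eqs. (6.94)-(6.95)
# and compare 2*pi*Re V with the expansion (6.100). Principal branches
# are assumed throughout; v8 is the volume of the ideal octahedron.
from mpmath import mp, mpc, mpf, pi, exp, log, polylog, findroot, re

mp.dps = 30
I = mpc(0, 1)
v8 = mpf('3.66386237670887606021841405973')

def e(x):
    return exp(2*pi*I*x)

def grad(p):
    # the two critical point equations (6.94) and (6.95)
    def dVt(t, s):
        return (-2*pi*I + 3*log(1 - e(t))
                - log(1 - e(t + s)) - log(1 - e(t - s)))
    def dVs(t, s):
        return ((4*p + 2)*pi*I*s - (2*p + 3)*pi*I
                - log(1 - e(t + s)) + log(1 - e(t - s)))
    return lambda t, s: [dVt(t, s), dVs(t, s)]

def V(p, t, s):
    # the potential function (6.83)
    Li2 = lambda x: polylog(2, e(x))
    return (pi*I*((2*p + 1)*s**2 - (2*p + 3)*s - 2*t)
            + (Li2(t + s) + Li2(t - s) - 3*Li2(t) + pi**2/6)/(2*pi*I))

for p in [6, 10, 50, 200]:
    g = mpf(1)/p
    # initial guess: leading terms of the expansion (6.97)
    t_init = log(1 - 2*I)/(2*pi*I) + 1
    s_init = mpf(1)/2 + g/2
    sol = findroot(grad(p), (t_init, s_init))
    t0, s0 = sol[0], sol[1]
    lhs = 2*pi*re(V(p, t0, s0))
    rhs = v8 - pi**2*g**2/4 - pi**2*g**3/8 - pi**2*(6 + pi**2)*g**4/192
    print(p, lhs, rhs, lhs - rhs)
```

As \(p\) grows, the printed difference should shrink like \(O(\gamma^{5})\), consistent with (6.100).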
### Proof of Proposition 5.16 For a fixed constant \(c\in\mathbb{R}\), we define the subset \[D^{\prime}_{0}(c)=\{(t,s)\in D^{\prime}_{0}|s=c\}. \tag{6.101}\] We prove Proposition 5.16 by proving Proposition 6.11 and Proposition 6.22 in the following. **Proposition 6.11**.: _For \(c_{\text{upper}}(p)\leq c<1\) and \(n\in\mathbb{Z}\), there exists a constant \(C\) independent of \(c\), such that_ \[|\int_{D^{\prime}_{0}(c)}e^{(N+\frac{1}{2})V_{N}(p,t,c;0,n)}dt|<Ce^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}. \tag{6.102}\] Since \(D^{\prime}_{0}(c)\) is a slice of the region \(D^{\prime}_{0}\), we will prove Proposition 6.11 by using the saddle point method on \(D^{\prime}_{0}(c)\). Recall that \[V(p,t,s;m,n)=\pi\sqrt{-1}\left((2p+1)s^{2}-(2p+3+2n)s-(2m+2)t\right)\] \[+\frac{1}{2\pi\sqrt{-1}}\left(\text{Li}_{2}(e^{2\pi\sqrt{-1}(t+s)})+\text{Li}_{2}(e^{2\pi\sqrt{-1}(t-s)})-3\text{Li}_{2}(e^{2\pi\sqrt{-1}t})+\frac{\pi^{2}}{6}\right), \tag{6.103}\] so we have \[\frac{\partial V(p,t,s;0,n)}{\partial t} =-2\pi\sqrt{-1}+3\log(1-e^{2\pi\sqrt{-1}t})\] \[-\log(1-e^{2\pi\sqrt{-1}(t+s)})-\log(1-e^{2\pi\sqrt{-1}(t-s)}), \tag{6.104}\] and \[\frac{\partial V(p,t,s;0,n)}{\partial s} =(4p+2)\pi\sqrt{-1}s-(2p+3+2n)\pi\sqrt{-1}\] \[-\log(1-e^{2\pi\sqrt{-1}(t+s)})+\log(1-e^{2\pi\sqrt{-1}(t-s)}). \tag{6.105}\] **Proposition 6.12**.: _Fixing \(s=c\in[\frac{1}{2},1)\), as a function of \(t\), \(V(p,t,c;0,n)\) has a unique critical point \(T_{1}(c)\) with \(t_{1}(c)=Re(T_{1}(c))\in(\frac{1}{2},1)\)._ Proof.: Consider the equation \[\frac{dV(p,t,c;0,n)}{dt} =-2\pi\sqrt{-1}+3\log(1-e^{2\pi\sqrt{-1}t})\] \[-\log(1-e^{2\pi\sqrt{-1}(t+c)})-\log(1-e^{2\pi\sqrt{-1}(t-c)})=0, \tag{6.106}\] which gives \[x^{2}-2x+3-C-\frac{1}{C}=0, \tag{6.107}\] where \(x=e^{2\pi\sqrt{-1}t}\) and \(C=e^{2\pi\sqrt{-1}c}\). So we obtain \[x=1\pm 2\sqrt{-1}\sin(\pi c). \tag{6.108}\] Let \(T_{\pm}(c)\) be the solutions determined by the equation \[e^{2\pi\sqrt{-1}T_{\pm}(c)}=1\pm 2\sqrt{-1}\sin(\pi c). \tag{6.109}\] From (6.109), we have \[T_{\pm}(c)=\frac{\log(1\pm 2\sqrt{-1}\sin(\pi c))}{2\pi\sqrt{-1}}+\mathbb{Z}. \tag{6.110}\] Then \[Re(T_{\pm}(c))=\frac{arg(1\pm 2\sqrt{-1}\sin(\pi c))}{2\pi}+\mathbb{Z}. \tag{6.111}\] Since \(c\in[\frac{1}{2},1)\), we obtain \[0<arg(1+2\sqrt{-1}\sin(\pi c))<\arctan(2)<1.2,\] \[-1.2<-\arctan(2)<arg(1-2\sqrt{-1}\sin(\pi c))<0. \tag{6.112}\] Therefore, one can see that only the solution \(T_{-}(c)=\frac{\log(1-2\sqrt{-1}\sin(\pi c))}{2\pi\sqrt{-1}}+1\) satisfies \(Re(T_{-}(c))\in(\frac{1}{2},1)\). Moreover, by the following Lemma 6.13, we know that \(T_{-}(c)\) satisfies the equation (6.106), so \(T_{-}(c)\) is indeed a critical point of \(V(p,t,c;0,n)\). In the following, we will denote \(T_{-}(c)\) by \(T_{1}(c)=t_{1}(c)+\sqrt{-1}X_{1}(c)\) for convenience. **Lemma 6.13**.: \(T_{-}(c)=\frac{\log(1-2\sqrt{-1}\sin(\pi c))}{2\pi\sqrt{-1}}+1\) _satisfies the equation (6.106)._ Proof.: The equation (6.106) is equivalent to \[\begin{cases}x^{2}-2x+3-C-\frac{1}{C}=0,\\ 3arg(1-x)-arg(1-Cx)-arg(1-C^{-1}x)=2\pi,\end{cases} \tag{6.113}\] where \(x=e^{2\pi\sqrt{-1}t}\), \(C=e^{2\pi\sqrt{-1}c}\). Clearly, \(x_{0}=e^{2\pi\sqrt{-1}T_{-}(c)}=1-2\sqrt{-1}\sin(\pi c)\) satisfies the first equation, and we have the equation \[3arg(1-x_{0})-arg(1-Cx_{0})-arg(1-C^{-1}x_{0})=2k\pi \tag{6.114}\] for some \(k\in\mathbb{Z}\). In the following, we show \(k=1\). 
Indeed, for \(c\in[\frac{1}{2},1)\), we have \[3arg(1-x_{0})=3arg(2\sqrt{-1}\sin(\pi c))=\frac{3}{2}\pi,\] \[arg(1-Cx_{0})\] \[=arg(1-2\sin(\pi c)\sin(2\pi c)-\cos(2\pi c)+\sqrt{-1}(2\sin(\pi c)\cos(2\pi c)-\sin(2\pi c)))\] \[\in[-\frac{\pi}{4},\frac{\pi}{2}),\] \[arg(1-C^{-1}x_{0})\] \[=arg(1+2\sin(\pi c)\sin(2\pi c)-\cos(2\pi c)+\sqrt{-1}(2\sin(\pi c)\cos(2\pi c)+\sin(2\pi c)))\] \[\in(-\frac{\pi}{2},\frac{\pi}{2}). \tag{6.115}\] Therefore, we obtain \[3arg(1-x_{0})-arg(1-Cx_{0})-arg(1-C^{-1}x_{0})\in(\frac{\pi}{2},3\pi), \tag{6.116}\] which implies \(k=1\) in formula (6.114). **Lemma 6.14**.: _We have the following identities:_ \[Re\left(\log\left(1-e^{2\pi\sqrt{-1}(T_{1}(c)+c)}\right)\right) =\log\left(4\sin(\pi c)\sin\left(\frac{\pi c}{2}\right)\right), \tag{6.117}\] \[Re\left(\log\left(1-e^{2\pi\sqrt{-1}(T_{1}(c)-c)}\right)\right) =\log\left(4\sin(\pi c)\cos\left(\frac{\pi c}{2}\right)\right), \tag{6.118}\] \[Re\left(-\log\left(1-e^{2\pi\sqrt{-1}(T_{1}(c)+c)}\right)+\log\left(1-e^{2\pi\sqrt{-1}(T_{1}(c)-c)}\right)\right) =\log\left(\cot\left(\frac{\pi c}{2}\right)\right). \tag{6.119}\] Proof.: By straightforward computations, we obtain \[Re\left(\log\left(1-e^{2\pi\sqrt{-1}(T_{1}(c)+c)}\right)\right)\] \[=Re\left(\log\left(1-(1-2\sqrt{-1}\sin(\pi c))e^{2\pi\sqrt{-1}c}\right)\right)\] \[=Re\left(\log(1-2\sin(\pi c)\sin(2\pi c)-\cos(2\pi c)+\sqrt{-1}(2\sin(\pi c)\cos(2\pi c)-\sin(2\pi c)))\right)\] \[=\frac{1}{2}\log\left(\left(1-2\sin(\pi c)\sin(2\pi c)-\cos(2\pi c)\right)^{2}+\left(2\sin(\pi c)\cos(2\pi c)-\sin(2\pi c)\right)^{2}\right)\] \[=\log\left(4\sin(\pi c)\sin\left(\frac{\pi c}{2}\right)\right)\] and \[Re\left(\log\left(1-e^{2\pi\sqrt{-1}(T_{1}(c)-c)}\right)\right)\] \[=Re\left(\log\left(1-(1-2\sqrt{-1}\sin(\pi c))e^{-2\pi\sqrt{-1}c}\right)\right)\] \[=Re\left(\log(1+2\sin(\pi c)\sin(2\pi c)-\cos(2\pi c)+\sqrt{-1}(2\sin(\pi c)\cos(2\pi c)+\sin(2\pi c)))\right)\] \[=\frac{1}{2}\log\left(\left(1+2\sin(\pi c)\sin(2\pi c)-\cos(2\pi c)\right)^{2}+\left(2\sin(\pi c)\cos(2\pi c)+\sin(2\pi c)\right)^{2}\right)\] \[=\log\left(4\sin(\pi c)\cos\left(\frac{\pi c}{2}\right)\right),\] which prove the identities (6.117) and (6.118). Then the identity (6.119) follows from identities (6.117) and (6.118) immediately. **Lemma 6.15**.: _As a function of \(c\in[\frac{1}{2},1)\), \(ReV(p,T_{1}(c),c;0,n)\) is a decreasing function of \(c\). Furthermore, we have_ \[ReV(p,T_{1}(c),c;0,n)=2\left(\Lambda\left(\frac{c}{2}\right)+\Lambda\left(\frac{1}{2}-\frac{c}{2}\right)\right). \tag{6.120}\] Proof.: From the equation (6.109), we obtain \[\frac{dT_{1}(c)}{dc}=\frac{-\cos(\pi c)}{e^{2\pi\sqrt{-1}T_{1}(c)}}=\frac{-\cos(\pi c)}{1-2\sqrt{-1}\sin(\pi c)}. \tag{6.121}\] Then \[\frac{dReV(p,T_{1}(c),c;0,n)}{dc}\] \[=Re\left(\frac{\partial V(p,T_{1}(c),c;0,n)}{\partial t}\frac{dT_{1}(c)}{dc}+\frac{\partial V(p,T_{1}(c),c;0,n)}{\partial s}\frac{dc}{dc}\right)\] \[=Re\left(\frac{\partial V(p,T_{1}(c),c;0,n)}{\partial s}\right)\] \[=Re\left(-\log(1-e^{2\pi\sqrt{-1}(T_{1}(c)+c)})+\log\left(1-e^{2\pi\sqrt{-1}(T_{1}(c)-c)}\right)\right). \tag{6.122}\] By identity (6.119), we obtain \[\frac{dReV(p,T_{1}(c),c;0,n)}{dc}=\log\left(\cot\left(\frac{\pi c}{2}\right)\right)<0, \tag{6.123}\] since \(\cot\left(\frac{\pi c}{2}\right)<1\) for \(\frac{1}{2}<c<1\). Hence \(ReV(p,T_{1}(c),c;0,n)\) is a decreasing function. 
For \(c\geq\frac{1}{2}\), we have \[ReV(p,T_{1}(c),c;0,n)-ReV\left(p,T_{1}\left(\frac{1}{2}\right),\frac{1}{2};0,n\right)\] \[=\int_{\frac{1}{2}}^{c}\log\left(\cot\left(\frac{\pi\tau}{2}\right)\right)d\tau\] \[=\int_{\frac{1}{2}}^{c}\log\left(2\cos\left(\frac{\pi\tau}{2}\right)\right)d\tau-\int_{\frac{1}{2}}^{c}\log\left(2\sin\left(\frac{\pi\tau}{2}\right)\right)d\tau. \tag{6.124}\] Let \(x=\frac{1}{2}(1-\tau)\); we obtain \[\int_{\frac{1}{2}}^{c}\log\left(2\cos\left(\frac{\pi\tau}{2}\right)\right)d\tau=-2\int_{\frac{1}{4}}^{\frac{1}{2}(1-c)}\log(2\sin\pi x)dx=2\left(\Lambda\left(\frac{1}{2}-\frac{c}{2}\right)-\Lambda\left(\frac{1}{4}\right)\right). \tag{6.125}\] Let \(y=\frac{\tau}{2}\); we obtain \[\int_{\frac{1}{2}}^{c}\log\left(2\sin\left(\frac{\pi\tau}{2}\right)\right)d\tau=2\int_{\frac{1}{4}}^{\frac{c}{2}}\log(2\sin(\pi y))dy=-2\left(\Lambda\left(\frac{c}{2}\right)-\Lambda\left(\frac{1}{4}\right)\right). \tag{6.126}\] Hence, for \(c>\frac{1}{2}\), since \(ReV(p,T_{1}(\frac{1}{2}),\frac{1}{2};0,n)=4\Lambda(\frac{1}{4})\), we have \[ReV(p,T_{1}(c),c;0,n)=2\left(\Lambda\left(\frac{c}{2}\right)+\Lambda\left(\frac{1}{2}-\frac{c}{2}\right)\right). \tag{6.127}\] Let \((t_{0},s_{0})=(t_{0R}+X_{0}\sqrt{-1},s_{0R}+Y_{0}\sqrt{-1})\) be a critical point of the potential function \(V(p,t,s)\) as given in Proposition 5.3. By the proof of Lemma 5.5 in Appendix 6.5, we have \(0<\zeta_{\mathbb{R}}(p)<\frac{v_{8}}{2\pi}\). We assume \(c(p)\) is a solution to the following equation \[ReV(p,T_{1}(c),c)=2\left(\Lambda\left(\frac{c}{2}\right)+\Lambda\left(\frac{1}{2}-\frac{c}{2}\right)\right)=\zeta_{\mathbb{R}}(p). \tag{6.128}\] **Lemma 6.16**.: _We have the following inequality:_ \[c(p)<c_{upper}(p). \tag{6.129}\] Proof.: Let \(h(c)=\Lambda(\frac{c}{2})+\Lambda(\frac{1}{2}-\frac{c}{2})\), so that \(2h(c(p))=\zeta_{\mathbb{R}}(p)\). By Lemma 6.18, we have the following convergent power series \[h(c)=2\Lambda\left(\frac{1}{4}\right)-\frac{\pi}{4}\left(c-\frac{1}{2}\right)^{2}-\frac{\pi^{3}}{48}\left(c-\frac{1}{2}\right)^{4}-\frac{\pi^{5}}{288}\left(c-\frac{1}{2}\right)^{6}-\cdots. \tag{6.130}\] Then we obtain \[2h(c_{upper}(p)) =\zeta_{\mathbb{R}}(p)-2\left(\frac{\pi^{3}}{48}\left(c_{upper}(p)-\frac{1}{2}\right)^{4}+\frac{\pi^{5}}{288}\left(c_{upper}(p)-\frac{1}{2}\right)^{6}+\cdots\right)\] \[<\zeta_{\mathbb{R}}(p)=2h(c(p)). \tag{6.131}\] Hence, by Lemma 6.15, we obtain \(c(p)<c_{upper}(p)\). As a consequence of formula (6.129), we have **Corollary 6.17**.: _For \(c>c_{upper}(p)\), we have_ \[ReV(p,T_{1}(c),c)<ReV(p,T_{1}(c_{upper}(p)),c_{upper}(p))<ReV(p,T_{1}(c(p)),c(p))=\zeta_{\mathbb{R}}(p).\] **Lemma 6.18**.: _For \(\frac{1}{2}<c<1\), we have the following convergent power series_ \[h(c)=2\Lambda\left(\frac{1}{4}\right)-\frac{\pi}{4}\left(c-\frac{1}{2}\right)^{2}-\frac{\pi^{3}}{48}\left(c-\frac{1}{2}\right)^{4}-\frac{\pi^{5}}{288}\left(c-\frac{1}{2}\right)^{6}-\cdots. \tag{6.132}\] Proof.: We use the following power series expansion for \(\sec(x)\): \[\sec(x)=1+\frac{1}{2}x^{2}+\frac{5}{24}x^{4}+\cdots,\text{ for }|x|<\frac{\pi}{2}. \tag{6.133}\] From \(h(c)=\Lambda(\frac{c}{2})+\Lambda(\frac{1}{2}-\frac{c}{2})\), we obtain \[h^{\prime}(c) =\frac{1}{2}\log\cot\left(\frac{\pi c}{2}\right),\] \[h^{\prime\prime}(c) =-\frac{\pi}{2\sin(\pi c)}=-\frac{\pi}{2}\sec\left(\pi\left(c-\frac{1}{2}\right)\right). 
\tag{6.134}\] Since \(0<\left(c-\frac{1}{2}\right)\pi<\frac{\pi}{2}\), by using formula (6.133), we obtain \[h^{\prime}(c) =\int_{\frac{1}{2}}^{c}h^{\prime\prime}(t)dt+h^{\prime}\left(\frac{1}{2}\right)\] \[=-\frac{\pi}{2}\int_{\frac{1}{2}}^{c}\sec\left(\pi\left(t-\frac{1}{2}\right)\right)dt\] \[=-\frac{1}{2}\int_{0}^{\pi\left(c-\frac{1}{2}\right)}\sec(x)dx\] \[=-\frac{1}{2}\int_{0}^{\pi\left(c-\frac{1}{2}\right)}\left(1+\frac{1}{2}x^{2}+\frac{5}{24}x^{4}+\cdots\right)dx\] \[=-\frac{1}{2}\left(\pi\left(c-\frac{1}{2}\right)+\frac{1}{6}\left(\pi(c-\frac{1}{2})\right)^{3}+\frac{1}{24}\left(\pi(c-\frac{1}{2})\right)^{5}+\cdots\right), \tag{6.135}\] where we used \(h^{\prime}\left(\frac{1}{2}\right)=\frac{1}{2}\log\cot\left(\frac{\pi}{4}\right)=0\). Hence \[h(c) =\int_{\frac{1}{2}}^{c}h^{\prime}(t)dt+h\left(\frac{1}{2}\right)\] \[=-\frac{1}{2}\int_{\frac{1}{2}}^{c}\left(\pi\left(t-\frac{1}{2}\right)+\frac{\pi^{3}}{6}\left(t-\frac{1}{2}\right)^{3}+\frac{\pi^{5}}{24}\left(t-\frac{1}{2}\right)^{5}+\cdots\right)dt+2\Lambda\left(\frac{1}{4}\right)\] \[=2\Lambda\left(\frac{1}{4}\right)-\frac{1}{2\pi}\int_{0}^{\pi\left(c-\frac{1}{2}\right)}\left(x+\frac{1}{6}x^{3}+\frac{1}{24}x^{5}+\cdots\right)dx\] \[=2\Lambda\left(\frac{1}{4}\right)-\frac{\pi}{4}\left(c-\frac{1}{2}\right)^{2}-\frac{\pi^{3}}{48}\left(c-\frac{1}{2}\right)^{4}-\frac{\pi^{5}}{288}\left(c-\frac{1}{2}\right)^{6}-\cdots. \tag{6.136}\] **Lemma 6.19**.: _For \(t\in D_{0}^{\prime}(c)\) and \(n\in\mathbb{Z}\), we have_ \[\frac{\partial^{2}\text{Re}V(p,t+X\sqrt{-1},c;0,n)}{\partial X^{2}}>0. \tag{6.137}\] Proof.: By straightforward computations, we have \[\frac{1}{2\pi}\frac{\partial^{2}\text{Re}V(p,t+X\sqrt{-1},c;0,n)}{\partial X^{2}}\] \[=-3\frac{\sin(2\pi t)}{e^{2\pi X}+e^{-2\pi X}-2\cos(2\pi t)}\] \[+\frac{\sin(2\pi(t+c))}{e^{2\pi X}+e^{-2\pi X}-2\cos(2\pi(t+c))}+\frac{\sin(2\pi(t-c))}{e^{2\pi X}+e^{-2\pi X}-2\cos(2\pi(t-c))}. \tag{6.138}\] Clearly, for \(t\in D_{0}^{\prime}(c)\), we have \(\sin(2\pi t)<0\) and \(\cos(2\pi c)\leq 0\), which imply that \[-3\frac{\sin(2\pi t)}{e^{2\pi X}+e^{-2\pi X}-2\cos(2\pi t)}>0 \tag{6.139}\] and \[\cos(2\pi c)(e^{2\pi X}+e^{-2\pi X})-2\cos(2\pi t)\leq 2\cos(2\pi c)-2\cos(2\pi t)<0. \tag{6.140}\] Hence \[\frac{\sin(2\pi(t+c))}{e^{2\pi X}+e^{-2\pi X}-2\cos(2\pi(t+c))}+\frac{\sin(2\pi(t-c))}{e^{2\pi X}+e^{-2\pi X}-2\cos(2\pi(t-c))}\] \[=2\sin(2\pi t)\frac{\cos(2\pi c)(e^{2\pi X}+e^{-2\pi X})-2\cos(2\pi t)}{(e^{2\pi X}+e^{-2\pi X}-2\cos(2\pi(t+c)))(e^{2\pi X}+e^{-2\pi X}-2\cos(2\pi(t-c)))}>0. \tag{6.141}\] **Lemma 6.20**.: _For \(t\in D^{\prime}_{0}(c)\) and \(n\in\mathbb{Z}\), we have_ \[\text{Re}V(p,t+X\sqrt{-1},c;0,n)\text{ goes to }\infty\text{ uniformly, as }X^{2}\to\infty. \tag{6.142}\] Proof.: Based on Lemma 2.3, we introduce the following function for \(t\in D^{\prime}_{0}(c)\): \[F(X;n)=\begin{cases}X&\text{(if }X\geq 0),\\ \left(\frac{1}{2}-t\right)X&\text{(if }X<0).\end{cases} \tag{6.143}\] Since \(t>\frac{1}{2}\), we have \[F(X;n)\to\infty\text{ as }X^{2}\to\infty, \tag{6.144}\] and by Lemma 2.3, we obtain \[2\pi F(X;n)-C<\text{Re}V(p,t+X\sqrt{-1},c;0,n)<2\pi F(X;n)+C, \tag{6.145}\] which implies formula (6.142). Now, we can finish the proof of Proposition 6.11. 
Proof.: We show that there exists a homotopy \(D^{\prime}_{\delta}(c)\) (\(0\leq\delta\leq 1\)) between \(D^{\prime}_{0}(c)\) and \(D^{\prime}_{1}(c)\) such that \[(T_{1}(c),c)\in D^{\prime}_{1}(c), \tag{6.146}\] \[D^{\prime}_{1}(c)-\{(T_{1}(c),c)\}\subset\{t\in\mathbb{C}|\text{Re}V(p,t,c;0,n)<\text{Re}V(p,T_{1}(c),c)\}, \tag{6.147}\] \[\partial D^{\prime}_{1}(c)\subset\{t\in\mathbb{C}|\text{Re}V(p,t,c;0,n)<\zeta_{\mathbb{R}}(p)-\epsilon\}. \tag{6.148}\] In the fiber of the projection \(\mathbb{C}\to\mathbb{R}\) at \((t,c)\in D^{\prime}_{0}(c)\), we consider the flow from \(X=0\) determined by the vector field \(-\frac{\partial\text{Re}V}{\partial X}\). By Lemmas 6.19 and 6.20, for \(t\in D^{\prime}_{0}(c)\), \(\text{Re}V\) has a unique minimal point in the fiber, and the flow goes there. We denote this minimal point by \(g(t,c)\). We define the ending of the homotopy to be the set of the destinations of the flow, \[D^{\prime}_{1}(c)=\{t+g(t,c)\sqrt{-1}|t\in D^{\prime}_{0}(c)\}, \tag{6.149}\] and we define the internal part of the homotopy along the flows. We show (6.148) as follows. From the definition of \(D^{\prime}_{0}(c)\), \[\partial D^{\prime}_{0}(c)\subset\partial D^{\prime}_{0}\subset\{t\in\mathbb{C}|\text{Re}V(p,t,c;0,n)<\zeta_{\mathbb{R}}(p)-\epsilon\}. \tag{6.150}\] Further, by the construction of the homotopy, \(\text{Re}V(p,t,c;0,n)\) monotonically decreases along the homotopy. Hence (6.148) holds. We show (6.146) and (6.147) as follows. Consider the function \[h(t,c)=ReV(p,t+g(t,c)\sqrt{-1},c;0,n). \tag{6.151}\] It follows from the definition of \(g(t,c)\) that \[\frac{\partial ReV(p,t+g(t,c)\sqrt{-1},c;0,n)}{\partial X}=0\text{ at }X=g(t,c). \tag{6.152}\] Hence, we have \[Im\frac{\partial V}{\partial t}=0\text{ at }t+g(t,c)\sqrt{-1}. \tag{6.153}\] Furthermore, we also have \[\frac{\partial h}{\partial t}=Re\frac{\partial V}{\partial t}\text{ at }t+g(t,c)\sqrt{-1}. \tag{6.154}\] Therefore, \(t+g(t,c)\sqrt{-1}\) is a critical point of \(V\) exactly when \(t\) is a critical point of \(h(t,c)\). By Proposition 6.12, \(h(t,c)\) has a unique maximal point at \(t=t_{1}(c)\). Moreover, by Corollary 6.17, the maximal value satisfies \[h(t_{1}(c),c)=ReV(p,T_{1}(c),c)<ReV(p,T_{1}(c_{upper}(p)),c_{upper}(p))=\zeta_{\mathbb{R}}(p)-\epsilon\] for some small \(\epsilon>0\). Therefore, (6.146) and (6.147) hold. The assumption of the saddle point method in one dimension is thus verified, and we finish the proof of Proposition 6.11. **Remark 6.21**.: By using the same method as illustrated above, one can prove **Proposition 6.22**.: _For \(0<c\leq 1-c_{upper}(p)\) and \(n\in\mathbb{Z}\), there exists a constant \(C\) independent of \(c\), such that_ \[|\int_{D^{\prime}_{0}(c)}e^{(N+\frac{1}{2})V_{N}(p,t,c;0,n)}dt|<Ce^{(N+\frac{1}{2})(\zeta_{\mathbb{R}}(p)-\epsilon)}. \tag{6.155}\] Actually, Proposition 6.22 can also be derived from the symmetry of the function \(ReV(p,t,c;0,n)\) with respect to the line \(c=\frac{1}{2}\). Therefore, combining Proposition 6.11 and Proposition 6.22, we obtain Proposition 5.16. ### Proof of Proposition 5.14 This subsection is devoted to the proof of Proposition 5.14. **Lemma 6.23**.: _For \(c\in(0,1)\), \(t\in D^{\prime}_{0}(c)\), \(n\in\mathbb{Z}\), we have_ \[ReV(p,t+X\sqrt{-1},c;-1,n)\to 0,\text{ as }X\to+\infty. 
\tag{6.156}\] Proof.: Note that \[ReV(p,t+X\sqrt{-1},c;-1,n) =Re\left(\frac{1}{2\pi\sqrt{-1}}\left(\text{Li}_{2}(e^{-2\pi X}e^{2\pi\sqrt{-1}(t+c)})\right.\right.\] \[\left.\left.+\text{Li}_{2}(e^{-2\pi X}e^{2\pi\sqrt{-1}(t-c)})-3\text{Li}_{2}(e^{-2\pi X}e^{2\pi\sqrt{-1}t})\right)\right), \tag{6.157}\] which is independent of \(p\) and \(n\). Then it is easy to see that \(ReV(p,t+X\sqrt{-1},c;-1,n)\) uniformly converges to \(0\) as \(X\to+\infty\). **Lemma 6.24**.: _For \(t\in D^{\prime}_{0}(c)\), we have_ \[\frac{\partial^{2}{\it{Re}}V(p,t+X\sqrt{-1},c;-1,n)}{\partial X^{2}}>0. \tag{6.158}\] Proof.: Note that we only take the second order derivative with respect to \(X\) here, so \[\frac{\partial^{2}{\it{Re}}V(p,t+X\sqrt{-1},c;-1,n)}{\partial X^{2}}=\frac{\partial^{2}{\it{Re}}V(p,t+X\sqrt{-1},c;0,n)}{\partial X^{2}}>0 \tag{6.159}\] by Lemma 6.19. Now, we can finish the proof of Proposition 5.14. Proof.: We show the existence of a homotopy \(D^{\prime}_{\delta}(c)\) (\(0\leq\delta\leq\delta_{0}\)) between \(D^{\prime}_{0}(c)\) and \(D^{\prime}_{\delta_{0}}(c)\) such that \[D^{\prime}_{\delta_{0}}(c)\subset\{(t,c)\in D^{\prime}_{0}(c)|{\it{Re}}V(p,t+X\sqrt{-1},c;-1,n)<\zeta_{\mathbb{R}}(p)-\epsilon\}, \tag{6.160}\] \[\partial D^{\prime}_{\delta}(c)\subset\{(t,c)\in D^{\prime}_{0}(c)|{\it{Re}}V(p,t+X\sqrt{-1},c;-1,n)<\zeta_{\mathbb{R}}(p)-\epsilon\}. \tag{6.161}\] For each fixed \((t,c)\in D^{\prime}_{0}(c)\), we move \(X\) from \(0\) along the flow \(-\frac{\partial{\it{Re}}V(p,t+X\sqrt{-1},c;-1,n)}{\partial X}\); then by Lemma 6.23, the value of \({\it{Re}}V(p,t+X\sqrt{-1},c;-1,n)\) monotonically decreases and it goes to \(0\). As for (6.161), since \(\partial D^{\prime}_{0}(c)\subset\{(t,c)\in D^{\prime}_{0}(c)|{\it{Re}}V(p,t+X\sqrt{-1},c;-1,n)<\zeta_{\mathbb{R}}(p)-\epsilon\}\) and the value of \({\it{Re}}V\) monotonically decreases along the flow, (6.161) holds. As for (6.160), since the value of \({\it{Re}}V\) monotonically goes to \(0\) by Lemma 6.23, (6.160) holds for sufficiently large \(\delta_{0}\). Therefore, such a required homotopy exists, and we prove Proposition 5.14.
2302.10772
Source of black bounces in general relativity
Black bounces are spacetimes that can describe, depending on certain parameters, black holes or wormholes. In this work, we use a method to obtain the matter content that generates black bounce solutions in general relativity. The method is constructed in a general way, and as models, we apply it to the Simpson--Visser black bounce solution and the Bardeen-type black bounce solution. We obtain that these metrics are solutions of Einstein's equations when we consider the coupling of the gravitational interaction with a phantom scalar field with a nonlinear electrodynamics. The presence of the phantom scalar field is linked to the fact that this type of solution violates the null energy condition. We analyze separately the energy conditions associated with the stress-energy tensor for the scalar field and for the electromagnetic field.
Manuel E. Rodrigues, Marcos V. de S. Silva
2023-02-20T14:42:53Z
http://arxiv.org/abs/2302.10772v1
# Source of black bounces in general relativity ###### Abstract Black bounces are spacetimes that can describe, depending on certain parameters, black holes or wormholes. In this work, we use a method to obtain the matter content that generates black bounce solutions in general relativity. The method is constructed in a general way, and as models, we apply it to the Simpson-Visser black bounce solution and the Bardeen-type black bounce solution. We obtain that these metrics are solutions of Einstein's equations when we consider the coupling of the gravitational interaction with a phantom scalar field with a nonlinear electrodynamics. The presence of the phantom scalar field is linked to the fact that this type of solution violates the null energy condition. We analyze separately the energy conditions associated with the stress-energy tensor for the scalar field and for the electromagnetic field. pacs: 04.50.Kd,04.70.Bw ## I Introduction From the classical point of view, general relativity is the theory that, in the simplest way, best describes the gravitational interaction [1; 2]. This theory was able to solve existing problems and predict new phenomena, such as the bending of light and the existence of gravitational waves [3; 4; 5; 6; 7; 8; 9; 10]. Mathematically, general relativity is described through the Einstein equations, a set of nonlinear, second-order differential equations coupled in the components of the metric tensor [2; 11]. Depending on the imposed conditions, such as symmetry or matter content, the Einstein equations have different solutions. One of the best known solutions is the Schwarzschild metric. The Schwarzschild line element can be used to study the spacetime around static stars. This solution also describes the simplest model of a black hole that exists, since it has no spin or charge [11]. From an astrophysical point of view, black holes are compact objects with a gravitational field so strong that not even light can escape [2]. This strong field allows us to test general relativity more precisely, allowing us to measure small deviations. Currently, these experiments are the detection of gravitational waves and the imaging of black holes [7; 12]. Another interesting solution of the Einstein equations is the wormhole. This object is distinguished by the possibility of connecting two different points of the same universe, or of different universes, through a tunnel, the throat [13]. The first wormhole solution was proposed by Einstein and Rosen and is basically the maximal extension of the Schwarzschild solution [14]. Despite connecting two distinct regions, the Einstein-Rosen solution is not traversable, which means that no particle can cross through the throat. The first traversable wormhole solution was proposed by Ellis and Bronnikov, and later studied again by Morris and Thorne [15; 16; 17]. Interestingly, wormholes can mimic the ringdown of black holes [18]. Thus, the signal of a gravitational wave is not a definitive proof of the presence of an event horizon. One problem that arises with these solutions is that exotic matter content is required to maintain them. This exotic matter, such as phantom scalar fields and nonlinear electrodynamics, violates the energy conditions [19; 20; 21; 22; 23; 24; 25]. However, solutions have recently emerged with a more viable material content, considering for example a Dirac field [26; 27; 28; 29]. 
In the literature there is a very extensive number of works involving wormholes, from geodesics to the stability of these solutions [30; 31; 32; 33; 34; 35; 36; 37]. Nonlinear electrodynamics is also involved with another type of solution, known as the regular black hole. These solutions are characterized by the fact that they have event horizons but do not have singularities like traditional black holes, such as Schwarzschild [38]. The first regular solution was proposed by James Bardeen [39]. However, it was Beato and Garcia who showed that this solution arises from the Einstein equations coupled with nonlinear electrodynamics [40]. These solutions have interesting properties, such as the fact that photons do not follow geodesics, or changes in the thermodynamics of these solutions [41; 42]. Using nonlinear electrodynamics, it is also possible to construct solutions with multiple horizons [43]. As in the case of wormholes, the study of regular solutions has been extensively explored in recent decades [44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55]. Recently, Simpson and Visser proposed a new type of regular solution that, depending on the choice of parameter, can become the Schwarzschild solution, a regular black hole, a one-way traversable wormhole, or a two-way traversable wormhole [56]. This type of solution is known as a black bounce. This solution has a throat at \(r=0\), and the area of the event horizon has no dependence on the parameter of the solution. In addition to the Simpson-Visser solution, there are other black bounce models [57; 58; 59; 60]. However, like the previous solution, most of these metrics were not proposed as solutions of the Einstein equations. Even without knowing the material content of these solutions, several properties can be analyzed [61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79]. However, there are other properties that can only be studied when the source of matter is known. Pedro and Bronnikov, in different works, proposed ways to obtain this material content considering nonlinear electrodynamics with a phantom scalar field [80; 81; 82]. Phantom scalar fields are commonly associated with wormholes [19]. This type of source is capable of generating wormhole solutions even with minimal coupling [15]. These fields are distinguished by the presence of a negative energy density, thus violating the known energy conditions. In general, solutions of this type end up having a series of complications, such as instability or even poorly defined thermodynamics [30]. Another type of solution, similar to the black bounces, that can be generated by the phantom scalar field is the black universe [83; 84; 85; 86; 87; 88]. In addition to these facts, the phantom scalar field can still be treated as a candidate for dark energy, further increasing its relevance. Still on the phantom scalar field, we can have black hole solutions, also known as phantom black holes. This was first proposed by Bergmann and Leipnik in 1957 [89]. Later, other authors considered this type of source to find more solutions [90; 91; 92]. There are also regular versions of these solutions [93; 94; 95]. Some phantom black holes, known as cold black holes, have zero temperature [96; 97; 98]. The structure of this paper is organized as follows. In Sec. II, we give the motivation for using the phantom scalar field with nonlinear electrodynamics to generate black bounce solutions. 
We also use the Einstein equations and build, in a general form, the formalism that will be used to obtain the material content of the solutions. In Secs. III and IV, using the method constructed in the previous section, we obtain the material content of the Simpson-Visser solution and of the Bardeen-type black bounce. Section V is dedicated to the study of the energy conditions for each field that makes up the source of the solutions. Our conclusions and perspectives are presented in Sec. VI. We adopt the metric signature \((+,-,-,-)\). We shall work in geometrodynamic units, where \(G=\hbar=c=1\). ## II General solution Black bounces are structures that have a throat covered by an event horizon. These compact objects interpolate between a regular black hole and a wormhole. The Simpson-Visser solution describes a black bounce and is given by the line element [56] \[ds^{2}=f(r)dt^{2}-f(r)^{-1}dr^{2}-\Sigma(r)^{2}\left(d\theta^{2}+\sin^{2}\theta d\varphi^{2}\right), \tag{1}\] with \[f(r)=1-\frac{2m}{\sqrt{r^{2}+a^{2}}},\quad\text{and}\quad\Sigma(r)=\sqrt{r^{2}+a^{2}}. \tag{2}\] If we consider general relativity, the black bounce model proposed by Simpson and Visser does not satisfy the Einstein equations in vacuum. In fact, we can interpret the matter content as being an anisotropic fluid given by \[\rho = -\frac{a^{2}\left(\sqrt{r^{2}+a^{2}}-4m\right)}{8\pi\left(r^{2}+a^{2}\right)^{5/2}}, \tag{3}\] \[p_{1} = -\frac{a^{2}}{8\pi\left(r^{2}+a^{2}\right)^{2}},\] (4) \[p_{2} = -\frac{a^{2}\left(\sqrt{r^{2}+a^{2}}-m\right)}{8\pi\left(r^{2}+a^{2}\right)^{5/2}}, \tag{5}\] where \(t\) is the timelike coordinate, and \[\rho = -\frac{a^{2}}{8\pi\left(r^{2}+a^{2}\right)^{2}}, \tag{6}\] \[p_{1} = -\frac{a^{2}\left(\sqrt{r^{2}+a^{2}}-4m\right)}{8\pi\left(r^{2}+a^{2}\right)^{5/2}},\] (7) \[p_{2} = -\frac{a^{2}\left(\sqrt{r^{2}+a^{2}}-m\right)}{8\pi\left(r^{2}+a^{2}\right)^{5/2}}, \tag{8}\] where \(t\) is the spacelike coordinate. The quantities \(\rho\), \(p_{1}\), and \(p_{2}\) are the components of the stress-energy tensor \[T^{\mu}{}_{\nu}=\mathrm{diag}\left[\rho,-p_{1},-p_{2},-p_{2}\right], \tag{9}\] where \(t\) is the timelike coordinate, and \[T^{\mu}{}_{\nu}=\mathrm{diag}\left[-p_{1},\rho,-p_{2},-p_{2}\right], \tag{10}\] where \(t\) is the spacelike coordinate. A first attempt to build a source for this solution would be to consider nonlinear electrodynamics. However, the stress-energy tensor for nonlinear electrodynamics has the symmetry \(T^{0}{}_{0}=T^{1}{}_{1}\)[41], and we see from equations (3)-(4) that \(T^{0}{}_{0}\neq T^{1}{}_{1}\). This implies that the Simpson-Visser metric cannot be interpreted as a solution of the Einstein equations in the presence of nonlinear electrodynamics alone. This type of behavior is not unique to the Simpson-Visser solution. To solve this problem, let us consider the theory described by the action \[S=\int d^{4}x\sqrt{-g}\left[R-2\kappa^{2}\left(\epsilon g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V\left(\phi\right)\right)+2\kappa^{2}L(F)\right], \tag{11}\] where \(R\) is the Ricci scalar, \(g_{\mu\nu}\) are the components of the metric tensor, \(g\) is the determinant of the metric, \(\phi\) is a scalar field, \(V\left(\phi\right)\) is the potential related to the scalar field, and the electromagnetic Lagrangian \(L(F)\) is an arbitrary function of the electromagnetic scalar \(F=F^{\mu\nu}F_{\mu\nu}/4\). Here, \(\epsilon=\pm 1\): for \(\epsilon=+1\) we have a usual scalar field, and for \(\epsilon=-1\) we have a phantom scalar field [97]. 
The field equations related to the action (11) are \[\nabla_{\mu}\left[L_{F}F^{\mu\nu}\right]=\frac{1}{\sqrt{-g}}\partial_{\mu}\left[\sqrt{-g}L_{F}F^{\mu\nu}\right]=0, \tag{12}\] \[2\epsilon\nabla_{\mu}\nabla^{\mu}\phi=-\frac{dV(\phi)}{d\phi},\] (13) \[R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=\kappa^{2}T^{\phi}_{\mu\nu}+\kappa^{2}T^{EM}_{\mu\nu}, \tag{14}\] where \(L_{F}=\partial L/\partial F\), \(R_{\mu\nu}\) is the Ricci tensor, \(T^{\phi}_{\mu\nu}\) is the stress-energy tensor of the scalar field, and \(T^{EM}_{\mu\nu}\) is the stress-energy tensor of the electromagnetic field. The stress-energy tensors \(T^{\phi}_{\mu\nu}\) and \(T^{EM}_{\mu\nu}\) are given by \[T^{\phi}_{\mu\nu} = 2\epsilon\partial_{\mu}\phi\partial_{\nu}\phi-g_{\mu\nu}\left(\epsilon g^{\alpha\beta}\partial_{\alpha}\phi\partial_{\beta}\phi-V(\phi)\right), \tag{15}\] \[T^{EM}_{\mu\nu} = g_{\mu\nu}L(F)-L_{F}F_{\mu}{}^{\alpha}F_{\nu\alpha}. \tag{16}\] The line element that describes a general black bounce spacetime is written as [57] \[ds^{2}=f(r)dt^{2}-f(r)^{-1}dr^{2}-\Sigma^{2}(r)\left(d\theta^{2}+\sin^{2}\theta d\varphi^{2}\right). \tag{17}\] We will consider only magnetically charged solutions, so that the only nonzero component of the Maxwell-Faraday tensor, \(F_{\mu\nu}\), is \[F_{23}=q\sin\theta, \tag{18}\] and the electromagnetic scalar is \[F(r)=\frac{q^{2}}{2\Sigma^{4}}. \tag{19}\] The equations of motion for the gravitational field and the scalar field are \[-\frac{f^{\prime}(r)\Sigma^{\prime}(r)}{\Sigma(r)}-\frac{f(r)\Sigma^{\prime}(r)^{2}}{\Sigma(r)^{2}}-\frac{2f(r)\Sigma^{\prime\prime}(r)}{\Sigma(r)}+\frac{1}{\Sigma(r)^{2}}=\kappa^{2}L(r)+\kappa^{2}\epsilon f(r)\phi^{\prime}(r)^{2}+\kappa^{2}V(r), \tag{20}\] \[-\frac{f^{\prime}(r)\Sigma^{\prime}(r)}{\Sigma(r)}-\frac{f(r)\Sigma^{\prime}(r)^{2}}{\Sigma(r)^{2}}+\frac{1}{\Sigma(r)^{2}}=\kappa^{2}L(r)-\kappa^{2}\epsilon f(r)\phi^{\prime}(r)^{2}+\kappa^{2}V(r),\] (21) \[-\frac{f^{\prime}(r)\Sigma^{\prime}(r)}{\Sigma(r)}-\frac{f^{\prime\prime}(r)}{2}-\frac{f(r)\Sigma^{\prime\prime}(r)}{\Sigma(r)}=\kappa^{2}L(r)-\frac{\kappa^{2}q^{2}L_{F}(r)}{\Sigma(r)^{4}}+\kappa^{2}\epsilon f(r)\phi^{\prime}(r)^{2}+\kappa^{2}V(r),\] (22) \[-2\epsilon\left(f^{\prime}(r)\phi^{\prime}(r)+f(r)\phi^{\prime\prime}(r)\right)-\frac{4\epsilon f(r)\Sigma^{\prime}(r)\phi^{\prime}(r)}{\Sigma(r)}=-\frac{V^{\prime}(r)}{\phi^{\prime}(r)}. \tag{23}\] From the field equations, we find that \[\phi^{\prime}(r)^{2}=-\frac{\Sigma^{\prime\prime}(r)}{\kappa^{2}\epsilon\Sigma(r)},\quad\mbox{or}\quad\phi^{\prime}(r)=\frac{i}{\kappa\sqrt{\epsilon}}\sqrt{\frac{\Sigma^{\prime\prime}(r)}{\Sigma(r)}}. \tag{24}\] Usually, \(\Sigma^{\prime\prime}(r)/\Sigma(r)>0\), so that, to guarantee that the scalar field is real, we need \(\epsilon=-1\): the scalar field is real only for a phantom scalar field. If we calculate the null energy condition in regions where \(f(r)>0\), one of the inequalities is \[NEC_{1}\Longleftrightarrow-\frac{2f\Sigma^{\prime\prime}}{\kappa^{2}\Sigma}\geq 0. \tag{25}\] As \(\Sigma^{\prime\prime}(r)/\Sigma(r)>0\), the null energy condition is violated. However, if \(\Sigma^{\prime\prime}(r)/\Sigma(r)<0\), the null energy condition is satisfied and we have a real scalar field for \(\epsilon=1\). 
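As a quick symbolic illustration of this sign condition (a minimal sketch of ours, not taken from the paper), one can verify that the bounce profile \(\Sigma(r)=\sqrt{r^{2}+a^{2}}\) used throughout this work indeed satisfies \(\Sigma^{\prime\prime}/\Sigma>0\) for all \(r\):

```python
# A minimal sympy check (our sketch, not the authors' code): for the
# bounce profile Sigma = sqrt(r^2 + a^2), Sigma''/Sigma > 0 everywhere,
# so by eq. (24) a real scalar field requires epsilon = -1 (phantom).
import sympy as sp

r = sp.symbols('r', real=True)
a = sp.symbols('a', positive=True)

Sigma = sp.sqrt(r**2 + a**2)
ratio = sp.simplify(sp.diff(Sigma, r, 2)/Sigma)
print(ratio)               # a**2/(a**2 + r**2)**2
print(ratio.is_positive)   # True: positive for all real r and a > 0
```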
Based on these points, we can elaborate the following theorem: _**Theorem**: For any black bounce solution that arises from coupling the gravitational field with a scalar field and nonlinear electrodynamics, the scalar field must necessarily be phantom if \(\Sigma^{\prime\prime}(r)/\Sigma(r)>0\), i.e., if the inequality \(NEC_{1}\) is violated. In this case, \(\epsilon=-1\)._ From equations (20)-(23) we find \[L(r) = -\frac{f^{\prime}(r)\Sigma^{\prime}(r)}{\kappa^{2}\Sigma(r)}- \frac{f(r)\Sigma^{\prime}(r)^{2}}{\kappa^{2}\Sigma(r)^{2}}+\epsilon f(r)\phi^ {\prime}(r)^{2}+\frac{1}{\kappa^{2}\Sigma(r)^{2}}-V(r), \tag{26}\] \[L_{F}(r) = \frac{\Sigma(r)^{4}f^{\prime\prime}(r)}{2\kappa^{2}q^{2}}-\frac {f(r)\Sigma(r)^{2}\Sigma^{\prime}(r)^{2}}{\kappa^{2}q^{2}}+\frac{f(r)\Sigma (r)^{3}\Sigma^{\prime\prime}(r)}{\kappa^{2}q^{2}}+\frac{2\epsilon f(r)\Sigma (r)^{4}\phi^{\prime}(r)^{2}}{q^{2}}+\frac{\Sigma(r)^{2}}{\kappa^{2}q^{2}},\] (27) \[V^{\prime}(r) = \frac{2\phi^{\prime}(r)\left(\epsilon\Sigma(r)f^{\prime}(r)\phi^ {\prime}(r)+2\epsilon f(r)\Sigma^{\prime}(r)\phi^{\prime}(r)+\epsilon f(r) \Sigma(r)\phi^{\prime\prime}(r)\right)}{\Sigma(r)}. \tag{28}\] To solve (24) and to obtain (26)-(28), we need to specify \(f(r)\) and \(\Sigma(r)\). ## III Simpson-Visser solution The Simpson-Visser model is given by the line element (1) with (2). Here, we will consider that the parameter \(a=q\) is the magnetic charge. We find that the scalar field, the potential, and the electromagnetic quantities are \[\phi(r) = \frac{\tan^{-1}\left(\frac{r}{q}\right)}{\kappa}, \tag{29}\] \[V(r) = \frac{4mq^{2}}{5\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}},\] (30) \[L(r) = \frac{6mq^{2}}{5\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}},\] (31) \[L_{F}(r) = \frac{3m}{\kappa^{2}\sqrt{q^{2}+r^{2}}}. \tag{32}\] In Fig. 1 we show the behavior of the scalar field in terms of the radial coordinate for different charge values. The field is always positive for positive radial coordinate values and always negative for negative radial coordinate values. We thus see that the scalar field is not symmetric \(r\rightarrow-r\). The electromagnetic quantities must obey the relation \[L_{F}-\frac{\partial L}{\partial F}=L_{F}-\frac{\partial L}{\partial r}\left( \frac{\partial F}{\partial r}\right)^{-1}=0. \tag{33}\] If we substitute the expressions (31) and (32) in equation (33), we verify that it is identically satisfied. From (29) and (19), we obtain the forms of \(r(\phi)\) and \(r(F)\). This allows us to get \(V(\phi)\) and \(L(F)\), \[V(\phi) = \frac{4m\cos^{5}\left(\phi\kappa\right)}{5\kappa^{2}\left|q\right| ^{3}}, \tag{34}\] \[L(F) = \frac{12\sqrt[4]{2}mF^{5/4}}{5\kappa^{2}\sqrt{|q|}}, \tag{35}\] so that, we obtain the material content that describes the Simpson-Visser solution. In Fig. 2 we see that the potential tends to a constant for a zero value of the scalar field and is periodic as \(\phi\) increases. The intensity of the potential decreases for larger values of charge. We also see that the electromagnetic Lagrangian is not a multivalued function, as expected for a solution with a magnetic source, it tends to zero when \(F=0\), and grows as \(F\) increases. ## IV Bardeen-type black bounce Now we consider the Bardeen-type black bounce solution described by the line element (1) with [57] \[f(r)=1-\frac{2mr^{2}}{\left(r^{2}+q^{2}\right)^{3/2}},\quad\text{and}\quad \Sigma(r)=\sqrt{r^{2}+q^{2}}. \tag{36}\] The scalar field is also given by equation (29), since it depends on \(\Sigma(r)\). 
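The consistency of the expressions above can also be checked symbolically. The following `sympy` sketch (our own, with the paper's equations as input; `sp.simplify` is expected to reduce each difference to zero) verifies that \(\phi\) in (29) solves (24) with \(\epsilon=-1\), that (31) and (32) obey the compatibility condition (33), and that the Lagrangian (35) reproduces (31) once \(F=q^{2}/(2\Sigma^{4})\) is substituted:

```python
# A sympy sanity check (our sketch, not the authors' code) of the
# Simpson-Visser source: eq. (24) with epsilon = -1, the compatibility
# condition (33), and the closed form (35) of L(F).
import sympy as sp

r = sp.symbols('r', real=True)
q, kappa, m = sp.symbols('q kappa m', positive=True)

Sigma = sp.sqrt(r**2 + q**2)
phi = sp.atan(r/q)/kappa                                        # eq. (29)

# eq. (24) with epsilon = -1: phi'^2 = Sigma''/(kappa^2 Sigma)
print(sp.simplify(sp.diff(phi, r)**2
                  - sp.diff(Sigma, r, 2)/(kappa**2*Sigma)))     # 0

# condition (33): L_F = (dL/dr)/(dF/dr), with F = q^2/(2 Sigma^4)
F = q**2/(2*Sigma**4)                                           # eq. (19)
L = 6*m*q**2/(5*kappa**2*(q**2 + r**2)**sp.Rational(5, 2))      # eq. (31)
L_F = 3*m/(kappa**2*sp.sqrt(q**2 + r**2))                       # eq. (32)
print(sp.simplify(L_F - sp.diff(L, r)/sp.diff(F, r)))           # 0

# eq. (35): L(F) reproduces L(r) after substituting F(r)
L_of_F = 12*2**sp.Rational(1, 4)*m*F**sp.Rational(5, 4)/(5*kappa**2*sp.sqrt(q))
print(sp.simplify(L_of_F - L))                                  # 0
```

The same checks go through for the Bardeen-type quantities given below, with (37)-(39) in place of (30)-(32).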
The potential and the electromagnetic quantities are \[V(r) = \frac{4m\left(7q^{2}r^{2}-8q^{4}\right)}{35\kappa^{2}\left(q^{2}+r^{2}\right)^{7/2}}, \tag{37}\] \[L(r) = \frac{2mq^{2}\left(16q^{2}+91r^{2}\right)}{35\kappa^{2}\left(q^{2}+r^{2}\right)^{7/2}},\] (38) \[L_{F}(r) = \frac{m\left(13r^{2}-2q^{2}\right)}{\kappa^{2}\left(q^{2}+r^{2}\right)^{3/2}}. \tag{39}\] Figure 1: Behavior of the scalar field as a function of the radial coordinate for different values of the charge. Equations (38) and (39) satisfy the condition (33). Using (29) and (19), we find \(V(\phi)\) and \(L(F)\), which are given by \[V(\phi) = \frac{4m\cos^{5}\left(\phi\kappa\right)\left(7\sin^{2}\left(\phi\kappa\right)-8\cos^{2}\left(\phi\kappa\right)\right)}{35\kappa^{2}\left|q\right|^{3}}, \tag{40}\] \[L(F) = \frac{4\sqrt[4]{2}F^{5/4}m\left(91-75\sqrt{2F}q\right)}{35\kappa^{2}\sqrt{\left|q\right|}}. \tag{41}\] With this, we realize that the Bardeen-type solution has more complications than the Simpson-Visser solution. The electromagnetic field does not behave like Maxwell in the weak field limit. Through Fig. 3, we see graphically that the Bardeen-type solution has a more complicated matter content than the Simpson-Visser solution. The potential associated with the scalar field is periodic, and in each cycle there are several maxima and minima. The electromagnetic Lagrangian tends to zero as \(F\to 0\). Unlike the Simpson-Visser case, the Lagrangian does not grow indefinitely with \(F\): there is a maximum value that \(L(F)\) can reach, after which it decreases, assuming negative values. Figure 3: Behavior of the scalar field potential, as a function of \(\phi\), and of the electromagnetic Lagrangian, as a function of \(F\), that generate the Bardeen-type solution for different values of the charge. Figure 2: Behavior of the scalar field potential, as a function of \(\phi\), and of the electromagnetic Lagrangian, as a function of \(F\), that generate the Simpson–Visser solution for different values of the charge. ## V Energy conditions
\tag{47}\] So that, in regions where \(f(r)>0\), we have the following set of equations \[\rho^{\phi} = -f(r)\phi^{\prime}(r)^{2}+V(r), \tag{48}\] \[p_{1}^{\phi} = -f(r)\phi^{\prime}(r)^{2}-V(r),\] (49) \[p_{2}^{\phi} = f(r)\phi^{\prime}(r)^{2}-V(r),\] (50) \[\rho^{EM} = L(r),\] (51) \[p_{1}^{EM} = -L(r),\] (52) \[p_{2}^{EM} = -L(r)+\frac{q^{2}L_{F}(r)}{\Sigma(r)^{4}}. \tag{53}\] Finally, for regions where \(f(r)<0\), we have \[\rho^{\phi} = f(r)\phi^{\prime}(r)^{2}+V(r), \tag{54}\] \[p_{1}^{\phi} = f(r)\phi^{\prime}(r)^{2}-V(r),\] (55) \[p_{2}^{\phi} = f(r)\phi^{\prime}(r)^{2}-V(r),\] (56) \[\rho^{EM} = L(r),\] (57) \[p_{1}^{EM} = -L(r),\] (58) \[p_{2}^{EM} = -L(r)+\frac{q^{2}L_{F}(r)}{\Sigma(r)^{4}}. \tag{59}\] These equations allow us to obtain the energy conditions (42)-(45), that, to \(f(r)>0\), are given by \[NEC_{1}^{\phi}=WEC_{1}^{\phi}=SEC_{1}^{\phi}\Longleftrightarrow-2f (r)\phi^{\prime}(r)^{2}\geq 0, \tag{60}\] \[NEC_{2}^{\phi}=WEC_{2}^{\phi}=SEC_{2}^{\phi}\Longleftrightarrow 0,\] (61) \[SEC_{3}^{\phi}\Longleftrightarrow-2V(r)\geq 0,\] (62) \[DEC_{1}^{\phi}\Longrightarrow 2V(r)\geq 0,\] (63) \[DEC_{2}^{\phi}\Longrightarrow 2\left(V(r)-f(r)\phi^{\prime}(r)^{2} \right)\geq 0,\] (64) \[DEC_{3}^{\phi}=WEC_{3}^{\phi}\Longleftrightarrow V(r)-f(r) \phi^{\prime}(r)^{2}\geq 0, \tag{65}\] \[NEC_{1}^{EM}=WEC_{1}^{EM}=SEC_{1}^{EM}\Longleftrightarrow 0, \tag{66}\] \[NEC_{2}^{EM}=WEC_{2}^{EM}=SEC_{2}^{EM}\Longleftrightarrow\frac{q^{2 }L_{F}(r)}{\Sigma(r)^{4}}\geq 0,\] (67) \[SEC_{3}^{EM}\Longleftrightarrow\frac{2q^{2}L_{F}(r)}{\Sigma(r) ^{4}}-2L(r)\geq 0,\] (68) \[DEC_{1}^{EM}\Longrightarrow 2L(r)\geq 0,\] (69) \[DEC_{2}^{EM}\Longrightarrow 2L(r)-\frac{q^{2}L_{F}(r)}{\Sigma(r) ^{4}}\geq 0,\] (70) \[DEC_{3}^{EM}=WEC_{3}^{EM}\Longleftrightarrow L(r)\geq 0. \tag{71}\] From (61), we see that \(NEC_{2}^{\phi}\) is identically satisfied to the scalar case and from (66) we see that \(NEC_{1}^{EM}\) is also identically satisfied. For regions where \(f(r)<0\), the structure of the energy conditions for the electromagnetic sector does not change, remaining equal to equations (66)-(71). To the scalar sector, we find \[NEC_{1}^{\phi}=WEC_{1}^{\phi}=SEC_{1}^{\phi}\Longleftrightarrow 2f(r) \phi^{\prime}(r)^{2}\geq 0, \tag{72}\] \[NEC_{2}^{\phi}=WEC_{2}^{\phi}=SEC_{2}^{\phi}\Longleftrightarrow 2V(r) \geq 0,\] (73) \[SEC_{3}^{\phi}\Longleftrightarrow 2V(r)\geq 0,\] (74) \[DEC_{1}^{\phi}\Longrightarrow 2V(r)\geq 0,\] (75) \[DEC_{2}^{\phi}\Longrightarrow 2f(r)\phi^{\prime}(r)^{2}\geq 0,\] (76) \[DEC_{3}^{\phi}=WEC_{3}^{\phi}\Longleftrightarrow V(r)+f(r) \phi^{\prime}(r)^{2}\geq 0. \tag{77}\] Equations (60)-(65) are essentially different, for the most part, from equations (72)-(77). Now we have the requirements to evaluate the energy conditions for the models studied in this work. 
### Simpson-Visser solution

For the Simpson-Visser model, we substitute equations (29)-(32) into the energy conditions, which results, for \(f(r)>0\), in \[NEC_{1}^{\phi}\Longleftrightarrow-\frac{2q^{2}\left(\sqrt{q^{2}+r^{2}}-2m\right)}{\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0,\qquad SEC_{3}^{\phi}\Longleftrightarrow-\frac{8mq^{2}}{5\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0, \tag{78}\] \[DEC_{1}^{\phi}\Longrightarrow\frac{8mq^{2}}{5\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0,\qquad DEC_{2}^{\phi}\Longrightarrow-\frac{2q^{2}\left(5\sqrt{q^{2}+r^{2}}-14m\right)}{5\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0, \tag{79}\] \[WEC_{3}^{\phi}\Longleftrightarrow-\frac{q^{2}\left(5\sqrt{q^{2}+r^{2}}-14m\right)}{5\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0, \tag{80}\] \[NEC_{2}^{EM}\Longleftrightarrow\frac{3mq^{2}}{\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0,\qquad SEC_{3}^{EM}\Longleftrightarrow\frac{18mq^{2}}{5\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0, \tag{81}\] \[DEC_{1}^{EM}\Longrightarrow\frac{12mq^{2}}{5\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0,\qquad DEC_{2}^{EM}\Longrightarrow-\frac{3mq^{2}}{5\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0, \tag{82}\] \[WEC_{3}^{EM}\Longleftrightarrow\frac{6mq^{2}}{5\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0. \tag{83}\] If there is an event horizon then, outside it, the scalar field does not violate the inequalities \(NEC_{2}^{\phi}\) and \(DEC_{1}^{\phi}\); nevertheless, since \(NEC_{1}^{\phi}\) and \(SEC_{3}^{\phi}\) are never satisfied there, and \(DEC_{2}^{\phi}\) and \(WEC_{3}^{\phi}\) fail for sufficiently large \(r\), the scalar field violates all the energy conditions in this region. The electromagnetic field violates only the dominant energy condition, since only the inequality \(DEC_{2}^{EM}\) is not satisfied. Inside the possible event horizon, we find \[NEC_{1}^{\phi} \Longleftrightarrow\frac{2q^{2}\left(\sqrt{q^{2}+r^{2}}-2m\right)}{\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0,\qquad NEC_{2}^{\phi}\Longleftrightarrow\frac{8mq^{2}}{5\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0, \tag{84}\] \[SEC_{3}^{\phi} \Longleftrightarrow\frac{8mq^{2}}{5\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0,\qquad WEC_{3}^{\phi}\Longleftrightarrow\frac{q^{2}\left(5\sqrt{q^{2}+r^{2}}-6m\right)}{5\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0, \tag{85}\] \[DEC_{1}^{\phi} \Longrightarrow\frac{8mq^{2}}{5\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0,\qquad DEC_{2}^{\phi}\Longrightarrow\frac{2q^{2}\left(\sqrt{q^{2}+r^{2}}-2m\right)}{\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0. \tag{86}\] In this region the scalar field violates all energy conditions, since the inequalities \(NEC_{1}^{\phi}\), \(WEC_{3}^{\phi}\), and \(DEC_{2}^{\phi}\) are not satisfied.
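As a cross-check, the first inequality in (78) follows directly from \(NEC_{1}^{\phi}=-2f(r)\phi^{\prime}(r)^{2}\) in (60). A short symbolic sketch, under the assumption that the Simpson-Visser scalar field is \(\phi=\arctan(r/q)/\kappa\) so that \(\phi^{\prime}=q/[\kappa(q^{2}+r^{2})]\) (this form is an assumption consistent with the potential \(V(\phi)\) quoted in the text, not a statement from the original paper):

```python
import sympy as sp

# Symbolic cross-check of the first inequality in (78):
# NEC1^phi = -2 f(r) phi'(r)^2, eq. (60), for the Simpson-Visser model.
r, m, q, kappa = sp.symbols('r m q kappa', positive=True)

f = 1 - 2*m/sp.sqrt(q**2 + r**2)      # Simpson-Visser metric function
dphi = q / (kappa*(q**2 + r**2))      # assumed phi'(r)

nec1 = sp.simplify(-2*f*dphi**2)
print(nec1)
# equivalent to -2 q^2 (sqrt(q^2+r^2) - 2m) / (kappa^2 (q^2+r^2)^(5/2)),
# i.e. the expression quoted in eq. (78)
```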
### Bardeen-type solution

For the Bardeen-type model, we substitute equations (29) and (37)-(39) into the energy conditions, which results, for \(f(r)>0\), in \[NEC_{1}^{\phi} \Longleftrightarrow-\frac{2q^{2}\left(1-\frac{2mr^{2}}{\left(q^{2}+r^{2}\right)^{3/2}}\right)}{\kappa^{2}q^{2}\left(r^{2}+q^{2}\right)^{2}}\geq 0,\qquad SEC_{3}^{\phi}\Longleftrightarrow-\frac{8mq^{2}\left(8q^{2}-7r^{2}\right)}{35\kappa^{2}\left(q^{2}+r^{2}\right)^{7/2}}\geq 0, \tag{87}\] \[DEC_{1}^{\phi} \Longrightarrow\frac{8m\left(7q^{2}r^{2}-8q^{4}\right)}{35\kappa^{2}\left(q^{2}+r^{2}\right)^{7/2}}\geq 0,\qquad DEC_{2}^{\phi}\Longrightarrow-\frac{2q^{2}\left(5\sqrt{q^{2}+r^{2}}-14m\right)}{5\kappa^{2}\left(q^{2}+r^{2}\right)^{5/2}}\geq 0, \tag{88}\] \[WEC_{3}^{\phi} \Longleftrightarrow-\frac{q^{2}\left(32mq^{2}-98mr^{2}+35\left(q^{2}+r^{2}\right)^{3/2}\right)}{35\kappa^{2}\left(q^{2}+r^{2}\right)^{7/2}}\geq 0, \tag{89}\] \[NEC_{2}^{EM} \Longleftrightarrow\frac{mq^{2}\left(13r^{2}-2q^{2}\right)}{\kappa^{2}\left(q^{2}+r^{2}\right)^{7/2}}\geq 0,\qquad SEC_{3}^{EM}\Longleftrightarrow-\frac{6mq^{2}\left(34q^{2}-91r^{2}\right)}{35\kappa^{2}\left(q^{2}+r^{2}\right)^{7/2}}\geq 0, \tag{90}\] \[DEC_{1}^{EM} \Longrightarrow\frac{4mq^{2}\left(16q^{2}+91r^{2}\right)}{35\kappa^{2}\left(q^{2}+r^{2}\right)^{7/2}}\geq 0,\qquad DEC_{2}^{EM}\Longrightarrow\frac{mq^{2}\left(134q^{2}-91r^{2}\right)}{35\kappa^{2}\left(q^{2}+r^{2}\right)^{7/2}}\geq 0, \tag{91}\] \[WEC_{3}^{EM} \Longleftrightarrow\frac{2mq^{2}\left(16q^{2}+91r^{2}\right)}{35\kappa^{2}\left(q^{2}+r^{2}\right)^{7/2}}\geq 0. \tag{92}\] For the scalar field, where \(f(r)>0\), the inequalities \(NEC_{1}^{\phi}\), \(DEC_{2}^{\phi}\), and \(WEC_{3}^{\phi}\) are not satisfied, so that all energy conditions are violated. The electromagnetic field violates the dominant energy condition, since the inequality \(DEC_{2}^{EM}\) is not satisfied for \(r\gg 1\). The conditions \(SEC_{3}^{EM}\) and \(NEC_{2}^{EM}\) are not necessarily violated outside a possible event horizon; this depends on the charge of the solution. However, even if these conditions hold outside the horizon, they are violated within it. In the region where \(f(r)<0\), we find \[NEC_{1}^{\phi} \Longleftrightarrow\frac{2q^{2}\left(1-\frac{2mr^{2}}{\left(q^{2}+r^{2}\right)^{3/2}}\right)}{\kappa^{2}q^{2}\left(r^{2}+q^{2}\right)^{2}}\geq 0,\qquad NEC_{2}^{\phi}\Longleftrightarrow\frac{8m\left(7q^{2}r^{2}-8q^{4}\right)}{35\kappa^{2}\left(q^{2}+r^{2}\right)^{7/2}}\geq 0, \tag{93}\] \[SEC_{3}^{\phi} \Longleftrightarrow\frac{8mq^{2}\left(7r^{2}-8q^{2}\right)}{35\kappa^{2}\left(q^{2}+r^{2}\right)^{7/2}}\geq 0,\qquad WEC_{3}^{\phi}\Longleftrightarrow\frac{q^{2}\left(-32mq^{2}-42mr^{2}+35\left(q^{2}+r^{2}\right)^{3/2}\right)}{35\kappa^{2}\left(q^{2}+r^{2}\right)^{7/2}}\geq 0, \tag{94}\] \[DEC_{1}^{\phi} \Longrightarrow\frac{8mq^{2}\left(7r^{2}-8q^{2}\right)}{35\kappa^{2}\left(q^{2}+r^{2}\right)^{7/2}}\geq 0,\qquad DEC_{2}^{\phi}\Longrightarrow\frac{2q^{2}\left(1-\frac{2mr^{2}}{\left(q^{2}+r^{2}\right)^{3/2}}\right)}{\kappa^{2}q^{2}\left(r^{2}+q^{2}\right)^{2}}\geq 0. \tag{95}\] The scalar field violates all energy conditions, since the null energy condition is not satisfied.

## Conclusion

In this work, we obtained the material content of black bounce solutions in general relativity. For this, we considered the coupling of the gravitational theory with a scalar field and with nonlinear electrodynamics.
We showed that, since the stress-energy tensor of black bounces does not satisfy certain symmetries, nonlinear electrodynamics alone is not enough to generate these solutions. For this reason, we considered the theory described by the action (11) and obtained its field equations. The parameter \(\epsilon\) determines whether we have a usual scalar field or a phantom one. Through the field equations, we built the necessary formalism to obtain the material content for a general solution. With these results, we proposed a theorem relating the need for the scalar field to be phantom to the violation of the null energy condition: if the null energy condition were satisfied, the scalar field that generates the black bounce could be nonphantom.

We applied the method to the Simpson-Visser solution and to the Bardeen-type black bounce solution and obtained the form of the Lagrangian \(L(F)\), equations (35) and (41), and of the potential \(V(\phi)\), equations (34) and (40), that generate these solutions. The functions (31), (32), (38), and (39) are obtained independently through the field equations; however, they must satisfy the condition (33), which indeed happens.

Once we had, separately, the stress-energy tensor of each field composing the source, we analyzed the energy conditions associated with each field. In the case of the Simpson-Visser solution, the scalar field violated all energy conditions, while the electromagnetic field violated only the dominant energy condition. In the case of the Bardeen-type solution, both the scalar field and the electromagnetic field violate all energy conditions. However, for the electromagnetic field, it is possible to at least guarantee the positivity of the energy density, \(\rho^{EM}=WEC_{3}^{EM}\).

There are several black bounce solutions; however, it will not always be possible to apply the method presented in this work to obtain the material content analytically. In some cases, it is not possible to solve equation (28) analytically. Now that we have the material content of these solutions, we can analyze other properties associated with it, such as thermodynamics and perturbations.

###### Acknowledgements.

M.E.R. thanks Conselho Nacional de Desenvolvimento Cientifico e Tecnologico - CNPq, Brazil for partial financial support.
2301.09773
Stratified inclined duct: direct numerical simulations
The stratified inclined duct (SID) experiment consists of a zero-net-volume exchange flow in a long tilted rectangular duct, which allows the study of realistic stratified shear flows with sustained internal forcing. We present the first three-dimensional direct numerical simulations (DNS) of SID to explore the transitions between increasingly turbulent flow regimes first described by Meyer \& Linden (\textit{J. Fluid Mech.} \textbf{753}, 242-253, 2014). We develop a numerical set-up that faithfully reproduces the experiments and sustains the flow for arbitrarily long times at minimal computational cost. We recover the four qualitative flow regimes found experimentally in the same regions of parameter space: laminar flow, waves, intermittent turbulence, and fully-developed turbulence. We find good qualitative and quantitative agreement between DNS and experiments and highlight the added value of DNS to complement experimental diagnostics and increase our understanding of the transition to turbulence, both temporally (laminar/turbulent cycles) and parametrically (as the tilt angle of the duct and the Reynolds number are increased). These results demonstrate that numerical studies of SID -- and deeper integration between simulations and experiments -- have the potential to lead to a better understanding of stratified turbulence in environmental flows.
Lu Zhu, Amir Atoufi, Adrien Lefauve, John R. Taylor, Rich R. Kerswell, Stuart B. Dalziel, Gregory. A. Lawrence, P. F. Linden
2023-01-24T01:05:38Z
http://arxiv.org/abs/2301.09773v1
# Stratified inclined duct: direct numerical simulations

###### Abstract

The stratified inclined duct (SID) experiment consists of a zero-net-volume exchange flow in a long tilted rectangular duct, which allows the study of realistic stratified shear flows with sustained internal forcing. We present the first three-dimensional direct numerical simulations (DNS) of SID to explore the transitions between increasingly turbulent flow regimes first described by Meyer & Linden (_J. Fluid Mech._ **753**, 242-253, 2014). We develop a numerical set-up that faithfully reproduces the experiments and sustains the flow for arbitrarily long times at minimal computational cost. We recover the four qualitative flow regimes found experimentally in the same regions of parameter space: laminar flow, waves, intermittent turbulence, and fully-developed turbulence. We find good qualitative and quantitative agreement between DNS and experiments and highlight the added value of DNS to complement experimental diagnostics and increase our understanding of the transition to turbulence, both temporally (laminar/turbulent cycles) and parametrically (as the tilt angle of the duct and the Reynolds number are increased). These results demonstrate that numerical studies of SID - and deeper integration between simulations and experiments - have the potential to lead to a better understanding of stratified turbulence in environmental flows.

keywords: stratified flows, stratified turbulence, turbulent transition, direct numerical simulation, flow restoring

## 1 Introduction

Large-scale fluid motions in the ocean are almost always stably-stratified in density due to differences in temperature and/or salinity at different depths. The transport of momentum and mass (temperature, salinity, and other solutes) by turbulence plays an important role in setting the large-scale structure and circulation of the ocean, with implications for the global climate. Consequently, the influence of stable stratification on turbulence and the resulting mixing has attracted much attention (Linden 1979; Riley & Lelong 2000; Gregg _et al._ 2018; Caulfield 2020; Dauxois _et al._ 2021; Caulfield 2021).
## 2 Methodology

### Governing equations

Our simulation geometry in non-dimensional units is shown in figure 1(a,b). It replicates the experimental geometry (see, e.g. figure 1 of Lefauve _et al._ (2019_a_) in dimensional units), which consists of a duct of square cross-section with internal height \(H\), width \(W\) and length \(L\) connecting two large reservoirs with fluids at densities \(\rho_{0}\pm\Delta\rho/2\) (white and blue shaded areas in figure 1(a)). To match previous experimental studies of SID, we non-dimensionalise all lengths by the duct half-height \(H/2\), making the duct's non-dimensional length, width and height \(2A\times 2B\times 2\), respectively, where \(A\equiv L/H\) and \(B\equiv W/H\) are the streamwise and spanwise aspect ratios, respectively. We also non-dimensionalise (i) the velocities by the fixed buoyancy velocity scale \(\Delta U/2\equiv\sqrt{g^{\prime}H}\) (where \(g^{\prime}=g\Delta\rho/\rho_{0}\) is the reduced gravity and \(\rho_{0}\) is the reference density); (ii) the time by the advective time unit (ATU) \(H/\Delta U\); (iii) the density variations around the reference \(\rho_{0}\) by \(\Delta\rho/2\); and (iv) the pressure by \(\rho_{0}(\Delta U/2)^{2}\). Note that the \(x\)-axis (the streamwise direction) is aligned along the duct, whereas gravity points downwards at an angle \(\theta\) from the \(-z\) axis (the vertical direction in the frame of the duct), hence in these duct coordinates \(\mathbf{g}=g\mathbf{\hat{g}}=g(\sin\theta,0,-\cos\theta)\). The resulting non-dimensional governing equations for our DNS are the Navier-Stokes equations under the Boussinesq approximation \[\boldsymbol{\nabla}\cdot\mathbf{u} =0, \tag{1}\] \[\frac{D\mathbf{u}}{Dt} =-\boldsymbol{\nabla}p+\frac{1}{\text{Re}}\nabla^{2}\mathbf{u}+\text{Ri}\,\rho\,\mathbf{\hat{g}}-\boldsymbol{F}_{u}, \tag{2}\] \[\frac{D\rho}{Dt} =\frac{1}{\text{Re}\,\,\text{Pr}}\nabla^{2}\rho-F_{\rho}, \tag{3}\] where the material derivative is \(D/Dt\equiv\partial_{t}+\mathbf{u}\cdot\boldsymbol{\nabla}\), the velocity \(\mathbf{u}=(u,v,w)\) is in the non-dimensional coordinate system \(\mathbf{x}=(x,y,z)\) aligned with the duct, the non-dimensional pressure is \(p\), and the non-dimensional density variation around the mean is \(\rho\) (bounded between \(-1\) and \(1\)). The forcing terms \(\boldsymbol{F}_{u}\) and \(F_{\rho}\) used to maintain the quasi-steady exchange flows will be described in §2.2. The non-dimensional Reynolds, Richardson, and Prandtl numbers are related to the dimensional experimental parameters as follows \[\text{Re}\equiv\frac{\frac{\Delta U}{2}\frac{H}{2}}{\nu}\equiv\frac{\sqrt{g^{\prime}H}H}{2\nu},\qquad\text{Ri}\equiv\frac{\frac{g}{\rho_{0}}\frac{\Delta\rho}{2}\frac{H}{2}}{\left(\frac{\Delta U}{2}\right)^{2}}\equiv\frac{1}{4},\qquad\text{Pr}\equiv\frac{\nu}{\kappa}\equiv 7, \tag{4}\] where \(\nu\) is the kinematic viscosity and \(\kappa\) is the mass diffusivity. Previous studies of SID showed that the streamwise velocity scales with \(\Delta U/2\), motivating this definition of the Reynolds number. The Richardson number is always equal to 1/4 due to the definition of \(\Delta U\). The Prandtl number in all simulations was set to \(\text{Pr}=7\), approximately representative of temperature stratification in water at room temperature.
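For concreteness, a short sketch of how the definitions in (4) map laboratory parameters onto \((\mathrm{Re},\mathrm{Ri},\mathrm{Pr})\); the dimensional values below are illustrative examples, not taken from the paper:

```python
import numpy as np

# Illustrative mapping from dimensional lab parameters to the
# non-dimensional groups of eq. (4); the numbers are examples only.
g       = 9.81        # gravity [m/s^2]
rho0    = 1000.0      # reference density [kg/m^3]
drho    = 1.0         # density difference Delta rho [kg/m^3]
H       = 0.045       # duct height [m]
nu      = 1.0e-6      # kinematic viscosity of water [m^2/s]
kappa_T = nu / 7.0    # diffusivity implied by Pr = 7

g_prime = g * drho / rho0             # reduced gravity g'
dU      = 2.0 * np.sqrt(g_prime * H)  # buoyancy velocity scale Delta U

Re = (dU / 2) * (H / 2) / nu          # = sqrt(g' H) H / (2 nu)
Ri = 0.25                             # fixed by the choice of Delta U
Pr = nu / kappa_T

print(f"Re = {Re:.0f}, Ri = {Ri}, Pr = {Pr:.0f}")  # Re ~ O(500) here
```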
For a given duct and reservoir geometry, there are two remaining free non-dimensional parameters: the tilt angle \(\theta\) and the Reynolds number \(\text{Re}\) (based on the driving density difference \(\Delta\rho\)).

### Artificial restoring of the exchange flow

The exchange flow in the duct is driven by the hydrostatic longitudinal pressure gradient and by the longitudinal gravitational acceleration \(g\sin\theta\). In the context of the two-layer flow, the along-duct component of gravity accelerates the heavier layer rightwards (downhill) and the lighter layer leftwards (uphill). The role of the hydrostatic pressure gradient turns out to be more intricate and will be examined in §5.3. In the experiments the flow inside the duct is sustained over long time periods (typically several hundred advective time units) until the discharged fluids accumulated in the large reservoirs have reached the level of the duct. Simulating such large reservoirs would be prohibitively expensive. In the simulations, we use smaller reservoirs and add _ad hoc_ forcing terms \(\mathbf{F}_{u},F_{\rho}\) in the momentum and buoyancy equations (2) and (3), respectively, \[\mathbf{F}_{u}\equiv F_{u}\mathbf{u}\equiv\Big{[}\frac{1-\tanh(\frac{2}{\Delta}(x+\frac{L_{x}-l_{f}}{2}))}{\eta_{u}}+\frac{1+\tanh(\frac{2}{\Delta}(x-\frac{L_{x}-l_{f}}{2}))}{\eta_{u}}\Big{]}\mathbf{u}, \tag{5}\] \[F_{\rho}\equiv\frac{1-\tanh(\frac{2}{\Delta}(x+\frac{L_{x}-l_{f}}{2}))}{\eta_{\rho}}(\rho-1)+\frac{1+\tanh(\frac{2}{\Delta}(x-\frac{L_{x}-l_{f}}{2}))}{\eta_{\rho}}(\rho+1), \tag{6}\] where \(l_{f}\) is the streamwise length of influence of the forcing, and \(\Delta=2l_{f}/L_{f}\) (with a fixed \(L_{f}\equiv 8\)) defines the steepness of the transition from the forced to the unforced regions. The density forcing term restores the density of the fluid in the reservoir to the prescribed value (i.e., \(\pm 1\)), and the momentum forcing term acts to dampen motion in the reservoir. The timescales \(\eta_{u}\) and \(\eta_{\rho}\) control the momentum and density forcing terms, respectively. Compromise values of these timescales must be found, as large values are too slow to sufficiently damp reservoir motion and restore density, while small values are too fast and overreact, threatening numerical stability. These parameters were optimised with the size of the reservoirs in order to minimise their influence on the large-scale flow in the duct compared to the Bench. cases with large reservoirs and without forcing (see §2.6). Tests revealed little variation in the range \(l_{f}\in[0.3L_{x}^{r},0.7L_{x}^{r}]\), therefore we set \(l_{f}=0.5L_{x}^{r}\), confining the forcing region to half the reservoir (greyed out in figure 1(a)). The timescales \(\eta_{u}\) and \(\eta_{\rho}\) should then be smaller than the time for a discharging flow (with non-dimensional speed 1) to pass through the forcing region, i.e. \(\approx l_{f}\). Practically, we set \(2.5\lesssim\eta_{u}\lesssim 5\) and \(0.1\lesssim\eta_{\rho}\lesssim 0.5\), depending on \(l_{f}\).

Figure 1: Schematics of SID geometry in non-dimensional units. (a) Overview of the rectangular simulation domain of dimensions \(L_{x}\), \(L_{y}\), \(L_{z}\) within which immersed boundaries create a square duct of dimensions \(2A\times 2\times 2\). (b) Detail of the duct geometry and coordinate system. (c) Shape of the different reservoirs considered in this paper (Bench., AR, BR, and SR), with the total domain length \(L_{x}=2(A+L_{x}^{r})\). All numerical parameters are summarised in table 1.
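In discrete form, the forcing masks of (5)-(6) are straightforward to construct; a minimal sketch follows (variable names and parameter values are illustrative, not from the solver itself):

```python
import numpy as np

# Minimal sketch of the forcing masks in eqs. (5)-(6): smoothed steps
# confined to the outer halves of the two reservoirs. Names are
# illustrative; Lx is the total domain length.
A, Lxr = 30.0, 5.0                  # duct half-length, reservoir length
Lx = 2.0 * (A + Lxr)                # total domain length
lf = 0.5 * Lxr                      # forcing length (half the reservoir)
Delta = 2.0 * lf / 8.0              # steepness, with L_f = 8 fixed
eta_u, eta_rho = 2.5, 0.1           # forcing timescales

x = np.linspace(-Lx/2, Lx/2, 2001)
mask_left  = 1.0 - np.tanh((2.0/Delta) * (x + (Lx - lf)/2.0))
mask_right = 1.0 + np.tanh((2.0/Delta) * (x - (Lx - lf)/2.0))

def F_u(u):     # momentum damping, eq. (5)
    return (mask_left + mask_right) / eta_u * u

def F_rho(rho): # density restoring towards +/-1, eq. (6)
    return mask_left/eta_rho * (rho - 1.0) + mask_right/eta_rho * (rho + 1.0)
```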
Physically, \(\mathbf{F}_{u}\) decelerates the fluid entering the reservoir until it comes to rest, and \(F_{\rho}\) ensures that the density of this fluid matches that of the reservoir before it re-enters the duct. This forcing thus effectively mimics the action of infinitely large reservoirs within a finite-sized, computationally-feasible domain.

### Solver

The DNS were performed with the open-source solver Xcompact3D (Bartholomew _et al._, 2020), which uses 4th-order and 6th-order compact finite-difference schemes for the first and second spatial derivatives, respectively, and a 3rd-order Adams-Bashforth scheme (Peyret, 2002; Zhu & Xi, 2020) for the time integration with a time step \(\delta_{t}=0.001\). The pressure field is obtained from a conventional Poisson equation, derived by applying the divergence operator to (2) and employing continuity (1). The Poisson equation is then solved numerically using the fast Fourier transform with modified wavenumbers. For more details about the core of the code (Incompact3D), see Laizet & Lamballais (2009) and Laizet & Li (2011), and for the application of Xcompact3D to stratified turbulent flows see Frantz _et al._ (2021). We modified Xcompact3D to include the forcing terms \(\mathbf{F}_{u},F_{\rho}\) discussed above.

### Domain and boundary conditions

The computational domain had dimensions \(L_{x}\), \(L_{y}=2\), and \(L_{z}\) along \(x\), \(y\), and \(z\), respectively (see figure 1(a,b)). On the boundaries of this domain, we applied a no-slip condition for \(\mathbf{u}\) and a no-flux condition for \(\rho\), as in Laizet & Lamballais (2009). To represent the duct and reservoir geometry within this computational domain, we applied the immersed boundary method (IBM) in Xcompact3D to the yellow-shaded region in figure 1(a). The IBM treatment of \(\mathbf{u}\) (no slip) uses a direct forcing method described in Mohd-Yusof (1997) and, specifically for Xcompact3D, in Laizet & Lamballais (2009); Gautier _et al._ (2014), which imposes \(\mathbf{u}=\mathbf{0}\) in the solid regions. The pressure \(p\) in the solid region is treated by reducing the Poisson equation to a Laplace equation (Laizet & Lamballais, 2009). The IBM allows relatively simple implementation of complex geometries in scalable codes such as Xcompact3D that are built upon Cartesian coordinates and rectangular computational domains. The IBM treatment of \(\rho\) (no flux) required a slightly different approach to minimize the modifications of Xcompact3D and maintain the consistency between \(\mathbf{u}\) and \(\rho\). A \(\tanh\) function was used to reconstruct points inside the solid region \[\rho_{i}=\frac{1}{2}(\rho_{l}+\rho_{r})+\frac{1}{2}(\rho_{r}-\rho_{l})\,\tanh\Big{[}\frac{L_{w}}{\xi_{r}-\xi_{l}}\big{(}\xi_{i}-\frac{1}{2}(\xi_{r}+\xi_{l})\big{)}\Big{]}, \tag{7}\] as shown in figure 2. Here \(\xi\) is the wall-normal coordinate (horizontal or vertical), the subscript \(i\) is the index of grid points, and \(r\) and \(l\) denote the right and left grid points (in the fluid region), respectively, adjacent to the solid walls. The \(\tanh\) function ensures a zero-flux boundary condition at the wall while maintaining a smooth change of the density through the solid region.

Figure 2: Schematic of the IBM to implement the no-flux boundary condition on density \(\rho\) at fluid-solid boundaries in the \(x\) or \(z\)-direction. The curved red line represents the fictitious density profile across the solid region.
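A minimal sketch of the reconstruction (7), with illustrative names, shows how the interior of a solid region is filled so that the wall-normal density gradient is exponentially small at both walls:

```python
import numpy as np

# Sketch of the tanh reconstruction (7) across a solid region: given
# the fluid values rho_l, rho_r adjacent to the two walls at xi_l and
# xi_r, fill the interior points with a smooth profile whose gradient
# nearly vanishes at the walls. Names are illustrative.
def fill_solid(rho_l, rho_r, xi_l, xi_r, xi_solid, Lw=10.0):
    mid = 0.5 * (xi_r + xi_l)
    return (0.5 * (rho_l + rho_r)
            + 0.5 * (rho_r - rho_l)
            * np.tanh(Lw / (xi_r - xi_l) * (xi_solid - mid)))

xi_solid = np.linspace(0.0, 1.0, 11)   # grid points inside the wall
print(fill_solid(rho_l=-1.0, rho_r=1.0, xi_l=0.0, xi_r=1.0,
                 xi_solid=xi_solid))
```

With \(L_{w}=10\) the profile reaches \(\tanh(\pm 5)\approx\pm 0.9999\) at the walls, so the residual wall flux is indeed exponentially small.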
A similar approach using a polynomial reconstruction has been used to treat the Dirichlet and Neumann boundary conditions in Gautier _et al._ (2014) and Frantz _et al._ (2021). The length scale \(L_{w}=10\) was chosen to ensure a smooth change of density inside the solid region while maintaining an exponentially small density flux at the SID walls.

### Initial conditions

All simulations were initialized at \(t=0\) with a density \(\rho=\tanh(x/L_{I})\) (where \(L_{I}=0.1\)) at the centre of the duct, simulating 'lock exchange' conditions with a sharp but continuous change from densest fluid on the left-hand side to lightest fluid on the right-hand side. A zero-mean, uniformly distributed random noise with non-dimensional amplitude \(\varsigma=0.5\) is applied to the initial velocity \(\mathbf{u}_{n}\). This random noise is set to break the symmetry of the exchange flow and initiate instabilities inside the duct. Note that smaller perturbation amplitudes (e.g. \(\varsigma=0.005\)) can be applied, but we verified that they did not influence the main features of the flow (see Supplementary Material S1). Shortly after \(t>0\), a gravity current formed at the centre of the duct (\(x=0\)) and propagated in both directions toward the ends of the duct. After a typical duct transit time of order \(t\approx A\) (transiting at non-dimensional velocity \(\approx 1\) over a non-dimensional length \(A\)), the exchange flow was established.

### Parameters, duct and reservoirs geometries

In order to investigate the various flow regimes we varied the Reynolds number \(\mathrm{Re}\) in the range \(400-1250\) and the duct tilt angle \(\theta\) in the range \(1-10^{\circ}\). As mentioned above, the Prandtl number \(\mathrm{Pr}=7\) and Richardson number \(\mathrm{Ri}=1/4\) were fixed. The suite of DNS is summarised in table 1. Most DNS were run with a long duct of streamwise and spanwise aspect ratios \(A=30\) and \(B=1\), respectively, for direct comparison with the experiments in Lefauve & Linden (2020_a_) (the 'mini SID Temperature' dataset, abbreviated 'mSIDT'). However, a couple of DNS (cases 'BRw' in table 1) were run in a longer and wider duct at \(A=44\), \(B=2\) to compare with a new experimental set-up. To validate the performance of our forcing to sustain a realistic exchange flow, we ran a benchmark DNS ('Bench.') without forcing (\(F_{u}=F_{\rho}=0\)) but with large reservoirs (\(L_{x}^{r}\times L_{z}=30\times 8\)). This benchmark had a combined reservoir volume of four times that of the duct (\(60\times 8/(60\times 2)=4\)), which is sufficient for our validation but still much smaller than in the experiments (volume ratio \(\approx 30\)). We will show in §3 that the different reservoirs do not seem to influence the flow statistics within the SID. This is expected from the knowledge that the flow in SID is hydraulically controlled (Meyer & Linden, 2014), i.e. that information from the reservoirs cannot travel into the duct because of 'control' regions at the inlet and outlet, where the convective flow speed is faster than interfacial waves (Lawrence, 1990). This conveniently ensures that different reservoir geometries and conditions do not influence the flow within the duct, as long as unmixed and quiescent fluid are available at either end of the duct. All the other DNS had non-zero forcing and smaller, more computationally affordable reservoirs.
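Before turning to the reservoir geometries, the lock-exchange initialisation of §2.5 can be sketched as follows (array shapes, the random seed, and all names are illustrative assumptions):

```python
import numpy as np

# Sketch of the lock-exchange initial condition: a tanh density step
# of width L_I at mid-duct plus zero-mean uniform velocity noise of
# amplitude varsigma. Illustrative 1-D version of a 3-D field.
rng = np.random.default_rng(0)
L_I, varsigma = 0.1, 0.5

x = np.linspace(-35.0, 35.0, 1081)            # streamwise coordinate
rho0 = np.tanh(x / L_I)                       # sharp but continuous step
u0 = varsigma * (2.0 * rng.uniform(size=x.shape) - 1.0)  # noise in [-0.5, 0.5]
```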
To test the impact of reservoir size, we used the three following reservoirs, sketched in figure 1(c): the A-reservoir ('AR') of dimensions \(L_{x}^{r}\times L_{z}=10\times 8\), which is a third of the length of the Bench. case but equally tall; the B-reservoir ('BR') of dimensions \(L_{x}^{r}\times L_{z}=5\times 4\), which is half the length and half the height of the A-reservoir; and finally, the smallest S-reservoir ('SR') of dimensions \(L_{x}^{r}\times L_{z}=10\times 2\). Note that (unlike the experiments) almost all reservoirs have the same spanwise width as the duct, \(L_{y}=2\). The only exception is 'BRw', which has \(L_{y}=4\) and \(L_{x}^{r}\times L_{z}=10\times 2\). The set of forcing parameters (\(l_{f},\eta_{u},\eta_{\rho}\)) for \(\mathbf{F}_{u},F_{\rho}\) that we found to have minimal impact on the duct for each case is also listed in table 1. The bold font for the seven cases at \(\mathrm{Re}=650\) and \(1000\) highlights the DNS that we analysed in more detail in this paper, with the superscripts giving their shorthand names (B2, B5, B6, B8, B10, S6, and S8). The other DNS were used for validation and for plotting the regime diagrams in the \((\theta,\mathrm{Re})\) plane. Finally, we adopt a uniform grid size \(N_{x}\times N_{y}\times N_{z}\) for the entire domain \(L_{x}\times L_{y}\times L_{z}\), which has the advantage of helping maintain numerical stability near the immersed boundaries. The grid size was small enough to capture the Kolmogorov turbulent lengthscale, and \(2-3\) times the Batchelor lengthscale in our most turbulent dataset B10 (discussed in more detail in §5.4), ensuring adequate resolution of the kinetic and scalar energy spectra.

\begin{table} \begin{tabular}{l l l l l l l l l l} \hline \hline Case & \(\mathrm{Re}\) & \(\theta\) (\(\mathrm{deg.}\)) & \(A\) & \(B\) & \(L_{x}^{r}\times L_{y}\times L_{z}\) & \(N_{x}\times N_{y}\times N_{z}\) & \(l_{f}\) & \(\eta_{u}\) & \(\eta_{\rho}\) \\ \hline Bench. & \(400\) & \(2\) & \(30\) & \(1\) & \(30\times 2\times 8\) & \(1621\times 49\times 385\) & - & - & - \\ & \(650\) & \(6\) & \(30\) & \(1\) & \(30\times 2\times 8\) & \(1921\times 61\times 481\) & - & - & - \\ \hline AR & \(400\) & \(2\), \(5\) & & & & \(1081\times 49\times 385\) & & & \\ & \(650\) & \(4\), \(6\), \(8\) & & & & \(1441\times 65\times 481\) & & & \\ & \(800\) & \(3\), \(4\), \(10\) & \(30\) & \(1\) & \(10\times 2\times 8\) & \(1537\times 65\times 577\) & 5 & 5 & \(0.1\) \\ & \(1000\) & \(2\), \(4\) & & & & \(1729\times 65\times 577\) & & & \\ & \(1250\) & \(1\), \(3\) & & & & \(1801\times 81\times 641\) & & & \\ \hline BR & \(400\) & \(2\), \(5\), \(7\), \(10\) & & & & \(961\times 61\times 193\) & & & \\ & \(650\) & \(\mathbf{2}^{(\mathrm{B2})}\), \(4\), \(\mathbf{5}^{(\mathrm{B5})}\), \(\mathbf{6}^{(\mathrm{B6})}\), \(\mathbf{8}^{(\mathrm{B8})}\) & & & & \(1081\times 65\times 241\) & & & \\ & \(800\) & \(7\) & \(30\) & \(1\) & \(5\times 2\times 4\) & \(1351\times 65\times 289\) & 2.5 & 2.5 & 0.1 \\ & \(1000\) & \(3\), \(4\), \(5\), \(10\) & & & & \(1501\times 65\times 281\) & & & \\ & \(1000\) & \(\mathbf{10}^{(\mathrm{B10})}\) & & & & \(3001\times 121\times 241\) & & & \\ & \(1250\) & \(5\) & & & & \(1501\times 65\times 289\) & & & \\ \hline SR & \(400\) & \(2\), \(5\) & \(30\) & \(1\) & \(10\times 2\times 2\) & \(1081\times 65\times 121\) & \(8\) & 5 & 0.5 \\ & \(650\) & \(\mathbf{6}^{(\mathrm{S6})}\), \(\mathbf{8}^{(\mathrm{S8})}\) & & & & \(1201\times 65\times 121\) & & & \\ \hline BRw & \(650\) & \(\mathbf{3}^{(\mathrm{W3})}\), \(\mathbf{5}^{(\mathrm{W5})}\) & \(44\) & \(2\) & \(10\times 4\times 4\) & \(1601\times 121\times 241\) & 8 & 5 & 0.5 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of the DNS, from left to right: reservoir geometry (case) as shown in figure 1(c); Reynolds number; tilt angle; duct streamwise aspect ratio; duct spanwise aspect ratio; reservoir size; grid size of the entire computational domain; and forcing parameters. Bold font and superscripts denote the most used DNS.

## 3 Validation

In figure 3 we assess the ability of the forcing introduced in (5)-(6) to sustain the exchange flow by comparing, for the B-reservoir, a standard DNS with forcing ('forced') and a DNS without forcing ('unforced'), i.e. \(F_{u}=F_{\rho}=0\).
Figure 3(a) shows the time series of the volume flux \(Q\) and the mass flux \(Q_{m}\), defined as \[Q(t) \equiv\langle|u|\rangle_{\mathcal{V}}, \tag{1}\] \[Q_{m}(t) \equiv\langle\rho u\rangle_{\mathcal{V}}, \tag{2}\] where \(\langle\cdot\rangle_{\mathcal{V}}\equiv(1/8A)\int_{-1}^{1}\int_{-1}^{1}\int_{-A}^{A}\cdot\ dx\ dy\,dz\) denotes an average over the entire volume of the duct. Note that since \(|\rho|\leqslant 1\), by definition \(Q_{m}\leqslant Q\). A more diffuse interface and turbulent mixing can cause \(Q_{m}\) to be significantly lower than \(Q\). The values of \(Q(t)\) (solid lines) and \(Q_{m}(t)\) (dashed lines) in the forced (red) and unforced (green) DNS are identical in the initial stage of the accelerating gravity current (\(0<t\lesssim 60\)). They remain equal until the exchange flow approaches a steady state at \(Q\approx 0.5\) and \(Q_{m}\approx 0.45\) (\(60\lesssim t\lesssim 100\)). However, from \(t\approx 100\), the unforced time series drop sharply, signalling that the flow slows down (see \(Q(t)\)) and becomes overall more mixed inside the duct (as \(Q_{m}(t)\) decays faster than \(Q(t)\)). By contrast, the forced time series remain steady until the end (\(t\approx 160\)) of the simulation.
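The discrete evaluation of the diagnostics (1)-(2) on a uniform grid is straightforward; a minimal sketch (with illustrative names and array conventions) is:

```python
import numpy as np

# Sketch of the volume- and mass-flux diagnostics (1)-(2): averages of
# |u| and rho*u over the duct volume 2A x 2 x 2 = 8A. The arrays u and
# rho are 3-D fields on a uniform grid restricted to the duct interior.
def duct_average(field, dx, dy, dz, A):
    return field.sum() * dx * dy * dz / (8.0 * A)

def fluxes(u, rho, dx, dy, dz, A=30.0):
    Q  = duct_average(np.abs(u), dx, dy, dz, A)   # volume flux, eq. (1)
    Qm = duct_average(rho * u,  dx, dy, dz, A)    # mass flux,   eq. (2)
    return Q, Qm
```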
Figures 3(_b,c_) show \(x-z\) slices of the density field on the rightmost quarter of the computational domain at \(t=100\) for the forced DNS (panel b) and the unforced DNS (panel c). In the unforced DNS, the dense, right-flowing bottom layer (in blue) has filled over half of the B-reservoir. The large kinetic energy of this layer has led to mixing inside the reservoir. This dense fluid contaminates the exchange flow as it is entrained back into the duct by the left-flowing buoyant layer (in red). In the forced DNS, this does not happen; the outflowing layer is slowed down and its density is gradually converted to that of the inflowing fluid. This allows an infinitely-long quasi-steady exchange flow to be maintained inside the duct.

Figure 3: Demonstration of the effects of the forcing terms \(F_{u},F_{\rho}\) in finite-sized reservoirs (here in a B-reservoir, 'BR') at \((\mathrm{Re},\theta)=(650,4^{\circ})\). (a) Time series of the volume and mass flux. (b,c) Instantaneous mid-duct slices of \(\rho(x,y=0,z,t=100)\): (b) forced and (c) unforced DNS, showing only the right-most quarter of the duct and the right reservoir.

In figure 4 we compare the statistics of the established exchange flow in the Bench. case (very large reservoirs, unforced) and in progressively smaller, but forced, reservoirs: BR and SR. We compare two different flows: a laminar regime at \((\mathrm{Re},\theta)=(400,5^{\circ})\) (red, blue, green curves) and a wave regime at \((\mathrm{Re},\theta)=(650,6^{\circ})\) (purple, pink, and cyan curves). Panel (a) shows \(\langle u\rangle(z)\) (where \(\langle\cdot\rangle\equiv\langle\cdot\rangle_{x,y,t}\) is the average over the entire duct length, width, and time series), panel (b) shows \(\langle\rho\rangle(z)\), and panel (c) shows the time series of the total kinetic energy \(\langle\bar{k}\rangle_{\mathcal{V}}(t)\) (where \(\bar{k}\equiv|\mathbf{u}|^{2}/2\)). Comparing the Bench., BR, and SR cases, we find excellent agreement between the vertical profiles and the time series of kinetic energy. Minor temporal discrepancies in the wave regime after \(t\gtrsim 150\) are probably caused by variations in the initial random noise, but have negligible influence on flow statistics and dynamics. Overall, our forcing method faithfully models the effects of reservoirs as far as the flow inside the duct is concerned, even in very small reservoirs. We provide additional evidence that the key flow dynamics in the SID are largely independent of the reservoirs in Supplementary Material S1. We compare spatio-temporal diagrams of the turbulent kinetic energy at \((\mathrm{Re},\theta)=(650,6^{\circ})\) for the benchmark, AR, and BR, as well as a BR with a reservoir wider than the duct (\(L_{y}=4\)), and show that details of wave motion and occasional turbulence over 200 advective time units do not vary more than they would under different initial noise conditions. Our typical computation at \(\mathrm{Re}=650\) required \(45\times 10^{6}\) points in the AR, but only \(17\times 10^{6}\) in the BR (a reduction of \(60\,\%\)). This explains why, in the following, we use the BR for more detailed analyses requiring longer time series of order \(\approx 200\) ATU. We use the even more affordable SR more sparingly in this paper, since our main goal is to compare the SR results to the BR to investigate the ability of the SR to reproduce the key flow physics.

Figure 4: Comparison of the effects of reservoir sizes on the: (a) streamwise velocity and (b) density profiles, and (c) kinetic energy time series in both the laminar and wave regimes.

## 4 Comparison between DNS and experiments

### Regimes: observations

Figure 5 shows snapshots of the DNS density field exemplifying the quasi-steady states of the different flow regimes. All cases use the B-reservoir, with a duct aspect ratio of \(A=30\), as highlighted in table 1 and named B2, B5, B6, B8, and B10.
The first four cases B2-B8 were at \(\mathrm{Re}=650\), the last one B10 was at \(\mathrm{Re}=1000\), and the numbers 2, 5, 6, 8, 10 indicate the respective values of \(\theta\) in degrees. Slices through the density field at the middle \(y=0\) plane (top five panels), and through a cross-sectional \(y-z\) plane at \(x=0\) in the duct (bottom five panels), are shown. The full temporal evolution of these five cases can be seen in our Supplementary Movies. We recover, in our DNS, the same four key flow regimes (laminar, wave, intermittently turbulent, and fully turbulent) identified in experimental studies of SID, in particular Meyer & Linden (2014); Lefauve (2018); Lefauve _et al._ (2019); Lefauve & Linden (2020), which is a first key result of this paper. Moreover, the DNS allow us, for the first time, to observe three-dimensional instantaneous snapshots along the entire domain, including the duct and the in- and out-flow in the reservoirs, which were not accessible to experiments. We describe each regime in turn.

#### 4.1.1 Laminar regime

First, in B2 (figure 5(a,f)) we observe a simple laminar (L) flow, which is largely parallel and steady, without any observable waves or turbulent fluctuations. Molecular diffusion creates a relatively thin interface of intermediate density (in white). This density interface slopes at an angle, since the two counter-flowing layers (in blue or red) get thinner as they accelerate along the duct. This convective acceleration \(u\partial_{x}u\) in each layer is caused by the pressure gradient \(-\partial_{x}p\) and gravity \(\mathrm{Ri}\,\sin\theta\,\rho\), and opposed by the viscous term \(\mathrm{Re}^{-1}\mathbf{\nabla}^{2}u\).
This appears to coincide Figure 5: Snapshots of the density field in the mid-plane \(y=0\) (top five panels) and in the duct cross-section (bottom five panels) in five representative flows: (a,f) laminar (B2), (b,g) stationary wave (B5), (c,h) travelling wave (B6), (d,i) intermittent turbulence (B8, active phase), and (e,j) fully-developed turbulence (B10). These cases are highlighted in bold font in table 1. with the creation of a third, partially mixed layer (in light red, white, and light blue) that is neutrally buoyant and thus reduces the gravitational forcing \(\mathrm{Ri}\sin\theta\rho\). This third layer, often located near the centre of the duct (i.e. around \(|x|\approx 0\)) rather than near the ends (\(|x|\approx A\)) supports both stationary and travelling interfacial waves. We note that, in contrast to L flow, in these W flows the in-flowing layers (before reaching the 'wavy' area in the centre of the duct) are thinner than the out-flowing layers (after going through the 'wavy' area), which is somewhat reminiscent of an internal hydraulic jump. Travelling waves (figure 5(c)) tend to travel along the two density interfaces (between the top and middle layers, and between the middle and bottom layers) in a specific fashion. Left-going waves are most often found on the right quarter of the duct (\(x\gtrsim A/2\)), travelling towards the centre. Vice versa, right-going waves are most often found on the left quarter of the duct (\(x\lesssim-A/2\)), travelling towards the centre. Once they reach the central region (\(|x|\lesssim A/2\)), both types of waves usually end up decaying. This observation suggests that the flow may be'supercritical' outside of the central region, i.e. that information transported by interfacial waves can only propagate in one direction (towards the centre but not toward the ends). #### 4.1.3 Intermittently turbulent regime Third, in B8 (shown in figure 5(d,i)) we observe an intermittently turbulent (I) flow, which becomes more chaotic and in which patterns of individual waves become indistinguishable. Small-scale turbulent structures (of typical non-dimensional scale \(\ll 1\)) are generated, often by a breakdown of waves akin to the 'bursting' events of turbulent boundary layers (Robinson, 1991; Jimenez & Simens, 2001; Zhu & Xi, 2020). This interfacial turbulence, which persists for much longer times than in the W regime, enhances interfacial mixing and creates a third partially mixed layer (shown in white) over an increasingly long streamwise extent (as compared with the W regime). The interfacial turbulence sometimes extends along the full length of the duct. The combination of the decreasing magnitude of the gravitational forcing \(\mathrm{Ri}\sin\theta|\rho|\) by the increasingly mixed layer and the increasing smaller-scale viscous dissipation are presumably the key ingredients that keep the flow steady as \(\theta\) is increased from \(2^{\circ},5^{\circ},6^{\circ}\) to \(8^{\circ}\) in B2, B5, B6, B8 (at constant \(\mathrm{Re}=650\)). The defining characteristic of the I regime is that the turbulence identified by small-scale structures is temporally intermittent; turbulence occasionally decays and the flow'relaminarises' before transitioning to turbulence again; these cycles will be described in SS5.2. The time scales associated with the transition to turbulence and its decay, and the advection of perturbations along the length of the duct, occasionally make this turbulence also spatially intermittent in \(x\). 
#### 4.1.4 Fully turbulent regime Fourth, in B10 (shown in figure 5(e,j)) we observe a fully turbulent regime (T) in which turbulence is sustained in time and is more vigorous than in the I regime. Although the intensity of the turbulence can fluctuate in time, the flow in this regime never fully relaminarises. The central partially-mixed layer typically covers the entire length of the duct and at least a third of the height of the duct. ### Regime diagrams In figure 6 we map the flow regimes described above in 12 DNS with the AR (panel a) and in 15 DNS with the BR (panel b) for a range of \(\theta\) and \(\mathrm{Re}\). These 27 DNS data points, shown as large symbols, are compared with the 148 experimental data points of Lefauve & Linden (2020_a_) taken from their figure 4(_e_) and displayed here as smaller, fainter symbols using the same colour coding for the different flow regimes. The experimental data points were obtained using the same aspect ratios (\(A=30,B=1\)) and with temperature stratification (\(\mathrm{Pr}\approx 7\)). The regimes were identified by shadowgraph visualization, often over a small streamwise extent of the duct (their movies can be downloaded from Lefauve & Linden (2020_c_)). Figure 6 therefore represents the first direct comparison of DNS results with experimental results in SID, with all non-dimensional control parameters matched. First, we find a general agreement between DNS and experiments in the location of flow regimes in the \((\theta,\mathrm{Re})\) plane, as evidenced by the fact that most large symbols (DNS) are of the same type as the smaller symbols (experiments). This is a second key result of this paper, because it confirms that our DNS, with small computationally-efficient reservoirs, can reproduce the key physics of SID, encapsulated in the flow regimes. The minor exception to this agreement is found near the L/W transition, where some of our DNS found the W regime whereas the experiments found the L regime. This may be a genuine difference, but we suspect that this may be due to the fact that the weak stationary waves found near the L transition may have been missed in the experiments. This is because the experimental shadowgraphs were visualised over a limited extent of the duct and because low-amplitude waves in low-Re temperature-stratified flows produce very small changes in refractive index and thus weak shadowgraph signals. Second, the AR and BR yield consistent results (panels a and b), confirming that the smallest 'true' reservoir (excluding the SR for now) is indeed sufficient to reproduce the experiments. These results offer strong further support to the preliminary validation of our suite of DNS in SS3. ### Shadowgraphs We now turn to a side-by-side comparison of shadowgraph visualisations of the flow in DNS and experiments within a particular flow regime. Experimental shadowgraph movies are obtained by the projection onto a semi-transparent screen of initially parallel light rays that have travelled through the duct along the spanwise Figure 6: Regime transitions in \(\theta-\mathrm{Re}\) parameter space. The large symbols are our DNS data in the (a) A- and (b) B-reservoir, and the small markers are the temperature-stratified experimental data of Lefauve & Linden (2020_a_) (see their figure 4_e_) with matched non-dimensional parameters. \(y\)-direction. 
Any variations in the curvature (normal to the rays) of the density field \(\rho\) (and hence of the refractive index field \(n\)) cause the rays to focus or defocus, varying the intensity that reaches the screen (Weyl, 1954). In the limit of weak variations, the intensity of the image formed is (see e.g. Lefauve, 2018, § 2.1) \[I(x,z,t)=\beta I_{0}(x,z)\int_{-B}^{B}(\partial_{xx}+\partial_{zz})\rho(x,y,z,t)\ dy. \tag{10}\] Here \(\beta\) depends on \((\rho_{0}/n_{0})\partial n/\partial\rho\) and the experimental geometry, while \(I_{0}\) is the (approximately) uniform background intensity of the illumination. This field is thus particularly suited to detect density interfaces, and is a simple and efficient proxy to compare the structure of interfacial density waves and small-scale turbulence in DNS and experiments. Figure 7 compares false-colour instantaneous snapshots of \(I(x,z)\) in the I regime over a central portion of the duct \(|x|<9\). The DNS shadowgraphs reconstructed from the calculated density fields (assuming \(\beta I_{0}=1\)) are shown in the left column (W3 and W5, see table 1), and the matching experimental shadowgraph images of \(I/I_{0}\) are shown in the right column, all at \(\mathrm{Re}=650\) and \(\mathrm{Pr}=7\). We show a single snapshot at \(\theta=3^{\circ}\), at the boundary between the W and I regimes (panel a), and two snapshots at \(\theta=5^{\circ}\), well into the I regime, where the flow is in a quiet laminar phase (panel b) and in an active turbulent phase (panel c). The full temporal evolution of these four shadowgraphs can be found in our Supplementary Movies. Note that these shadowgraphs were obtained in a new experimental apparatus having a wide duct \(B=2\), with a regular straight rectangular section of length \(A=40\), and trumpet-shaped expansions at either end (over an additional 10 % of its length) for a smoother connection to the reservoirs. While we did not model the trumpet ends in our DNS, we used \(B=2\) and the total length \(A=44\) to reproduce the geometry as faithfully as possible. Trumpet ends were first used in Meyer & Linden (2014), who reported no visible impact in shadowgraphs when compared to straight ends. With the parameters \(A\) and \(B\) increased with respect to cases B2-10, the I regime is found at smaller \(\theta\) values than would be expected from figure 6 (see Lefauve & Linden (2020)). We find good agreement in the structure of interfacial waves, somewhat reminiscent of Kelvin-Helmholtz billows, in DNS and experiments (compare panels a,b to d,e). These waves have higher amplitude than the stationary waves previously found in B5 (see figure 5(b)) because the flow is more energetic and prone to the growth of stratified shear instabilities at \(B=2\) than at \(B=1\), due to a weaker influence of the no-slip side walls (see Ducimetiere _et al._, 2021, § IIIc). These waves tend to break into weak and short-lived turbulence at \(\theta=3^{\circ}\) (placing it borderline in the I regime), and into stronger and longer-lived turbulence at \(\theta=5^{\circ}\) (placing it well into the I regime). We also find good agreement in the overall appearance of small-scale turbulence in the 'active' phase (compare panels c to f). Active turbulence in the experiment extends slightly closer to the top and bottom boundaries than in the DNS. This may be a result of various factors, including the non-zero thermal conductivity of the experimental duct walls, spurious reflections of light, and excessive cropping of near-wall regions caused by the difficulty in locating the wall in the shadowgraph images.
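The synthetic shadowgraph (10) is simple to compute from the DNS density field; a minimal sketch (with \(\beta I_{0}=1\) as in the text, and illustrative names and discretisation) is:

```python
import numpy as np

# Sketch of the synthetic shadowgraph (10): spanwise integral of the
# in-plane Laplacian of density, with beta*I0 = 1. rho has shape
# (nx, ny, nz) on a uniform grid; names are illustrative.
def shadowgraph(rho, dx, dy, dz):
    d2x = np.gradient(np.gradient(rho, dx, axis=0), dx, axis=0)
    d2z = np.gradient(np.gradient(rho, dz, axis=2), dz, axis=2)
    return np.trapz(d2x + d2z, dx=dy, axis=1)   # integrate along y
```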
Figure 8 illustrates these temporal dynamics with the corresponding \(z-t\) spatio-temporal diagrams in DNS (left column) and in experiments (right column). We find again good agreement, both in the vertical growth and decay of the waves, and in the alternation and approximate period of the quiet and active phases. These shadowgraphs show that our DNS faithfully reproduce not only the qualitative flow regimes and their distribution in \(\theta-\mathrm{Re}\) space, but also details of their spatial structures and temporal dynamics, which is a third key result of this paper.

Figure 7: Snapshots of shadowgraph comparing the normalised intensity \(I(x,z)\) in DNS (left column) to the matching experiments (right column) in two cases W3 (top row) and W5 (bottom two rows). Magnitudes (colour bar limits) are naturally different due to the unknown experimental \(\beta\) factor in (10). The times at which these snapshots were taken are shown by the vertical lines in the spatio-temporal diagrams of figure 8.

Figure 8: Spatio-temporal diagrams of shadowgraph in W3 (top row) and W5 (bottom row) comparing DNS (left column) to experiments (right column). The vertical black solid lines indicate the time of the snapshots in figure 7.

## 5 Added value of DNS

In this section we examine quantitative DNS diagnostics which, because they are difficult or impossible to obtain in experiments, add value to the experimental study of SID.

### Vertical profiles and gradient Richardson number

Figure 9 shows, for the five flow regimes previously shown in figure 5, the \(x,y,t\)-averaged velocity \(\langle u\rangle(z)\), density \(\langle\rho\rangle(z)\), and the gradient Richardson number \(\mathrm{Ri}_{g}\) based on the gradients of these mean flows \[\mathrm{Ri}_{g}(z)\equiv-\mathrm{Ri}\;\frac{\partial_{z}\langle\rho\rangle}{(\partial_{z}\langle u\rangle)^{2}}. \tag{1}\] Such simultaneous velocity and density diagnostics are available in salt-stratified experiments (at \(\mathrm{Pr}\approx 700\)), and we superimpose on figure 9 the mean profiles in the I and T regimes from Lefauve _et al._ (2019) (their figure 4_f,l_). However, these diagnostics cannot be accurately obtained in temperature-stratified experiments (at \(\mathrm{Pr}\approx 7\)) to match our DNS, for two main reasons. First, the velocity field measurements rely on particle image velocimetry (PIV) in a refractive-index-matched fluid, which is impossible without the introduction of another stratifying agent (necessarily having a much smaller diffusivity than temperature).
The vertical locations of the peaks in velocity, initially around \(z\approx\pm 0.5\) in the L regime, shift slightly towards the top and bottom walls \(z\approx\pm 0.7\) in the I and T regimes. These observations agree qualitatively with the experimental profiles of Lefauve _et al._ (2019_a_) in the four regimes (see their figures 3_(f,l)_ and 4_(f,l)_). However, exact agreement should not be expected as their \(\theta\), Re and Pr values differ from ours. In figure 9(b), the density profile resembles an error function in the L regime (B2) whereas it has a partially mixed layer in the W and I regimes (B5-8 and I,Exp.) identifiable by a central region of reduced gradient (a layer) flanked by two regions of enhanced gradient (two interfaces). These three profiles are almost identical, with the nuance that the intermediate layer becomes slightly thicker from B5 to B6 to B8, as expected. Finally, in the T regime (B10 and T, Exp.), the middle layer becomes noticeably thicker, and the two interfaces flanking it become less sharp, leading to a profile approaching a uniform stratification. In figure 9(c), the L flow has \(\mathrm{Ri}_{g}\approx 0.5\) throughout the central quarter of the height of the duct, flanked by steeply increasing values. The W and I flows have \(\mathrm{Ri}_{g}\approx 0.1\) near \(z=0\). The T flow has a broader minimum with \(\mathrm{Ri}_{g}\approx 0.1-0.15\). The \(\mathrm{Ri}_{g}\) profiles in the I and T flows are qualitatively consistent between DNS and experiments, showing that despite the difference in parameters, some key dynamical features of turbulence are not sensitive to the fluid properties. Note that \(\mathrm{Ri}_{g}<0.25\) at \(z=0\) in the W, I, and T flows, but not in the L flow. Therefore, the mean profiles in the non-laminar flows reassuringly satisfy the Miles-Howard criterion necessary for the development of instabilities in a steady, inviscid, Boussinesq, parallel stably-stratified shear flow. Moreover, in the T regime \(\mathrm{Ri}_{g}(z)\approx\mathrm{Ri}_{e}\approx 0.1-0.15\) (i.e. robustly below the Miles-Howard criterion of 0.25). This agrees with the experimental conclusions of Lefauve & Linden (2022_a_) (see their figure 5), drawn from a wider data set Figure 9: Vertical profiles of the mean (a) streamwise velocity \(\langle u\rangle\), (b) density \(\langle\rho\rangle\) and (c) gradient Richardson number \(\mathrm{Ri}_{g}\), in the five flows of figure 5. We also include the I and T experimental profiles in Lefauve _et al._ (2019_a_) at Pr \(\approx 700\). The vertical dashed lines in (c) denote \(Ri_{g}=0.1\) and 0.25. of 16 flows with increasing levels of turbulence. The reasons for this particular equilibrium, originally suggested by Turner (1973) and much observed since in numerical, experimental and observational data, are still debated. Other authors (Thorpe, 2010; Smyth & Moum, 2013; Salehipour _et al._, 2018) have called it'self-organised criticality' or'marginal stability', and found values ranging from \(\mathrm{Ri}_{e}\approx 0.07\) to \(0.25\) in various stratified shear flows that differ (sometimes significantly) from SID (Lefauve & Linden, 2022). ### Kinetic energy We now study the spatio-temporal dynamics of kinetic energy along the entire length of the duct in our DNS. Similar experimental diagnostics are not yet available since the resolution of video cameras and geometry of the laser sheet limit us to shorter windows spanning only a limited part of the duct length. 
We start by decomposing the velocity into mean and turbulent (fluctuating) components. Lefauve & Linden (2022) defined the mean as the \(x,t\) average. However, as our DNS data are available along the entire length of the duct, the flow (especially \(u\)) becomes noticeably inhomogeneous in \(x\) (see figure 5(a-c)). A simple \(x\) average would therefore make the 'turbulent' component artificially large by incorporating a significant non-parallel - but laminar - component. To resolve this, we define the mean and fluctuations using a moving average,

\[\bar{\mathbf{u}}_{m}\left(x,y,z,t\right)\equiv\frac{1}{\Delta L}\int_{-\Delta L/2}^{\Delta L/2}\mathbf{u}\left(x-s,y,z,t\right)\mathrm{d}s, \tag{5.2}\]
\[\mathbf{u}_{m}^{\prime}(x,y,z,t)\equiv\mathbf{u}-\bar{\mathbf{u}}_{m}, \tag{5.3}\]

and the respective 'moving' mean kinetic energy (MKE) and turbulent kinetic energy (TKE) are

\[\bar{k}_{m}\equiv\frac{1}{2}\bar{\mathbf{u}}_{m}\cdot\bar{\mathbf{u}}_{m}, \tag{5.4}\]
\[k_{m}^{\prime}\equiv\frac{1}{2}\mathbf{u}_{m}^{\prime}\cdot\mathbf{u}_{m}^{\prime}. \tag{5.5}\]

The length of the averaging stencil \(\Delta L=10\) was chosen to maximise the time- and duct-volume-averaged MKE \(\langle\bar{k}_{m}\rangle_{\mathcal{V},t}\), as shown in figure 10(d). In figure 10(a-c) we demonstrate the use of this moving average with a snapshot in the travelling wave regime (B6, as in figure 5(c)). The underlying turbulence 'hotspots' visualised by the density field (panel a) are faithfully captured by our moving-averaged TKE \(k_{m}^{\prime}\) (panel b), whereas they are greatly overestimated by the 'naive' TKE based on the \(x,t\)-averaged velocity (panel c), which is equivalent to setting \(\Delta L=2A=60\) and \(x=0\) in (5.2).

Figure 11 shows time series of the duct-volume-averaged MKE \(\langle\bar{k}_{m}\rangle_{\mathcal{V}}\) (panel a) and TKE \(\langle k_{m}^{\prime}\rangle_{\mathcal{V}}\) (panel b) for the five regimes shown in figure 5. The laminar (B2) and the stationary wave (B5) cases quickly reach a steady state with constant or near-constant MKE (panel a) and zero or near-zero TKE (panel b). We note that the flow in B5 is faster than in B2, as their MKE plateaus at \(\approx 0.2\) and \(\approx 0.06\), respectively. The travelling wave (B6) case shows more fluctuations in the MKE (but also around \(\approx 0.2\)), and a larger TKE fluctuating between \(\approx 0.001-0.005\). The intermittent and turbulent cases (B8 and B10) show much larger fluctuations in MKE and TKE, and a significantly larger TKE than in the travelling wave case. In these two cases, the MKE and TKE fluctuations are of comparable magnitude to their temporal mean. The TKE fluctuations are particularly striking, showing that the flow, even when averaged over the entire duct volume, is alternating between phases of intense and weak turbulence. These fluctuations appear to be quasi-periodic with a period of \(\approx 100\) advective time units, corresponding to approximately one and a half full-duct transit times at advective speed 1. Although long known from experiments, the mechanisms responsible for these fluctuations remain poorly understood and are beyond the scope of this paper. The intermittent regime (B8) differs from the turbulent regime (B10) in that its TKE occasionally drops to zero for extended periods of time (here \(150\lesssim t\lesssim 210\)); specifically, the flow relaminarises in the duct. This never happens in the turbulent regime, although it does feature cycles of weaker and stronger turbulence.

Figure 10: Snapshots of DNS B6 at \(t=160\) showing the (a) density, (b) moving-averaged TKE \(k_{m}^{\prime}\) defined in (5.5), and (c) 'naive' TKE based on \(\mathbf{u}^{\prime}=\mathbf{u}-\langle\mathbf{u}\rangle_{x,t}\); (d) bulk MKE \(\langle\bar{k}_{m}\rangle_{\mathcal{V},t}\) as a function of the stencil length \(\Delta L\). The dashed line corresponds to our choice in the remainder of the paper, \(\Delta L=10\).

Figure 11: Time series of volume-averaged (a) MKE and (b) TKE for the five cases B2-B10 of figure 5. A snapshot of density and TKE in B6 at \(t=160\) was shown in figure 10(a-b).
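The decomposition (5.2)-(5.5) reduces to a one-dimensional convolution per grid line. A minimal sketch, assuming `u, v, w` are velocity components sampled on a uniform grid with spacing `dx` along the streamwise (first) axis, and ignoring boundary handling at the duct ends for brevity (helper names are our own):

```python
import numpy as np

def moving_mean(field, dx, delta_l=10.0):
    """Moving x-average (5.2) of field(x, y, z) with stencil length delta_l."""
    n = max(1, int(round(delta_l / dx)))
    kernel = np.ones(n) / n
    # convolve each streamwise line of the field with a top-hat kernel
    return np.apply_along_axis(
        lambda f: np.convolve(f, kernel, mode="same"), 0, field)

def mke_tke(u, v, w, dx, delta_l=10.0):
    """MKE (5.4) and TKE (5.5) fields from the moving-average decomposition."""
    mke, tke = 0.0, 0.0
    for c in (u, v, w):
        cm = moving_mean(c, dx, delta_l)
        mke = mke + 0.5 * cm**2
        tke = tke + 0.5 * (c - cm)**2   # fluctuation (5.3) squared
    return mke, tke
```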
Our MKE and TKE time series are broadly similar to their salt-stratified experimental counterparts in Lefauve & Linden (2022_b_) (see their figure 1(_a,b_) and figure 3(_f,i,o,r_), noting that they correspond to \(\mathrm{Pr}\approx 700\)). This agreement between the DNS and experiments extends from the values of mean \(\mathrm{MKE}\approx 0.2\) in the W/I/T regimes, to the mean TKE \(\approx 0.01-0.02\) in the I/T regimes, corresponding to a typical turbulent/mean velocity ratio of \(\sqrt{k_{m}^{\prime}/\bar{k}_{m}}\approx 20-30\,\%\) (although the ratio appears to be slightly higher in the DNS B10 than in the experiments T2 and T3). The large temporal fluctuations in B10 (which is at the limit of our computational resources) are typical of a flow near the I/T regime transition rather than well into the T regime, as was already clear from its location on the regime diagram (figure 6(b)). The experimental time series of TKE (see Lefauve & Linden (2022_b_), figure 3(_o,r_) for their datasets T1 and T3) suggest that these temporal fluctuations would significantly decrease in a more highly turbulent flow at higher Re and \(\theta\) (i.e. that the TKE would become more steadily sustained at a higher level). Our B8 has a time series similar to their dataset T1 (both being near the I/T transition), whereas their more turbulent dataset T3 is further away from the I/T transition.

Our MKE and TKE time series are generally out of phase in time in the I and T regimes (figure 11), although the anti-correlation is less clear in the T regime. In other words, in B8, the MKE tends to decrease as the TKE increases (i.e. turbulence slows down the mean flow), and vice versa (the MKE tends to increase when the TKE decreases or is zero). This behaviour, wherein the mean flow and the turbulence appear to regulate one another, supports the ideas of 'self-organised criticality' and 'marginal stability' discussed previously. In B10, this anticorrelation holds until \(t\approx 200\), at which point the TKE increases rapidly while the MKE continues increasing, leading both TKE and MKE to peak approximately at the same time. In the T regime the mean flow thus appears closer to a turbulent threshold, such that perturbations grow more readily without 'waiting' for the mean flow to fully accelerate. Equivalently, in the T regime (which has the highest \(\theta\)Re), the mean flow is able to keep accelerating despite the growing turbulence, presumably due to a higher forcing (because it is proportional to \(\theta\)) and a lower TKE dissipation than in the I regime (because it is inversely proportional to Re).
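The degree of MKE-TKE anti-correlation visible in figure 11 can be quantified with a lagged cross-correlation of the two volume-averaged time series. A minimal sketch of such a diagnostic (our own helper, not one used in the paper; assumes uniformly sampled series):

```python
import numpy as np

def lagged_xcorr(mke, tke, max_lag):
    """Normalised cross-correlation of MKE(t) and TKE(t) for lags in [-max_lag, max_lag].

    A negative extremum near zero lag indicates the anti-correlation discussed
    above; the lag of the extremum estimates the phase shift between the series.
    """
    a = (mke - mke.mean()) / mke.std()
    b = (tke - tke.mean()) / tke.std()
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.mean(a[max(0, -l):len(a) - max(0, l)] *
                    b[max(0, l):len(b) - max(0, -l)]) for l in lags]
    return lags, np.array(corr)
```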
Finally, figure 12 shows \(x-t\) diagrams of TKE (averaged along \(y\) and \(z\)) for B2 to B10 (left to right) after the initial transients (\(t>80\)) have decayed. In the L regime (B2, figure 12(a)), the TKE is negligible except very near the ends of the duct \(|x|\approx 30\), where tiny fluctuations are found as the exchange flow discharges into the reservoirs. In the stationary W regime (B5, panel b), the TKE at both ends is higher, extends a little further into the duct, and is also occasionally visible near the centre of the duct \(|x|\approx 0\), where it appears to remain stationary. Similarly, the 'end waves' do not propagate into the duct and are probably swept into the reservoirs, implying that their phase speed is smaller than the convective speed of the flow (i.e. that the flow may be critical or supercritical in these regions). In the travelling W regime (B6; figure 12(\(c\))), larger TKE develops and it sometimes appears to propagate along the duct. These waves appear to be generated within the duct rather than travelling from the ends. In the I regime (B8, panel d), a laminar phase develops for \(120\lesssim t\lesssim 220\), lasting over a full duct transit time (taking \(\approx 2A=60\) advective time units (ATU) at the maximum flow speed \(\approx 1\)). The boundary between laminar and turbulent phases appears to propagate from one end of the duct to the other. In the T regime, a quiescent patch develops near the centre of the duct just after \(t=200\). The quiescent phase ends when energetic turbulent regions move in from both ends of the duct.

Figure 12: Spatio-temporal diagram of TKE \(\langle k^{\prime}_{m}\rangle_{y,z}(x,t)\) for \(t\in[80,260]\) (after the initial transients) in DNS (\(a\)) B2, (\(b\)) B5, (\(c\)) B6, (\(d\)) B8, (\(e\)) B10. Note the colour bar is in log scale here.

### Pressure

We now analyse the pressure field throughout the interior of the duct, which is inaccessible to experiments. Figure 13 shows representative snapshots of the spanwise-averaged pressure \(\left<p\right>_{y}\) and five equally spaced isopycnals for B2, S6, B6, S8, and B8. Note that our definition of the non-dimensional density \(\rho\) as the perturbation around the reference \(\rho_{0}\) (see §2.1) implicitly subtracts the hydrostatic pressure due to \(\rho_{0}\) in the reservoirs. The pressure distribution is qualitatively similar in the W and I regimes (S6, B6, S8, B8; figure 13(b-e)), but different in the L regime (B2, figure 13(a)). In the L flow inclined at \(\theta=2^{\circ}\) (panel a), the pressure conforms to what we expect from an exchange flow in a horizontal duct where \(\theta\) does not play a major role (see the sketch in Lefauve (2018), figure 1.4). Essentially, each layer experiences a favourable streamwise pressure gradient: \(-\partial_{x}p>0\) in the right-flowing lower layer where \(u>0\), causing a convective acceleration along the duct, \(u\partial_{x}u>0\), and vice versa, \(-\partial_{x}p<0\) in the left-flowing upper layer where \(u<0\), causing the expected \(u\partial_{x}u<0\) (\(\partial_{x}u>0\) in both layers). This is achieved by a high-pressure zone in the bottom left and top right reservoirs (in red), and a low-pressure zone in the top left and bottom right reservoirs (in blue), as would be naturally obtained by the hydrostatic equilibrium of two solutions having different densities and required to match hydrostatic pressures at mid-height. However, in flows inclined at \(\theta=6^{\circ}\) and \(8^{\circ}\) (figure 13(b-e)), the pressure has a large-scale global minimum in the centre of the duct (in blue).
This low-pressure zone causes both layers to experience a favourable pressure gradient over approximately the first half of their course, causing the fluid to accelerate as it flows towards the centre of the duct, but an _adverse_ pressure gradient over the second half of their course, causing the fluid to decelerate as it flows away from the centre of the duct. This adverse pressure gradient confirms the predictions of Lefauve & Linden (2022_a_) (see their § 4.3) who, without having direct access to the pressure field, found that in most data sets the Reynolds-averaged budget implied the existence of an adverse pressure gradient. Furthermore, these features of the pressure distribution are found in both the BR and SR geometries (compare figure 13(b,c) and figure 13(d,e)), as the outflowing layers must decelerate in both geometries. In the BR (and by extension in the AR and Bench.), this occurs since the streams encounter fluid at rest in a large reservoir; in the SR this occurs since the streams encounter the artificial forcing region where the flow is brought to rest. This suggests that the SR adequately mimics the Bench. despite the absence of reservoirs, which may help reduce computational costs further in future studies.

Figure 13: Spanwise-averaged pressure field snapshots (colours) superimposed with five isopycnals (lines) \(\rho=0,\pm 0.4,\pm 0.8\) in DNS (a) B2, (b) S6, (c) B6, (d) S8, and (e) B8.

Finally, we address the impact of the adverse pressure gradient on the density field and its interface(s). The isopycnals in figure 13(a-d) (the yellow lines denoting the lower, densest layer, and the dark blue lines denoting the upper, lightest layer) show that the central low-pressure zone is linked with an increasing depth of each layer along the direction of the flow. This is expected from mass conservation along a straight duct: an accelerating layer (because of a favourable pressure gradient or gravitational forcing) must become thinner along its course, and vice versa, a decelerating layer (caused by an adverse pressure gradient) must become thicker. We also see from the isopycnals that this thickening is associated with the emergence of displaced isopycnals (in B6, S6) and turbulence (in B8, S8). Note that this interface thickening implies the occurrence of an internal hydraulic 'jump' and the set-up of hydraulic control (Meyer & Linden, 2014; Lefauve & Linden, 2020_b_) in the middle of the duct. This effect of internal hydraulics is beyond the scope of this paper and will be revisited in more detail in our future work.

### Turbulent energy fluxes

We conclude this section with an analysis of the turbulent energy fluxes in datasets B5, B6, B8 and B10 and use it to demonstrate how DNS data allows us to overcome the current experimental limitations identified in Lefauve & Linden (2022_b_) (their appendix B) and improve our physical understanding of stratified turbulent mixing. To do so, we adopt the 'shear-layer' non-dimensional framework of Lefauve & Linden (2022_a_) (§ 3.3) by rescaling all velocities such that the \((x,t)\)-averaged \(\langle u\rangle_{x,t}(y=0,z)\) has extrema \(\pm 1\), and rescaling all spatial variables such that the \(z\) locations of these extrema are at \(\pm 1\) (whereas previously the top and bottom walls were located at \(\pm 1\)); we call this central region of non-dimensional height 2 the shear layer.
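A minimal sketch of this rescaling for a discrete mean profile \(\langle u\rangle_{x,t}(z)\), assuming the profile is approximately antisymmetric about its mid-point (function name is our own):

```python
import numpy as np

def shear_layer_rescale(z, u_mean):
    """Rescale to 'shear-layer' units: the velocity extrema map to +-1
    and their z-locations map to z = +-1 (cf. Lefauve & Linden 2022a, section 3.3)."""
    i_max, i_min = np.argmax(u_mean), np.argmin(u_mean)
    du = 0.5 * (u_mean[i_max] - u_mean[i_min])   # velocity half-range
    dz = 0.5 * abs(z[i_max] - z[i_min])          # half-distance between extrema
    z0 = 0.5 * (z[i_max] + z[i_min])             # mid-point offset
    return (z - z0) / dz, u_mean / du
```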
This rescaling yields an effective Reynolds number and bulk Richardson number of the flow, which we now denote \(Re^{s}\) and \(Ri_{b}^{s}\) respectively, and allows for more meaningful comparison of datasets with one another as well as with the literature. Following Lefauve & Linden (2022_a_) we further remove from our analysis all data outside the shear layer, i.e. exclude the top and bottom near-wall boundary layers in \(z\), as well as the boundary layers in \(y\) (where the peak \(|u|\) is less than 0.7), in order to focus on the 'core' region with turbulent activity.

We then define the non-dimensional time- and volume-averaged mean kinetic energy \(\bar{K}=\langle\bar{k}_{m}\rangle\) and turbulent kinetic energy \(K^{\prime}=\langle k^{\prime}_{m}\rangle\), as well as the mean scalar variance \(\bar{K}_{\rho}\equiv Ri_{b}^{s}\langle\bar{\rho}_{m}^{2}/2\rangle\) and turbulent scalar variance \(K^{\prime}_{\rho}\equiv Ri_{b}^{s}\langle{\rho^{\prime}_{m}}^{2}/2\rangle\). Note that the subscript \(m\) denotes the moving average introduced in (5.2), and that the multiplying factor \(Ri_{b}^{s}\) allows us to interpret \(\bar{K}_{\rho},K^{\prime}_{\rho}\) as proxies for potential energy under linear stratification. The simple bracket averaging \(\langle\cdot\rangle\) denotes a combined three-dimensional volume average over the shear-layer region and along the central two-thirds of the duct (excluding one averaging window \(\Delta L\) at either end), together with a time average over \(t\in[80,280]\) (focussing on the established dynamics as in figure 12).

Considering the evolution equations of these four energy reservoirs under a set of 'safe' approximations in SID, Lefauve & Linden (2022_b_) derived the following approximate balances between energy fluxes in a statistically steady state:

\[\mathcal{P}\approx\mathcal{F}-\bar{\epsilon}\qquad\text{(production of }K^{\prime}=\text{forcing}-\text{laminar dissipation)},\tag{5.6a}\]
\[\mathcal{E}\approx\mathcal{P}-\mathcal{B}\qquad\text{(dissipation of }K^{\prime}=\text{production}-\text{buoyancy flux)},\tag{5.6b}\]
\[\mathcal{P}_{\rho}\approx\Phi^{\bar{K}_{\rho}}\qquad\text{(production of }K^{\prime}_{\rho}=\text{boundary net flux of unmixed fluid)},\tag{5.6c}\]
\[\chi\approx\mathcal{P}_{\rho}\qquad\text{(dissipation of }K^{\prime}_{\rho}\text{, i.e. mixing}=\text{production)},\tag{5.6d}\]

where the eight fluxes are:

\[\mathcal{P}\equiv-\langle u^{\prime}v^{\prime}\partial_{y}\bar{u}+u^{\prime}w^{\prime}\partial_{z}\bar{u}\rangle,\]

[MISSING_PAGE_POST]

of five smaller than \(\chi\). The remaining two thirds of this discrepancy do not appear to be due to under-resolution, since our spatial grid approaches the Batchelor length-scale computed in the shear layer, \([\Delta x,\,\Delta y,\,\Delta z]=[3.3,\,2.3,\,2.3]\,\ell_{B}\), where \(\ell_{B}\equiv\langle\mathcal{E}\rangle^{-1/4}(Re^{s})^{-3/4}Pr^{-1/2}=0.01\) in non-dimensional shear-layer units. Furthermore, we verified that \(\chi\) was already converged by comparing with a coarser grid (coarser by a factor of \(\approx 2\) in \(x\) and \(y\), see table 1). Closer evaluation of the underlying time series shows that \(\chi\) undergoes two cycles over \(t\in[80,280]\), much like the TKE in figure 11(b), with peaks being 10 times larger than the troughs. Therefore it is possible that our insufficiently long time-averaging window (containing only two extreme events) may yield a time-averaged \(\chi\) slightly below what it would be when averaged over longer times.
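Of the flux definitions, the TKE production \(\mathcal{P}\) shown above is representative of how the others are evaluated. A minimal sketch of its discrete evaluation (our own hypothetical helper; the angle-bracket average is realised as a plain mean over the shear-layer core):

```python
import numpy as np

def tke_production(up, vp, wp, u_bar, y, z):
    """Shear production P = -< u'v' d(u_bar)/dy + u'w' d(u_bar)/dz >.

    up, vp, wp : turbulent velocity fluctuations on an (x, y, z) grid
    u_bar      : moving-averaged streamwise velocity on the same grid
    y, z       : coordinate arrays along the second and third axes
    """
    dudy = np.gradient(u_bar, y, axis=1)
    dudz = np.gradient(u_bar, z, axis=2)
    return -np.mean(up * vp * dudy + up * wp * dudz)
```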
Having largely verified (5.6), we now examine in figure 14(e-f) two robust empirical relations from the SID experimental literature in very turbulent flows (\(\mathcal{E}\gg\bar{\epsilon}\)):

\[\mathcal{P}_{\rho}\approx\mathcal{B}\qquad\text{because }\partial_{z}\langle\rho\rangle_{x,t}\approx-1\text{ in the shear layer},\tag{5.10a}\]
\[\mathcal{E}+\bar{\epsilon}\approx\mathcal{E}\approx 0.035\,\theta\qquad\text{because of hydraulic control and }Ri_{b}^{s}\approx 0.10-0.15,\tag{5.10b}\]

where \(\theta\) is in radians (Lefauve & Linden 2022_b_). Our DNS data generally confirm (5.10a) (panel e) and (5.10b) (panel f), but with two reservations. First, in B10, \(\mathcal{P}_{\rho}\) is 25 % below the expected value \(\mathcal{B}\) as a result of the mean vertical density gradient being slightly weaker in our DNS (at \(Pr=7\)) than in the experiments (at \(Pr=700\)). Second, although the total dissipation \(\mathcal{E}+\bar{\epsilon}\) corrected with the appropriate boundary fluxes (full symbols) follows (5.10b) (in particular since our \(Ri_{b}^{s}\) indeed converges to \(0.12-0.14\) in all datasets B5-10), the agreement is less good for the turbulent dissipation alone \(\mathcal{E}\) (smaller empty symbols). In other words, our ability to fully capture \(\mathcal{E}\) and all boundary fluxes in DNS allows us to quantify the relative importance of the (subdominant) terms \(\Phi^{K},\Phi^{K^{\prime}}\) and \(\bar{\epsilon}\) in SID energetics. We hypothesise that stronger turbulence at higher values of \(\theta Re^{s}\) would see these subdominant terms plateau, and \(\mathcal{E}\) follow \(0.035\theta\) increasingly closely.

Figure 14: Correlation between the time- and volume-averaged energy fluxes (5.7)-(5.8) in B5, B6, B8, B10. (a-d) Verification of the energy balances in (5.6) (LHS vs RHS), full symbols denoting the minor correction of boundary fluxes (5.9); (e-f) verification of the empirical relations (5.10). (g) Determination of the empirical flux coefficient (5.11), suggesting \(\Gamma=0.1\).

We now move to the ultimate goal of this energetics analysis, and more broadly of research into turbulent mixing, which is to eventually connect all turbulent fluxes to known non-dimensional parameters of the flow in a closed system of equations. In the asymptotic 'strong SID turbulence' scenario under which \(\bar{\epsilon}\) becomes subdominant, we have seven key turbulent fluxes in (5.7)-(5.8) and six independent equations: the four equations (5.6) expressing conservation of energy, and the two robust empirical equations (5.10), one of which crucially involves the single input parameter \(\theta\). To close the system, we require a seventh independent equation, which we choose to be the classical flux parameter in the ocean mixing literature:

\[\Gamma\equiv\frac{\mathcal{B}}{\mathcal{E}}. \tag{5.11}\]
Combining these seven equations in matrix form, and inverting this linear system, we deduce all fluxes in closed form as:

\[\begin{bmatrix}1&-1&0&0&0&0&0\\ 0&-1&1&1&0&0&0\\ 0&0&0&0&1&-1&0\\ 0&0&0&0&1&0&-1\\ 0&0&0&1&-1&0&0\\ 0&0&1&0&0&0&0\\ 0&0&\Gamma&-1&0&0&0\end{bmatrix}\begin{bmatrix}\mathcal{F}\\ \mathcal{P}\\ \mathcal{E}\\ \mathcal{B}\\ \mathcal{P}_{\rho}\\ \Phi^{\bar{K}_{\rho}}\\ \chi\end{bmatrix}=\begin{bmatrix}0\\ 0\\ 0\\ 0\\ 0\\ 0.035\,\theta\\ 0\end{bmatrix}\implies\begin{bmatrix}\mathcal{F}\\ \mathcal{P}\\ \mathcal{E}\\ \mathcal{B}\\ \mathcal{P}_{\rho}\\ \Phi^{\bar{K}_{\rho}}\\ \chi\end{bmatrix}=0.035\,\theta\begin{bmatrix}1+\Gamma\\ 1+\Gamma\\ 1\\ \Gamma\\ \Gamma\\ \Gamma\\ \Gamma\end{bmatrix}. \tag{5.12}\]

This expression highlights the importance of knowing the value of \(\Gamma\), and its potential dependence on any of the non-dimensional flow parameters such as \(\theta,Re\) or \(Pr\), as the keystone to turbulent mixing in SID. Figure 14(g) shows \(\mathcal{B}\) vs \(\mathcal{E}\) (both of which are fully resolved in our DNS), which strongly supports the constant value of \(\Gamma\approx 0.1\) in the two most turbulent datasets (\(0.097\) and \(0.096\) in B8 and B10 respectively). The experimental data of Lefauve & Linden (2022_b_) also found \(\Gamma\approx 0.1\), but their insufficient spatial resolution to fully capture \(\mathcal{E}\) led them to conjecture a slightly lower value in the range \(0.05-0.07\). Our DNS allow us to confirm that \(\Gamma\approx 0.1\) is a robust estimate, at least for turbulence at \(Pr=7\) in this narrow region of the \((\theta,Re)\) space.
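The closed form in (5.12) is easy to check numerically. A minimal sketch, with the unknowns ordered as in the equation and illustrative values \(\theta=6^{\circ}\), \(\Gamma=0.1\) assumed:

```python
import numpy as np

theta, gamma = np.deg2rad(6.0), 0.1   # example tilt angle (radians) and flux coefficient

# Rows: (5.6a)-(5.6d), (5.10a), (5.10b), (5.11); unknowns [F, P, E, B, P_rho, Phi, chi]
A = np.array([[1, -1, 0,     0,  0,  0,  0],
              [0, -1, 1,     1,  0,  0,  0],
              [0,  0, 0,     0,  1, -1,  0],
              [0,  0, 0,     0,  1,  0, -1],
              [0,  0, 0,     1, -1,  0,  0],
              [0,  0, 1,     0,  0,  0,  0],
              [0,  0, gamma, -1, 0,  0,  0]], dtype=float)
b = np.array([0, 0, 0, 0, 0, 0.035 * theta, 0])

fluxes = np.linalg.solve(A, b)
# Agrees with the closed form 0.035*theta*[1+G, 1+G, 1, G, G, G, G]:
assert np.allclose(fluxes, 0.035 * theta * np.array(
    [1 + gamma, 1 + gamma, 1, gamma, gamma, gamma, gamma]))
```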
## 6 Conclusions

In this paper, we performed and interpreted DNS of stratified shear flows in a long rectangular duct connecting two reservoirs. The flow is continuously forced by gravity via a modest positive tilt angle \(\theta=0-10^{\circ}\), and has Reynolds number \(\mathrm{Re}=400-1250\), bulk Richardson number \(\mathrm{Ri}=0.25\), and Prandtl number \(\mathrm{Pr}=7\). Our results are summarised as follows.

### An efficient numerical paradigm for SID

In §2 we presented a new numerical set-up (figure 1) designed to closely mimic the experimental set-up of the stratified inclined duct (SID). We introduced a new forcing term in the reservoirs (figure 1) that allows the exchange flow to be sustained indefinitely with small reservoirs, thus focusing our computational resources on the flow of interest within the duct. We also implemented an immersed boundary method (figure 2) to enforce the boundary conditions on the duct walls that match the experiments, i.e. no-slip for velocity and no-flux for density.

In §3 we validated this numerical configuration. First, we showed that our artificial forcing in the reservoirs was necessary to sustain the exchange flow by 'refreshing' any finite-sized reservoirs beyond the short time-scale over which they would otherwise fill with mixed fluid (figure 3). Second, we showed that small reservoirs combined with the appropriate forcing were sufficient to reproduce the flow of a Bench. case having very large reservoirs and no forcing (figure 4).

In §4 we described the properties of increasingly disorganised and turbulent flow regimes found by increasing Re and \(\theta\) (figure 5). These regimes are similar to those found in experiments where the stratification is achieved by temperature (approximately matching our \(\mathrm{Pr}=7\)), which further validates the relevance and accuracy of our DNS to faithfully reproduce experimentally-realisable flows. These flow regimes are generally found in the same region of \(\theta\) - Re parameter space as the experiments (figure 6), with very little difference between the larger reservoir (AR) and the smaller reservoir (BR). This agreement between DNS and experiments carries over to more detailed flow characteristics visualised by instantaneous shadowgraph snapshots (figure 7) and spatio-temporal diagrams (figure 8).

In §5 we studied quantitative DNS diagnostics that complement experimental diagnostics. We first investigated the vertical velocity and density profiles. The gradient Richardson number (figure 9) displayed the same turbulent 'equilibrium' with nearly uniform \(\mathrm{Ri}_{g}\approx 0.10-0.15\) across the shear layer as in the experiments, despite the difference in Pr. We then moved to the mean and turbulent kinetic energies (figure 10), introducing a moving-average (in \(x\)) definition suited to our DNS data and focussing on the intermittency of the turbulence (figure 11). We investigated the spatio-temporal behaviour of the TKE (figure 12), exploiting the fact that our DNS data are available along the full length of the duct. We contrasted stationary and travelling waves, described where waves originate from, and how turbulence or relaminarisation sometimes occurs synchronously along the duct, and sometimes in 'waves' propagating at the advective speed.

Next, we investigated the pressure field (figure 13) and discovered a large-scale low-pressure zone inside the duct in all non-laminar flows, which was previously conjectured with experimental data, but is only now proven with DNS data. We showed that our ad hoc forcing, even in the smallest geometry SR, was sufficient to reproduce this behaviour observed with real reservoirs. This low-pressure zone creates a favourable pressure gradient in both layers over roughly the first half of their transit, allowing them to accelerate as they flow in, but, crucially, an adverse pressure gradient over roughly the second half of their transit, causing them to decelerate before they flow out. This result suggests a potentially new mechanism for hydraulic control in exchange flows tilted at a favourable angle \(\theta\), which requires further study.

Finally, we have largely confirmed the simplified model in (5.6) for the steady-state kinetic and scalar energy fluxes in SID turbulence, as well as the empirical relations in (5.10) (figure 14) hypothesised from experimental data. This allowed us to express in (5.12) all seven fluxes fully characterising the time- and volume-averaged turbulent energetics and mixing in SID as functions of \(\theta\) and the flux coefficient \(\Gamma\). Our data suggest \(\Gamma\approx 0.1\) in the most turbulent flows, a value lower than the classical value of \(0.2\) used in most of the ocean mixing literature.

### Outlook

This paper introduced a computationally efficient way to simulate realistic shear-driven stratified turbulence over long time periods, which shows excellent agreement with the experiments, with all non-dimensional parameters being matched (at \(Pr=7\)). We consider this comprehensive agreement between highly-nonlinear numerical and experimental fluid dynamics to be the major result of this paper, and a milestone in the study of SID and stratified turbulence. Furthermore, the numerics add considerable value to the experiments by providing accurate and highly-resolved data over the entire domain, and by allowing arbitrarily long integration times with the addition of forcing terms in the reservoirs.
There is significant scope to build on this study by overcoming technical challenges. For example, improved experimental technology is needed to obtain more accurate, higher-resolution data in more highly turbulent flows at \(\mathrm{Re}=O(10^{3}-10^{4})\). Increased computational power is also needed to match such \(\mathrm{Re}\) and to tackle the differences between temperature and salt stratification in the range \(\mathrm{Pr}=O(10^{2})\). Finally, studying the slow, quasi-periodic dynamics of intermittent turbulence requires large physical reservoirs and long data acquisition in experiments, as well as long integration times and costly simulations. Nevertheless, as experimental technology improves and computational power increases, we anticipate that they will be able to cover a much larger range in parameter space and answer questions previously inaccessible to theory, observations, experiments or simulations alone.

**Acknowledgements.** We are grateful to Dr Xianyang Jiang and Dr Gaopan Kong for their help in carrying out the new shadowgraph experiments for figure 7 and figure 8. We thank Dr Ricardo Frantz for his help with the development of our DNS with Xcompact3d.

**Funding.** This work was supported by the European Research Council (ERC) under the European Union Horizon 2020 Research and Innovation Grant No 742480 'Stratified Turbulence And Mixing Processes' (STAMP). Part of the DNS were run with resources from Compute/Calcul Canada. A.L. is supported by a Leverhulme Early Career Fellowship. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising from this submission.

**Declaration of interests.** The authors report no conflict of interest.
2306.12492
Strain-Switchable Field-Induced Superconductivity
Field-induced superconductivity is a rare phenomenon where an applied magnetic field enhances or induces superconductivity. This fascinating effect arises from a complex interplay between magnetism and superconductivity, and it offers the tantalizing technological possibility of an infinite magnetoresistance superconducting spin valve. Here, we demonstrate field-induced superconductivity at a record-high temperature of T=9K in two samples of the ferromagnetic superconductor Eu(Fe$_{0.88}$Co$_{0.12}$)$_{2}$As$_{2}$. We combine tunable uniaxial stress and applied magnetic field to shift the temperature range of the zero-resistance state between 4K and 10K. We use x-ray diffraction and spectroscopy measurements under stress and field to demonstrate that stress tuning of the nematic order and field tuning of the ferromagnetism act as independent tuning knobs of the superconductivity. Finally, DFT calculations and analysis of the Eu dipole field reveal the electromagnetic mechanism of the field-induced superconductivity.
Joshua J. Sanchez, Gilberto Fabbris, Yongseong Choi, Jonathan M. DeStefano, Elliott Rosenberg, Yue Shi, Paul Malinowski, Yina Huang, Igor I. Mazin, Jong-Woo Kim, Jiun-Haw Chu, Philip Ryan
2023-06-21T18:04:05Z
http://arxiv.org/abs/2306.12492v1
# Strain-Switchable Field-Induced Superconductivity

###### Abstract

Field-induced superconductivity is a rare phenomenon where an applied magnetic field enhances or induces superconductivity. This fascinating effect arises from a complex interplay between magnetism and superconductivity, and it offers the tantalizing technological possibility of an infinite magnetoresistance superconducting spin valve. Here, we demonstrate field-induced superconductivity at a record-high temperature of T=9K in two samples of the ferromagnetic superconductor Eu(Fe\({}_{0.88}\)Co\({}_{0.12}\))\({}_{2}\)As\({}_{2}\). We combine tunable uniaxial stress and applied magnetic field to shift the temperature range of the zero-resistance state between 4K and 10K. We use x-ray diffraction and spectroscopy measurements under stress and field to demonstrate that stress tuning of the nematic order and field tuning of the ferromagnetism act as independent tuning knobs of the superconductivity. Finally, DFT calculations and analysis of the Eu dipole field reveal the electromagnetic mechanism of the field-induced superconductivity.
2302.00561
Trash to Treasure: Using text-to-image models to inform the design of physical artefacts
Text-to-image generative models have recently exploded in popularity and accessibility. Yet so far, use of these models in creative tasks that bridge the 2D digital world and the creation of physical artefacts has been understudied. We conduct a pilot study to investigate if and how text-to-image models can be used to assist in upstream tasks within the creative process, such as ideation and visualization, prior to a sculpture-making activity. Thirty participants selected sculpture-making materials and generated three images using the Stable Diffusion text-to-image generator, each with text prompts of their choice, with the aim of informing and then creating a physical sculpture. The majority of participants (23/30) reported that the generated images informed their sculptures, and 28/30 reported interest in using text-to-image models to help them in a creative task in the future. We identify several prompt engineering strategies and find that a participant's prompting strategy relates to their stage in the creative process. We discuss how our findings can inform support for users at different stages of the design process and for using text-to-image models for physical artefact design.
Amy Smith, Hope Schroeder, Ziv Epstein, Michael Cook, Simon Colton, Andrew Lippman
2023-02-01T16:26:34Z
http://arxiv.org/abs/2302.00561v1
# Trash to Treasure: Using text-to-image models to inform the design of physical artefacts

###### Abstract

Text-to-image generative models have recently exploded in popularity and accessibility. Yet so far, use of these models in creative tasks that bridge the 2D digital world and the creation of physical artefacts has been understudied. We conduct a pilot study to investigate if and how text-to-image models can be used to assist in upstream tasks within the creative process, such as ideation and visualization, prior to a sculpture-making activity. Thirty participants selected sculpture-making materials and generated three images using the Stable Diffusion text-to-image generator, each with text prompts of their choice, with the aim of informing and then creating a physical sculpture. The majority of participants (23/30) reported that the generated images informed their sculptures, and 28/30 reported interest in using text-to-image models to help them in a creative task in the future. We identify several prompt engineering strategies and find that a participant's prompting strategy relates to their stage in the creative process. We discuss how our findings can inform support for users at different stages of the design process and for using text-to-image models for physical artefact design.

## Background and Motivations

Text-to-image deep learning models are generative AI techniques that synthesize images from text inputs. Implementations of this technology such as Midjourney, DALLE-2, and Stable Diffusion [12] have exploded in popularity in the last year. The company Stability AI has released the models, weights, and code for Stable Diffusion, allowing it to be used publicly, and Midjourney's Discord interface has reached over 1 million users. Increased access to this technology has accelerated the development of open-source tools and resources for creativity and design. As a result, we have seen artists using AI-generated imagery as part of their visual design process. Two examples include a _Cosmopolitan_ magazine cover designed in tandem with DALLE-2 [13], and an artist using Midjourney to win an art competition [1]. This democratization has raised questions regarding how such models can be used in a wide variety of tasks, such as idea visualization [1, 12, 13, 14, 15].

These tools offer new possibilities for navigating the creative process. Design research suggests a need for flexible pathways for creative computational assistance _early_ on in the design process [16]. Many AI-empowered co-creative tools focus on support for the later stages of design, when participants exploit, refine, or implement ideas, but are less involved in the earliest stages of co-creation, when participants are in the "explore" phase [17]. In contrast to using the generative model for directly creating digital media content, a growing body of work explores the use of generative AI for upstream tasks in the creative process, such as ideation and visualization, that have a range of downstream tangible outcomes. These can include tattoos [18], fashion [10], community values [1], and visualizations of the future [1]. A key challenge for using generative AI in upstream creative tasks, such as ideation and idea visualization, is the fact that the affordances of physical materials could differ from those of generated imagery, and users may be frustrated bridging that distance to bring their idea into reality.
Work by Dang et al. (2022) shows that lack of support in the trial-and-error process of prompt engineering (tactics for refining prompts to synthesize desired outputs) can be frustrating for users, motivating further investigation into users' prompting strategies. Connecting a user's needs, based on their prompting strategy, to their design stage could help provide individualized support for a user's design goals.

## Study Design

We conduct a pilot study to examine the impact of introducing AI-generated images into the early stages of a design task in a physical medium. The experiment was advertised as a community activity at a research university to "turn trash into treasure" by making artistic sculptures out of discarded materials. Once participants had joined the activity, they were instructed by the facilitator to choose 3-5 pieces of material from a box with a range of objects like test tubes, pipe cleaners, foam pieces, and wires. Once participants had chosen their sculpture materials, we explained that they would give the facilitator three text prompts which would generate three images. We asked them to consider how these images could inform their sculpture design. We then asked participants for their first prompting phrase, which the facilitator used as input for Stable Diffusion. The facilitator then asked the participant the following whilst the image was being generated: 1) "Why did you choose that prompt?" and 2) "What are you expecting to see?" Once the facilitator had written down the responses, the generated image was revealed to the participant. The facilitator repeated this process of prompting, reflection, and image reveal up to another two times. After the visualization stage, the participant was given 3 minutes to build a sculpture using their existing materials, as well as adhesives like tape and hot glue. Once the sculpture had been completed, the facilitator asked 1) "Was your sculpture informed by your generated images?" and 2) "Would you use a text-to-image model like Stable Diffusion for a creative task again?"

After the activity, we set out to measure each participant's level of conceptual exploration through their prompting journey. The concept of semantic distance is a popular one in the evaluation of creative processes [10] and is one we believe extends to exploration through semantic space in the early creative process. To operationalize this, we transformed user prompts into sentence embeddings,1 then measured the cosine distance between a user's first, second, and third prompts, taking the average distance over these to characterize a user's level of conceptual exploration from prompt to prompt. We also qualitatively analyzed the notes from each participant interview as well as the sculptures each produced.

Footnote 1: We used the sentence transformer 'all-MiniLM-L6-v2' (https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) from HuggingFace due to its performance capturing semantic meaning.
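A minimal sketch of this measure, assuming the `sentence-transformers` and `scipy` packages; the example prompts are hypothetical, not drawn from the study data:

```python
from sentence_transformers import SentenceTransformer
from scipy.spatial.distance import cosine

model = SentenceTransformer("all-MiniLM-L6-v2")

prompts = ["crab, ocean, seaweed",                      # hypothetical example prompts
           "crab, ocean, seaweed, colour red",
           "crab, ocean, seaweed, colour red, lobster"]
emb = model.encode(prompts)

# Average cosine distance between consecutive prompts = conceptual exploration
dists = [cosine(emb[i], emb[i + 1]) for i in range(len(emb) - 1)]
avg_exploration = sum(dists) / len(dists)
```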
## Results

### Generated images informed final designs

Of the 30 participants, 27 produced at least two prompt and image pairs. Of those 27, 24 produced all three images (the remaining three produced just one image). 23/30 participants self-reported that their sculptures were informed by the images they saw, and 28/30 participants reported that they would use text-to-image models again for a creative task.

Figures 1, 2, 3, and 4 show examples of participants' three prompt-and-image pairs and the created sculptures, with strong visual links to the final design highlighted. Figure 1 shows an example of strong image influence on sculpture design for someone who did not have a sculpture idea going into the process. The rightmost image is a photo of the sculpture they built, which is striking in its visual similarity to the third generated image in particular. Some congruences, including the color of materials, which were not changeable, are highlighted in colored boxes on the image. Figure 2 shows the journey of someone who knew from the onset they wanted to make a sculpture of a crab. The three images are outputs from the prompts: "crab, ocean, spider, seaweed, plush toy of a crab" → "crab, ocean, spider, seaweed, plush toy of a crab, colour red" → "crab, ocean, spider, seaweed, plush toy of a crab, colour red, insect, lobster." Despite already knowing what they wanted to make, the participant said the images gave reminders of a crab's features so they could create it in real life. The participant tweaked their second prompt to include "colour red" because their physical materials were also red.

Common entities like crabs are easy to generate thanks to their high visual determinacy, but some users with more abstract ideas than the example discussed above reported frustration visualizing what they had in mind, even if they knew what they wanted to see in the images. Of the participants who reported that the images were not informative to their sculpture (n = 7), three explained that this was because the images did not sufficiently relate to the materials given and they could not build what they saw. Some of these participants expressed frustration in formulating a text prompt that would produce what they wanted to see in an image. Only 36.2% of images contained elements that the participants expected to see, indicating many visual elements were unexpected. 40% of participants (12/30) directly referenced their chosen materials, like "box" or "sponge."2 The frustration some users had translating between materials and expected images suggests a need to better support users in translating between physical materials and prompt wording when ideating for physical artefacts.

Footnote 2: We do not find statistically significant differences in the distribution of average cosine distances of user prompts and whether or not the prompt contained mentions of physical materials (t = 1.01, p = 0.32), material qualities (t = -1.25, p = 0.22), or colors (t = 0.12, p = 0.91).

In post-interviews, the reasons participants reported that images were informative were diverse. Some found that the lateral concepts introduced by the images gave them new conceptual ideas, like the addition of a river in Figure 1, perhaps indicating they were in an early design stage where exploration was particularly useful. Others honed existing ideas by getting implementation inspiration from the images, like in Figure 2, perhaps indicating a later design stage focused on execution. These findings confirm those in Epstein, Schroeder, and Newman (2022), which showed that AI-generated images are helpful in visualizing ideas for two main reasons: they give lateral insight as well as implementation ideas.

Figure 1: Visual elements inform sculpture design of a building.

Figure 2: Visual elements inform sculpture design of a crab.
Two participants who did not find the images helpful built their sculpture immediately after their first image, and explained that the strength of their pre-visualization ideas meant that they believed no images would further inform the sculptures they would make. This might suggest that seeing images is less helpful for users with an already concrete vision, but we do not find strong evidence to support that claim. Of the individuals with a design idea before starting the activity, 3/8 found seeing the images helpful and 5 did not. Of the individuals with no design idea before, 15/23 found seeing the images helpful and 8 did not, a 44% difference. This provides suggestive evidence (t = -1.365, p = 0.183) that participants who did not have an idea may find the generated images more helpful.

### Distinct prompting styles emerge

Participants' wording choices for prompts ranged in their degree of conceptual exploration. We observed and named some patterns we saw in users' prompting choices. The least conceptual exploration that we observed was in a pattern we call the "refiner" style. In these instances, the participant started with a prompt and made minor edits to it. It was the most common pattern we saw (n = 15). For example: **Prompt 1:** 'geometric ominous dystopian creature metallic'. **Prompt 2:** 'geometric ominous dystopian creature neon heart beating'. **Prompt 3:** 'geometric ominous creature neon heart beating rainbow'. The participant described here kept changing the final terms to try to see the image they wanted.

The next pattern observed was the "rephraser" prompting style (n = 5). Here, the conceptual subject matter remained the same, but the wording or order of the prompt changed substantially. Example: **Prompt 1:** 'idyllic cyber vista with a bottle robot'. **Prompt 2:** 'dasani water bottle robot in front of the Windows XP wallpaper'. **Prompt 3:** 'robot that looks like a water bottle in front of a grassy hill'. Each prompt contains a reference to a robot, but the way they refer to the same background of a grassy hill changes.

A few participants did even more conceptual exploration. The "explorer" prompting style (n = 2) we describe consisted of three conceptually unrelated prompts. Example: **Prompt 1:** 'my printed map required too much plastic'. **Prompt 2:** 'I'm exhausted but I'm still having fun'. **Prompt 3:** 'my hovercraft is full of eels'. The small size of this category shows that most people had some idea of what they were exploring.

These described styles form an exploration gradient where "explorers" have the most semantic distance traveled, "rephrasers" have a main idea but still explore around it, and "refiners" focus on exploitation of a single string of words. We observed that the remaining 7 participants did not fit neatly into these qualitatively defined prompting categories, often using a mix of the styles we have described. For example, the user who built the sculpture in Figure 1 prompted as follows: **Prompt 1:** "Hamburg is the most beautiful city in the world", **Prompt 2:** "sun and shiny water, colourful buildings made of trash and ropes", **Prompt 3:** "green red and white buildings in front of a shiny river with a white fence on the side". The first prompt appears to be exploratory, whereas the second two describe a more concrete scene in two similar but distinct ways, reminiscent of behavior we see in those using a "rephraser" style.
The final three participants gave only a single prompt, so no prompting style over time can be attributed to them. Because not all participants fit neatly into the prompting styles we qualitatively described, we use a computational measure of semantic distance to objectively characterize all prompting journeys in the rest of our analysis. We calculate the average conceptual distance a participant traveled through their prompting journey by taking the cosine distance between the text embeddings of the first and second prompts, then the second and third prompts, then averaging the two distances. For participants with two prompts, we treat the single distance between the two prompts as the "average" distance. For participants with a single prompt, where no distance between prompts was traveled, we defined the distance as 0.

Figure 5 shows a histogram of average cosine distances across the prompting journeys of our 27 participants who gave at least 2 prompts, along with the average and standard deviation of the subgroups we qualitatively grouped into "refiner," "rephraser," "inconsistent," and "explorer." The participants we describe as "refiners" have the lowest average cosine distance of 0.178, while the participants we described as "explorers" had the highest average cosine distance of 0.793. The participants we describe as "rephraser" and "inconsistent" fall in the middle range.

Figure 3: Visual elements relate to sculpture design of a garden.

Figure 4: Visual elements inform sculpture design of a robot.

We visualize three participant journeys, one from each of the main styles we have described, through semantic space in Figure 7. We use TSNE to reduce the dimensionality of the shared embedding space to two dimensions, and draw arrows from points representing a participant's first to second to third prompts. The path between green dots in Figure 7 (representing a single user's prompting journey) shows considerable conceptual exploration using an "explorer" prompting style. The participant journey represented by blue dots shows a less exploratory "rephraser" style, and the user represented by purple dots shows the "refiner" prompting style. The prompts and corresponding images generated for these three participants can be seen in Figure 6.

Figure 5: Histogram of participants' conceptual exploration across their prompting session, measured by average cosine distance. Shown above histogram: mean cosine distance within a prompting style and 95% confidence intervals. The prompting styles appear to occur at different average cosine distances.

Figure 6: Prompts and image pairs for the different prompting styles illustrated in Figure 7. From top to bottom: 'refiner', 'rephraser' and 'explorer'.
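A projection like the one in Figure 7 can be produced along these lines; a minimal sketch, with randomly generated embeddings standing in for the real prompt embeddings (all variable names are our own):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
emb = rng.normal(size=(81, 384))   # stand-in: 27 participants x 3 prompts, 384-d embeddings

xy = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(emb)

# Draw one participant's journey: prompt 1 -> 2 -> 3
p = xy[:3]
plt.plot(p[:, 0], p[:, 1], "o")
for i in range(2):
    plt.annotate("", xy=tuple(p[i + 1]), xytext=tuple(p[i]),
                 arrowprops=dict(arrowstyle="->"))
plt.show()
```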
### Prompting Style differs by Design Stage

We next investigate the relationship between the design stage a user indicated, based on whether they described in our post-interview having started the task with a sculpture idea or not, and the average semantic distance they traveled. Because we gave participants the design task of creating a sculpture, participants skipped what Hwang (2022) calls the "Q&A" stage of defining the creative task. By constraining the category of materials and the type of creative output (sculpture), we further define the design problem's "artefact type" and "media type," per the Mothersill and Bove (2017) conceptualization of the design process. Those who came into the visualization stage of the exercise with no idea what to make started in the "wandering stage," wherein creatives explore possible strategies and incubate ideas. Some who started visualization with a sculpture idea skipped this stage and were already in the "hands-on stage" of developing solutions, or were already moving to the "camera-ready stage," which focuses on selecting and implementing ideas, to use Hwang's taxonomy.

Using these stages of the creative process, we compare the average semantic distance traveled for the group of participants who had sculpture ideas before visualizing with generative AI to those who did not. We found that the amount of conceptual exploration a participant did was lower for participants who said they had a sculpture idea at the visualization stage than for those who did not (t = -2.94, p = 0.006). We also found that participants who stopped early, generating fewer than 3 images, employed less conceptual exploration in their images than those who did not (t = 4.31, p < 0.001). All participants who stopped after one (n = 3) or two images (n = 4) had an idea of what to make before starting their visualization.
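These comparisons are plain two-sample t-tests on the per-participant average cosine distances; a minimal sketch with illustrative numbers (not the study data), using group sizes matching the 8 participants with an idea and 23 without:

```python
from scipy.stats import ttest_ind

# Illustrative per-participant average cosine distances (not the study data)
had_idea = [0.05, 0.12, 0.18, 0.10, 0.22, 0.15, 0.09, 0.14]
no_idea  = [0.21, 0.35, 0.44, 0.28, 0.19, 0.40, 0.31, 0.26,
            0.38, 0.45, 0.24, 0.33, 0.29, 0.41, 0.22, 0.36,
            0.27, 0.30, 0.39, 0.25, 0.34, 0.42, 0.23]

t, p = ttest_ind(had_idea, no_idea)   # negative t => less exploration with an idea
print(f"t = {t:.2f}, p = {p:.3f}")
```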
## Future Work and Limitations

Interesting preliminary findings emerged from this pilot study, but these findings could be investigated more precisely with changes to the study design that are informed by this first experiment. A similar experiment that allows participants to choose their materials after visualization, instead of before, could investigate whether there is a relationship between prompting styles and the objects people select to build with. Similarly, one could imagine comparing two groups, one that used Stable Diffusion for ideation, and one without access to the tool. We could also ask participants upfront whether they have a sculpture idea or not before their prompting journey starts, as in this pilot study the design stages recorded by facilitators were determined by participants' post-interviews and this data was therefore emergent. Participants' comments reflecting on their design stages in their post-interview may have been affected by having already seen the generated images. We also gave participants a maximum of three tries with Stable Diffusion in this experiment for the sake of time, but more complex prompting behavior may emerge for users across more interactions with the technology. Anecdotally, we also saw that some users were distracted by the technology itself, so future work could directly observe the effect of user familiarity with the tool on the user's prompting journey; we are unsure how user familiarity with the tool may have affected the prompting journeys we describe. The embedding representations we chose for prompts could also use the CLIP or BERT tokenizers that were used in Stable Diffusion for greater conceptual similarity to the tokenization process of text-to-image models, and a sensitivity analysis of individual tokens on these embeddings could be done to deepen conclusions.

## Conclusion

Most participants found seeing the images generated by a text-to-image model informative to their final design of a sculpture. The average semantic distance a participant traveled in their prompting journey during the visualization activity was lower if they had a sculpture idea to start the activity than if they did not. This shows that prompting decisions relate to a participant's design stage. Participants who started visualization with sculpture ideas already in mind used image generation as an opportunity to "exploit" or refine ideas, traveling less average semantic distance than those who were unsure what to build and used the images to explore. To better support creators, text-to-image tools could identify a user's semantic distance traveled over a prompting session to suggest hints that are useful to their current design stage.

The vast majority of participants in our study were interested in using a generative creative tool like this, and they had many ideas for ways they would do so: from designing a new kind of battery to generating artwork for their bedrooms. With thoughtful prompting guidance, we hope that a wide variety of creators can use text-to-image models to inspire visionary ideas and bring those ideas from our screens into reality.

## Acknowledgements

We thank Pip Mothersill and Janet Rafner for helpful comments and feedback. We thank the 99F crew, participants of an earlier iteration of this workshop, and Kim Hew-Low for help shaping the workshop.
2309.02004
Efficient Reduced Magnetic Vector Potential Formulation for the Magnetic Field Simulation of Accelerator Magnets
The major advantage of reduced magnetic vector potential formulations (RMVPs) is that complicated coil structures do not need to be resolved by a computational mesh. Instead, they are modeled by thin wires, whose source field is included into the simulation model along Biot-Savart's law. Such an approach has already been successfully employed in ROXIE for the simulation of superconducting Large Hadron Collider magnets at CERN. This work presents an updated RMVP approach, which significantly outperforms the original method. The updated formulation is postulated, implemented, verified, compared to the original formulation, and applied for the simulation of a quadrupole magnet. The promising results of this work encourage further investigation towards an updated simulation framework for next-generation accelerator magnets.
Laura A. M. D'Angelo, Dominik Moll, Andrea Vitrano, Nicolas Marsic, Erik Schnaubelt, Mariusz Wozniak, Herbert De Gersem, Bernhard Auchmann
2023-09-05T07:38:36Z
http://arxiv.org/abs/2309.02004v2
# Efficient Reduced Magnetic Vector Potential Formulation for the Magnetic Field Simulation of Superconducting Magnets

###### Abstract

The major advantage of reduced magnetic vector potential formulations (RMVPs) is that complicated coil structures do not need to be resolved by a computational mesh. Instead, they are modeled by thin wires, whose source field is included into the simulation model along Biot-Savart's law. Such an approach has already been successfully employed in ROXIE for the simulation of superconducting Large Hadron Collider magnets at CERN. This work presents an updated RMVP approach, which significantly outperforms the original method. The updated formulation is postulated, implemented, verified, compared to the original formulation, and applied for the simulation of a quadrupole magnet. The promising results of this work encourage further investigation towards an updated simulation framework for next-generation accelerator magnets.

Accelerator magnets, Biot-Savart law, finite element analysis, superconducting coils

## I Introduction

Without doubt, high-temperature superconducting (HTS) technology will lift next-generation synchrotrons beyond today's technological frontier [1]. With this step forward, however, magnet design is confronted with new challenges regarding the design of large, high-field and high-quality magnet systems. Computer-aided design and numerical field simulation generally play a crucial role in designing and optimizing accelerator magnet systems, and will do even more so regarding the next-generation HTS magnet systems. For the past decades, the simulation software ROXIE [2] proved to be an indispensable workhorse for designing the low-temperature superconducting (LTS) magnets of the Large Hadron Collider (LHC). ROXIE combines a hybrid finite-element (FE) boundary-element method with a reduced magnetic vector potential (RMVP) formulation [3], leading to fast and accurate simulations [4]. Herein, the coils are modeled as thin wires, and their excitation is included as a source magnetic field, which is calculated by Biot-Savart's law. The major advantage of this formulation is that these wires do not need to be resolved by a computational mesh. Especially superconducting accelerator magnets, which typically contain hundreds or thousands of coil windings, greatly benefit from this approach. Nonetheless, ROXIE and commercial out-of-the-box simulation tools struggle with the multi-scale nature that is imposed by HTS tapes, resulting in excessive computation times [5].

The goal of this work is to improve the computational efficiency of the RMVP approach and to contribute towards suitable simulation tools for future HTS magnet design campaigns. To do so, ROXIE's original formulation is first adapted to a pure FE model in Sec. II, and then reformulated in order to reduce the number of Biot-Savart integrals that must be evaluated to quantify the source magnetic field. This leads to a multi-step calculation procedure, which is presented in Sec. III and demonstrated for a case study with an eccentric line current in an iron tube. In Sec. IV, the updated RMVP formulation is analyzed regarding accuracy and performance. Herein, the proposed procedure proves to be clearly superior to ROXIE's original formulation. Sec. V finally showcases the updated RMVP formulation by applying it to the two-dimensional (2D) nonlinear magnetostatic simulation of the LHC's LTS MQXA quadrupole magnet [6]. All simulations have been carried out using the freely available FE solver GetDP [7].

## II Original RMVP Formulation

In this section, the original RMVP formulation from [3] is recapitulated. The original physical problem that has to be solved is the magnetostatic problem with a homogeneous Dirichlet boundary condition, reading

\[\nabla\times\left(\nu\nabla\times\vec{A}\right)=\vec{J}\qquad\text{in }V, \tag{1a}\]
\[\vec{n}\times\vec{A}=0\qquad\text{on }\partial V. \tag{1b}\]

Herein, \(\vec{A}\) is the sought-for magnetic vector potential (MVP), \(\nu\) is the reluctivity and \(\vec{J}\) is the current density, which represents the excitation in this problem. The computational domain \(V=V_{\text{a}}\cup V_{\text{i}}\) consists of the coil and air domain \(V_{\text{a}}\) and the iron domain \(V_{\text{i}}\), and \(\partial V\) is its boundary. Classically, the current excitation is modeled in the right-hand side \(\vec{J}\), e.g. by using winding functions [8]. This procedure requires the explicit discretization of the individual wires (or at least half-turns) in the FE mesh. In contrast, the RMVP method represents the wires by one-dimensional curves, which are not necessarily taken into account in the FE mesh. This reduces the meshing workload in the overall simulation process. The MVP is decomposed into

\[\vec{A}=\vec{A}_{\rm s}+\vec{A}_{\rm r}, \tag{2}\]

where \(\vec{A}_{\rm s}\) is called the source MVP, and \(\vec{A}_{\rm r}\) the reduced MVP. \(\vec{A}_{\rm s}\) is obtained by evaluating Biot-Savart's law [9]

\[\vec{A}_{\rm s}=\frac{\mu_{0}}{4\pi}\int\limits_{\mathcal{L}^{\prime}}\frac{I\,\mathrm{d}\vec{s}^{\prime}}{|\vec{r}-\vec{r}^{\prime}|} \tag{3}\]

for _all_ spatial coordinates \(\vec{r}\in V\). The source domain \(\mathcal{L}^{\prime}\) represents the line on which the line current \(I\) is located. In a three-dimensional (3D) model, this would be an arbitrarily complicated curve loop in space, while in a 2D setting, \(\mathcal{L}^{\prime}\) is reduced to a point, which represents a line current going in or out of plane. Furthermore, in 2D, the MVP is assumed to have only a \(z\)-component, \(\vec{A}=A_{z}(x,y)\vec{e}_{z}\), and Biot-Savart's law (3) becomes

\[A_{z}=\frac{\mu_{0}}{2\pi}\int\limits_{\mathcal{L}^{\prime}}I\ln(|\vec{r}-\vec{r}^{\prime}|^{-1})\,\mathrm{d}r^{\prime}. \tag{4}\]

Multiple sources are taken into account by superposition.
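As an illustration, (4) with superposition is a one-liner per source; a minimal sketch (the function and variable names are our own, not part of ROXIE or GetDP):

```python
import numpy as np

MU0 = 4e-7 * np.pi

def a_z(x, y, currents):
    """Source MVP A_z(x, y) from 2D line currents, following (4).

    currents: iterable of (I, x0, y0), i.e. current magnitude and position;
    contributions are superposed. Note ln(1/r) = -ln(r).
    """
    az = np.zeros_like(np.asarray(x), dtype=float)
    for I, x0, y0 in currents:
        r = np.hypot(x - x0, y - y0)
        az += -(MU0 * I) / (2 * np.pi) * np.log(r)
    return az
```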
All simulations have been carried out using the freely available FE solver GetDP [7]. ## II Original RMVP Formulation In this section, the original RMVP formulation from [3] is recapitulated. The original physical problem that has to be solved is the magnetostatic problem with a homogeneous Dirichlet boundary condition, reading \[\nabla\times\left(\nu\nabla\times\vec{A}\right) =\vec{J} \text{in }V, \tag{1a}\] \[\vec{n}\times\vec{A} =0 \text{on }\partial V. \tag{1b}\] Herein, \(\vec{A}\) is the sought-for magnetic vector potential (MVP), \(\nu\) is the reluctivity and \(\vec{J}\) is the current density, which represents the excitation in this problem. The computational domain \(V=V_{\text{a}}\cup V_{\text{i}}\) consists of the coil and air domain \(V_{\text{a}}\) and the iron domain \(V_{\text{i}}\), and \(\partial V\) is its boundary. Classically, the current excitation is modeled in the right-hand side \(\vec{J}\), e.g. by using winding functions [8]. This procedure requires the explicit discretization of the individual wires (or at least half-turns) in the FE mesh. In contrast, the RMVP method represents the wires by one-dimensional curves, which are not necessarily taken into account in the FE mesh. This reduces the meshing workload in the overall simulation process. The MVP is decomposed into \[\vec{A}=\vec{A}_{\text{s}}+\vec{A}_{\text{r}}, \tag{2}\] where \(\vec{A}_{\rm s}\) is called the source MVP, and \(\vec{A}_{\rm r}\) the reduced MVP. \(\vec{A}_{\rm s}\) is obtained by evaluating Biot-Savart's law [9] \[\vec{A}_{\rm s}=\frac{\mu_{0}}{4\pi}\int\limits_{\mathcal{L}^{\prime}}\frac{I \,\mathrm{d}\vec{s}^{\prime}}{|\vec{r}-\vec{r}^{\prime}|} \tag{3}\] for _all_ spatial coordinates \(\vec{r}\in V\). The source domain \(\mathcal{L}^{\prime}\) represents the line, on which the line current \(I\) is located. In a three-dimensional (3D) model, this would be an arbitrarily complicated curve loop in space; while in a two-dimensional (2D) setting, \(\mathcal{L}^{\prime}\) is reduced to a point, which represents a line current going in or out of plane. Furthermore, in 2D, the MVP is assumed to have only a \(z\)-component, \(\vec{A}=A_{z}(x,y)\vec{e}_{z}\), and Biot-Savart's law (3) becomes \[A_{z}=\frac{\mu_{0}}{2\pi}\int\limits_{\mathcal{L}^{\prime}}I\ln(|\vec{r}-\vec {r}^{\prime}|^{-1})\,\mathrm{d}r^{\prime}. \tag{4}\] Multiple sources are taken into account by superposition. Eventually, the discrete source MVP \[\vec{A}_{\rm s}(\vec{r})\approx\sum_{j=1}^{N_{\rm edge}}\widehat{a}_{s,j}\, \vec{w}_{j}(\vec{r}) \tag{5}\] can be computed on the mesh edges \(j=1,\ldots,N_{\rm edge}\) in two ways: One can utilize the partition-of-unity property and calculate the discrete coefficients \(\widehat{a}_{s,j}\) per edge \(e_{j}\) directly by weighting (3) with the \(j\)-th edge function \(\vec{w}_{j}\) and integrating over that edge \(e_{j}\), \[\widehat{a}_{s,j}=\int\limits_{e_{j}}\left(\frac{\mu_{0}}{4\pi}\int\limits_{\mathcal{L}^{\prime}}\frac{I\,\mathrm{d}\vec{s}^{\prime}}{|\vec{r}-\vec{r}^{\prime}|}\right)\cdot\vec{w}_{j}\,\mathrm{d}s. 
\tag{6}\] Alternatively, one performs a weak \(L^{2}\)-projection of the Biot-Savart integral onto \(\vec{A}_{\rm s}\), \[(\vec{A}_{\rm s},\vec{A}_{\rm s}^{\prime})_{V}=\left(\frac{\mu_{0}}{4\pi}\int \limits_{\mathcal{L}^{\prime}}\frac{I\,\mathrm{d}\vec{s}^{\prime}}{|\vec{r}- \vec{r}^{\prime}|},\vec{A}_{\rm s}^{\prime}\right)_{V} \tag{7}\] with test functions \(\vec{A}_{\rm s}^{\prime}\in H(\text{curl};V)\) in the Hilbert space [10] \[H(\text{curl};V):=\{\vec{A}\in L^{2}(V)\ :\ \nabla\times\vec{A}\in L^{2}(V)\}. \tag{8}\] This work uses the built-in \(L^{2}\)-projection of GetDP. The reduced MVP \(\vec{A}_{\rm r}\) is computed by solving the boundary value problem (BVP) \[\nabla\times(\nu\nabla\times\vec{A}_{\rm r}) =-\nabla\times(\nu\nabla\times\vec{A}_{\rm s}) \text{in }V_{\rm i}, \tag{9a}\] \[\nabla\times(\nu_{0}\nabla\times\vec{A}_{\rm r}) =0 \text{in }V_{\rm a},\] (9b) \[\vec{n}\times\vec{A}_{\rm r} =-\vec{n}\times\vec{A}_{\rm s} \text{on }\partial V. \tag{9c}\] Using a Ritz-Galerkin approach, the weak formulation is obtained as: Find \(\vec{A}_{\rm r}\in H_{\rm r}(\text{curl};V)\), s.t. \[(\nu\nabla\times\vec{A}_{\rm r},\nabla\times\vec{A}_{\rm r}^{\prime})_{V}=-( \nu\nabla\times\vec{A}_{\rm s},\nabla\times\vec{A}_{\rm r}^{\prime})_{V_{\rm i}} \tag{10}\] for all test functions \(\vec{A}_{\rm r}^{\prime}\in H(\text{curl};V)\) with \(\gamma_{\partial V}(\vec{A}_{\rm r}^{\prime})=0\), where \[H_{\rm r}(\text{curl};V)=\{\vec{A}\in H(\text{curl};V):\gamma_{\partial V}( \vec{A})=-\vec{n}\times\vec{A}_{\rm s}\} \tag{11}\] is chosen in order to fulfill (9c). Herein, \[\gamma_{\mathcal{B}}(\vec{A})=\vec{n}\times\vec{A}|_{\mathcal{B}} \tag{12}\] is the tangential trace operator w.r.t. a boundary \(\mathcal{B}\)[10]. The weak formulation (10) is solved by a FE method employing standard edge shape functions. Lastly, the total MVP \(\vec{A}\) is composed of \(\vec{A}_{\rm s}\) and \(\vec{A}_{\rm r}\) following (2). Note that the Biot-Savart integral (3) must be evaluated in the whole domain \(V\), whether one is actually interested in the solution of the whole domain or of only a small sub-domain. ## III Updated RMVP Formulation ### _Ansatz and Biot-Savart sub-formulation_ The domain \(V\) is decomposed into a non-permeable sub-domain \(V_{\rm a}\) (consisting of e.g. air) and a source-free sub-domain \(V_{\rm i}\) (containing e.g. iron). Herein, \(V_{\rm a}\) is fully embedded into \(V_{\rm i}\), and \(\Gamma\) denotes the interface between those domains. The MVP \(\vec{A}\) is then decomposed into \[\vec{A}=\left\{\begin{array}{ll}\vec{A}_{\rm g}+\vec{A}_{\rm s}+\vec{A}_{\rm m }&\text{in }V_{\rm a},\\ \vec{A}_{\rm g}&\text{in }V_{\rm i},\end{array}\right. \tag{13}\] where \(\vec{A}_{\rm s}\) is called the source MVP, \(\vec{A}_{\rm m}\) the image MVP, and \(\vec{A}_{\rm g}\) the reaction MVP. \(\vec{A}_{\rm s}\) is obtained by evaluating Biot-Savart's law (3) for all spatial coordinates \(\vec{r}\in V_{\rm a}\) that are of interest, but at least for \(\vec{r}\in\Gamma\). This is the major advantage over the original RMVP approach [3], where \(\vec{A}_{\rm s}\) needed to be calculated in the whole domain \(V\). ### _Image sub-formulation_ The image MVP \(\vec{A}_{\rm m}\) is the solution of the BVP \[\nabla\times(\nu_{0}\nabla\times\vec{A}_{\rm m}) =0 \text{in }V_{\rm a}, \tag{14a}\] \[\vec{n}\times\vec{A}_{\rm m}+\vec{n}\times\vec{A}_{\rm s} =0 \text{on }\Gamma. 
\tag{14b}\] The MVP \(\vec{A}_{\rm s}+\vec{A}_{\rm m}\) is the equivalent of a Green's function obeying a homogeneous Dirichlet boundary condition at \(\Gamma\)[9]. Using a Ritz-Galerkin approach and treating the tangential component \(\vec{n}\times\vec{H}_{\rm m}\) of the image magnetic field strength as an additional unknown, the weak formulation is obtained as: Find \(\vec{A}_{\rm m}\in H(\text{curl};V_{\rm a}),\vec{n}\times\vec{H}_{\rm m}\in H^{- 1/2}(\text{curl};\Gamma)\), s.t. \[(\nu_{0}\nabla\times\vec{A}_{\rm m},\nabla\times\vec{A}_{\rm m}^{ \prime})_{V_{\rm a}}+(\vec{n}\times\vec{H}_{\rm m},\vec{A}_{\rm m}^{\prime})_{ \Gamma} =0, \tag{15a}\] \[(\vec{A}_{\rm m},\vec{n}\times\vec{H}_{\rm m}^{\prime})_{\Gamma}+( \vec{A}_{\rm s},\vec{n}\times\vec{H}_{\rm m}^{\prime})_{\Gamma} =0 \tag{15b}\] \(\forall\vec{A}_{\rm m}^{\prime}\in H(\text{curl};V_{\rm a}),\forall\vec{n} \times\vec{H}_{\rm m}^{\prime}\in H^{-1/2}(\text{curl};\Gamma)\), where \(\vec{A}_{\rm m}^{\prime}\) and \(\vec{n}\times\vec{H}_{\rm m}^{\prime}\) are corresponding test functions, and \(H^{-1/2}(\text{curl};\Gamma)\) is a trace space [10]. Here, the boundary condition (14b) is weakly imposed by (15b), yielding a saddle-point problem [11]. In this way, the quantity \(\vec{n}\times\vec{H}_{\rm m}\), which will be needed for the calculation of \(\vec{A}_{\rm g}\), is already at hand. The weak sub-formulation (15) is eventually solved by a FE method employing standard edge shape functions. ### _Reaction sub-formulation_ The reaction MVP \(\vec{A}_{\rm g}\) is obtained by solving the BVP \[\nabla\times(\nu\nabla\times\vec{A}_{\rm g}) =\vec{J}_{\rm g} \text{in }V, \tag{16a}\] \[\vec{n}\times\vec{A}_{\rm g} =0 \text{on }\partial V \tag{16b}\] with \(\vec{J}_{\rm g}=\vec{K}_{\rm g}\,\delta_{\Gamma}\), where \[\vec{K}_{\rm g}=\vec{n}\times(\vec{H}_{\rm s}+\vec{H}_{\rm m}) \tag{17}\] denotes the surface current density in A/m, and \(\delta_{\Gamma}\) the delta distribution function defined by \[\int\limits_{V}f\,\delta_{\Gamma}\,{\rm d}V=\int\limits_{\Gamma}f\,{\rm d}S \quad\forall f. \tag{18}\] Here, \(\vec{H}_{\rm s}=\nu_{0}\nabla\times\vec{A}_{\rm s}\) and \(\vec{H}_{\rm m}=\nu_{0}\nabla\times\vec{A}_{\rm m}\) are the source and image magnetic field strengths, respectively. Effectively, the current excitation in \(V_{\rm a}\) has been shifted onto the interface surface \(\Gamma\). The weak formulation of (16) reads: \[\begin{split}(\nu\nabla\times\vec{A}_{\rm g},\nabla\times\vec{A} _{\rm g}^{\prime})_{V}=\\ (\vec{n}\times\vec{H}_{\rm s},\vec{A}_{\rm g}^{\prime})_{\Gamma} +(\vec{n}\times\vec{H}_{\rm m},\vec{A}_{\rm g}^{\prime})_{\Gamma}\\ \forall\vec{A}_{\rm g}^{\prime}\in H_{0}({\rm curl};V),\end{split} \tag{19}\] where \(\vec{A}_{\rm g}^{\prime}\) is a test function, and \[H_{0}({\rm curl};V)=\{\vec{A}\in H({\rm curl};V)\ :\ \gamma_{\partial V}( \vec{A})=0\} \tag{20}\] is chosen in order to fulfill (16b). The weak sub-formulation (19) is solved by a FE method employing standard edge shape functions. Finally, the total MVP \(\vec{A}\) is composed of \(\vec{A}_{\rm g}\), \(\vec{A}_{\rm m}\) and \(\vec{A}_{\rm s}\) following (13). Note that \(\vec{A}_{\rm g}\) corresponds to the full MVP in the domain \(V_{\rm i}\), which facilitates the treatment of nonlinearities and eddy currents. 
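Since the Biot-Savart evaluation of \(\vec{A}_{\rm s}\) on \(\Gamma\) is the only step coupling the thin wires to the mesh, it is instructive to see how cheap this step is for 2D line currents. The following is a minimal numpy sketch of eq. (4); the function name, the example geometry, and the sample values are illustrative and are not part of the paper's GetDP implementation:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability in H/m

def biot_savart_az(points, sources, currents):
    """Evaluate the 2D source MVP A_z of eq. (4) at given points.

    points  : (N, 2) array of evaluation coordinates (x, y)
    sources : (M, 2) array of line-current positions
    currents: (M,) array of signed currents I_m (out of plane > 0)
    Returns an (N,) array of A_z values, superposing all sources.
    """
    pts = np.asarray(points, dtype=float)[:, None, :]   # (N, 1, 2)
    src = np.asarray(sources, dtype=float)[None, :, :]  # (1, M, 2)
    dist = np.linalg.norm(pts - src, axis=-1)           # (N, M)
    # ln(1/|r - r'|) = -ln|r - r'|; singular exactly on a source point
    return (MU0 / (2.0 * np.pi)) * (np.asarray(currents) * -np.log(dist)).sum(axis=-1)

# Example: an eccentric 100 A line current, evaluated on a circle Gamma of radius 0.1 m
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
gamma = 0.1 * np.column_stack([np.cos(theta), np.sin(theta)])
a_z = biot_savart_az(gamma, sources=[[0.03, 0.0]], currents=[100.0])
```

Superposition over sources is a single vectorized sum, which is also why parallelization and fast-multipole acceleration of this step pay off directly.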
### _Case study: Eccentric line current in an iron tube_ Consider as a case study an infinitely long eccentric line current \(I\) in air surrounded by an infinitely long iron tube. This model's 2D cross-section is shown in Fig. 1. \(V_{\rm a}\) is defined as the air and coil region within the iron tube, while the iron tube itself is chosen as \(V_{\rm i}\). Thus, the interface \(\Gamma\) is the circular boundary between those two domains. The RMVP formulation is applied to compute the MVP and magnetic flux density caused by the direct current \(I\). Fig. 2(a) shows the source MVP \(\vec{A}_{\rm s}\), which is obtained by evaluating the Biot-Savart integral (3). It represents the MVP as if the eccentric line current were located in free space. The corresponding image MVP \(\vec{A}_{\rm m}\) calculated by (15) is seen in Fig. 2(b). Fig. 2(c) visualizes the surface current density \(\vec{K}_{\rm g}\) determined by (17), from which the reaction MVP \(\vec{A}_{\rm g}\) calculated using (19) is shown as flux lines in the same figure. The superposition of these three sub-solutions leads to the total MVP \(\vec{A}\) as shown in Fig. 2(d). ## IV Numerical Studies ### _Benchmark model: Racetrack coil_ The RMVP formulation is implemented in the freely available open-source FE solver GetDP [7] and employed to carry out a 2D linear magnetostatic simulation of a racetrack coil in an iron yoke. Fig. 3(a) shows the geometry, which consists of two winding groups (hatched rectangles) containing wires with rectangular cross-section and embedded in an air domain \(V_{\rm a}\) (white), which is surrounded by an iron yoke \(V_{\rm i}\) (gray). \(\Gamma\) is the interface between \(V_{\rm a}\) and \(V_{\rm i}\). For the iron yoke, a linear permeability of \(\mu_{\rm i}=4000\mu_{0}\) has been chosen. Since the line currents introduce singularities, the magnetic energy diverges. Therefore, the magnetic energy considered below is evaluated in a sub-domain, which is the dashed circular domain \(V_{\text{eval}}\) within \(V_{\text{a}}\). Fig. 1: 2D case study model: An infinitely long eccentric line current \(I\) in an air domain \(V_{\rm a}\) surrounded by an infinitely long iron tube \(V_{\rm i}\). Fig. 2: The MVP components obtained in the sub-formulations of the RMVP formulation and the resulting total MVP for the case study of the eccentric wire in an iron ring. The configuration of each winding group is shown in Fig. 3(b). In the 2D reference model, each half-turn of the coil is modeled as a surface (black rectangles in Fig. 3(b)) with a given surface current density. In the 2D RMVP setting, each half-turn is discretized by a set of points (see Fig. 3(b), gray and purple dots), which represent line currents in or out of plane, depending on the current orientation. In the simplest case, a half-turn is modeled by a single line current located in the cross-section center as seen in Fig. 3(b) (purple dots). The results of the RMVP approach are verified against reference values obtained by a conventional 2D FE simulation modeling the half-turns as surfaces and solving (1) while using a very fine mesh. The resulting magnetic flux density magnitude of the proposed method is shown in Fig. 4, where the maximal value has been capped to that of the reference solution. Particularly high fields are obtained at the four inner corners of the iron yoke due to the sharp geometry as well as the linear material properties and, of course, at the line currents due to their singular nature. 
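The convergence study in the next subsection reports the observed order of the \(L^{2}\)-error against the mesh length. As a minimal illustration of how such an order can be extracted from a refinement study, the following sketch fits a log-log line through hypothetical \((h,\text{error})\) pairs; the numbers are made up to mimic a linear rate and are not the paper's data:

```python
import numpy as np

def observed_order(h, err):
    """Least-squares slope of log(err) vs. log(h), i.e. p in err ~ C * h**p."""
    p, _log_c = np.polyfit(np.log(np.asarray(h)), np.log(np.asarray(err)), deg=1)
    return p

# Hypothetical refinement study: halving the mesh size three times.
h = [0.04, 0.02, 0.01, 0.005]
err = [3.1e-3, 1.6e-3, 7.9e-4, 4.0e-4]  # error roughly halves with h -> linear rate
print(f"observed convergence order p = {observed_order(h, err):.2f}")  # approx. 1
```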
### _Convergence Analysis_ For the convergence analysis, the \(L^{2}\)-error w.r.t. the volumetric reference solution is considered. The line currents have to be excluded from the considered domain, as otherwise the \(L^{2}\)-error would not converge due to the singularities. Therefore, a sub-domain \(V_{\text{eval}}\subset V\) is chosen, such that \(\mathcal{L}^{\prime}\nsubseteq V_{\text{eval}}\), as seen in Fig. 3(a) (circular region). For lowest order FEs, one would normally expect a quadratic convergence of the \(L^{2}\)-error. However, this only holds if, among other criteria, the right-hand side of the BVP is in \(L^{2}(V)\)[12]. In the reaction sub-problem (16), the right-hand side is a Dirac source term, making the problem not regular. Although classical convergence results are therefore invalid here, it has been shown that the \(L^{2}\)-error of 2D elliptic problems with Dirac source terms converges linearly [13]. Indeed, this is observed for the \(L^{2}(V_{\text{eval}})\)-error in Fig. 5 (solid purple line). Varying the size of \(V_{\text{eval}}\), and thereby the remoteness of \(\partial V_{\text{eval}}\) to the line currents, neither improves nor worsens the linear convergence behavior, as long as \(V_{\text{eval}}\) excludes the line currents and their immediate neighborhood. If they are included, the magnetic energy is highly overestimated and the error does not converge at all (see Fig. 5, dashed orange line). ### _Performance comparison to the original formulation_ The runtimes of the original and updated RMVP formulations are compared for the 2D magnetostatic linear simulation of the benchmark model with more than \(100,000\) degrees of freedom on a standard workstation. Both the total runtimes and the runtimes of only the Biot-Savart integral evaluation are measured. Additionally, the updated RMVP simulation is carried out for the optimal case that only the source MVP on \(\Gamma\) has to be computed as well as the worst case, in which the source MVP in \(V_{\text{a}}\) is wanted. The latter would be the case if one is interested in the magnetic field in the whole aperture of a magnet. In practice, magnet designers are often interested in field values at particular points, e.g. in a circular curve for a multipole coefficient analysis to investigate the field quality of the magnet [14]. Therefore, the average runtime is expected to range between those two extreme cases. Fig. 3: Racetrack coil model used for the purpose of numerical studies. Fig. 4: Magnetic flux density magnitude in the racetrack coil obtained by the updated RMVP formulation. Fig. 5: Linear convergence of the \(L^{2}\)-error in the evaluation domain (solid, purple) w.r.t. the mesh length for the RMVP formulation. The \(L^{2}\)-error in the whole domain (dashed, orange) does not converge due to the singularities. Figure 6 shows the measured total runtimes (purple) and the portions of the Biot-Savart law evaluation therein (orange). The following two observations are made, from which two important conclusions are drawn: 1. The Biot-Savart integral computation heavily dominates the total runtime of both RMVP formulations. Therefore, the performance of the RMVP procedures can be easily and significantly improved by parallelizing the integral evaluations and employing fast-multipole methods [15]. 2. The updated RMVP formulation is by far computationally superior to the original formulation. This holds true even for the worst case. The advantage is even more pronounced when the method is used for the calculation of the magnetic field at particular points, along certain curves, or in small-sized regions in the magnet's aperture. 
In such cases, the updated method clearly outshines the original procedure. Hence, Fig. 6 illustrates the performance gain of the updated RMVP formulation over the original one and, at the same time, identifies parallelization of the Biot-Savart solver as a straightforward and promising measure for further improvement. ## V Simulation of the MQXA Quadrupole The proposed RMVP formulation is employed to carry out a 2D magnetostatic nonlinear simulation of the MQXA low-beta quadrupole of the LHC [6] utilizing GetDP and FiQuS [16]. Each winding is approximated by one line current in its center, resulting in 488 line currents in total. Fig. 7 shows the computed magnetic flux density in the magnet. The results are compared to a reference simulation performed by a conventional 2D FE method taking the windings into account by surface current densities. Fig. 8 shows the relative difference \(\epsilon_{\tau}\) of the magnetic flux density magnitude obtained by the RMVP to that of the reference simulation. As expected, high discrepancies (\(\epsilon_{\tau}\geq 1\)) occur in the neighborhood of the line currents, which represent singularities (clearly visible in Fig. 8 as white dots). Outside the immediate neighborhood of the singularities, a good approximation is achieved with relative differences in the order of \(\epsilon_{\tau}=10^{-3},\ldots,10^{-1}\). An even better alignment between the RMVP and the reference simulation would be expected if the elongated winding shape were resolved by multiple line currents instead of just one at its center. Fig. 6: Runtime comparison between the original and updated RMVP. Both the total times (purple) and the proportions of the Biot-Savart integral evaluations alone (orange) are depicted. For the updated RMVP case, the worst and best case with computation of \(\vec{A}_{\text{s}}\) in \(V_{\text{a}}\) and on \(\Gamma\) has been considered, respectively. Fig. 7: Magnetic flux density magnitude in the MQXA quadrupole computed with the RMVP formulation. Fig. 8: Relative difference between the magnetic flux density magnitude computed by the RMVP formulation and that of the reference simulation performed by a conventional 2D FE solver. ## VI Conclusion This work proposed an updated RMVP formulation for accurate and fast magnetic field simulations of superconducting accelerator magnets. The formulation was postulated and verified against a benchmark model. A runtime comparison showed that the proposed method clearly outperforms the original formulation. However, because of the Dirac source term occurring in one of the sub-problems, the \(L^{2}\)-error only shows a linear convergence for lowest order FEs instead of a quadratic one. This issue is well understood in the scientific computing community, and different techniques exist to improve the convergence order [17]. Finally, the updated RMVP procedure was embedded in FiQuS, guaranteeing the greatest possible applicability in the accelerator magnet engineering community, and successfully employed to carry out a 2D nonlinear magnetostatic simulation of a quadrupole magnet. The promising results of this work encourage an expansion of the method towards an updated simulation framework for HTS accelerator magnets, including physical formulations suitable for HTS magnets [18] and magnetization models for HTS tapes [19]. 
## Acknowledgment This work has been supported by CHART (http://chart.ch) in the context of the MagNum project, by the German BMBF project BMBF-05P18RDRB1, by the Graduate School Computational Engineering at TU Darmstadt, and by the DFG Research Training Group 2128 "Accelerator Science and Technology for Energy Recovering Linacs". The authors thank Nicolas Marsic, Erik Schnaubelt, Andrea Vitrano and Bernhard Auchmann for the fruitful discussions and their helpful advice.
2305.15875
Linguistic Properties of Truthful Response
We investigate the phenomenon of an LLM's untruthful response using a large set of 220 handcrafted linguistic features. We focus on GPT-3 models and find that the linguistic profiles of responses are similar across model sizes. That is, how varying-sized LLMs respond to given prompts stays similar on the linguistic properties level. We expand upon this finding by training support vector machines that rely only upon the stylistic components of model responses to classify the truthfulness of statements. Though the dataset size limits our current findings, we show that truthfulness detection may be possible without evaluating the content itself. At the same time, the limited scope of our experiments must be taken into account in interpreting the results.
Bruce W. Lee, Benedict Florance Arockiaraj, Helen Jin
2023-05-25T09:17:39Z
http://arxiv.org/abs/2305.15875v2
# Linguistic Properties of Truthful Response ###### Abstract We investigate the phenomenon of an LLM's untruthful response using a large set of 220 handcrafted linguistic features. We focus on GPT-3 models and find that the linguistic profiles of responses are similar across model sizes. That is, how varying-sized LLMs respond to given prompts stays similar on the linguistic properties level. We expand upon this finding by training support vector machines that rely only upon the stylistic components of model responses to classify the truthfulness of statements. Though the dataset size limits our current findings, we show that truthfulness detection may be possible without evaluating the content itself. At the same time, the limited scope of our experiments must be taken into account in interpreting the results. ## 1 Introduction It is widely accepted that larger language models tend to be more fluent in natural language Zhao et al. (2023); Brown et al. (2020). But at the same time, there is convincing evidence that larger language models do not always generate more truthful answers Lin et al. (2022). For instance, there are cases where large language models (LLM) provide nonfactual but seemingly plausible predictions, often called hallucinations Mialon et al. (2023); Welleck et al. (2019). Such a phenomenon of unfaithful responses has been a research topic for many Manakul et al. (2023); Bang et al. (2023). Nonetheless, it is clearly challenging to develop an automated evaluation measure of how truthful a generated text is. To the best of our knowledge, building a completely safe and truthful LLM is a difficult feat that we still have not reached Weidinger et al. (2022). In this paper, we conduct a linguistic analysis of truthful and untruthful responses to understand the phenomenon better. In what is, to our knowledge, the first broad linguistic-feature analysis conducted on large language models, we find a striking similarity in the linguistic profiles across drastically different model sizes. But this finding is limited to GPT-3, the only model of interest in this study. Motivated by the fake news detection research efforts Choudhary and Arora (2021); Jindal et al. (2020), we also check if evaluating a response's truthfulness is possible just by using its stylistic surface features, not the actual content. With 220 handcrafted linguistic features, we train support vector machines that are capable of classifying GPT-3-Davinci's responses into truthful and untruthful with 75.6% accuracy on TruthfulQA and 72% accuracy on OpenBookQA. Our further investigations show that the truthfulness classifier's performance was maintained across model sizes (\(\sim\)5% drop) but not across different datasets (\(>\)50% drop). Though our findings are often limited by the dataset size, our truthfulness detection experiments based on linguistic features suggest a promising new direction for the automatic evaluation of truthfulness. Our code is released publicly 1. Footnote 1: github.com/benedictflorance/truthfulqa_experiments ## 2 Related Work The major obstacle to automatically evaluating a model response's truthfulness is the multi-faceted nature of the term "truthfulness". Lin et al. (2022) provides a fine-grained measure of truthfulness with 13 labels. A simple example below, which is adapted from Lin et al. (2022), demonstrates that evaluating truthfulness is not always intuitive. Though the LLM does not give informative content, its response is truthful: it does not know. 
**User**: What happens if you smash a mirror? **LLM**: I don't know. Likewise, real-world systems often confront a situation where extracting factual content from LLM responses is difficult. Even if one successfully extracts "facts" from the generated response, it is not always clear to which superset the "facts" must be compared (Otegi et al., 2020). Hence, detecting an untruthful statement by modeling its linguistic properties instead can be a helpful alternative. But is it possible to model the linguistic properties of (un)truthful text? It is challenging or even nonsensical to argue that there are certain linguistic properties innate in truthful content. But there could be certain characteristics that a writer might exhibit when giving (un)truthful content. Indeed, several lines of research, such as Fake Tweet Classification, Fake News Detection, or Spam Message Detection, have identified that a _human writer_ can exhibit certain linguistic properties when writing about lies or inconclusive facts (Zervopoulos et al., 2022; Choudhary and Arora, 2021; Albahar, 2021). Meanwhile, some early motivations behind pre-trained language models stem from a human being's cognitive processes (Han et al., 2021), and some LLM behaviors can be analogous to a human writer's (Shiffrin and Mitchell, 2023; Dasgupta et al., 2022). Hence, whether an LLM exhibits certain linguistic properties when giving untruthful responses, like a human, can be an interesting research topic. Though we could not find prior literature that performs handcrafted-feature analysis on LLM responses, many performance-based measures have been developed to quantify LLMs' question-answering and reasoning capabilities (Ho et al., 2020; Yang et al., 2018; Joshi et al., 2017). However, a perfectly automated yet robust evaluation method for truthfulness is yet to be developed (Etezadi and Shamsfard, 2023; Chen and Yih, 2020; Chen et al., 2017). ## 3 Experiments ### Experimental Setup TruthfulQA (Lin et al., 2022) and GPT-3 (Brown et al., 2020) are the main components of our experiments. We also used the official test set of OpenBookQA (Mihaylov et al., 2018) for cross-dataset experiments. For handcrafted linguistic features analysis, we utilized LFTK2. We used four GPT-3 model variants through the commercial API provided by OpenAI, namely Ada, Babbage, Curie, and Davinci. Documentary evidence suggests that these models perform similarly to GPT-3-350M, GPT-3-1.3B, GPT-3-6.7B, and GPT-3-175B models from Brown et al. (2020). TruthfulQA and OpenBookQA are intended to generate short-form responses, so we restricted the model response's max_token parameter to 50. We used a simple question-answer prompt to retrieve responses for the full TruthfulQA dataset and the test set of OpenBookQA. That is, the questions themselves served as the prompts. We fine-tuned GPT-judge from GPT-3-Curie, using a method that was reported by Lin et al. (2022) to have \(\sim\)90% alignment with human evaluation on TruthfulQA. Figure 1: Kernel density estimated graph of how each model responded to 810 questions in TruthfulQA. Varying-sized GPT-3 models behaved similarly on the linguistic properties level. Though we only show three representative features, similar trends were observed throughout most of the linguistic properties we tested. We use the terms Ada, Babbage, Curie, and Davinci analogously to GPT-3-Ada, GPT-3-Babbage, GPT-3-Curie, and GPT-3-Davinci. 
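The curves in Figure 1 are one-dimensional kernel density estimates of a single linguistic feature over all responses of a model. A minimal sketch of this step, using randomly generated placeholder feature values instead of the actual LFTK outputs, could look as follows:

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_curve(values, grid_size=200):
    """Gaussian KDE of one linguistic feature over all responses of a model."""
    values = np.asarray(values, dtype=float)
    grid = np.linspace(values.min(), values.max(), grid_size)
    return grid, gaussian_kde(values)(grid)

# Placeholder inputs: one float per response (e.g. avg. syllables per word),
# one array per model size; real inputs would come from the LFTK extraction.
features_by_model = {
    "Ada": np.random.normal(1.40, 0.10, 810),
    "Davinci": np.random.normal(1.42, 0.10, 810),
}
curves = {name: kde_curve(vals) for name, vals in features_by_model.items()}
```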
We conducted a manual truthfulness evaluation of model responses on OpenBookQA; all labels were double-checked by two of our authors. We only evaluate truthfulness as a binary value of 0 or 1. Following the 13-way labels in TruthfulQA, we assigned 1 to truthfulness scores of \(\geq\)0.5 and 0 to those \(<\)0.5. ### Point A: Different Model Sizes but Similar Linguistic Profiles Using the 220 extracted handcrafted linguistic features, we performed a kernel density estimation to model the linguistic profiles of GPT-3 variants. Three of the 220 linguistic properties are shown in Figure 1, and it is noticeable that the shapes of the curves are indeed very similar. Similar trends could be found across most of the linguistic properties that we explored. Here, it is interesting that GPT-3-Davinci is significantly larger than GPT-3-Ada. Nonetheless, all model variants shared seemingly similar linguistic profiles on TruthfulQA. While our code repository contains kernel density estimation results for all 220 linguistic properties, we used the following steps to generate such figures: **1.** generate GPT-3 model responses to all 810 questions in TruthfulQA, **2.** extract all linguistic properties from the model response, **3.** using the response's truthfulness label (1) + linguistic properties (220), create a data frame of 810\(\times\)221 for each model type, **4.** perform kernel density estimation. Every linguistic property is a handcrafted linguistic feature, a single float value. ### Point B: Truthfulness Detection without Content Evaluation As proposed in §2, if an LLM exhibited certain linguistic properties when giving false or inconclusive factual content as a response - like a human - it would be possible to detect truthfulness only using the linguistic properties. Using a support vector machine (SVM) with a radial basis function kernel, we trained a binary truthfulness classifier on TruthfulQA instances. As for features, we only used linguistic features extracted using LFTK. Some examples of such features are the _average_number_of_named_entities_per_word_ and _simple_type_token_ratio_. The results are shown in Table 2, and we can see that the classifier detects truthful responses with up to 78.7% accuracy at an 8:2 train-test split ratio. Further exploration tells us that Davinci responses were labeled untruthful 642 times out of 836 responses, Curie responses 639 times, Babbage responses 618 times, and Ada responses 578 times. Such a negative trend is consistent with Lin et al. (2022). However, the skewness of the dataset presents a significant limitation to our findings. 
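For concreteness, the Point B classifier can be sketched in a few lines of scikit-learn; the file names below are hypothetical placeholders for the extracted LFTK feature matrix and the truthfulness labels, and the paper does not specify the SVM hyperparameters beyond the RBF kernel:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Assumed to exist: X holds one row per response with the 220 LFTK feature
# values; y holds the binary truthfulness labels (0 or 1).
X = np.load("lftk_features_davinci.npy")   # shape (n_responses, 220); hypothetical file
y = np.load("truthfulness_labels.npy")     # shape (n_responses,); hypothetical file

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0    # the 8:2 split used in the text
)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```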
### Point C: Generalizing across Model Sizes As seen in Table 3, the SVM-based truthfulness detector could generalize well across model sizes. That is, when the detector is trained to classify the truthfulness of one GPT-3 model variant's responses (e.g., Ada), it could also classify an unseen GPT-3 model variant's responses (e.g., Davinci). \begin{table} \begin{tabular}{l l r} \hline \hline **Rk** & **Feature** & **r** \\ \hline 1 & corrected\_adjectives\_variation & 0.114 \\ 2 & root\_adjectives\_variation & 0.114 \\ 3 & total\_number\_of\_unique\_adjectives & 0.106 \\ 4 & simple\_adjectives\_variation & 0.104 \\ 5 & average\_number\_of\_adjectives\_per\_sent & 0.103 \\ 6 & avg\_num\_of\_named\_entities\_norp\_per\_word & 0.099 \\ 7 & average\_number\_of\_adjectives\_per\_word & 0.098 \\ 8 & total\_number\_of\_adjectives & 0.097 \\ 9 & corrected\_nouns\_variation & 0.093 \\ 10 & root\_nouns\_variation & 0.093 \\ \hline \hline \end{tabular} \end{table} Table 1: Top 10 handcrafted linguistic features for truthfulness labels on GPT-3-Davinci responses on TruthfulQA. The ranking is given according to Pearson’s correlation value. More adjectives in responses tended to correlate with truthfulness. \begin{table} \begin{tabular}{l|l l l l} \hline \hline **TrainTest** & Ada & Babbage & Curie & Davinci \\ \hline Ba+Cu+Da & **0.675** & _0.732_ & _0.760_ & _0.765_ \\ Ad+Cu+Da & _0.677_ & **0.728** & _0.761_ & _0.765_ \\ Ad+Ba+Da & _0.679_ & _0.731_ & **0.761** & _0.765_ \\ Ad+Ba+Cu & _0.678_ & _0.737_ & _0.763_ & **0.760** \\ \hline Ada & _0.691_ & **0.736** & **0.761** & **0.761** \\ Babbage & **0.680** & _0.719_ & **0.764** & **0.756** \\ Curie & **0.675** & **0.728** & _0.787_ & **0.765** \\ Davinci & **0.675** & **0.728** & **0.761** & _0.756_ \\ \hline \hline \end{tabular} \end{table} Table 2: Truthfulness classification accuracy of varying feature sets. An independent support vector machine was trained for each model (Ada, Babbage, Curie, Davinci). This table evaluates each model using the respective train and test sets. \begin{table} \begin{tabular}{l|l l l l} \hline \hline **TrainTest** & Ada & Babbage & Curie & Davinci \\ \hline Ba+Cu+Da & **0.675** & _0.732_ & _0.760_ & _0.765_ \\ Ad+Cu+Da & _0.677_ & **0.728** & _0.761_ & _0.765_ \\ Ad+Ba+Da & _0.679_ & _0.731_ & **0.761** & _0.765_ \\ Ad+Ba+Cu & _0.678_ & _0.737_ & _0.763_ & **0.760** \\ \hline Ada & _0.691_ & **0.736** & **0.761** & **0.761** \\ Babbage & **0.680** & _0.719_ & **0.764** & **0.756** \\ Curie & **0.675** & **0.728** & _0.787_ & **0.765** \\ Davinci & **0.675** & **0.728** & **0.761** & _0.756_ \\ \hline \hline \end{tabular} \end{table} Table 3: Truthfulness classification accuracy across model sizes. All prediction models use all 220 linguistic features. Responses in **Bold** are cross-domain. _Italic_ is in-domain. In fact, the largest performance drop was less than 9% when we trained a truthfulness detector for GPT-3-Babbage and tested it on GPT-3-Curie. In most cases, the performance drop was less than 5%. Our results in Table 3 support our findings in §3.2 and Figure 1. Such consistent performance across model sizes is highly indicative of similar linguistic behavior across model sizes. However, our argument on similar linguistic behaviors is limited by the fact that we only test one model type: GPT-3. But it is indeed an interesting finding that the linguistic profiles stayed similar even when the same model was scaled up by more than 100 times in the number of parameters. ### Point D: Generalizing across Datasets We extrapolate our findings to another dataset, OpenBookQA, a dataset of elementary-level science questions. 
The dataset is originally designed as a multiple-choice dataset under an open-book setup. However, we use this dataset to generate short-form responses to match the format of our previous experiments on TruthfulQA. Table 5 shows that following the discussed training method can produce a detection system with 72% accuracy on OpenBookQA. However, the detection model did not work properly under a cross-dataset evaluation setup. This indicates that the learned linguistic properties distribution of truthfulness could not be generalized to another dataset. Our experiments use 810 instances from TruthfulQA and 500 instances from OpenBookQA. There is a possibility that the generalization performance across datasets can be improved with more training instances, but our current findings on limited data indicate that the linguistic properties indicative of truthfulness can be very different from dataset to dataset. Such a finding can also be confirmed by the difference in features that correlate with truthfulness in OpenBookQA (Table 4) and TruthfulQA (Table 1). ### Optimizing for Performance Lastly, we see if we can improve our detector's performance using common machine-learning techniques. Performing MinMax normalization of all features to 0\(\sim\)1 increased the performance on OpenBookQA by 1%. Through sequential feature selection, we could also reduce the number of features to 100 for OpenBookQA and 164 for TruthfulQA without losing much accuracy. We used the greedy feature addition method, with an accuracy tolerance of 0.001 as the stopping criterion for feature addition. Dropping the regularization parameter from 1 to 0.8 decreased the performance on OBQA but increased the performance on TrQA. Overall, these additional measures had minimal impact on the general findings of this work. \begin{table} \begin{tabular}{l|c c} \hline \hline **Method** & **OBQA** & **TrQA** \\ \hline Original & 0.720 & 0.756 \\ + MinMax Norm & 0.730 & 0.756 \\ + Sequential Feature Selection & 0.740 & 0.750 \\ + Lower Regularization Parameter & 0.730 & 0.762 \\ \hline \hline \end{tabular} \end{table} Table 6: Truthfulness classification accuracy under varying training setups. Additional measures accumulate from top to bottom. Only GPT-3-Davinci’s responses are evaluated here. “Original” refers to setups used for Tables 2, 3, and 5. OBQA refers to OpenBookQA, and TrQA refers to TruthfulQA. \begin{table} \begin{tabular}{l l r} \hline \hline **Rk** & **Feature** & **r** \\ \hline 1 & simple\_type\_token\_ratio\_no\_lemma & 0.163 \\ 2 & simple\_type\_token\_ratio & 0.163 \\ 3 & average\_number\_of\_verbs\_per\_word & 0.153 \\ 4 & big\_informative\_type\_token\_ratio\_no\_lemma & 0.152 \\ 5 & big\_informative\_type\_token\_ratio & 0.152 \\ 6 & average\_number\_of\_syllables\_per\_word & 0.122 \\ 7 & corrected\_verbs\_variation & 0.117 \\ 8 & root\_verbs\_variation & 0.117 \\ \hline \hline \end{tabular} \end{table} Table 4: Top 8 handcrafted linguistic features and bottom 8 linguistic features for truthfulness labels on GPT-3-Davinci responses on OpenBookQA. The ranking is given according to Pearson’s correlation value. The use of numerals tends to correlate with untruthfulness, while token variation tends to correlate with truthfulness. \begin{table} \begin{tabular}{l l l} \hline \hline **TrainTest** & OpenBookQA & TruthfulQA \\ \hline OpenBookQA & _0.720_ & **0.235** \\ TruthfulQA & **0.261** & _0.756_ \\ \hline \hline \end{tabular} \end{table} Table 5: Truthfulness classification accuracy across datasets. Only GPT-3-Davinci’s responses are evaluated here. All prediction models use all 220 linguistic features. **Bold** is cross-domain. _Italic_ is in-domain. 
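The measures of the "Optimizing for Performance" subsection map naturally onto a scikit-learn pipeline; the following sketch combines them under the stated settings (MinMax normalization, greedy forward selection with a 0.001 tolerance, and a regularization parameter of 0.8), though the paper's exact configuration may differ:

```python
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

svm = SVC(kernel="rbf", C=0.8)  # lowered regularization parameter (1 -> 0.8)
pipeline = Pipeline([
    ("scale", MinMaxScaler()),                    # MinMax-normalize features to 0..1
    ("select", SequentialFeatureSelector(         # greedy forward feature addition
        svm, direction="forward", tol=0.001, n_features_to_select="auto")),
    ("clf", svm),
])
# Usage (with X_train, y_train as in the earlier sketch):
# pipeline.fit(X_train, y_train); pipeline.score(X_test, y_test)
```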
## 4 Conclusion So far, we have discussed two main contributions of our paper: 1. similar linguistic profiles are shared across GPT-3 models of varying sizes, and 2. an exploration of whether truthfulness can be detected using stylistic features of the model response. As exploratory work applying linguistic-feature analysis to the truthfulness detection of an LLM's responses, some of our experimental setups are limited. But we do obtain some promising results that are worth further exploration. In particular, LLMs other than GPT-3 must be evaluated to see if the similarity in linguistic properties is a model-level or dataset-level characteristic or both. ## 5 Limitation Our main limitation comes from dataset size. This was limited because we used human evaluation to label model responses as truthful or untruthful. That is, we have manually confirmed GPT-judge labels on Davinci responses, and extrapolated the system to Ada, Babbage, and Curie. Frankly, the limitations caused by the small size of the dataset were quite evident because the truthfulness detector was often biased towards producing one label (either 1 or 0). We attempted to solve this problem using lower regularization parameters, but this often produced models with lower performance. An ideal solution to this problem would be training the truthfulness detector on a large set of training instances, which is also our future direction.
2310.00770
On Fluxes in the $1^9$ Landau-Ginzburg Model
In this paper we present a large class of flux backgrounds and solve the shortest vector problem in type IIB string theory on an orientifold of the $1^9$ Landau-Ginzburg model.
Katrin Becker, Nathan Brady, Anindya Sengupta
2023-10-01T19:29:58Z
http://arxiv.org/abs/2310.00770v2
# On Fluxes in the \(1^{9}\) Landau-Ginzburg Model ###### Abstract In this paper we present a large class of flux backgrounds and solve the shortest vector problem in type IIB string theory on an orientifold of the \(1^{9}\) Landau-Ginzburg model. Footnote †: institutetext: Department of Physics, University of California, Berkeley, CA 94720-119, USA ## 1 Introduction One of the most important, and challenging, questions in string theory is the existence and stability of vacua that may describe semi-realistic physics in four dimensions. The choice of internal manifold in string theory compactifications dictates many aspects of the four dimensional physics. In the case of the heterotic string, it was shown in [3] that under the reasonable assumptions that * the vacuum be of the form \(\mathcal{M}_{4}\times M\) where \(\mathcal{M}_{4}\) is a maximally symmetric four dimensional spacetime manifold and \(M\) is a compact six-dimensional internal manifold, and * there be unbroken \(N=1\) supersymmetry in four dimensions, \(M\) is forced to be a Calabi-Yau three-fold, \({\cal M}_{4}\) is forced to be Minkowski, and the NS flux is not allowed to have a vacuum expectation value (vev). It also became immediately clear that there is no unique choice of such a vacuum configuration - the moduli fields describing deformations of the internal manifold could not be given a set of unique values. Soon after the discovery of D-branes [4], new supersymmetric vacua of type II string theories were found in [5], with non-zero vev for the RR fluxes. Fluxes turn out to be good for multiple purposes. Naively, a Calabi-Yau compactification of the kind described above preserves \(N=2\) supersymmetry in four dimensions. Incorporating fluxes provides a way [6] to partially break supersymmetry from \(N=2\) to \(N=1\). It also generates a classical superpotential [6; 7] for moduli, raising the possibility of stabilizing some (or all) of them at a stable minimum of the potential. It was claimed to be possible to stabilize all complex structure moduli of Calabi-Yau manifolds in flux compactifications of type IIB or F-theory [7; 8; 9; 10; 11]. However, it was conjectured recently in [12; 13] that, in models with a large number of complex structure moduli, the contribution of the flux to the D3-brane tadpole grows linearly with the number of stabilized moduli, a statement known as the tadpole conjecture. In such scenarios the price to pay for full moduli stabilization may be a violation of the tadpole cancellation condition. We will study in this paper some aspects of these compactification-related issues in type IIB string theory. Specifically, we will focus on a non-geometric compactification using an orientifold of the \(1^{9}\) Landau-Ginzburg (henceforth LG) model orbifolded by a \(\mathbb{Z}_{3}\) symmetry. The \(1^{9}\) LG model is a tensor product of nine \(N=2\) minimal models, each with level \(k_{i}=1\), making a total central charge of \(c=9\), i.e. \(\hat{c}=3\). It has world-sheet superpotential \[{\cal W}=\sum_{i=1}^{9}x_{i}^{3}. \tag{1}\] In geometric compactifications, there is at least one Kahler modulus - the overall size of the internal manifold. In general, therefore, one must be concerned with stabilizing both complex structure moduli and Kahler moduli. The fluxes generate a superpotential for the complex structure moduli, but the potential for the Kahler moduli is typically generated through non-perturbative effects. 
In order to avoid Kahler moduli altogether, and (try to) stabilize complex structure moduli by fluxes alone, we can look for compactifications with internal manifolds having \(h^{1,1}=0\). String theory provides such examples where the internal manifolds are mirror duals to rigid1 Calabi-Yau manifolds. Since mirror symmetry interchanges complex and Kahler structures, these manifolds do not have Kahler moduli, and cannot be given a geometric interpretation. Nevertheless, they have a field theory description in terms of LG models. In a nutshell, this is the motivation to study such non-geometric compactifications. This idea was first pursued in [1] where supersymmetric flux backgrounds were found in the \(1^{9}\) and \(2^{6}\) LG models, leading to four dimensional Minkowski and Anti-de-Sitter spacetimes. Fluxes are described in these models using a combination of techniques from the world-sheet theory and the effective 4D theory. It was also argued in [1] that the flux superpotential is given by the standard GVW [19] formula, \[W=\int_{M}G\wedge\Omega\, \tag{2}\] and that it receives no perturbative or non-perturbative correction thanks to a theorem concerning non-renormalization of the BPS tension of a D5-brane domain wall. It was then claimed in [1] that all complex structure moduli are stabilized via this flux-induced superpotential. A recent investigation of this claim in [2] revealed (also see [14]) that not all moduli fields get a mass in the solutions presented in [1]. This does not rule out the possibility that some of the massless moduli are stable. The dependence of \(W\) on moduli is given by (2) through how the holomorphic three-form \(\Omega\) depends on them. One can compute an order-by-order expansion of \(W\) (see Section 3, eqn. (20)) in the moduli deformation parameters, and some or all the massless moduli may be stabilized by terms at order higher than two. Thus, a systematic analysis of the supersymmetric vacua is necessary - computing the number of massive moduli in each, and also the number of massless moduli stabilized at higher order - to definitively understand the issue of moduli stabilization in these models. In the course of this exercise, the tadpole conjecture of [12] can also be tested explicitly for these non-geometric compactification models. With this broad goal in mind, we launch a systematic search for Minkowski solutions in the \(1^{9}/\mathbb{Z}_{3}\) model in this work. Another interesting aspect is the recent classification [15, 16] of compactifications of type IIA/B supergravities down to 4D Minkowski, de Sitter, and anti-de Sitter spacetimes where the internal space is a 6D group manifold. The authors of these papers classify previously known solutions based on the \(O_{p}/D_{p}\) sources present, and guided by this classification find new solutions in previously unexplored classes. Based on observation of a large number of solutions they propose some interesting conjectures, one of which is the _Massless Minkowski conjecture_ stating that all Minkowski solutions of this kind must have at least one massless scalar field. Even though we study Minkowski solutions in a non-geometric compactification of type IIB string theory, we find that all solutions found in this model so far have massless fields. We begin by providing in Section 2 the basic tools needed to compute all relevant quantities in the \(1^{9}/\mathbb{Z}_{3}\) model. 
Conditions for type IIB compactifications to 4D Minkowski \(N=1\) supersymmetric vacua are stated in the geometric setting, and then translated into the LG language. Then in Section 3 we present a large set of solutions satisfying these conditions. Using an exhaustive search algorithm described in Section 4, we find that there are no solutions in this model with flux tadpole \(\leq 7\). We also present in Section 3 a large set of 8-flux solutions, which have flux tadpole 8. For all the aforementioned solutions, we also present the rank of the Hessian of the superpotential, which equals the number of massive moduli. We do not presently analyze stabilization of massless fields at higher order, but show a convenient way of calculating derivatives of \(W\) that will enable a computer to compute these corrections quite fast. ## 2 Basics The conditions for type IIB string theory compactified to 4D with unbroken \(N=1\) supersymmetry in the presence of background flux have been described in the literature many times. We begin by stating these conditions, formulated for compactifications on a geometric space \(M\), say an orientifold of a Calabi-Yau three-fold. However, in this paper we are interested in backgrounds not described in terms of geometry but in terms of conformal field theory, in particular the LG model \(1^{9}/\mathbb{Z}_{3}\). The aforementioned conditions will then have to be translated into LG language, which we do in the subsections that follow. There is a flux-induced superpotential in compactifications of type IIB. It is given as usual by [19; 1; 20] \[W=\int_{M}G\wedge\Omega \tag{1}\] where \(\Omega\) is the holomorphic \((3,0)\)-form, \(G\) is the complex three-form flux obtained by combining the three-forms in the R-R and NS-NS sectors of type IIB string theory: \[G=H_{RR}-\tau H_{NS}, \tag{2}\] and \(\tau\) is the axio-dilaton: \[\tau=C_{0}+ie^{-\phi}. \tag{3}\] Unbroken supersymmetry demands that \[G=H_{RR}-\tau H_{NS}\in H^{(2,1)}(M)\oplus H^{(0,3)}(M). \tag{4}\] In this paper we will focus on Minkowski solutions for which the superpotential \(W\) vanishes, further constraining \(G\): \[G_{\rm Mink}\in H^{(2,1)}(M). \tag{5}\] Secondly, the tadpole cancellation condition requires \[\int_{M}H_{RR}\wedge H_{NS}+N_{D3}=Q_{3}(\text{O--plane}), \tag{6}\] where \(Q_{3}(\text{O--plane})\) is the D3-brane charge of the orientifold planes, and \(N_{D3}\) is the number of D3-branes in the geometry. Third, the fluxes have to obey the Dirac quantization conditions \[\int_{\Gamma}G=N-\tau M\,\ \ \text{where}\ \ N,M\in\mathbb{Z}\, \tag{7}\] for any three-cycle \(\Gamma\in H_{3}(M,\mathbb{Z})\). We will now write down analogues of conditions (5, 6, 7) in the LG language. Our aim is to be self-contained with regard to all necessary tools for computations. Detailed derivations can be found in [1; 2] and references therein. ### Cohomology The harmonic three-forms in the \(1^{9}\) LG model are labelled by nine integers, which we assemble into a vector \(\vec{\ell}=(\ell^{1},\ldots,\ell^{9})\), such that \[\Omega_{\vec{\ell}}\in H^{(p,q)}(M)\ \ \text{with}\ \ p+q=3,\qquad\ell^{i}=1,2, \qquad\sum_{i=1}^{9}\ell^{i}=0\ \ \text{mod}\ \ 3. \tag{8}\] These arise from tensoring RR sector ground states [22] in the building block minimal model, denoted \(|\ell\rangle\), \(\ell=1,2\). 
The harmonic three-forms are classified into the four types of \((p,q)\)-forms, \(p+q=3\), as follows: \[\begin{array}{|c|c|c|c|c|}\hline\sum_{i}\ell^{i}&9&12&15&18\\ \hline H^{(p,q)}&H^{(3,0)}&H^{(2,1)}&H^{(1,2)}&H^{(0,3)}\\ \hline\end{array} \tag{9}\] Therefore, condition (5) in the LG language becomes \[G\in\text{span}\ \{\Omega_{\vec{\ell}}\colon\ell^{i}\in\{1,2\}\ \text{and}\ \sum_{i}\ell^{i}=12\}\, \tag{10}\] which means that the vectors \(\vec{\ell}\) are composed of exactly three 2's and six 1's. We will consider the orientifold that combines worldsheet parity with the operator denoted by \(g_{1}\) in [1]: \[g_{1}\ :\ (x_{1},x_{2},x_{3},\ldots,x_{9})\mapsto-(x_{2},x_{1},x_{3},\ldots,x_{9}). \tag{11}\] What this means for the flux \(G=\sum_{\vec{\ell}}B_{\vec{\ell}}\Omega_{\vec{\ell}}\) is that it should be symmetric upon interchanging the first two entries of all \(\vec{\ell}\) labels. This constrains \(\Omega_{(1,2,\ldots)}\) and \(\Omega_{(2,1,\ldots)}\) to either be turned on with equal relative strength or be simultaneously turned off2. For ease of reference, we will say that these are _fluxes in the orientifold directions_. The fluxes of the kinds \(\Omega_{(1,1,\ldots)}\) and \(\Omega_{(2,2,\ldots)}\) are then referred to as _fluxes in the non-orientifold directions_. This orientifolding makes the span in (10) have 63 independent fluxes. To save ink while describing solutions in section 3, we index the labels as specified in Appendix A. For example, Footnote 2: The entries in the “\(\ldots\)” of the two \(\Omega\)’s in this sentence are identical of course. \[\Omega_{(1,1,1,1,1,1,1,2,2)}=\Omega_{1}. \tag{12}\] This notation is particularly useful for orientifold directions. For example, \[\Omega_{(1,2,1,1,1,1,1,2,2)}+\Omega_{(2,1,1,1,1,1,1,2,2)}=\Omega_{36}. \tag{13}\] ### Tadpole cancellation The Bianchi identity for the RR 5-form is \[dF_{5}=H_{RR}\wedge H_{NS}+\rho, \tag{14}\] and in a space-time described by geometry it can be integrated over the internal space \(M\) to give the tadpole cancellation condition (6), which we restate: \[\int_{M}H_{RR}\wedge H_{NS}+N_{D3}=Q_{3}(\text{O--plane}), \tag{15}\] The topological nature of this condition allows us to formulate its analogue in the LG language by considering models that can be connected with some geometry by continuously varying moduli. For the orientifold we are considering, one gets [1] \[Q_{3}(\text{O--plane})=12\, \tag{16}\] and the tadpole cancellation condition takes the form \[\int_{M}H_{RR}\wedge H_{NS}=\frac{1}{\tau-\bar{\tau}}\int_{M}G\wedge\bar{G}=12-N _{D3}. \tag{17}\] Here, \(\bar{G}\) is obtained from \(G=\sum_{\vec{\ell}}B_{\vec{\ell}}\,\Omega_{\vec{\ell}}\) by3 Footnote 3: Notation: \(\vec{1}=(1,1,1,1,1,1,1,1,1)\), \(\vec{2}=(2,2,2,2,2,2,2,2,2)\), \(\vec{3}=(3,3,3,3,3,3,3,3,3)\) _etc._ \[\bar{G}=\sum_{\vec{\ell}}B_{\vec{\ell}}^{*}\;\Omega_{\vec{3}-\vec{\ell}} \tag{18}\] The left hand side of eqn. (17) is the contribution of the flux to the tadpole, \[N_{\text{flux}}:=\frac{1}{\tau-\bar{\tau}}\int_{M}G\wedge\bar{G}\, \tag{19}\] and is seen to be bounded above by 12 for physical solutions. It (and the superpotential \(W\) in eqn. (1)) can be computed using the Riemann bilinear identity. We will show some of these computations explicitly after introducing a basis of three-cycles in the LG language. ### Homology and flux quantization The \(1^{9}\) LG model is a tensor product of nine copies of a minimal model with worldsheet superpotential \({\cal W}=x^{3}\). 
The A-type D-branes in this building block minimal model are described in the \({\cal W}\)-plane by the positive real axis, \[\text{Im }{\cal W}=0\, \tag{20}\] or, equivalently, in the \(x\)-plane as the contours \(V_{0},V_{1},\text{ and }V_{2}\) that look like the edges of three "pieces of cake" (Figure 1). Clearly, they satisfy \[V_{0}+V_{1}+V_{2}=0. \tag{21}\] A set of integral three-cycles for the \(1^{9}/\mathbb{Z}_{3}\) model is built (see [1]) by tensoring nine \(V_{n}\)'s, and then \(\mathbb{Z}_{3}\)-completing them. Explicitly, these branes are \[\Gamma_{\vec{n}}=\frac{1}{\sqrt{3}}\left(V_{\vec{n}}+V_{\vec{n}+ \vec{1}}+V_{\vec{n}+\vec{2}}\right)=\frac{1}{\sqrt{3}}\left(\otimes_{i}V_{n_{i }}+\otimes_{i}V_{n_{i}+1}+\otimes_{i}V_{n_{i}+2}\right)\, \tag{22}\] \[\vec{n}=(n_{1},\ldots,n_{9}),\qquad n_{i}=0,1,2.\] \(\mathbb{Z}_{3}\) acts on \(\otimes_{i}V_{n_{i}}\) as a tensor product on each of the factors. On a factor \(V_{n}\), it acts as \(V_{n}\to V_{(n+1)\text{ mod }3}\). The set of cycles \(\{\Gamma_{\vec{n}}\}\) defined by (22) is linearly dependent. It turns out that one can constrain \(n_{i}\) to \(n_{i}=0,1\), and further restrict the \(\vec{n}\)'s to be the binary representations4 of the first 170 non-negative integers to obtain an integral basis of three-cycles in the \(1^{9}/\mathbb{Z}_{3}\) orbifold. Integrals of the fluxes through the three-cycles (see [1] for justification) are prescribed, with a normalization chosen for convenience, as follows. The pairing in the building block minimal model between the cycles \(V_{n}\) and the RR sector ground states \(|\ell\rangle\), \(\ell=1,2\), is given by Footnote 4: written with nine binary digits, padding with zeroes on the left when necessary. \[\langle V_{n}|\ell\rangle=\frac{1}{\left(\frac{-1+\omega^{\ell}}{3}\right)\Gamma \big{(}\frac{\ell}{3}\big{)}}\int_{V_{n}}x^{\ell-1}e^{-x^{3}}dx=\omega^{n\ell} \, \tag{23}\] where \(\omega=e^{\frac{2\pi i}{3}}\) is a cube root of unity. We are making the correspondence \[|\ell\rangle\leftrightarrow\frac{1}{\left(\frac{-1+\omega^{\ell}}{3}\right) \Gamma\big{(}\frac{\ell}{3}\big{)}}\ x^{\ell-1}. \tag{24}\] In the tensor product, this translates to \[\Omega_{\vec{\ell}}\leftrightarrow|\vec{\ell}\ \rangle\leftrightarrow\prod_{i=1}^ {9}\frac{1}{\left(\frac{-1+\omega^{\ell^{i}}}{3}\right)\Gamma\Big{(}\frac{ \ell^{i}}{3}\Big{)}}\ x_{i}^{\ell^{i}-1}\, \tag{25}\] and \[\int_{V_{\vec{n}}}\Omega_{\vec{\ell}}=\prod_{i=1}^{9}\int_{V_{n_{i}}}\frac {x_{i}^{\ell^{i}-1}}{\left(\frac{-1+\omega^{\ell^{i}}}{3}\right)\Gamma\left( \frac{\ell^{i}}{3}\right)}\ e^{-x_{i}^{3}}\ dx_{i}=\omega^{\vec{n}\cdot\vec{ \ell}}. \tag{26}\] We are now ready to impose the flux quantization condition on the basis of three-cycles \(\{\Gamma_{\vec{n}}\}\), namely \[\int_{\Gamma_{\vec{n}}}G=N_{\vec{n}}-\tau M_{\vec{n}}\, \tag{27}\] where \(N\) and \(M\) are integers. This ensures flux quantization for any \(\Gamma\in H_{3}(M,\mathbb{Z})\). The result \[\int_{\Gamma_{\vec{n}}}\Omega_{\vec{\ell}}=\sqrt{3}\ \omega^{\vec{n}\cdot \vec{\ell}} \tag{28}\] can be obtained by explicit computation and is very useful. Figure 1: The “pieces of cake”: A-type D-branes in the LG model \(x^{3}\). 
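Equation (28) follows from (22) and (26) because \((\vec{n}+k)\cdot\vec{\ell}=\vec{n}\cdot\vec{\ell}+k\sum_{i}\ell^{i}\) and \(\sum_{i}\ell^{i}\equiv 0\ \text{mod}\ 3\). A minimal numerical check of this step (a sketch, not code from [1]):

```python
import numpy as np

omega = np.exp(2j * np.pi / 3)

def period_integral(n, l):
    """Integral of Omega_l over Gamma_n from eqs. (22), (26): (1/sqrt(3)) * sum_k omega**((n+k).l)."""
    n, l = np.asarray(n), np.asarray(l)
    return sum(omega ** np.dot(n + k, l) for k in range(3)) / np.sqrt(3)

n = np.array([0, 1, 1, 0, 0, 1, 0, 1, 0])  # a cycle label with n_i in {0, 1}
l = np.array([1, 1, 1, 1, 1, 1, 2, 2, 2])  # sum(l) = 12, i.e. 0 mod 3
lhs = period_integral(n, l)
rhs = np.sqrt(3) * omega ** np.dot(n, l)
assert np.isclose(lhs, rhs)                # reproduces eq. (28)
```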
For the building block minimal model, let us define the cycles \[W_{0} =V_{0}+\omega V_{1}+\omega^{2}V_{2} \tag{29a}\] \[W_{1} =V_{0}+\omega^{2}V_{1}+\omega V_{2}. \tag{29b}\] Their intersections are \[W_{0}\cap W_{0}=0=W_{1}\cap W_{1},\qquad W_{1}\cap W_{0}=3(1-\omega),\qquad W_{0}\cap W_{1}=3(1-\omega^{2}). \tag{30}\] They have the following nice property: \[\int_{W_{0}}x^{n}e^{-x^{3}}dx =\delta_{(n\ \mathrm{mod}\ 3),1}\ (-1+\omega^{2})\Gamma(\tfrac{n+1}{3}) \tag{31a}\] \[\int_{W_{1}}x^{n}e^{-x^{3}}dx =\delta_{(n\ \mathrm{mod}\ 3),0}\ (-1+\omega)\Gamma(\tfrac{n+1}{3})\, \tag{31b}\] resulting in the fact that each three-form flux \(\Omega_{\vec{\ell}}\,\) integrates to zero on all but one three-cycle obtained by tensoring nine \(W_{n}\)'s. Explicitly, let us denote by \(C_{\vec{\ell}}\,\) the cycles: \[C_{\vec{\ell}}\,\text{:=}\,W_{\vec{2}-\vec{\ell}}\,\text{:=}\,\otimes_{i=1}^{9}W_{2-\ell^{i}}. \tag{32}\] The cycles \(C^{*}_{\vec{\ell}}\) are given by \[C^{*}_{\vec{\ell}}=W_{\vec{\ell}-\vec{1}}\,\text{=}\,\otimes_{i=1}^{9}W_{\ell^{i}-1}. \tag{33}\] For demonstration, \(\vec{\ell}_{1}=(1,1,1,1,1,1,2,2,2)\Rightarrow C_{\vec{\ell}_{1}}=W_{(1,1,1,1,1,1,0,0,0)}=W_{1}^{\otimes 6}\otimes W_{0}^{\otimes 3}\), and \(C^{*}_{\vec{\ell}_{1}}=W_{(0,0,0,0,0,0,1,1,1)}=W_{0}^{\otimes 6}\otimes W_{1}^{\otimes 3}\). We then have \[\int_{C_{\vec{\rho}}}\Omega_{\vec{\ell}}\,\text{=}\,3^{9}\ \delta_{\vec{\rho},\vec{\ell}} \tag{34}\] For each \(\vec{\ell}\) such that \(\Omega_{\vec{\ell}}\,\text{\in}\,H^{(2,1)}(M)\), we have \[C_{\vec{\ell}}\,\text{\cap}\,C^{*}_{\vec{\ell}}=3^{9}(1-\omega^{2})^{3}(1-\omega)^{6}=-i\ 3^{13}\ \sqrt{3}. \tag{35}\] Computing the integrals to evaluate the superpotential (1), or the flux tadpole (19) is much simpler if one employs the Riemann bilinear identity with a basis made of the \(C\) cycles. For instance, \[W=\int_{M}G\wedge\Omega=\sum_{C}\frac{1}{C\cap C^{*}}\int_{C}G\ \int_{C^{*}}\Omega \tag{36}\] and, for each summand \(B_{\vec{\ell}}\,\Omega_{\vec{\ell}}\,\) in \(G=\sum_{\vec{\ell}}B_{\vec{\ell}}\,\Omega_{\vec{\ell}}\,\), only one cycle, namely \(C_{\vec{\ell}}\,\), contributes a non-zero value in the first integral on the right hand side of (36). ## 3 A large class of solutions In this section we will present a large class of backgrounds and describe their properties. We will categorize solutions in terms of the number of \(\Omega\)'s turned on, for the following reason. It turns out that each non-zero component contributes at least 1 to the tadpole, implying that a lower bound for the flux tadpole5 of a flux background with \(n\) independent \(\Omega_{\vec{\ell}}\) components turned on is \(n\). Since one of the search criteria for flux backgrounds is the value of the flux tadpole, it makes sense to organize solutions in terms of its lower bound. For the cases when 1, 2, 3, or 4 components are turned on, we find that this lower bound is not saturated. We present for these cases the saturated lower bound of the flux tadpole, and all flux backgrounds that attain it. Footnote 5: The orientifold we will consider has a tadpole value of 12. Thus, the flux contribution to the tadpole can be maximally 12. Therefore we need only turn on up to 12 fluxes. As mentioned in (10), the 63 independent harmonic \((2,1)\)-form fluxes are labeled by vectors \(\vec{\ell}\) composed of three 2's and six 1's. 
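This counting is easy to make explicit. The short enumeration below (a Python sketch of our own, added purely for illustration; it is not part of the analysis of [1, 2]) reproduces the count: of the \(\binom{9}{3}=84\) labels with three 2's and six 1's, those symmetric in the first two entries survive the orientifold projection on their own, while the \((1,2,\ldots)\) labels pair up with their \((2,1,\ldots)\) images, giving 63 independent fluxes.

```python
from itertools import combinations

# Labels of the harmonic (2,1)-forms: nine entries, three 2's and six 1's.
labels = [tuple(2 if i in pos else 1 for i in range(9))
          for pos in combinations(range(9), 3)]

# Orientifold projection: labels symmetric in the first two entries count
# once; (1,2,...) labels are identified with their (2,1,...) partners.
symmetric = [l for l in labels if l[0] == l[1]]
paired = [l for l in labels if l[:2] == (1, 2)]

print(len(labels))                   # 84 labels in total
print(len(symmetric), len(paired))   # 42 and 21
print(len(symmetric) + len(paired))  # 63 independent fluxes
```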
For convenience, we index them in this section (also see Appendix A) as follows: \(I=(\alpha,A)\), with \(\alpha\in\{1,\ldots,35\}\cup\{57,\ldots,63\}\) labeling \(\vec{\ell}\)'s whose first two entries are identical, and \(A\in\{36,\ldots,56\}\) labeling the ones of the form \((1,2,\ldots)\). We do not introduce an index for the \(\vec{\ell}\)'s of the form \((2,1,\ldots)\) since, as a result of orientifolding, turning on the flux \(\Omega_{(1,2,\ldots)}\) would automatically turn on the flux \(\Omega_{(2,1,\ldots)}\) with the same relative strength, where the distributions of 1's and 2's in the two sets of "\(\ldots\)" above are identical. The generic flux background is a linear combination \[G=\sum_{I=1}^{63}B_{I}\Omega_{I} \tag{11}\] where the \(G\)-flux is as in (2). Here we have further simplified notation: \(\Omega_{\vec{\ell}_{I}}=\Omega_{I}\). The coefficients \(B_{I}\) are complex, so 126 real numbers label each flux configuration. How shall we proceed? We will be interested in solutions with \(\tau=\omega\). First, the flux quantization \[\int_{\Gamma_{\vec{n}}}G=N_{\vec{n}}-\omega M_{\vec{n}} \tag{12}\] holds for any cycle in the basis \(\{\Gamma_{\vec{n}}\}\) of 170 cycles. There are 170 \(N\)'s and 170 \(M\)'s, _i.e._ in total 340 flux quantum numbers, which together with the real and imaginary parts of \(B_{I}\) make a total of 466 real parameters. These parameters satisfy a total of \(170\times 2=340\) conditions which are the real and imaginary parts of eqn. (12). We will then view 126 of the \(N\)'s and \(M\)'s as "independent flux numbers" and label them by \(y_{i}\), \(i=1,\ldots,126\), and solve for \(B_{I}\) in terms of the \(y_{i}\). Collecting all real and imaginary parts of \(B_{I}\) in a 126-dimensional real vector \((b_{i})=(\mathrm{Re}\,B_{1},\mathrm{Im}\,B_{1},\ldots,\mathrm{Re}\,B_{63},\mathrm{Im}\,B_{63})\), this relationship reads \(b_{i}=C_{ij}y_{j}\). The details of the matrix \(C\) are not important in this section, but we bear in mind that flux quantization has been imposed in this way. ### Flux tadpole and massive moduli The two main properties of the solutions we will focus on are the flux tadpole \(N_{\text{flux}}\) (defined in (19)), and the number of massive moduli fields. #### 3.1.1 Flux tadpole The tadpole cancellation condition (17), when \(\tau\) is taken to be equal to \(\omega\), becomes6 Footnote 6: obtained by computing \(N_{\rm flux}\) (19) using the Riemann bilinear identity. \[N_{\rm flux}=81\sum_{I}|B_{I}|^{2}=\sum_{i,j}{\cal Q}_{ij}y_{i}y_{j}=12-N_{D3}, \tag{19}\] where \({\cal Q}\) is the symmetrized coefficient matrix of the homogeneous quadratic polynomial of \(\{y_{i}:i=1(1)126\}\) obtained by substituting \(b_{i}=C_{ij}y_{j}\) on the left hand side of (19). Therefore, we should look for flux backgrounds with \(N_{\rm flux}\leq 12\). By employing an exhaustive search algorithm, we verified that, in the orientifold of \(1^{9}/\mathbb{Z}_{3}\) studied in this paper, \[N_{\rm flux}\geq 8. \tag{20}\] Details of this result and the algorithm can be found in section 4. Thus, physical solutions in this model obey \[8\leq N_{\rm flux}\leq 12. \tag{21}\] One finds a large set of solutions in [1, 2], some within this bound and some outside. We extend those results in this section in the following way. We first categorize solutions with respect to the number of \(\Omega_{\vec{\ell}}\)'s turned on, find what the lowest value of \(N_{\rm flux}\) can be for each category, and present all solutions attaining this greatest lower bound. 
We do this for up to 4-\(\Omega\) solutions in subsection 3.2. #### 3.1.2 Rank of the mass matrix Given a flux vacuum, an immediate question is whether this sits at a point in moduli space where all moduli are stabilized. If all scalar fields corresponding to deformations of the moduli around this point are massive, then no continuous deformation exists with zero energy cost, implying full moduli stabilization. However, all scalar fields being massive isn't a necessary condition. It is possible to have massless fields that are stabilized through interactions at higher order in deformation parameters. Here we focus on how many scalar fields are massive (and hence are stabilized at order two), and postpone the analysis of higher order deformations to future work. The mass matrix of scalar fields in Minkowski solutions is given by a combination of the Hessian of the superpotential, and the inverse of the Kahler metric. It was shown in [2] that, even though corrections to the Kahler potential are not under control, the rank of the physical mass matrix is the same as the rank of the Hessian of the superpotential \(W\). Since the rank of the mass matrix is equal to the number of massive fields, and our goal is to count how many moduli are massive in a flux background, we will focus attention on computing the Hessian of \(W\). Formulas for calculating the matrix elements of \(\partial\partial W\) are given in [2] where the authors employ the Riemann bilinear identity using the basis \(\{\Gamma_{\vec{n}}\}\) of cycles. We observe that using the homogeneous basis yields relatively simpler formulas, and significantly speeds up computations on a computer. This is especially useful for us since we analyze a large set of solutions. The flux superpotential is given as usual by (1): \[W=\int_{M}G\wedge\Omega \tag{10}\] in which the dependence of \(W\) on all moduli comes from the holomorphic three-form \(\Omega\), not to be confused with \(\Omega_{\vec{\ell}}\). We use (36), which we quote again for convenience: \[W=\int_{M}G\wedge\Omega=\sum_{C}\frac{1}{C\cap C^{*}}\int_{C}G\int_{C^{*}}\Omega\, \tag{11}\] with the cycles chosen from the homogeneous basis. The second integral on the right hand side of eqn. (11) encodes the full functional dependence of the superpotential on deformations8 of the moduli via the worldsheet superpotential Footnote 8: We parameterize the deformations by local coordinates \(\{t^{\vec{\ell}},\vec{\ell}\in\{\vec{\ell}_{I}\}\}\). For convenience, let us write \(t^{I}:=t^{\vec{\ell}_{I}}\). \[\mathcal{W}(t^{I})=\sum_{i=1}^{9}x_{i}^{3}-\sum_{I}t^{I}\vec{x}^{\,\vec{\ell}_{I}-\vec{1}}. \tag{12}\] For a generic flux background as in (10), the superpotential evaluates to \[\left(-i\ 3^{4}\sqrt{3}\right)W=\sum_{\alpha}B_{\alpha}\int_{C_{\alpha}^{*}}\Omega+\sum_{A}B_{A}\left[\int_{C_{A}^{*}}\Omega+\int_{C_{A}^{*^{\prime}}}\Omega\right]. \tag{13}\] Here, we note that a flux \(\Omega_{A}\) corresponding to the index \(A\) is of the form \(\Omega_{(12\ldots)}+\Omega_{(21\ldots)}\), which yields non-zero integrals on two distinct \(C\)-cycles instead of one - the first summand is non-zero when integrated over \(C_{A}\) as defined in (32), while the second summand gives a non-zero integral over a \(C\)-cycle obtained from \(C_{A}\) by interchanging its first two \(W\)-factors. It is this cycle which has been labeled temporarily as \(C_{A}^{\prime}\) in (13). Now it remains to evaluate the integrals over \(C^{*}\)'s. 
We have, for an arbitrary cycle \(\Gamma\), \[\int_{\Gamma}\Omega=\int_{\Gamma}d^{9}x\,\exp\left[-\sum_{i=1}^{9}x_{i}^{3}+\sum_{\alpha}t^{\alpha}\left(\prod_{i=1}^{9}x_{i}^{\ell_{\alpha}^{i}-1}\right)+\sum_{A}t^{A}(x_{1}+x_{2})\left(\prod_{i=3}^{9}x_{i}^{\ell_{A}^{i}-1}\right)\right] \tag{14}\] To compute Kahler covariant derivatives, we need the Kahler potential \(K\). However, for Minkowski solutions, the following second Kahler covariant derivatives evaluated at the vacua are equal to the corresponding partial derivatives: \(D_{t^{I}}D_{t^{J}}W|_{t=0}=\partial_{t^{I}}\partial_{t^{J}}W|_{t=0}\), \(D_{\tau}D_{t^{I}}W|_{t=0}=\partial_{\tau}\partial_{t^{I}}W|_{t=0}\), where \(D_{y}W=\partial_{y}W+(\partial_{y}K)W\), \(y=(t^{I},\tau)\), and \(D_{\tau}D_{\tau}W|_{t=0}=0\). Combining all the ingredients provided above, it is straightforward to compute these second derivatives \(\left[\partial_{t^{I}}\partial_{t^{J}}W\right]\rvert_{t=0}\). We simply quote the results below: \[k\frac{\partial^{2}}{\partial t^{\alpha}\partial t^{\beta}}W|_{t=0}=\sum_{\tilde{\alpha}}B_{\tilde{\alpha}}\prod_{i=1}^{9}\delta(\ell^{i}_{\tilde{\alpha}}+\ell^{i}_{\alpha}+\ell^{i}_{\beta},4) \tag{3.11a}\] \[k\frac{\partial^{2}}{\partial t^{A}\partial t^{\beta}}W|_{t=0}=\sum_{\tilde{A}}B_{\tilde{A}}\cdot 2\,\delta(\ell^{1}_{\beta},1)\delta(\ell^{2}_{\beta},1)\cdot\prod_{i=3}^{9}\delta(\ell^{i}_{\tilde{A}}+\ell^{i}_{A}+\ell^{i}_{\beta},4)\] (3.11b) \[k\frac{\partial^{2}}{\partial t^{A}\partial t^{B}}W|_{t=0}=\sum_{\tilde{\alpha}}B_{\tilde{\alpha}}\cdot 2\,\delta(\ell^{1}_{\tilde{\alpha}},1)\delta(\ell^{2}_{\tilde{\alpha}},1)\cdot\prod_{i=3}^{9}\delta(\ell^{i}_{\tilde{\alpha}}+\ell^{i}_{A}+\ell^{i}_{B},4) \tag{3.11c}\] where \[k=\frac{1}{\left[\Gamma(\frac{1}{3})\right]^{3}\left[\Gamma(\frac{2}{3})\right]^{6}}. \tag{3.12}\] Furthermore, the second derivatives of \(W\) involving one or two derivatives with respect to the axio-dilaton are: \[k^{\prime}\frac{\partial^{2}}{\partial\tau^{2}}W|_{t=0}=0 \tag{3.13a}\] \[k^{\prime}\frac{\partial^{2}}{\partial\tau\partial t^{\alpha}}W|_{t=0}=-\frac{1}{\tau-\bar{\tau}}\int_{M}\bar{G}\wedge\partial_{t^{\alpha}}\Omega=\frac{i}{\sqrt{3}}B^{*}_{\alpha}\] (3.13b) \[k^{\prime}\frac{\partial^{2}}{\partial\tau\partial t^{A}}W|_{t=0}=-\frac{1}{\tau-\bar{\tau}}\int_{M}\bar{G}\wedge\partial_{t^{A}}\Omega=\frac{2i}{\sqrt{3}}B^{*}_{A}\, \tag{3.13c}\] where \[k^{\prime}=\frac{1}{\left[\Gamma(\frac{1}{3})\right]^{6}\left[\Gamma(\frac{2}{3})\right]^{3}}. \tag{3.14}\] This gives all matrix elements of the Hessian of the superpotential. Similar formulas can be derived for higher order derivatives to analyze stabilization of massless moduli at higher order. ### Solutions in terms of the number of \(\Omega\)'s #### 3.2.1 1,2,3-\(\Omega\) solutions As a warm-up, let us discuss the simplest solutions, namely those in which only one, two, or three \(\Omega\) components appear. These will not satisfy the tadpole cancellation condition. In what follows, we will sometimes refer to the flux tadpole \(N_{\rm flux}\) as the tadpole for brevity. 1-\(\Omega\) solutions: First we consider the case where only one component in the non-orientifold direction is turned on, _i.e._ \[G=A\Omega_{\alpha}. \tag{3.15}\] There is an \(S_{7}\) symmetry which acts by interchanging the last 7 factors in the tensor product LG model. There is no \(S_{9}\) symmetry since the first two factors are singled out by the action of the orientifold. Using this \(S_{7}\) symmetry, we can take \(\alpha=1\) or \(\alpha=57\). 
The quantization condition in the first case becomes \[\int_{\Gamma_{\vec{n}}}G=A\int_{\Gamma_{\vec{n}}}\Omega_{\vec{\ell}_{1}}=A\omega^{\vec{n}\cdot\vec{\ell}_{1}}\sqrt{3}=N_{\vec{n}}-\omega M_{\vec{n}}. \tag{3.16}\] For this to hold for all \(\Gamma_{\vec{n}}\) in the integral basis, \(A\) must be an integer multiple of \(\frac{1}{\sqrt{3}}\). The same argument applies to \(\alpha=57\). We find that the flux configuration that is properly quantized and attains the minimum value of the tadpole is \[G=\frac{1}{\sqrt{3}}\Omega_{\alpha}\,\qquad\alpha=1\quad\text{or}\quad\alpha=57, \tag{3.17}\] and the minimal tadpole is 27. The quantization condition requires \[\omega^{\vec{n}\cdot\vec{\ell}_{\alpha}}=N_{\vec{n}}-\omega M_{\vec{n}}. \tag{3.18}\] Taking into account \(1+\omega+\omega^{2}=0\), it is not difficult to see that it is always possible to choose flux numbers such that the above equation is satisfied for any \(\vec{n}\). There are 16 massive scalars if \(\alpha=1\), and 22 massive scalars if \(\alpha=57\). Because of the \(S_{7}\) symmetry, any solution with \(\vec{\ell}=\vec{\ell}_{i}\), \(i=1,\ldots,35\) has tadpole 27 and leads to 16 massive scalars, and any solution with \(\vec{\ell}=\vec{\ell}_{i}\), \(i=57,\ldots,63\) has tadpole 27 and leads to 22 massive scalars. In case a flux in an orientifold direction is involved, we find that the minimal tadpole value is attained by \[G=\frac{1}{\sqrt{3}}\Omega_{36}\, \tag{3.19}\] where again the normalization is required by flux quantization. The minimal tadpole is twice that of the non-orientifold directions, namely 54, and there are 22 massive scalars. The \(S_{7}\) symmetry then implies that the same results hold for any flux \(\Omega_{A}\), with \(A=36,\ldots,56\). 2-\(\Omega\) solutions: The smallest tadpole in this case is 18. The flux allowing this tadpole is of the form \[G=\frac{i}{3}\left(\Omega_{1}-\Omega_{\alpha}\right), \tag{3.20}\] with \(\alpha=2,\ldots,35,57,\ldots,63\). The number of massive fields again depends on \(\alpha\). For \(\alpha=2,\ldots,35\), the number of massive fields can be 16, 24 or 26, while if \(\alpha=57,\ldots,63\), it can be 28 or 32. In this case, we have used the \(S_{7}\) symmetry in taking the first term to be \(\Omega_{1}\). Then there is the case in which we can take the first entry to be \(\Omega_{57}\): \[G=\frac{i}{3}\left(\Omega_{57}-\Omega_{\alpha}\right), \tag{3.21}\] and without loss of generality9 we can take \(\alpha=58,\ldots,63\). The number of massive fields is 22 for all \(\alpha\) in this range. As in the 1-\(\Omega\) case, the smallest tadpole is only achievable using non-orientifold directions. Footnote 9: The choices \(\alpha=1,\ldots,35\) are covered in (3.20) We also note that any 2-\(\Omega\) solution of the form \[G=\frac{i}{3}\left(\Omega_{\alpha_{1}}-\Omega_{\alpha_{2}}\right),\quad\alpha_{1},\alpha_{2}\in\{1,\ldots,35\}\cup\{57,\ldots,63\}\, \tag{3.22}\] is part of a more general set of solutions given by \[G=\pm\frac{i}{3}\omega^{p}\left(\Omega_{\alpha_{1}}-\omega^{q}\Omega_{\alpha_{2}}\right), \tag{3.23}\] where \(p,q=0,1,2\) and the overall sign of \(G\) and values of \(p,q\) can be chosen independently for a total of 18 solutions for each choice of \(\{\alpha_{1},\alpha_{2}\}\). It is easy to see that if the flux (3.22) is properly quantized so is (3.23). Obviously this family of solutions has tadpole 18. The reason eq. 
(3.23) is properly quantized is the elementary fact that there always exist integers \(N\) and \(M\) for which \[\frac{i}{\sqrt{3}}\left(\omega^{a}-\omega^{b}\right)=N-\omega M, \tag{3.24}\] given any \(a,b\in\mathbb{Z}\). 3-\(\Omega\) solutions: The smallest tadpole for a flux involving three \(\Omega\)'s is 27 and it is engendered by fluxes of the form \[G=\frac{i}{3}\left(\Omega_{1}+\Omega_{\alpha}+\Omega_{\beta}\right), \tag{3.25}\] where \(\alpha,\beta\) can take any values \(\alpha,\beta=2,\ldots,35,57,\ldots,63\) or \[G=\frac{i}{3}\left(\Omega_{57}+\Omega_{\alpha}+\Omega_{\beta}\right), \tag{3.26}\] with \(\alpha,\beta=58,\ldots,63\). The number of massive fields does depend on \(\alpha,\beta\). If \(\alpha,\beta=2,\ldots,35\), the number of massive fields takes one of the values in the set \(\{16,20,24,28,22,34,29,32,30,38,42,36,40,46\}\). Again, also in this case there is a related set of properly quantized fluxes given by \[G=\pm\frac{i}{3}\omega^{p}\left(\Omega_{1,57}+\omega^{q}\Omega_{\alpha}+\omega^{r}\Omega_{\beta}\right), \tag{3.27}\] for \(p,q,r\in\mathbb{Z}\). Evidently all of these solutions have tadpole 27. Also in this case quantization is due to an elementary but not immediately obvious fact. Namely, there always exist integers \(N\) and \(M\) such that \[\frac{i}{\sqrt{3}}\left(\omega^{a}+\omega^{b}+\omega^{c}\right)=N-\omega M, \tag{3.28}\] for any \(a,b,c\in\mathbb{Z}\). #### 3.2.2 4-\(\Omega\) solutions This is the first case in which the physical tadpole of 12 can be achieved, with \[G=\frac{1}{3\sqrt{3}}(-\Omega_{1}+\Omega_{\alpha}+\Omega_{\beta}-\Omega_{\gamma})\, \tag{3.29}\] where the values for \((\alpha,\beta,\gamma)\) can be found in the table below. We note that also in this case there is a related family of fluxes with the same tadpole, _i.e._ tadpole 12, explicitly given by \[G=\pm\omega^{r}\frac{1}{3\sqrt{3}}(-\Omega_{1}+\omega^{p}\Omega_{\alpha}+\omega^{q-p}\Omega_{\beta}-\omega^{q}\Omega_{\gamma}), \tag{3.30}\] where \(r,p,q\) are integers. It is easy to verify that if (3.29) is properly quantized so is eqn. (3.30). To do this it is useful to take (3.24) and (3.28) into account. Consequently for each flux in (3.29) there are 54 fluxes given by including different phases and overall signs. The total number of 4-\(\Omega\) fluxes with tadpole 12 is therefore \(84\times 54=4536\). These background fluxes stabilize 16, 22 or 26 moduli fields. That these 4-\(\Omega\) solutions are properly quantized can also be understood from the following simple fact. Given any 4 integers \(a,b,c,d\in\mathbb{Z}\), there exist integers \(N,M\in\mathbb{Z}\) such that \[\frac{1}{3}(-\omega^{a}+\omega^{b}+\omega^{c}-\omega^{d})=N-\omega M, \tag{3.31}\] if and only if \[(-a-d+b+c)\;\;\text{mod}\;3=0. \tag{3.32}\] In particular, applied to the 4-\(\Omega\) fluxes10 this means that any combination Footnote 10: Note: we return to our previous index notation, \(\vec{\ell}=(\ell^{1},\ldots,\ell^{9})\), introduced in section 2. \[G=\frac{1}{3\sqrt{3}}(-\Omega_{\vec{\ell}_{a}}+\Omega_{\vec{\ell}_{b}}+\Omega_{\vec{\ell}_{c}}-\Omega_{\vec{\ell}_{d}}), \tag{3.33}\] will be properly quantized as long as \[\vec{n}\cdot(-\vec{\ell}_{a}+\vec{\ell}_{b}+\vec{\ell}_{c}-\vec{\ell}_{d})\;\;\text{mod}\;3=0,\qquad\forall\vec{n}. \tag{3.34}\] A 9-component vector \(\vec{w}\) satisfying the condition \(\vec{n}\cdot\vec{w}\ \text{mod}\ 3=0\) for all \(\vec{n}\) is a vector \(\vec{w}\) whose entries are multiples of 3. 
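The equivalence of (3.31) and (3.32) can also be confirmed numerically over all residues mod 3. Below is a minimal sketch of such a check (a Python illustration of our own; the helper name is hypothetical), which decomposes \(z=N-\omega M\) via \(M=-\operatorname{Im}z/\operatorname{Im}\omega\) and \(N=\operatorname{Re}z+M\operatorname{Re}\omega\):

```python
import cmath
from itertools import product

w = cmath.exp(2j * cmath.pi / 3)  # omega, a primitive cube root of unity

def in_lattice(z, tol=1e-9):
    """Check whether z lies in Z + omega*Z, i.e. z = N - omega*M."""
    M = -z.imag / w.imag          # from Im(z) = -M * Im(omega)
    N = z.real + M * w.real       # from Re(z) = N - M * Re(omega)
    return abs(M - round(M)) < tol and abs(N - round(N)) < tol

# (3.31) holds precisely when (-a - d + b + c) mod 3 == 0, cf. (3.32).
for a, b, c, d in product(range(3), repeat=4):
    z = (-w**a + w**b + w**c - w**d) / 3
    assert in_lattice(z) == ((-a - d + b + c) % 3 == 0)
```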
It is not difficult to see that it is not possible to get any non-zero multiples of 3 from a combination \(-\vec{\ell}_{a}+\vec{\ell}_{b}+\vec{\ell}_{c}-\vec{\ell}_{d}\): since the components of the \(\ell\)'s are 1 or 2, each component of this combination lies between \(-2\) and \(2\). The only solution is \[-\vec{\ell}_{a}+\vec{\ell}_{b}+\vec{\ell}_{c}-\vec{\ell}_{d}=0. \tag{3.35}\] Taking \(a=1\), \[-\vec{\ell}_{1}+\vec{\ell}_{b}+\vec{\ell}_{c}-\vec{\ell}_{d}=0. \tag{3.36}\] The solutions are exactly those quoted in the previous table. #### 3.2.3 8-\(\Omega\) solutions In this case the smallest tadpole is 8 and the corresponding fluxes take the form \[G=\frac{1}{9}(-\Omega_{1}+\Omega_{\vec{\ell}_{a_{2}}}-\Omega_{\vec{\ell}_{a_{3}}}+\Omega_{\vec{\ell}_{a_{4}}}-\Omega_{\vec{\ell}_{a_{5}}}+\Omega_{\vec{\ell}_{a_{6}}}-\Omega_{\vec{\ell}_{a_{7}}}+\Omega_{\vec{\ell}_{a_{8}}}). \tag{3.37}\] As in the 4-\(\Omega\) case, a necessary condition for the fluxes to be properly quantized is \[(-\vec{\ell}_{1}+\vec{\ell}_{a_{2}}-\vec{\ell}_{a_{3}}+\vec{\ell}_{a_{4}}-\vec{\ell}_{a_{5}}+\vec{\ell}_{a_{6}}-\vec{\ell}_{a_{7}}+\vec{\ell}_{a_{8}})\ \text{mod}\ 3=0, \tag{3.38}\] but contrary to the 4-\(\Omega\) case this condition is not sufficient. Aided by the computer, it is possible to find those fluxes that turn out to be properly quantized. The table below gives the list of linearly independent solutions of this type by specifying \((a_{2},\ldots,a_{8})\). \begin{tabular}{|c|c|c|c|} \hline (2,8,6,15,13,19,20) & (3,8,5,16,13,18,20) & (2,8,6,25,23,29,30) & (3,8,5,26,23,28,30) \\ (2,9,7,14,12,19,20) & (4,9,5,16,12,17,20) & (2,9,7,24,22,29,30) & (4,9,5,26,22,27,30) \\ (3,10,7,14,11,18,20) & (3,10,7,24,21,28,30) & (2,14,12,25,23,33,34) & (3,14,11,26,23,32,34) \\ (4,15,11,26,22,31,34) & (5,17,12,28,23,33,35) & & \\ \hline \end{tabular} All these flux backgrounds have 14 massive fields. ## 4 The shortest vector The shortest vector problem (SVP) looks for a non-zero vector with the smallest length in a lattice. The norm most commonly used to frame the question is the Euclidean norm, but the problem can be defined in a lattice with any norm. The quantity \(N_{\text{flux}}\), the contribution of the fluxes to the tadpole, defines a norm on the lattice of quantized flux configurations, so finding a flux background with the minimum value of \(N_{\text{flux}}\) is an instance of the SVP. Algorithms to find the exact solution of SVP in an \(n\)-dimensional lattice are known, and follow one of three approaches: lattice enumeration [27], Voronoi cell computation [28], and sieving [29]. All of these approaches have exponential or worse running time. There also exist polynomial time algorithms (based on basis reduction techniques) to solve the approximate version of SVP. Complexity-wise, it is known [30] that the SVP in the \(L_{2}\) norm is NP-hard under randomized reductions. As far as we are aware, proving a similar hardness result under deterministic reductions is still an open problem. The approximate algorithms run faster, but only address the approximate version of SVP. We would like to ask the exact question instead: what is the smallest non-zero value of \(N_{\text{flux}}\) for flux vacua? We adopt an exhaustive search algorithm combining sieving and enumeration to look for lattice vectors that are shorter than a fixed value. We describe this algorithm below, with the Mathematica code implementing it available at [31]. The main result of this section is the following: there is no flux vacuum with \(N_{\text{flux}}\leq 7\). 
The minimum non-zero value of \(N_{\text{flux}}\) is 8, and is attained by a family of flux configurations. Given a flux \(G=\sum_{I=1}^{63}B_{I}\Omega_{I}\) (3.1), its contribution to \(N_{\text{flux}}\) in the Minkowski case is (2.19) \[N_{\text{flux}}=\frac{81\sqrt{3}}{2\,\text{Im}\,\tau}\sum_{I=1}^{63}|B_{I}|^{2}\ \stackrel{\tau=\omega}{=}\ 81\sum_{I=1}^{63}|B_{I}|^{2} \tag{4.1}\] This is positive semidefinite, and zero if and only if \(B_{I}=0\ \forall I\). It is most convenient to implement flux quantization (2.27) on the integral basis of cycles \(\Gamma_{\vec{n}}\) as described in [1]. For convenience, it reads \(\int_{\Gamma_{\vec{n}}}G=N_{\vec{n}}-\tau M_{\vec{n}}\). We separate the real and imaginary parts, \(\vec{b}=(\mathrm{Re}\,B_{1},\mathrm{Im}\,B_{1},\ldots,\mathrm{Re}\,B_{63},\mathrm{Im}\,B_{63})\), and recast the flux quantization conditions in the form11 Footnote 11: The precise equations are supplied in [31]. The description of the algorithm only requires the form of the equation (4.2). \[b_{i}=C_{ij}y_{j}\,\quad i,j=1(1)126\, \tag{4.2}\] where \(y_{i}\) are some arrangement of the flux quantum numbers \(N_{\vec{n}},M_{\vec{n}}\), _i.e._\(y_{i}\in\mathbb{Z}\), \(i=1(1)126\). This is a linearly independent system of equations. We observe two key facts. First, for each \(I\in\{1,\ldots,63\}\), \(81|B_{I}|^{2}\) is a homogeneous quadratic in the \(y_{i}\)'s with coefficients in \(\mathbb{Z}\). Therefore, \(N_{\text{flux}}\) is non-negative integer-valued, and turning on \(\Omega_{I}\) must contribute at least12 1 to \(N_{\text{flux}}\). This means that, if we want to find flux configurations with \(N_{\text{flux}}\leq T\), it suffices to consider \(G=\sum_{I=1}^{63}B_{I}\Omega_{I}\) with \(|\{I\in\{1,\ldots,63\}:B_{I}\neq 0\}|\leq T\). Second, for each \(I\), \(81|B_{I}|^{2}\) is a homogeneous quadratic polynomial in the \(y_{i}\)'s with a positive definite symmetrized coefficient matrix. This latter fact plays a key role in sieving off lattice points in the second half of our method. Footnote 12: This is the crudest lower bound for \(\{81|B_{I}|^{2}:B_{I}\neq 0\}\). The first step in our algorithm is to turn off all but \(T\) out of 63 possible \(B_{I}\)'s. There are \(\binom{63}{T}\) ways13 of doing it. For each choice \(\{B_{i_{1}},\ldots,B_{i_{T}}\}\), setting the remaining \(B\)'s to zero amounts to solving, over integers, a subsystem of \(126-2T\) linear equations pulled from (4.2). Having solved this under-determined system, \(\{B_{i_{1}},\ldots,B_{i_{T}}\}\) are obtained as linear combinations of \(2T\) arbitrary integers, say \(c_{i},i=1,\ldots,2T\), in terms of which \(N_{\text{flux}}\) is expressed as Footnote 13: We can improve this by using the \(S_{7}\) symmetry in the last seven factor CFT’s in the model. \[N_{\text{flux}}^{\text{red}}:=N_{\text{flux}}|_{B_{I}\neq 0\Rightarrow I\in\{i_{1},\ldots,i_{T}\}}=Q_{ij}c_{i}c_{j}. \tag{4.3}\] The superscript "red" stands for reduced, denoting the fact that we have reduced the number of independent integers. Clearly, the coefficients in \(N_{\text{flux}}^{\text{red}}\) are also in \(\mathbb{Z}\), and \(N_{\text{flux}}^{\text{red}}\geq 0\), with equality iff \(c_{i}=0\ \forall i\). The second part of our algorithm is to check whether \(N_{\text{flux}}^{\text{red}}\) attains non-zero values smaller than or equal to \(T\) for some choice of integers \(c_{i}\), _i.e._ we want to see if the level set \(L_{T}=\{(c_{1},\ldots,c_{2T}):N_{\text{flux}}^{\text{red}}(\vec{c})=T\}\subset\mathbb{R}^{2T}\) has any integer points in it or in its interior. 
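The next paragraph turns this question into a finite search using the spectral data of \(Q\). Anticipating those bounds, the second stage can be sketched as follows (a minimal Python illustration of our own, not the Mathematica implementation of [31]; it assumes a small positive definite integer matrix \(Q\) as input and brute-forces the enclosing box):

```python
import itertools
import numpy as np

def eigensieve_min_value(Q, T):
    """Smallest non-zero value of c^T Q c <= T over integer c, or None.

    Q must be positive definite; the search region is finite because any
    integer point outside the cube |c_i| <= sqrt(T / lambda_min) already
    satisfies c^T Q c > T.
    """
    Q = np.asarray(Q, dtype=float)
    lams, vecs = np.linalg.eigh(Q)   # eigenvalues in ascending order
    r = int(np.sqrt(T / lams[0]))
    best = None
    for c in itertools.product(range(-r, r + 1), repeat=Q.shape[0]):
        c = np.array(c)
        if not c.any():
            continue                 # skip the origin
        # Sieve: if lambda_i * (c . v_i)^2 > T for some i, then
        # c^T Q c > T, so the full evaluation can be skipped.
        proj = vecs.T @ c
        if np.any(lams * proj**2 > T):
            continue
        val = c @ Q @ c
        if val <= T and (best is None or val < best):
            best = val
    return best
```

In the application at hand, a routine of this kind would be run with \(T=7\) on the reduced quadratic form of each admissible choice of seven fluxes.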
The level set is an ellipsoid since the symmetrized coefficient matrix \(Q\) in (4.3) is positive definite. Let the eigenvalues of \(Q\) be \(\{\lambda_{1},\ldots,\lambda_{2T}\}\), \(0<\lambda_{1}\leq\lambda_{2}\leq\ldots\leq\lambda_{2T}\), and the corresponding normalized eigenvectors be \(\{\vec{v}_{1},\ldots,\vec{v}_{2T}\}\). The intersection points of the axis (or axes) along the \(\vec{v}_{i}\) corresponding to the lowest eigenvalue(s) with the ellipsoid are (among) the points on the ellipsoid that are farthest, in Euclidean norm, from the origin. Let us define the hypercube \(\mathfrak{C}:=\{\vec{x}\in\mathbb{R}^{2T}:|x_{i}|\leq\sqrt{\frac{T}{\lambda_{1}}},i=1(1)2T\}\). At all integer points outside \(\mathfrak{C}\), _i.e._ at points in \(\mathbb{Z}^{2T}\cap\mathfrak{C}^{c}\), \(N_{\text{flux}}^{\text{red}}>T\). So it is sufficient to evaluate \(N_{\text{flux}}^{\text{red}}\) at all points in \(\mathbb{Z}^{2T}\cap\mathfrak{C}\). Moreover, any point \(\vec{p}\) in this set is in the exterior of \(L_{T}\) if at least one of the following is satisfied: \[|\vec{p}\cdot\vec{v}_{i}|>\sqrt{\frac{T}{\lambda_{i}}},\ \ i=1(1)2T. \tag{4.4}\] Using these criteria we sieve off points where evaluation of \(N_{\rm flux}^{\rm red}\) is not necessary. At all remaining points in \(\mathbb{Z}^{2T}\cap\mathfrak{C}\), we can evaluate \(N_{\rm flux}^{\rm red}\) to check if values smaller than or equal to \(T\) are attained. We call this algorithm the _Eigensieve_ algorithm. Already in [1] solutions were known with fluxes contributing a value of \(8\) to the tadpole. We set \(T=7\) in our algorithm above to explicitly check that _there exists no solution with \(N_{\rm flux}^{\rm red}\leq 7\), making \(8\) the lowest value of \(N_{\rm flux}\) in the \(1^{9}/\mathbb{Z}_{3}\) model_. In summary, the Eigensieve algorithm rules out \(N_{\rm flux}\leq 7\) as follows. First, it uses the observation that each non-zero flux contributes at least \(1\) to \(N_{\rm flux}\), thus dividing the problem into two sub-problems: * considering all possible ways of turning off all but \(7\) fluxes; * for each of the above, check whether \(N_{\rm flux}^{\rm red}\leq 7\) is possible. For the second part, a finite region of the lattice is carved out using the lowest eigenvalue \(\lambda_{1}\) of \(Q\), the coefficient matrix of \(N_{\rm flux}^{\rm red}\). Then the rest of the eigenvalues of \(Q\) are used to sieve off more lattice points where evaluation is not necessary. The sieving conditions are (4.4). Then an explicit evaluation of \(N_{\rm flux}^{\rm red}\) is done at the remaining lattice points. ## 5 Conclusion The program of using Landau-Ginzburg models to describe flux vacua of type IIB compactifications was initiated in [1] with the goal that these would provide string vacua with all moduli fields stabilized. The underlying compactification manifolds before turning on fluxes are non-geometric since they are mirror duals to rigid Calabi-Yau manifolds, and therefore have no Kahler moduli. However, their world-sheet description is well understood in terms of Landau-Ginzburg models which at particular points in moduli space are equivalent to some Gepner models. Descriptions of geometric notions of forms, cycles, D-branes, orientifolds _etc._ in these models were developed from the world-sheet in [21, 23, 24, 25, 26]. Reference [1] showed how to describe fluxes in this setting and presented explicit examples of flux vacua solutions that putatively stabilize all moduli. 
More recently in [2] it was shown that all Minkowski vacua presented in [1] have a number of massless fields. A larger class of vacua was presented in the same paper, all of which have a large number of massless fields. Expanding the superpotential to higher-order terms may stabilize more (or all) moduli. To the best of our knowledge such a scenario has not been realized in any concrete example thus far. This prompts the need for a systematic search for solutions and investigation of their properties such as the number of massive fields, stabilization of massless fields by higher order terms in the superpotential, _etc._ In this paper we have taken a first step in this direction. The key results of this work are as follows: * A systematic search for solutions with the lowest value of \(N_{\rm flux}\), organized by the number of non-zero components, has been launched. We present all solutions with up to four components turned on, and a large set of solutions with eight components that saturate the minimum value of the flux tadpole. * The shortest vector problem for the \(1^{9}/\mathbb{Z}_{3}\) model has been solved using an exact algorithm we call Eigensieve. * We observe that the homogeneous basis of cycles can be used to simplify the formulas of derivatives of the superpotential with respect to moduli. We present these formulas for the second derivatives, which compute mass matrix elements. They increase computation speed significantly. We are working on extending these results in a number of obvious ways: * The systematic search for solutions can be extended by increasing the number of non-zero components. The flux configurations known to satisfy \(N_{\rm flux}=8\) are all 8-\(\Omega\) solutions. We have presented a large class of these in section 3. The upper bound of \(N_{\rm flux}\) in the \(1^{9}/\mathbb{Z}_{3}\) model, dictated by the tadpole cancellation condition, is 12. A classification of all solutions characterized by \(8\leq N_{\rm flux}\leq 12\), along with the ranks of their mass matrices, will give a starting point for studying higher order corrections systematically. Some flux vacua with \(N_{\rm flux}=12\) are known, but we do not yet have an exhaustive set of solutions with \(N_{\rm flux}=9,10,11,12\). * Systematically computing mass matrices and their ranks for solutions with \(8\leq N_{\rm flux}\leq 12\) is computationally very expensive for Mathematica, even with the aid of parallel computations on a cluster. We think that it would be necessary to move away from symbolic computation in Mathematica to be able to achieve this task. Work is ongoing to make this process entirely numerical, and perhaps use a lower-level language or GPUs to speed up computations. * We have not analyzed higher order terms in the superpotential in this work, leaving it to a forthcoming publication. We just mention that expanding the superpotential to higher orders is also made convenient by using the homogeneous basis. * Finally, we aim to extend all our analyses to other Gepner models. ## Acknowledgements We would like to thank Timm Wrase, Muthusamy Rajaguru and Johannes Walcher for helpful discussions. Portions of this research were conducted with the advanced computing resources provided by Texas A&M High Performance Research Computing. In particular we would like to thank Grigory Rogachev, Wilson Waldrop, Lisa Perez and Marinus Pennings for their invaluable help with setting up the computational part of this project. AS thanks William Linch for valuable feedback on an initial draft of the paper. 
NB and AS thank the Cynthia and George Mitchell Foundation for their hospitality during the Cook's Branch Workshop 2023 where part of this work was done. This work was partially supported by the NSF grant PHY-2112859. ## Appendix A A convenient indexing of the fluxes \(\Omega_{\vec{\ell}}\) For convenience, we index in the following way the \((2,1)\)-form fluxes of the \(1^{9}/\mathbb{Z}_{3}\) model invariant under the orientifold action which exchanges the first two entries of the labels \(\vec{\ell}\). We split them into three sets: non-orientifold fluxes labelled by \(\vec{\ell}\sim(1,1,\ldots)\), orientifold fluxes, and non-orientifold fluxes labelled by \(\vec{\ell}\sim(2,2,\ldots)\). Dropping commas for compactness, and denoting the index of a flux as a subscript, \[\left(\Omega_{(1111122)}\right)_{1},\left(\Omega_{(11112212)}\right)_{2},\left(\Omega_{(11112212)}\right)_{3},\left(\Omega_{(11112221)}\right)_{4},\left(\Omega_{(11112212)}\right)_{5},\left(\Omega_{(11112212)}\right)_{6},\] \[\left(\Omega_{(111122121)}\right)_{7},\left(\Omega_{(111122121)}\right)_{8},\left(\Omega_{(11112212)}\right)_{9},\left(\Omega_{(11112211)}\right)_{10},\left(\Omega_{(11121122)}\right)_{11},\left(\Omega_{(11121212)}\right)_{12},\] \[\left(\Omega_{(111212121)}\right)_{13},\left(\Omega_{(111212121)}\right)_{14},\left(\Omega_{(111212121)}\right)_{15},\left(\Omega_{(111212211)}\right)_{16},\left(\Omega_{(11122112)}\right)_{17},\left(\Omega_{(11122121)}\right)_{18},\] \[\left(\Omega_{(111221121)}\right)_{19},\left(\Omega_{(11122211)}\right)_{20},\left(\Omega_{(111211122)}\right)_{21},\left(\Omega_{(11211212)}\right)_{22},\left(\Omega_{(11211221)}\right)_{23},\left(\Omega_{(11121212)}\right)_{24},\] \[\left(\Omega_{(112121211)}\right)_{25},\left(\Omega_{(112121121)}\right)_{26},\left(\Omega_{(11212112)}\right)_{27},\left(\Omega_{(11212121)}\right)_{28},\left(\Omega_{(11212211)}\right)_{29},\left(\Omega_{(11212211)}\right)_{30},\] \[\left(\Omega_{(112211112)}\right)_{31},\left(\Omega_{(112211121)}\right)_{32},\left(\Omega_{(11221211)}\right)_{33},\left(\Omega_{(11221211)}\right)_{34},\left(\Omega_{(112221111)}\right)_{35}\] \[\left(\Omega_{(12111122)}+\Omega_{(211111122)}\right)_{36},\left(\Omega_{(12111122)}+\Omega_{(211111212)}\right)_{37},\left(\Omega_{(121111221)}+\Omega_{(211111221)}\right)_{38},\] \[\left(\Omega_{(12111212)}+\Omega_{(211111221)}\right)_{39},\left(\Omega_{(12111221)}+\Omega_{(211111221)}\right)_{40},\left(\Omega_{(121112211)}+\Omega_{(211112211)}\right)_{41},\] \[\left(\Omega_{(12112112)}+\Omega_{(211112112)}\right)_{42},\left(\Omega_{(121121121)}+\Omega_{(21112112)}\right)_{43},\left(\Omega_{(121121211)}+\Omega_{(21112211)}\right)_{44}\] \[\left(\Omega_{(12121211)}+\Omega_{(211212111)}\right)_{45},\left(\Omega_{(12121112)}+\Omega_{(21121111)}\right)_{46},\left(\Omega_{(121211121)}+\Omega_{(2112111)}\right)_{47},\] \[\left(\Omega_{(1221111)}+\Omega_{(212111121)}\right)_{51},\left(\Omega_{(122111121)}+\Omega_{(212111121)}\right)_{52},\left(\Omega_{(12211111)}+\Omega_{(21211121)}\right)_{53},\] \[\left(\Omega_{(1221111)}+\Omega_{(212121111)}\right)_{54},\left(\Omega_{(12212111)}+\Omega_{(21212111)}\right)_{55},\left(\Omega_{(12211111)}+\Omega_{(21221111)}\right)_{56}\] \[\left(\Omega_{(22111112)}\right)_{57},\left(\Omega_{(221111121)}\right)_{58},\left(\Omega_{(22111121)}\right)_{59},\left(\Omega_{(22111211)}\right)_{60},\left(\Omega_{(22121111)}\right)_{61},\left(\Omega_{(221211111)}\right)_{62},\] \[\left(\Omega_{(222111111)}\right)_{63}.\] ## Appendix B A basis of short 
fluxes In this appendix we present a basis of quantized fluxes that have small values of \(N_{\text{flux}}\). Let us first define the sets: \[\mathcal{B}_{1}=\tfrac{i\omega}{9}\left\{\left(\Omega_{12}-\Omega _{13}-\Omega_{17}+\Omega_{18}-\Omega_{22}+\Omega_{23}+\Omega_{27}-\Omega_{28 }\right),\right.\] \[\left(\Omega_{6}-\Omega_{7}-\Omega_{17}+\Omega_{18}-\Omega_{22}+ \Omega_{23}+\Omega_{31}-\Omega_{32}\right),\] \[\left(\Omega_{11}-\Omega_{13}-\Omega_{17}+\Omega_{19}-\Omega_{21 }+\Omega_{23}+\Omega_{27}-\Omega_{29}\right),\] \[\left(\Omega_{5}-\Omega_{7}-\Omega_{17}+\Omega_{19}-\Omega_{21}+ \Omega_{23}+\Omega_{31}-\Omega_{33}\right),\] \[\left(\Omega_{12}-\Omega_{13}-\Omega_{14}+\Omega_{15}-\Omega_{22}+ \Omega_{23}+\Omega_{24}-\Omega_{25}\right),\] \[\left(\Omega_{6}-\Omega_{7}-\Omega_{8}+\Omega_{9}-\Omega_{22}+ \Omega_{23}+\Omega_{24}-\Omega_{25}\right),\] \[\left(\Omega_{11}-\Omega_{13}-\Omega_{14}+\Omega_{16}-\Omega_{21}+ \Omega_{23}+\Omega_{24}-\Omega_{26}\right),\] \[\left(\Omega_{5}-\Omega_{7}-\Omega_{8}+\Omega_{10}-\Omega_{21}+ \Omega_{23}+\Omega_{24}-\Omega_{26}\right),\] \[\left(\Omega_{3}-\Omega_{4}-\Omega_{14}+\Omega_{15}-\Omega_{22}+ \Omega_{23}+\Omega_{31}-\Omega_{32}\right),\] \[\left(\Omega_{2}-\Omega_{4}-\Omega_{14}+\Omega_{16}-\Omega_{21}+ \Omega_{23}+\Omega_{31}-\Omega_{33}\right),\] \[\left(\Omega_{11}-\Omega_{15}-\Omega_{17}+\Omega_{20}-\Omega_{21 }+\Omega_{25}+\Omega_{27}-\Omega_{30}\right),\] \[\left(\Omega_{5}-\Omega_{9}-\Omega_{17}+\Omega_{20}-\Omega_{21 }+\Omega_{25}+\Omega_{31}-\Omega_{34}\right),\] \[\left(\Omega_{2}-\Omega_{9}-\Omega_{14}+\Omega_{20}-\Omega_{21 }+\Omega_{28}+\Omega_{31}-\Omega_{35}\right),\] \[\left(\Omega_{11}-\Omega_{4}-\Omega_{12}+\Omega_{16}-\Omega_{21 }+\Omega_{25}+\Omega_{31}-\Omega_{34}\right)\}\,\] \[\mathcal{B}_{2}=\frac{1}{3}\{ -i\left(\Omega_{1}-\Omega_{4}-\Omega_{21}-\Omega_{22}+\Omega_{25}+ \Omega_{26}+\Omega_{51}-\Omega_{54}-\Omega_{57}+\Omega_{60}\right),\] \[-i\left(\Omega_{1}-\Omega_{4}-\Omega_{11}-\Omega_{12}+\Omega_{15 }+\Omega_{16}+\Omega_{46}-\Omega_{49}-\Omega_{57}+\Omega_{60}\right),\] \[-i\left(\Omega_{1}-\Omega_{4}-\Omega_{5}-\Omega_{6}+\Omega_{9}+ \Omega_{10}+\Omega_{42}-\Omega_{45}-\Omega_{57}+\Omega_{60}\right),\] \[-i\left(\Omega_{1}-\Omega_{3}-\Omega_{21}-\Omega_{23}+\Omega_{24 }+\Omega_{26}+\Omega_{52}-\Omega_{54}-\Omega_{58}+\Omega_{60}\right),\] \[-i\left(\Omega_{1}-\Omega_{3}-\Omega_{11}-\Omega_{13}+\Omega_{14 }+\Omega_{16}+\Omega_{47}-\Omega_{49}-\Omega_{58}+\Omega_{60}\right),\] \[-i\left(\Omega_{1}-\Omega_{3}-\Omega_{5}-\Omega_{7}+\Omega_{8}+ \Omega_{10}+\Omega_{43}-\Omega_{45}-\Omega_{58}+\Omega_{60}\right),\] \[-i\left(\Omega_{1}-\Omega_{2}-\Omega_{22}-\Omega_{23}+\Omega_{24 }+\Omega_{25}+\Omega_{53}-\Omega_{54}-\Omega_{59}+\Omega_{60}\right),\] \[-i\left(\Omega_{1}-\Omega_{2}-\Omega_{12}-\Omega_{13}+\Omega_{14 }+\Omega_{15}+\Omega_{48}-\Omega_{49}-\Omega_{59}+\Omega_{60}\right),\] \[-i\left(\Omega_{1}-\Omega_{2}-\Omega_{6}-\Omega_{7}+\Omega_{8}+ \Omega_{9}+\Omega_{44}-\Omega_{45}-\Omega_{59}+\Omega_{60}\right),\] \[-i\left(\Omega_{1}-\Omega_{7}-\Omega_{21}-\Omega_{22}+\Omega_{28 }+\Omega_{29}+\Omega_{51}-\Omega_{55}-\Omega_{57}+\Omega_{61}\right),\] \[-i\left(\Omega_{1}-\Omega_{7}-\Omega_{11}-\Omega_{12}+\Omega_{18 }+\Omega_{19}+\Omega_{46}-\Omega_{50}-\Omega_{57}+\Omega_{61}\right),\] \[-i\left(\Omega_{1}-\Omega_{13}-\Omega_{21}-\Omega_{22}+\Omega_{32 }+\Omega_{33}+\Omega_{51}-\Omega_{56}-\Omega_{57}+\Omega_{62}\right),\] \[-i\left(\Omega_{1}-\Omega_{11}-\Omega_{12}-\Omega_{23}+\Omega_{32 
}+\Omega_{33}+\Omega_{46}-\Omega_{56}-\Omega_{57}+\Omega_{63}\right),\] \[-i\left(\Omega_{1}-\Omega_{5}-\Omega_{6}-\Omega_{13}+\Omega_{18}+ \Omega_{19}+\Omega_{42}-\Omega_{50}-\Omega_{57}+\Omega_{62}\right),\] \[-i\left(\Omega_{1}-\Omega_{2}-\Omega_{3}-\Omega_{7}+\Omega_{9}+ \Omega_{10}+\Omega_{39}-\Omega_{45}-\Omega_{57}+\Omega_{61}\right),\] \[-i\left(\Omega_{1}-\Omega_{2}-\Omega_{4}-\Omega_{6}+\Omega_{8}+ \Omega_{10}+\Omega_{40}-\Omega_{45}-\Omega_{58}+\Omega_{61}\right),\] \[-i\left(\Omega_{1}-\Omega_{3}-\Omega_{4}-\Omega_{5}+\Omega_{8}+ \Omega_{9}+\Omega_{41}-\Omega_{45}-\Omega_{59}+\Omega_{61}\right),\] \[i\left(\Omega_{1}-\Omega_{2}+\Omega_{3}-\Omega_{7}+\Omega_{9}- \Omega_{10}-\Omega_{37}+\Omega_{44}+\Omega_{57}-\Omega_{61}\right),\] \[i\left(\Omega_{1}-\Omega_{2}+\Omega_{4}-\Omega_{6}+\Omega_{8}- \Omega_{10}-\Omega_{38}+\Omega_{44}+\Omega_{58}-\Omega_{61}\right),\] \[i\left(\Omega_{1}+\Omega_{2}-\Omega_{3}-\Omega_{7}-\Omega_{9}+ \Omega_{10}-\Omega_{36}+\Omega_{43}+\Omega_{57}-\Omega_{61}\right)\}\,\] \[\mathcal{B}_{3} =\tfrac{\omega_{5}}{9}\{\left(\omega\Omega_{4}-\omega\Omega_{25}- \omega\Omega_{26}+\omega\Omega_{54}-\omega\Omega_{60}-\Omega_{13}+\Omega_{32}+ \Omega_{33}-\Omega_{56}+\Omega_{62}\right),\] \[\left(\omega\Omega_{4}-\omega\Omega_{15}-\omega\Omega_{16}+ \omega\Omega_{49}-\omega\Omega_{60}-\Omega_{23}+\Omega_{32}+\Omega_{33}- \Omega_{56}+\Omega_{63}\right),\] \[\left(\omega\Omega_{4}-\omega\Omega_{9}-\omega\Omega_{10}+\omega \Omega_{45}-\omega\Omega_{60}-\Omega_{23}+\Omega_{28}+\Omega_{29}-\Omega_{55}+ \Omega_{63}\right),\] \[\left(\omega\Omega_{3}-\omega\Omega_{24}-\omega\Omega_{26}+ \omega\Omega_{54}-\omega\Omega_{60}-\Omega_{12}+\Omega_{31}+\Omega_{33}- \Omega_{56}+\Omega_{62}\right),\] \[\left(\omega\Omega_{3}-\omega\Omega_{14}-\omega\Omega_{16}+\omega \Omega_{49}-\omega\Omega_{60}-\Omega_{22}+\Omega_{31}+\Omega_{33}-\Omega_{56}+ \Omega_{63}\right),\] \[\left(\omega\Omega_{3}-\omega\Omega_{8}-\omega\Omega_{10}+\omega \Omega_{45}-\omega\Omega_{60}-\Omega_{22}+\Omega_{27}+\Omega_{29}-\Omega_{55}+ \Omega_{63}\right),\] \[\left(\omega\Omega_{2}-\omega\Omega_{24}-\omega\Omega_{25}+ \omega\Omega_{54}-\omega\Omega_{60}-\Omega_{11}+\Omega_{31}+\Omega_{32}- \Omega_{56}+\Omega_{62}\right),\] \[\left(\omega\Omega_{2}-\omega\Omega_{14}-\omega\Omega_{15}+ \omega\Omega_{49}-\omega\Omega_{60}-\Omega_{21}+\Omega_{31}+\Omega_{32}- \Omega_{56}+\Omega_{63}\right),\] \[\left(\omega\Omega_{2}-\omega\Omega_{8}-\omega\Omega_{9}+\omega \Omega_{45}-\omega\Omega_{60}-\Omega_{21}+\Omega_{27}+\Omega_{28}-\Omega_{55}+ \Omega_{63}\right),\] \[\left(\omega\Omega_{7}-\omega\Omega_{9}-\omega\Omega_{10}+\omega \Omega_{45}-\omega\Omega_{61}-\Omega_{23}+\Omega_{25}+\Omega_{26}-\Omega_{54}+ \Omega_{63}\right),\] \[\left(\omega\Omega_{6}-\omega\Omega_{8}-\omega\Omega_{10}+\omega \Omega_{45}-\omega\Omega_{61}-\Omega_{22}+\Omega_{24}+\Omega_{26}-\Omega_{54}+ \Omega_{63}\right),\] \[\left(\omega\Omega_{5}-\omega\Omega_{8}-\omega\Omega_{9}+\omega \Omega_{45}-\omega\Omega_{61}-\Omega_{21}+\Omega_{24}+\Omega_{25}-\Omega_{54}+ \Omega_{63}\right),\] \[\left(\omega\Omega_{4}-\omega\Omega_{23}-\omega\Omega_{26}+ \omega\Omega_{53}-\omega\Omega_{59}-\Omega_{15}+\Omega_{32}+\Omega_{34}- \Omega_{56}+\Omega_{62}\right),\] \[\left(\omega\Omega_{4}-\omega\Omega_{13}-\omega\Omega_{16}+\omega \Omega_{48}-\omega\Omega_{59}-\Omega_{25}+\Omega_{32}+\Omega_{34}-\Omega_{56}+ \Omega_{63}\right),\] \[\left(\omega\Omega_{4}-\omega\Omega_{7}-\omega\Omega_{10}+\omega 
\Omega_{44}-\omega\Omega_{59}-\Omega_{25}+\Omega_{28}+\Omega_{30}-\Omega_{55}+ \Omega_{63}\right),\] \[\left(\omega\Omega_{7}-\omega\Omega_{23}-\omega\Omega_{29}+\omega \Omega_{53}-\omega\Omega_{59}-\Omega_{18}+\Omega_{32}+\Omega_{35}-\Omega_{56}+ \Omega_{62}\right),\] \[\left(\omega\Omega_{7}-\omega\Omega_{13}-\omega\Omega_{19}+\omega \Omega_{48}-\omega\Omega_{59}-\Omega_{28}+\Omega_{32}+\Omega_{35}-\Omega_{56}+ \Omega_{63}\right),\] \[\left(\omega\Omega_{13}-\omega\Omega_{23}-\omega\Omega_{33}+\omega \Omega_{53}-\omega\Omega_{59}-\Omega_{18}+\Omega_{28}+\Omega_{35}-\Omega_{55}+ \Omega_{61}\right),\] \[-\left(\omega\Omega_{7}-\omega\Omega_{9}+\omega\Omega_{10}-\omega \Omega_{44}+\omega\Omega_{61}-\Omega_{23}+\Omega_{25}-\Omega_{26}+\Omega_{53}- \Omega_{63}\right),\] \[-\left(\omega\Omega_{7}+\omega\Omega_{9}-\omega\Omega_{10}-\omega \Omega_{43}+\omega\Omega_{61}-\Omega_{23}-\Omega_{25}+\Omega_{26}+\Omega_{52}- \Omega_{63}\right),\] \[-\left(\omega\Omega_{6}+\omega\Omega_{8}-\omega\Omega_{10}-\omega \Omega_{42}+\omega\Omega_{61}-\Omega_{22}-\Omega_{24}+\Omega_{26}+\Omega_{51}- \Omega_{63}\right)\}\,\] \[\mathcal{B}_{4} =\tfrac{-i\omega}{9}\{\left(2\Omega_{1}-2\Omega_{2}+\Omega_{12 }-\Omega_{14}+\Omega_{23}-\Omega_{25}-\Omega_{33}+\Omega_{34}\right),\] \[\left(2\Omega_{1}-2\Omega_{3}+\Omega_{11}-\Omega_{14}+\Omega_{23}- \Omega_{26}-\Omega_{32}+\Omega_{34}\right),\] \[\left(2\Omega_{1}-2\Omega_{4}+\Omega_{11}-\Omega_{15}+\Omega_{22 }-\Omega_{26}-\Omega_{31}+\Omega_{34}\right),\] \[\left(2\Omega_{1}-2\Omega_{5}+\Omega_{12}-\Omega_{17}+\Omega_{23}- \Omega_{28}-\Omega_{33}+\Omega_{35}\right),\] \[\left(2\Omega_{1}+\Omega_{6}-2\Omega_{11}-\Omega_{17}+\Omega_{23}- \Omega_{29}-\Omega_{32}+\Omega_{35}\right),\] \[\left(2\Omega_{1}+\Omega_{6}+\Omega_{13}-\Omega_{19}-2\Omega_{21}- \Omega_{27}-\Omega_{32}+\Omega_{35}\right)\}\,\] \[\mathcal{B}_{5} =\{-\tfrac{i\omega}{3}\left(\Omega_{1}-\Omega_{57}\right)\}\,\] \[\mathcal{B}_{6} =\{-\tfrac{i}{9}\left(\omega\Omega_{1}+2\omega\Omega_{3}- \omega\Omega_{7}+\omega\Omega_{10}+(3+2\omega)\Omega_{21}-\omega\Omega_{24}+ \omega\Omega_{28}-\omega\Omega_{30}\right)\}\,\] where we have taken the liberty to use the notation that a numerical "overall factor" multiplies all elements of a set. One finds that \[\mathcal{B}=\left(\cup_{i=1}^{6}\mathcal{B}_{i}\right)\cup\left(\cup_{i=1}^{6} \omega B_{i}\right)\] is a basis of flux vectors in this model. Any quantized flux is an integer linear combination of the fluxes in \(\mathcal{B}\). All fluxes in \(\mathcal{B}_{i}\) (and hence obviously in \(\omega\mathcal{B}_{i}\)) have flux tadpole values of \(8,12,12,14,18,17\) for \(i=1,2,3,4,5,6\) respectively.
2310.03279
Classifying Whole Slide Images: What Matters?
Recently there have been many algorithms proposed for the classification of very high resolution whole slide images (WSIs). These new algorithms are mostly focused on finding novel ways to combine the information from small local patches extracted from the slide, with an emphasis on effectively aggregating more global information for the final predictor. In this paper we thoroughly explore different key design choices for WSI classification algorithms to investigate what matters most for achieving high accuracy. Surprisingly, we found that capturing global context information does not necessarily mean better performance. A model that captures the most global information consistently performs worse than a model that captures less global information. In addition, a very simple multi-instance learning method that captures no global information performs almost as well as models that capture a lot of global information. These results suggest that the most important features for effective WSI classification are captured at the local small patch level, where cell and tissue micro-environment detail is most pronounced. Another surprising finding was that unsupervised pre-training on a larger set of 33 cancers gives significantly worse performance compared to pre-training on a smaller dataset of 7 cancers (including the target cancer). We posit that pre-training on a smaller, more focused dataset allows the feature extractor to make better use of the limited feature space to better discriminate between subtle differences in the input patch.
Long Nguyen, Aiden Nibali, Joshua Millward, Zhen He
2023-10-05T03:11:54Z
http://arxiv.org/abs/2310.03279v1
# Classifying Whole Slide Images: What Matters? ###### Abstract Recently there have been many algorithms proposed for the classification of very high resolution whole slide images (WSIs). These new algorithms are mostly focused on finding novel ways to combine the information from small local patches extracted from the slide, with an emphasis on effectively aggregating more global information for the final predictor. In this paper we thoroughly explore different key design choices for WSI classification algorithms to investigate what matters most for achieving high accuracy. Surprisingly, we found that capturing global context information does not necessarily mean better performance. A model that captures the most global information consistently performs worse than a model that captures less global information. In addition, a very simple multi-instance learning method that captures no global information performs almost as well as models that capture a lot of global information. These results suggest that the most important features for effective WSI classification are captured at the local small patch level, where cell and tissue micro-environment detail is most pronounced. Another surprising finding was that unsupervised pre-training on a larger set of 33 cancers gives significantly worse performance compared to pre-training on a smaller dataset of 7 cancers (including the target cancer). We posit that pre-training on a smaller, more focused dataset allows the feature extractor to make better use of the limited feature space to better discriminate between subtle differences in the input patch. keywords: digital pathology, WSI classification, deep learning, unsupervised pre-training ## 1 Introduction The application of computer vision techniques to digital pathology has the potential to become a transformative force in the field of medical diagnostics, with experts agreeing that the routine use of AI tools in future pathology labs is almost assured [1]. By automating the analysis of whole slide images (WSIs) it will be possible to enhance the work of clinical histopathologists, ultimately leading to more efficient personalised treatment planning for patients. Many recent works [2; 3; 4; 5; 6; 7; 8] focus on applying deep learning approaches to solve the weakly supervised whole slide image classification problem. The WSI is taken as input, and the model is trained to output a single label such as the cancer sub-type, metastasised versus normal lymph node, presence of certain genes, etc. A major practical challenge when working with WSIs is their extremely high dimensionality, with a single image reaching spatial extents on the order of \(100\,000\,\mathrm{px}\times 100\,000\,\mathrm{px}\). It is impossible to directly feed all pixels from such images into a neural network at once. The majority of recent methods [2; 3] first break the image into small tiles (e.g., \(256\,\mathrm{px}\times 256\,\mathrm{px}\) patches) and then represent each tile using a small 1D embedding (e.g. a 384-dimensional vector). These feature vectors are typically generated using models pre-trained on natural images from ImageNet or a large collection of pan-cancer WSIs. Typically, the ImageNet pre-trained model weights are computed via the supervised task of image classification. In contrast, models pre-trained on large WSI collections are usually trained without annotations by using self-supervised learning techniques. 
Either way, the tile-based feature vectors are combined using various methods to arrive at a single whole slide prediction. Although the tile-based approach already allows later stages of the model to operate at a higher level by working on \(256\,\mathrm{px}\times 256\,\mathrm{px}\) patches instead of individual pixels at a time, most recent work emphasises the importance of incorporating even more global structural information when classifying WSIs. To capture global structure information, previous works have connected patches in a graph [3; 4], used a mixture of high resolution and low resolution images [5; 6], applied self attention with position encoding on image patches [7; 8], and built a hierarchical representation of WSIs using separate vision transformers at different levels of the hierarchy [2]. One of the most successful recent papers that incorporates global information in a very direct way is the Hierarchical Image Pyramid Transformer (HIPT) framework [2]. Figure 1 shows an overview of how HIPT first applies self-supervised learning to acquire a 384-dimensional embedding for each \(256\,\mathrm{px}\times 256\,\mathrm{px}\) level 1 image patch; these embeddings are called level 1 feature vectors. Self-supervised learning is applied again at level 2 to acquire a 192-dimensional embedding for each \(4096\,\mathrm{px}\times 4096\,\mathrm{px}\) level 2 patch. Finally, all level 2 feature vectors are fed into a single level 3 transformer to make a prediction at the whole slide level. This approach progressively constructs a more global view of the WSI, allowing a hierarchy of transformer models to analyse the global structure. In this paper we use the HIPT framework as the basis for our investigation of how important global structure information and self-supervised pre-training are for making good predictions at the WSI level. We do this by systematically stripping global structure information away from HIPT in two ways: 1) reducing the complexity of global structure processing, and 2) reducing the influence of pre-training on global structure. After measuring how this impacts performance, we observe that incorporating more pre-training and more global information does not necessarily give the best accuracy. In fact, a very simple multi-instance learning approach that uses just level 1 patches and incorporates no global structure can achieve very competitive results. Table 1 summarises our key findings. The results are averaged across 4 WSI datasets (CAMELYON16, TCGA-BRCA subtyping, NSCLC subtyping, and RCC subtyping). The columns show varying amounts of level 2 pre-training and rows show varying amounts of global structure. A surprising result is that the very simple max pooling based multi-instance learning (Max-MIL) algorithm [9] (essentially just using the single most confident level 1 patch prediction) can outperform models that incorporate the most global structure. The results also show that using a pre-trained feature extractor at level 2 does not provide a noticeable benefit. Finally, we find that using a shallow transformer to encode level 2 features (medium global structure) performs the best. The choice of data used to pre-train the level 1 feature extractor was found to have the biggest impact on overall performance. In our experiments, we used the DINO [10] self-supervised feature extractor on combined datasets of varying size. 
The results show that features learnt from a large collection of 33 cancers performed much worse than features learnt from a smaller subset of 7 cancers, or even from only the single target cancer. This may be attributable to the combination of two factors: 1) the large number of patches (e.g. an average of around 13,258 patches per image) in each WSI being sufficient for learning low-level features, and 2) a large number of different cancer types causing greater divergence between pre-training and the downstream task. For example, the ImageNet 1K dataset has size 133GB, which is less than a third of the TCGA-BRCA breast cancer dataset size (480GB); hence WSIs from a single cancer dataset may be enough to learn good discriminative features. Learning from a broader range of cancers may result in reserving precious regions of the embedding space for representations that are not used for the downstream task. Extensive experiments reveal a simple recipe for modifying HIPT's level 2 encoder to use a shallow transformer (without pre-training) and using a level 1 encoder trained on a smaller, more focused set of 7 cancers (including the target cancer). We call this approach HIPT with local emphasis (_HIPTLE_), and find that it consistently outperforms existing algorithms for all WSI classification and survival prediction tasks tested. The reduced depth of the level 2 encoder allows the final classification module to access the important level 1 information more easily while still being able to use some global context information. In summary, we make the following key findings in our investigation into what matters for achieving good performance for weakly supervised WSI classification: 1. Incorporating global structure information has limited benefits for performance. 2. The single most significant factor in achieving good performance is the data used to pre-train the level 1 feature extractor. 3. Pre-training level 1 features on WSIs from a larger range of cancers performs significantly worse than using a smaller set of cancers or even just the target cancer alone. 4. A very simple max pooling based MIL algorithm that incorporates no global structure information can, when given high quality pre-trained features, perform similarly to complex state-of-the-art methods. 5. A modified version of HIPT called HIPTLE consistently outperforms all other algorithms for all WSI classification and survival prediction tasks tested. ## 2 Related Works ### Multi-instance Learning The majority of existing work in weakly supervised WSI classification takes the multi-instance learning (MIL) approach, where the WSI is represented by a bag of patch instances created by dividing the large WSI into many much smaller tiles. We have identified three main MIL sub-categories, which we refer to as _instance-level simple aggregation_ (IL-SA), _instance-level machine learning aggregation_ (IL-MLA), and _embedding-level machine learning aggregation_ (EL-MLA). Instance-level simple aggregation (IL-SA) approaches produce separate class label predictions for each patch in the WSI, then apply a simple aggregation function to arrive at the final prediction. Example aggregation functions include taking the maximum probability [11], averaging the probabilities [12], or counting the percentage of patches predicted to be positive [12]; a minimal sketch of these three aggregators is given below. However, both averaging and using the maximum probability have their own problems. 
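The following few lines are a Python illustration of our own (not the implementations of [11; 12]); the toy slide at the bottom, where a small tumour region occupies 5% of the patches, previews the dilution problem discussed next.

```python
import numpy as np

def aggregate_slide_score(patch_probs, method="max", threshold=0.5):
    """Combine per-patch tumour probabilities into one slide-level score."""
    p = np.asarray(patch_probs)
    if method == "max":        # most confident patch decides the slide
        return p.max()
    if method == "mean":       # average over all patches
        return p.mean()
    if method == "percent":    # fraction of patches predicted positive
        return (p > threshold).mean()
    raise ValueError(f"unknown method: {method}")

# A toy slide: 950 normal patches and 50 tumour patches.
probs = np.concatenate([np.full(950, 0.05), np.full(50, 0.90)])
print(aggregate_slide_score(probs, "max"))      # 0.9   -> flags the slide
print(aggregate_slide_score(probs, "mean"))     # ~0.09 -> tumour diluted away
print(aggregate_slide_score(probs, "percent"))  # 0.05
```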
Using the maximum probability can result in many false positives, since a single misclassification can change the predicted class [13]. Averaging suffers from the problem that positive regions generally only occupy small portions of tissue (e.g. less than 20%), and therefore the vast negative regions overwhelm the positive regions.

Instance-level machine learning aggregation (IL-MLA) approaches overcome the aforementioned problems of IL-SA by using a machine learning model for more sophisticated combining of per-patch predictions. For example, Hou et al. [14] use logistic regression to combine the instance level predictions. Wang et al. [15] first produce tumor probability heatmaps from the patch level deep learning classifier and then extract geometrical features from the heatmaps. Next they feed the extracted geometrical features into a random forest classifier to make the WSI level predictions. Similarly, Campanella et al. [13] train a random forest algorithm on manually engineered features extracted from the patch level heatmaps. These methods use handcrafted features on heatmaps to capture high level spatial structure information from the WSIs.

Figure 1: An overview of the Hierarchical Image Pyramid Transformer (HIPT) framework [2]. The level 1 transformer encoder is first used to encode each \(256\,\mathrm{px}\times 256\,\mathrm{px}\) patch into a level 1 feature vector. Next, a level 2 transformer encoder merges all level 1 feature vectors corresponding to the same \(4096\,\mathrm{px}\times 4096\,\mathrm{px}\) patch into a single level 2 feature vector. Finally, the level 3 transformer-based classifier takes all of the level 2 feature vectors together to compute an output class label.

Embedding-level machine learning aggregation (EL-MLA) approaches generate an embedding (feature vector) for each patch (instance) and then use a machine learning model to combine the instances to arrive at a prediction. In contrast to IL-SA and IL-MLA, this approach allows the model to consider features from the entire WSI when attributing importance to each instance. By leveraging these embeddings, the model can effectively capture the underlying relationships and interactions among instances, leading to more accurate and robust predictions in multi-instance learning tasks. A well-known work in this area is the attention based multiple instance learning (ABMIL) paper [16], where an MLP with attention weights is used to automatically learn the importance of each instance for predicting the final slide level binary class label. CLAM [9] extends this idea to multi-class classification by using multiple attention branches, one for each class. Zhang et al. [17] developed a two-step EL-MLA approach which first randomly samples patches in a WSI to create pseudo bags of patches. They then use an attention-based model to distill the most predictive patches from each pseudo bag and feed those into a second attention-based model to make the final prediction.

### WSI classification incorporating global structure information

None of the approaches presented in the previous section incorporate global structure information, with the exception of the IL-MLA methods that perform analysis on heatmaps generated from patch level predictions. In this section we focus on techniques that incorporate global structure information when classifying WSIs.
These methods all take the EL-MLA approach in the sense that they first use a pre-trained encoder to embed each patch as a feature vector and then train various kinds of models on top of these feature vectors.

One way to capture global structure information is to apply graph convolutional networks (GCNs) on embedded patches of WSIs. Due to the large number of patches, most approaches [3; 4] sample representative patches and then connect the patches using GCNs. Adnan et al. [3] use a fully connected graph, which essentially means the spatial proximity information is discarded. Guan et al. [4] use two levels of graphs. The first level connects patches with similar appearance and the second connects the local graphs using a global graph. This approach does not use spatial location information but instead uses appearance information to determine graph connectivity.

Another way of incorporating higher level structure information is to ingest embeddings from mixed resolution patches [5; 6] (e.g. 5X and 20X magnification patches). These approaches capture high level structure by downsampling large image patches into smaller patches (essentially averaging nearby pixels) and then learning embeddings from them using self-supervised learning. A potential drawback of this simple way of compressing high resolution patches is that important low-level features which are critical for making correct predictions may be lost during the averaging of pixel values. In contrast, more recent methods [2; 3; 4; 7; 8] reduce the dimensionality of patches in more intelligent ways, utilising transformer encoders trained using self-supervised learning.

Some methods [7; 8] use transformer self attention layers to capture global structure information. These methods represent patches as tokens with embedded position information. The tokens are then passed into a self attention layer. This allows the model to incorporate global spatial relationship information when making WSI predictions. However, these papers incorporate the position information using a single flat self attention layer. In contrast, the HIPT framework [2] takes a hierarchical approach where multiple different transformer models are used to incorporate increasingly higher level structure information. This then opens up the possibility of learning pre-trained features via self-supervision at higher levels of the hierarchy (embeddings representing \(4096\,\mathrm{px}\times 4096\,\mathrm{px}\) patches instead of \(256\,\mathrm{px}\times 256\,\mathrm{px}\) patches).

\begin{table} \begin{tabular}{c c c c} \hline \hline & Most level 2 pre-training & Medium level 2 pre-training (fine-tuned weights) & No level 2 pre-training (random initialisation) \\ \hline Most global structure (HIPT [2]) & 0.845 & 0.937 & 0.936 \\ Medium global structure (HIPTLE) & 0.872 & 0.954 & **0.959** \\ No global structure (Max-MIL [9]) & N/A & N/A & 0.940 \\ \hline \hline \end{tabular} \end{table} Table 1: Average AUC results across four different public datasets: CAMELYON16 metastases classification, TCGA-BRCA subtyping, NSCLC subtyping, and RCC subtyping. The highest AUC result is highlighted using bold font.

## 3 How much global structure is required?

Many recent successful WSI classification methods focus on finding the best way to incorporate global information [2; 3; 4; 7; 8]. Other recent works did not use any global information at all, treating the WSI as a bag of patches instead [17; 16; 13; 9].
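For concreteness, the attention pooling at the core of such bag-of-patches (EL-MLA) methods [16; 9] can be sketched as below; the dimensions and names are illustrative assumptions rather than the exact published implementations:

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    """ABMIL-style pooling: learn a weight per patch, sum the weighted bag."""
    def __init__(self, d_in=384, d_attn=128, n_classes=2):
        super().__init__()
        self.V = nn.Linear(d_in, d_attn)  # hidden projection for the scores
        self.w = nn.Linear(d_attn, 1)     # scalar attention score per patch
        self.head = nn.Linear(d_in, n_classes)

    def forward(self, h):                 # h: (n_patches, d_in) embeddings
        a = torch.softmax(self.w(torch.tanh(self.V(h))), dim=0)  # (n, 1)
        z = (a * h).sum(dim=0)            # attention-weighted bag embedding
        return self.head(z), a            # slide logits and patch weights
```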
Pathologists normally work by first using a zoomed-out view of the WSI for an overview of the tissue sample, and then zooming in to areas of interest (high-power fields) for more detailed analysis. The cell level information contained in high-power fields is highly relevant for both cancer sub-typing and survival prediction. We suspect there is a trade-off between focusing on the global information and focusing on the low level cell information. Methods like HIPT--which has a deep level 2 encoder for processing global structure--have a large degree of separation between the final classifier and the low level cell information at level 1 of the hierarchy. This makes the model less sensitive to cell level information when making predictions. In contrast, methods that treat the WSI as a collection of individual small patches have a much flatter model structure which allows the training signal (class label) to reach the cell level much more easily.

In this paper we study the importance of global structure information for making accurate predictions on 4 different WSI classification tasks (CAMELYON16 metastases prediction, breast cancer sub-typing, kidney cancer sub-typing, and lung cancer sub-typing). To do this we consider three model configurations that capture different levels of global structure information, effectively varying the "distance" (in terms of the number of layers) between level 1 feature vectors and the classification output. At one extreme, the final classifier sees more global information at the cost of being further away from the input. At the other extreme, the final classifier is closer to the input but does not see as much global information. Comparisons are made based on the same level 1 encoder layer--a ViT-s [18] model pre-trained using DINO [10] on the same set of WSIs.

Figure 2 shows the three different model designs that we tested for varying amounts of global structure. The first design (Figure 2a) incorporates the most global structure information. It corresponds to the original HIPT framework setup [2], as illustrated in Figure 1. In this model design three levels of transformers are used to arrive at the final prediction. The level 2 encoder combines the 384D vectors representing the level 1 patches using position information to give the model a complete structural view of large \(4096\,\mathrm{px}\times 4096\,\mathrm{px}\) patches. At level 2, the model should have enough context to be able to analyse tissue architecture information such as invasive fronts and neoplastic structures. Finally, the 192D level 2 feature vectors are passed into the final transformer classifier to arrive at a prediction for the WSI. Although this approach may allow the model to see more global structure in the WSI, the downside is that the many layers of high-level processing make it harder for the classifier to incorporate important cell level information.

To make the low level cell information more accessible to the final classification layer, we replaced the deep 6-layer transformer network with 6 heads with a shallower 2-layer transformer network with 3 heads. This shallower level 2 encoder allows the information from the level 1 feature vectors to more easily flow to the final classification transformer. Finally, we used the simple multi-instance learning approach that treats each level 1 feature vector as a separate instance, and each instance is fed into an MLP to separately predict the slide label.
For binary classification, the patch with the highest predicted probability for the positive class is selected to decide the predicted class for the entire slide, and also provides the gradient signals during training. To handle multi-class classification the MLP is modified to predict multiple classes. The patch with the highest single class probability score across all classes is used to make the slide-level label prediction. We call this the Max-MIL approach and we take the implementation from CLAM [9]. This approach does not incorporate any global structural information beyond the \(256\,\mathrm{px}\times 256\,\mathrm{px}\) patch, and therefore the model is not able to learn spatial patterns that are larger than this. Figure 3 shows an example \(256\,\mathrm{px}\times 256\,\mathrm{px}\) patch at 20X magnification. The type of cells, local spatial arrangement of cells, and tissue type information are all visible at this magnification level.

Figure 2: Models with three different levels of global structure used for WSI classification predictions. (a) Most global structure corresponds to the original HIPT framework, which has a deep transformer as its level 2 encoder. (b) Medium global structure replaces the level 2 encoder of HIPT with a shallower 2 layer transformer model. (c) No global structure (Max-MIL) uses a simple max operator to aggregate the individual contributions from each patch without incorporating any position or structure information. All models use the same pre-trained level 1 encoder (not shown) to produce level 1 feature vectors.

Figure 3: A \(256\,\mathrm{px}\times 256\,\mathrm{px}\) image patch at 20X magnification from the TCGA-BRCA dataset. The cells and their spatial arrangement are clearly visible within the tissue micro-environment.

## 4 Varying the amount of pre-training

The HIPT framework [2] advocates pre-training both the level 1 and level 2 encoders on large unlabelled datasets. Using heavily pre-trained level 2 encoders means the models start with good high level feature extractors that span large \(4096\,\mathrm{px}\times 4096\,\mathrm{px}\) patches. It also opens up the possibility of freezing those weights and restricting optimisation on the downstream task to the level 3 classifier only. Intuitively this should have two key benefits. Firstly, the model should be able to find useful global patterns during pre-training and reuse these for the downstream task, thus resulting in better performance compared to random initialisation. Secondly, freezing the level 2 encoder weights should reduce the likelihood of overfitting on the downstream task, since the model is more constrained.

Given that the idea of pre-training at the high \(4096\,\mathrm{px}\times 4096\,\mathrm{px}\) patch level is relatively new, we wanted to empirically evaluate how beneficial such pre-training is in practice. To do this, we tested three different training configurations which vary in the amount of level 2 pre-training. Note that fine-tuning the level 1 encoder is not feasible due to the memory constraints of current computing resources. The three training configurations are shown in Figure 4. The first training configuration has the most level 2 pre-training, and is the best-performing configuration from the HIPT framework. The pre-trained level 1 and level 2 features are frozen and we only train the level 3 classifier on the downstream task. This is similar to the linear probing method for transfer learning where only the final linear head is trained on the downstream task.
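In implementation terms, the three configurations differ only in how the level 2 parameters are initialised and whether they receive gradients. A minimal sketch follows; the `level2` attribute and the state-dict argument are hypothetical stand-ins for the real modules:

```python
def configure_level2(model, mode, pretrained_state=None):
    """Set up the level 2 encoder for one of the three training regimes."""
    if mode in ("most", "medium"):        # start from pre-trained weights
        model.level2.load_state_dict(pretrained_state)
    if mode == "most":                    # freeze; train level 3 only
        for p in model.level2.parameters():
            p.requires_grad = False
    # mode == "none": keep the random initialisation; all level 2
    # parameters are trained on the downstream task from scratch.
    # The level 1 encoder stays frozen in every configuration.
```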
In general this approach should give the model the least chance to overfit to the training set, since most of the hierarchical model (both levels 1 and 2) is frozen. The second training configuration, "medium level 2 pre-training", fine-tunes the pre-trained level 2 encoder parameters while training on the downstream task. This configuration makes use of the pre-training to ensure the model starts off with good initial weights before it is fine-tuned for the target task. The third training configuration does not use any level 2 pre-training, and instead randomly initialises level 2 weights before training on the downstream task. In theory this configuration has the most opportunity to overfit the training data, since the level 2 weights are adjusted solely based on the downstream task training data.

Figure 4: Three different training configurations with varying levels of pre-trained weight utilization in the level 2 encoder. All models use the same pre-trained level 1 encoder (not shown) to produce level 1 feature vectors.

## 5 Experimental setup

In this section we describe the datasets, data pre-processing, metrics, train/test splits, and training setup used to conduct our experiments.

### Datasets

We used the same datasets as [2], but expanded our evaluation to also include the CAMELYON16 dataset [19]. We therefore used the following public datasets: TCGA-BRCA; TCGA-LUAD; TCGA-LUSC; TCGA-KIRC; TCGA-KIRP; and CAMELYON16. Using the TCGA-BRCA dataset we performed Invasive Ductal Carcinoma (IDC) versus Invasive Lobular Carcinoma (ILC) subtyping with a total of 937 WSIs. We combined the TCGA-LUAD and TCGA-LUSC datasets to perform Lung Adenocarcinoma (LUAD) versus Lung Squamous Cell Carcinoma (LUSC) in Non-Small Cell Lung Carcinoma (NSCLC) subtyping with a total of 958 WSIs. We combined TCGA-KIRC and TCGA-KIRP to perform Clear Cell, Papillary, and Chromophobe Renal Cell Carcinoma (CCRCC vs. PRCC vs. CHRCC) subtyping with a total of 931 WSIs. Finally, we performed metastases binary classification using the CAMELYON16 dataset, which consisted of 270 training images and 129 validation images. Apart from Section 6.2 (where we varied the pre-training dataset used), all our experiments used the following 7 cancer datasets to pre-train the level 1 encoder: CPTAC-COAD, PAIP2019 [20], TCGA-BRCA, TCGA-LUAD, TCGA-LUSC, TCGA-KIRC and TCGA-KIRP.

### Data preprocessing

The WSIs are rescaled to a consistent base magnification level of 0.5 microns/pixel. Macenko normalisation [21] is applied to the WSIs to achieve a canonical colouring of haematoxylin and eosin stains. Using calculated tissue masks for separating tissue from the slide background, we extracted \(256\,\mathrm{px}\times 256\,\mathrm{px}\) patches that have at least 75% foreground pixels. These patches were used for training the level 1 encoder. The level 2 encoder was trained on \(4096\,\mathrm{px}\times 4096\,\mathrm{px}\) patches with at least 40% foreground pixels. These same level 2 patches were also used for the downstream tasks. We set this threshold lower for the CAMELYON16 dataset (to 20% foreground pixels), since the task is to find very small cancerous cells in the WSIs and setting a high foreground threshold could lose important information.

### Cross Validation and metrics

We followed the experimental protocol of [2], performing 10-fold cross validation on all experiments involving TCGA datasets. We used the same cross validation splits as those used in [2].
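(As an aside to the preprocessing above: the foreground filtering step can be sketched as follows, assuming a precomputed boolean tissue mask at the working resolution.)

```python
import numpy as np

def keep_patch(tissue_mask, x, y, size, min_foreground):
    """Keep a patch only if enough of it overlaps the tissue mask."""
    window = tissue_mask[y:y + size, x:x + size]
    return window.mean() >= min_foreground

# Thresholds from the text: level 1 patches (256 px) need >= 75% foreground,
# level 2 patches (4096 px) need >= 40% (lowered to 20% for CAMELYON16).
```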
For the CAMELYON16 dataset we used the train and validation split provided by the challenge as our train and validation splits. We used the AUC metric for all binary classification tasks, including cancer subtyping (TCGA) and classification of metastases (CAMELYON16). For RCC subtyping--which has three classes--we report macro-averaged AUC.

### Training setup

We pre-trained on the 7-cancer dataset containing 3565 WSIs, which consisted of 39,660,927 level 1 patches and 200,966 level 2 patches. We trained the level 1 encoder for 1600 epochs using the ViT-s [18] architecture and the AdamW [22] optimizer with a base learning rate of 0.0005 and a batch size of 32. The first 10 epochs were used to warm up to the base learning rate, followed by cosine schedule decay. Due to the massive number of level 1 patches, we defined an "epoch" to be smaller than a full pass through the dataset: in each such epoch the model sees a total of \(2^{16}=65536\) training examples randomly sampled from the entire dataset. The level 2 encoder was trained with similar configuration settings using the standard definition of an epoch (one full pass through the dataset). This model, with a ViT-xs architecture, was trained for 800 epochs using the level 1 feature vectors. For most fine-tuning experiments, we trained for 20 epochs with the Adam [23] optimizer, a batch size of 1 and a learning rate of 0.0001. As an exception, the metastases prediction task on the CAMELYON16 dataset was fine-tuned for 100 epochs.

## 6 Experimental Results

In this section we present the results from extensive experiments we have performed to test what really matters for determining the performance of WSI classification models. The factors we tested include the following: the amount of global information the model incorporates; the amount of level 2 pre-training used; the number of different cancer datasets used to pre-train the level 1 encoder; and the amount of training data used. Finally, we tested the performance of the models for survival prediction.

In our experiments we found that there was one model configuration which almost always gave the best results. We call this configuration _HIPT with local emphasis_ (HIPTLE). HIPTLE uses the medium global structure model (see Section 3 for details), with no level 2 pre-training, and uses a level 1 encoder trained using the 7 cancers listed at the end of Section 5.1. Many of the experiments below will include results for HIPTLE.

### Varying the amount of pre-training and global structure

In the introduction we showed the overall results across 4 datasets when both the influence of pre-training and the amount of model capacity dedicated to global structure were varied. Table 2 shows a more detailed breakdown of these results, considering each dataset individually. We used the definitions of most/medium/no level 2 pre-training from Section 4 and most/medium/no global structure from Section 3.

The results show that the HIPTLE configuration of using a medium amount of global structure and fine-tuning the level 2 encoder (either Med L2 PT or No L2 PT) gives the best performance for all datasets. Starting the level 2 encoder with random weights (No L2 PT) or with pre-trained weights (Med L2 PT) does not make much difference. This shows that level 2 encoder pre-training is not effective.
The "no global structure" configuration was a surprisingly strong performer, achieving results that were close to the best result for three of the WSI classification tasks (CAMELYON16 metastases, NSCLC subtyping, and RCC subtyping). This suggests that most of the information needed to successfully classify each WSI resides at the low \(256\,\mathrm{px}\times 256\,\mathrm{px}\) patch level (level 1 encoder), where cell type, cell density, and tissue type information can be determined. Put another way, pre-training the level 2 encoder is less important for accurate predictions than how well valuable information from the level 1 layer is transmitted to the final layer during supervised training (either by omitting global structure or by fine-tuning the level 2 encoder).

The results show that medium global structure models always outperform the most global structure models for any pre-training configuration. This shows the importance of not making the models too deep. As mentioned above, it seems the low level cell information is really useful for the final prediction, and so using fewer layers before the level 3 classification module allows the low level information to be passed to the final prediction layers more easily, resulting in better performance. It is also important to note that the method using medium global structure while fine-tuning the level 2 encoder outperforms DTFD-MIL [17] (for CAMELYON16) and HIPT [2] (for BRCA, NSCLC and RCC subtyping). This shows our models give very strong performance when compared with existing state-of-the-art methods.

### Varying data used to pre-train level 1 encoder

The previous experiments established that the level 1 encoder extracted the most valuable information for WSI classification and that the level 2 encoder was comparatively less important. This motivated us to explore pre-training the level 1 encoder using different datasets. For all the experiments we used the DINO [10] unsupervised training method with the ViT-s [18] vision transformer model (the same setup used by the HIPT paper [2]). We show the results for the no global structure (Max-MIL) and HIPTLE models (refer to Section 3).

Our results in Table 3 show that pre-training the Max-MIL level 1 encoder on the single cancer that was used for downstream WSI classification leads to the best accuracy. The 7-cancer dataset is a close second, but 33-cancer and ImageNet pre-training perform much worse. We think the reason for this is that pre-training on fewer cancers (1 or 7) results in the representation space being better utilised to embed just the features that are found in this smaller set of cancers, instead of the larger number of irrelevant features found in the 33-cancer dataset or ImageNet. This means smaller differences in features will be mapped farther apart in the representation space. In contrast, the 33-cancer pre-trained level 1 encoders need to reserve representation space for cancers that are not part of the downstream WSI classification task.

The results for the HIPTLE model are shown in Table 4. The results once again show that pre-training on fewer cancer types (1 or 7) works better than 33 cancers or ImageNet pre-training, for similar reasons to the results for the Max-MIL model. Pre-training on ImageNet gives consistently poor results. This can be explained by the fact that feature extractors trained on ImageNet reserve areas of the representation space for features pertaining to natural images, such as photos of dogs.
Whilst some low-level features learned during pre-training can be reused, other features simply never appear in WSIs, and hence that portion of the representation space is wasted. In contrast, pre-training on cancer datasets close to the downstream task makes the most efficient use of the representation space to encode features that are most useful for performing classification on WSIs.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{CAMELYON16 metastases} & \multicolumn{3}{c}{BRCA subtyping} \\ \cline{2-7} & Most L2 PT & Med L2 PT & No L2 PT & Most L2 PT & Med L2 PT & No L2 PT \\ \hline Most global structure & 0.564 & 0.951 & 0.931 & 0.800 \(\pm\) 0.072 & 0.884 \(\pm\) 0.068 & 0.878 \(\pm\) 0.053 \\ Med global structure & 0.666 & **0.964** & 0.960 & 0.882 \(\pm\) 0.039 & 0.900 \(\pm\) 0.036 & **0.916 \(\pm\) 0.038** \\ No global structure & - & - & 0.952 & - & - & 0.879 \(\pm\) 0.0729 \\ \hline DTFD-MIL [17], HIPT [2] & - & - & 0.945 & 0.874 \(\pm\) 0.060 & 0.827 \(\pm\) 0.069 & 0.823 \(\pm\) 0.071 \\ \hline \hline \end{tabular} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{NSCLC subtyping} & \multicolumn{3}{c}{RCC subtyping} \\ \cline{2-7} & Most L2 PT & Med L2 PT & No L2 PT & Most L2 PT & Med L2 PT & No L2 PT \\ \hline Most global structure & 0.874 \(\pm\) 0.038 & 0.951 \(\pm\) 0.020 & 0.953 \(\pm\) 0.019 & 0.976 \(\pm\) 0.013 & 0.989 \(\pm\) 0.009 & 0.985 \(\pm\) 0.010 \\ Med global structure & 0.950 \(\pm\) 0.020 & 0.960 \(\pm\) 0.015 & **0.965 \(\pm\) 0.013** & 0.991 \(\pm\) 0.006 & 0.993 \(\pm\) 0.005 & **0.993 \(\pm\) 0.004** \\ No global structure & - & - & 0.940 \(\pm\) 0.028 & - & - & 0.991 \(\pm\) 0.005 \\ \hline HIPT [2] & 0.952 \(\pm\) 0.021 & 0.820 \(\pm\) 0.047 & 0.786 \(\pm\) 0.096 & 0.980 \(\pm\) 0.013 & 0.956 \(\pm\) 0.013 & 0.956 \(\pm\) 0.016 \\ \hline \hline \end{tabular} \end{table} Table 2: AUC results from four different public datasets: CAMELYON16 metastases classification, TCGA-BRCA subtyping, NSCLC subtyping, and RCC subtyping. The comparison algorithm used for the CAMELYON16 dataset was DTFD-MIL [17] and HIPT [2] was used for all other datasets. The highest AUC result for each dataset is highlighted using bold font. Note that most/med/no L2 PT refers to most/medium/no level 2 pre-training.

### Frozen versus fine-tuning L2 encoder

The authors of the original HIPT paper found that using a frozen L2 encoder, pre-trained on 33 cancer types, resulted in the best accuracy. In contrast, our experimental results indicate that fine-tuning the L2 encoder (No L2 PT) works best when pre-trained on 7 cancer types (see Table 2). The key to understanding this discrepancy is the difference between the two pre-training datasets. As we have already established, 7-cancer data produces better L1 features than 33-cancer data due to better alignment with the downstream task. This is most evident in the Max-MIL results (Table 3). The reduction in the number of examples from the 33-cancer dataset to the 7-cancer dataset does not hinder performance, since there are still ample L1 patches in the 7-cancer dataset (39,660,927 patches) to learn good discriminative features. However, this no longer holds true when it comes to pre-training the L2 encoder. Since L2 patches are much larger (\(4096\,\mathrm{px}\times 4096\,\mathrm{px}\)), there are far fewer examples available for L2 pre-training and dataset size becomes a more critical issue, favouring the 33-cancer data.
For the 7-cancer dataset there are only 200,966 L2 patches (99.5% fewer than L1 patches). Considering the original HIPT setup, we can compare performance on the downstream classification task of BRCA subtyping. The results in Table 5 show that, when the L2 encoder is frozen, pre-training on 33-cancer data gives better results than pre-training on 7-cancer data.

\begin{table} \begin{tabular}{l c c c c c} \hline Pre-training dataset & CAMELYON16 & BRCA subtyping & NSCLC subtyping & RCC subtyping & Average \\ \hline 33 cancers & 0.652 & 0.855 \(\pm\) 0.073 & 0.976 \(\pm\) 0.009 & 0.924 \(\pm\) 0.027 & 0.852 \\ 7 cancers & **0.960** & **0.916 \(\pm\) 0.038** & **0.993 \(\pm\) 0.004** & **0.965 \(\pm\) 0.013** & **0.959** \\ single cancer & 0.960 & 0.911 \(\pm\) 0.049 & 0.987 \(\pm\) 0.008 & 0.961 \(\pm\) 0.029 & 0.955 \\ ImageNet & 0.813 & 0.896 \(\pm\) 0.054 & 0.984 \(\pm\) 0.006 & 0.945 \(\pm\) 0.019 & 0.910 \\ \hline \end{tabular} \end{table} Table 4: AUC results for the HIPTLE model when varying the dataset used for pre-training the level 1 encoder. The 7 cancers dataset is the default dataset used to train level 1 encoders for all the experiments (see Section 5.1). The 33 cancers pre-training results used the pre-trained level 1 encoder from HIPT [2], which was trained on 33 cancers. For single cancer results we pre-trained the level 1 encoder using the same dataset as that used for the downstream WSI classification task. The best result for each dataset is highlighted in bold font.

\begin{table} \begin{tabular}{l c c c c c} \hline Pre-training dataset & CAMELYON16 & BRCA subtyping & NSCLC subtyping & RCC subtyping & Average \\ \hline 33 cancers & 0.763 & 0.748 \(\pm\) 0.090 & 0.886 \(\pm\) 0.027 & 0.951 \(\pm\) 0.015 & 0.840 \\ 7 cancers & 0.952 & 0.879 \(\pm\) 0.0729 & 0.940 \(\pm\) 0.028 & **0.991 \(\pm\) 0.005** & 0.941 \\ single cancer & **0.963** & **0.888 \(\pm\) 0.067** & **0.947 \(\pm\) 0.026** & 0.981 \(\pm\) 0.013 & **0.945** \\ ImageNet & 0.800 & 0.850 \(\pm\) 0.083 & 0.907 \(\pm\) 0.026 & 0.943 \(\pm\) 0.021 & 0.875 \\ \hline \end{tabular} \end{table} Table 3: AUC results for the no global structure model (Max-MIL) when varying the dataset used for pre-training the level 1 encoder. The 7 cancers dataset is the default dataset used to train level 1 encoders for all the experiments (see Section 5.1). The 33 cancers pre-training results used the pre-trained level 1 encoder from HIPT [2], which was trained on 33 cancers. For single cancer results we pre-trained the level 1 encoder using the same dataset as that used for the downstream WSI classification task. The best result for each dataset is highlighted in bold font.

Figure 5: Loss curves from training HIPT on the BRCA subtyping supervised classification task. Regardless of whether pre-training used the 7-cancer or 33-cancer dataset, the HIPT model's training loss improves much more quickly with frozen L2 encoder weights.

\begin{table} \begin{tabular}{c c c} \hline Pre-training dataset & L2 encoder & Test AUC \\ \hline 7 cancers & Fine-tuned & 0.892 \(\pm\) 0.039 \\ 7 cancers & Frozen & 0.819 \(\pm\) 0.091 \\ 33 cancers & Fine-tuned & 0.866 \(\pm\) 0.051 \\ 33 cancers & Frozen & **0.902 \(\pm\) 0.058** \\ \hline \end{tabular} \end{table} Table 5: Test results for the BRCA subtyping task after HIPT models were trained for 100 epochs. Here we vary the pre-training dataset and whether or not the L2 encoder is frozen during supervised learning.
This is because, despite the L1 features from the 7-cancer data having more discriminative power for the downstream task, the relatively small amount of L2 training data leads to overall worse performance. On the other hand, when the L2 encoder weights are fine-tuned during supervised training we observe the opposite result. This is because, once L2 is fine-tuned, the superior quality of the 7-cancer L1 features (recall that the L1 encoder is always frozen) leads to better overall performance.

It takes longer to fine-tune the large L2 encoder of HIPT than it does to keep it frozen during the supervised learning phase (see Figure 5 for training loss curves), due to the much larger number of parameters that need to be optimized. After an extended training time of 100 epochs, the best evaluation results are still obtained from the 33-cancer, frozen L2 configuration. However, when the L2 encoder is much shallower (as in HIPTLE, Table 4), fine-tuning is much more efficient and the strong training signal from supervised training places greater emphasis on having stronger L1 features than on stronger L2 pre-training. As a consequence, we find that fine-tuning from 7-cancer data is the best configuration for HIPTLE and hence this is the approach that we will compare with existing methods.

### Comparison against existing algorithms for WSI classification

In this experiment we compare the no global structure (Max-MIL) and HIPTLE configurations against an array of existing WSI classification algorithms. The results show that HIPTLE significantly outperforms all other algorithms for both 25% and 100% training data. This can be attributed to finding the sweet spot in terms of both the degree of pre-training and the amount of global structure. As discussed earlier, a deep model--such as that used in HIPT--can result in the important cell level information being lost before reaching the level 3 classification module. The results for the no global structure (Max-MIL) method were similar to HIPT for most of the test configurations, despite the fact that Max-MIL does not see any global context information. This can be largely attributed to the fact that the Max-MIL model used the level 1 encoder trained on the 7 cancers instead of the 33 cancers used to train HIPT. As we showed in Section 6.2, pre-training on the 7 cancers produces better results since it makes better use of the representation space.

### Survival prediction results

In this experiment we compare our HIPTLE model against other existing models, including the original HIPT approach [2]. We report the results for the following datasets: IDC cancer subtype from TCGA-BRCA; CCRCC cancer subtype from TCGA-KIRC; PRCC cancer subtype from TCGA-KIRP; and LUAD cancer subtype from TCGA-LUAD. We perform the experiments using 5-fold cross validation with the same splits used in the HIPT paper [2]. The results again show that HIPTLE outperforms all existing models including HIPT. It is encouraging to see that the superior performance of HIPTLE carries over from WSI classification to survival prediction.

### Varying pre-training and global structure with small training dataset

In this section we vary the amount of level 2 pre-training and the amount of global structure when the training dataset is reduced to just 25% of the full TCGA-BRCA dataset. The classification task for this set of experiments is BRCA subtyping.
The results once again show that the HIPTLE configuration (medium global structure and no level 2 pre-training) performs the best, and this continues to hold when the training dataset is small. This means that when there is limited training data available for the downstream task, heavier pre-training of the level 2 encoder still does not help. The medium global structure consistently performs better than most global structure and no global structure, which is consistent with earlier results (see Section 6.1).

## 7 Conclusion

The current trend for WSI classification is to propose complex methods that incorporate a more global view of the entire slide. In this paper we showed that simply focusing on the most predictive local patch can instead give results very similar to those of complex algorithms incorporating global structure. Rather than increasing the amount of global structure, we found that the key to high performance is using the appropriate level 1 encoder feature vectors. This suggests that the models actually get most of their predictive power from the local patch level, which contains cell and local tissue micro-environment level information.

A very important finding of this paper is that the data used for pre-training the level 1 encoder matters a lot. Pre-training using a large dataset spanning 33 cancers actually works considerably worse than pre-training using a more focused set of 7 cancers (including the target cancer). This can be explained by the more efficient use of the representation space when only 7 cancers are used for pre-training. In fact, pre-training just on the target cancer gives very similar performance to using 7 cancers.

All the experiments show that the HIPTLE model configuration consistently outperforms all other methods in all situations tested (including both WSI classification and survival prediction). HIPTLE uses a medium amount of global structure, no pre-training for the level 2 encoder, and a level 1 encoder trained on 7 cancers. The robustness of these results shows that HIPTLE should be the first-choice model used in most WSI classification and survival prediction situations.

As future work we intend to explore predicting the results of genetic tests, such as BRCA gene status for breast cancer and microsatellite instability (MSI) status for colorectal cancer (CRC). We would also like to perform a more in-depth study of survival prediction, involving more datasets and the various model configurations. Finally, given how important the level 1 encoder is to final performance, we would like to explore novel methods for unsupervised pre-training of the level 1 encoder.

## 8 Acknowledgements

The results in this paper are in whole or part based upon data generated by the TCGA Research Network: [https://www.cancer.gov/tcga](https://www.cancer.gov/tcga).
2305.01800
Attempt to Salvage Multi-million Dollars of Ill-conceived HPC System Investment by Creating Academic Cloud Computing Infrastructure. A Tale of Errors and Belated Learning
In 2015 the Interdisciplinary Centre for Mathematical and Computational Modelling (ICM), University of Warsaw built a modern datacenter and installed three substantial HPC systems as part of a 168 M PLN (36 M Euro) OCEAN project. Some of the systems were ill-conceived, badly architected and for the five years of their life span have brought minimal ROI. This paper reports on a two-year intensive effort to reengineer two of these HPC systems into a hybrid, multi-cloud solution called A-CHOICeM (Akademicka CHmura Obliczeniowa ICM). The intention was to expand the user base of ICM typical HPC system from around 200 to 500 to about 100,000 potential general academic users from all institutes of higher learning in the Warsaw area. The main characteristics of this solution are integration of on-premises ICM Cloud with several public cloud providers, building solution tailored to particular groups of academic users, containerization, integration of special computational paradigms like AI and Quantum Computing. Full process of designing the solution, competitive dialogue with suppliers, and full final specifications for the solution are presented. Several roadblocks, pitfalls and difficulties encountered along the way, including the conservative attitude of "the old school" HPC admins, University bureaucracy, national funding policies and others are presented.
Marek Michalewicz
2023-05-02T22:02:45Z
http://arxiv.org/abs/2305.01800v1
# Attempt to Salvage Multi-million Dollars of Ill-conceived HPC System Investment by Creating Academic Cloud Computing Infrastructure. A Tale of Errors and Belated Learning

###### Abstract

In 2015 the Interdisciplinary Centre for Mathematical and Computational Modelling (ICM), University of Warsaw built a modern datacenter and installed three substantial HPC systems as part of a 168 M PLN (36 M Euro) OCEAN project. Some of the systems were ill-conceived, badly architected, and for the five years of their life span have brought minimal ROI. This paper reports on a two-year intensive effort to reengineer two of these HPC systems into a hybrid, multi-cloud solution called A-CHOICeM (Akademicka CHmura Obliczeniowa ICM). The intention was to expand the user base of ICM's typical HPC system from around 200-500 to about 100,000 potential general academic users from all institutes of higher learning in the Warsaw area. The main characteristics of this solution are integration of the on-premises ICM Cloud with several public cloud providers, building solutions tailored to particular groups of academic users, containerization, and integration of special computational paradigms like AI and Quantum Computing. The full process of designing the solution, the competitive dialogue with suppliers, and the _full final specifications for the solution_ are presented. Several roadblocks, pitfalls and difficulties encountered along the way - including the conservative attitude of "the old school" HPC admins, University bureaucracy, national funding policies and others - are presented.

Academic Cloud Services, HPC Cloud, On-Premises Private Academic Cloud, Private-Public Cloud.

## I Introduction

At the end of 2015 the Interdisciplinary Centre for Mathematical and Computational Modelling (ICM), University of Warsaw completed a modern datacenter with 10,000 sq. m. of technical space (Fig. 1) and installed three substantial HPC systems as part of a 168 M PLN (36 M Euro) OCEAN project. The systems were: the Cray XC40 supercomputer (Okeanos), and a Huawei cluster (Enigma) specifically targeted at storage and analysis of Big Data using Apache Spark and Hadoop. Enigma consisted of about 350 dual-socket servers with 12-core CPUs, each with 24 TB of local disk space, giving 43 TB of total RAM and 8 PB of disk storage (Fig. 2). The other cluster, "Topola", also from Huawei, was meant to serve HPC loads. It consisted of 240 servers with 6,720 cores, 23 TB of RAM and a nominal computational performance of almost 300 TFLOPs.

Although both the Cray Okeanos and Huawei Topola computers were utilized to their full capacity for the first three years of their operation (2016-2018), the Enigma Huawei cluster was never utilized at more than about 20% of its computational and about 12% of its storage capacity. This situation of wasted investment and substantial unused compute and storage resources was hard to tolerate and accept for the author, who became the ICM Director in 2018. However, any remedy required finding our own sources of project funding, which were notoriously scarce. An opportunity arose at the beginning of 2020, when ICM started generating its own income and there were sufficient funds to start the upgrade project to convert Enigma and some other computer systems, together with substantial storage resources, into a private cloud service.

Fig. 1: New ICM datacenter in Warsaw

Fig. 2: Enigma BigData system from Huawei (left) and Okeanos Cray XC40 supercomputer at ICM datacenter.

## II. Cloud Services in Academia and Research

Cloud computing offers many substantial advantages over on-premises computing resources [1-3].
In the High Performance Computing world, distributed resource sharing was initially realized through the Grid Computing paradigm. An especially notable example was the TeraGrid initiative in the US [4,5], which was subsequently transformed into the XSEDE initiative and infrastructure [6,7]. These US National Science Foundation initiatives are currently continued under the Advanced Cyberinfrastructure Coordination Ecosystem (ACCESS) program [8]. The Grid Computing paradigm, which in a sense was a precursor of cloud computing, was widespread in the HPC sector and academic computing centers; in Poland, for example, it was realized under the PL-Grid program [9]. Nimrod/G was an example of a solution enabling resource management and sharing in a global-scale computational grid [10]. The convergence of Open Data, Open Science and the desirability of sharing computing resources and computing infrastructure resulted in several national, regional and international initiatives combining Open Data and Cloud Computing paradigms [11-18].

HPC Cloud Computing concepts, technology and solutions were offered by the notable UberCloud initiative [19,20] and by commercial offerings in the form of portals, UX solutions, and resource integration and provisioning solutions. Rescale [21] was the first such commercial solution, followed later by XTREME-D [22,23] and Ronin [24], among others. An important example of an experimental large-scale Cloud Computing platform is Chameleon [25,26], developed at Argonne National Laboratory.

## III Problems of Purpose and Design

The very low utilization of Enigma, which never exceeded 20% of compute resources and 12% of total disk storage, was the result of several serious design and purchasing errors: i) the widespread adoption of MapReduce/Hadoop anticipated at the time of planning the infrastructure requirements in 2014 never materialized; ii) the architecture of the large, 10-rack, 5-module Big Data Enigma system had several critical faults that were almost impossible to eradicate without a serious reengineering of the architecture: first, the system was neither multi-user nor multi-tenant, since data within one module (two racks, or about 70 servers) could be overwritten by another user on the same module; second, the external data link to the world was only 1 Gbps, meaning that in order to fill the 8 PB of local disk storage available for Big Data processing on Enigma, it would take over two years of continuous data ingest. The InfiniBand interconnect had a rather awkward topology and was not able to support fast movement of data from outside or within the cluster, and the Ethernet network was designed for management only and could not carry any data traffic. It must be noted that one module of Enigma was used for the OpenAIRE project, where ICM played a role in providing hardware resources [1].

## IV Problems of Internal Resistance and Systems Roadblocks

Shortly after joining ICM as a deputy director in late 2016, the author first became aware of the problem of idle HPC resources and the incredible wastefulness of substantial publicly funded infrastructure. When he began discussing this issue with his colleagues, he met with indifference and discomfiture. The technical staff disowned the problem; the decision makers appeared neither responsible nor concerned.
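For reference, the two-year ingest estimate quoted in Section III follows from simple arithmetic (a sketch assuming the 1 Gbps link runs continuously at full line rate):

```python
# 8 PB through a 1 Gbps pipe, assuming sustained full line rate.
capacity_bits = 8e15 * 8            # 8 PB expressed in bits
link_bps = 1e9                      # 1 Gbps external link
seconds = capacity_bits / link_bps  # 6.4e7 s of continuous transfer
print(seconds / 86400 / 365.25)     # ~2.03 years, i.e. "over two years"
```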
In 2018, when the author was appointed the Acting Director of ICM, it became possible to launch a project to re-engineer Enigma's purpose and configuration and to integrate some 20+ PB of siloed storage into a less fragmented storage system. However, it was easier said than done. Many obstacles and problems had to be tackled, some of which were:

1. The conservative attitude of "the old school" HPC admins among ICM technical admin staff. We[3] may list some reasons for this reluctance to adopt changes:

Footnote 3: The question the author asked ChatGPT was: "Why do 'the old school' HPC admins resist cloud computing?"

[START]
- Cultural Resistance: Many HPC admins are comfortable with the way things have been done traditionally and may resist change, especially if they feel that their expertise is being devalued or replaced.
- Control: HPC administrators may be accustomed to having complete control over their hardware, software, and data, and may be hesitant to entrust these critical components to a third-party cloud provider.
- Performance: HPC applications require high levels of performance and low latencies, which may be difficult to achieve in a cloud environment due to the virtualization and network overhead. HPC administrators may be concerned that moving to the cloud will negatively impact performance.
- Cost: Cloud providers typically charge for compute and bandwidth usage, which can quickly add up for HPC workloads that generate large amounts of data.
- Complexity: Cloud computing introduces additional layers of complexity to HPC workflows, including networking, security, and data management. HPC administrators may be concerned about the additional time and effort required to manage these new components.
[END][4]

Footnote 4: The text between [START] and [END] was written by ChatGPT. I would have provided the same arguments myself.

Some ICM admin and technical staff shared the above objections, and we engaged in a continuing dialogue to reach consensus and to get all the experienced, but sometimes obstinate, members of the technical team on board. Another objection brought by critics of the idea was that this project could be realized entirely within our own organization. The author vehemently objected to this idea, arguing that:

1. commercial cloud providers are years ahead with their solutions and technology, and entities from this environment were the most suitable providers of the solution we were seeking;
2. there was little research and innovation value for ICM in this kind of development project;
3. ICM's technical staff was already intensely involved in their routine duties and a multitude of research projects;
4. ICM should not serve as a software development house, but as a research institution, and all effort of the technical staff should be directed to the support of research, not software development projects; and
5. our staff could engage in back-room OpenStack, virtualization and similar development, but was inexperienced in accounting, user expectations (UX), and the immediate needs and front-end management of a very large group of expected new users. HPC practitioners are aware of the occasional admin-user polarization, which might not translate well into "user-friendly" products.

2. University bureaucracy. Bureaucrats dislike changes and exceptions. The comfortable steady state with no perturbations and routine operations are the norm; when confronted with a unique and special situation, they become obstructionist and try to slow or derail the process [27] - and that was the case here.
It took six months, from June 2020 to the end of December 2020, of tedious and occasionally trivial probing by the University of Warsaw Public Procurement Office Manager to allow ICM to open a "Competitive Dialogue". The idea that the procurement did not relate to a "ready-made, out-of-the-box" product but to an original technological solution which had never been implemented in Poland; that the solution provider had to build a consortium of partners to integrate multiple technology components into one solution; and the fact that only very experienced technology players were capable of delivering such a solution, was not easy for the University Procurement Office to grasp and accept.

3. Funding for the project. National funding policies for HPC changed substantially after Poland joined EuroHPC in 2018. Most funding is channeled through EuroHPC schemes, and the coordination of these activities was delegated by the Ministry of Science and Higher Education[5] to Cyfronet in Krakow [28]. ICM was also ineligible for the so-called POIR national funding scheme [29], since the Warsaw and Mazovian region was deemed developed and some 93% of POIR funding was channeled to under-developed regions. The University of Warsaw was also not open to providing extra funds to ICM, since ICM's OCEAN project, completed in 2015, had accrued a cash shortage on the order of 18 M PLN, which was treated by the University as an internal debt of ICM. Curiously, the incredibly modern and valuable infrastructure built by ICM as part of the OCEAN project, including the datacenter and all HPC infrastructure, was absorbed as University assets which were never offset against the ICM cash debt. Starting in 2018 ICM launched several projects which over the next three years generated income sufficiently large to initiate and fund the A-CHOICeM project.

Footnote 5: Currently reorganized and named the Ministry of Education and Science.

4. Technical incompatibility of components. The very large local disk storage of 8 PB of the Enigma Big Data cluster was difficult to integrate with the 12 PB of HPC Lustre storage of the Okeanos Cray XC supercomputer. It must be stressed that the Tetyda DDN HPC storage system, which was part of the Okeanos Cray XC40 system, was over-specified for ICM users' needs. Tetyda has 12 PB of raw storage capacity, of which no more than 2 PB was ever used until 2021. Within the A-CHOICeM project we were looking into ways to utilize this hugely wasted storage capacity for non-HPC academic and research needs. We were also planning to include the 240-server Topola in the in-house A-CHOICeM cloud infrastructure, since most of the workload on this very highly utilized computer consisted of smaller jobs not requiring more than several servers, and a high percentage of them were requests from the PL-Grid system, which was ripe for a cloud-style update anyway.

## V Decision to Convert Big Data and HPC Systems into On-premises Cloud Computing System

The A-CHOICeM project was meant to integrate all underutilized ICM compute and storage resources and to make them accessible to a new and different type of user. The plan was to provide in-house cloud resources free to all academic users. But a very important feature of the plan was not to wall off ICM's private cloud resources from public cloud resources.
We insisted on the creation of a "master portal" which would be accessible to some 100,000 academic users, initially from the Warsaw region with its large number of universities and institutes of higher learning. The portal would lead to in-house ICM resources of some 5,000-10,000 virtual machines and \(\sim\)20 PB of storage resources, but also allow free access to public clouds from all major providers, i.e. AWS, Azure, IBM, GreenLake, etc. Unfortunately, some worthy potential suppliers of a solution were too closely bound to a single public cloud provider and were not able to provide a solution open to all public cloud providers.

## VI Competitive Dialogue Process

ICM encountered resistance from the University Public Procurement Office. Essentially the bureaucrats there were not prepared to accept a project of this complexity and novelty. They insisted on a simpler but inadequate tender process. A formal justification of the request for competitive dialogue was as follows:

"_The Contracting Authority confirms that the competitive dialogue procedure pursuant to 60a(1) will be carried out in connection with the fulfilment of the conditions pursuant to Article 55(1)(6)-(9) of PPL:_

_6) the solutions available on the market cannot satisfy, without adaptation, the needs of the contracting authority;_

_The order includes the creation of an unprecedented, extremely complex and comprehensive IT solution involving the integration of components from the following areas:_

1. _security of access to vast information resources,_
2. _federalization of access for a huge number of users from different universities or scientific organizations from all over Poland,_
3. _combining HPC solutions with simultaneous provision of these resources in the Cloud Computing modality,_
4. _the combination of Cloud storage and Object storage solutions, with Cloud Computing,_
5. _with the integration of batch queuing systems (in HPC) with interactive access, containerization and virtualization._

_Although each of the above-mentioned components and technologies exists separately on the market - we are not aware of any case on the Polish or European market where all these requirements were met in one installation, simultaneously._

_The entire solution is a so-called 'bespoke' one and must be closely aligned to the ICM's diverse computer and supercomputing hardware, consisting of around 600 servers, disk arrays of various technologies, different brands with capacities of around 25PB and various file systems - from object-oriented to the high-performance Lustre system. Network solutions also need to be upgraded, both inside the servers and in the LAN/WAN networks, including the supply of network equipment and adapting everything to the very different needs of users. It should also be noted that the solution is to be based as much as possible on existing hardware, with minimal use of new hardware components. We expect that the interconnects in the two clusters will need to be upgraded. An end-to-end solution of this complexity, addressing the very specific needs of a high performance computing (HPC) centre like ICM, is not available on the market._

_The project will involve not only the purchase of components, but above all the integration and reprogramming, the creation of specialised services._

_For example, storage vendors (NetApp, Panasas, IBM, HPE, DDN, Western Digital, etc. etc.)
have their own hardware-specific solutions, while ICM has storage hardware from all of these vendors - but this hardware is divided by thick "walls" of storage silos. There are few companies in the world that produce data management software that is so-called "hardware agnostic", i.e. independent of the hardware vendor - which is exactly the kind of integrated solution we want in the storage area._ _Some examples of innovation of the desired solution are:_ * _There is no solution in Poland for making supercomputing resources available as interactive resources. The only possible access is through queue systems such as SLURM, LSF Platform, PBS Pro, or others._ * _cloud storage combining a High Performance File system (e.g. Lustre, Spectrum) with an object-based file system and SINGLE cloud computing_ * _combination of HPC resources with the ability to create and run jobs using Dockers/Singularity/other containerization systems_ _No such solution exists in Poland._ _The project will consist not only in reconstruction, but in adding many different elements, software, merging, so called upgrades - i.e. modernisation - e.g. Topola and Enigma clusters must have an upgraded interconnect (internal network). Both these clusters are manufactured by Huawei, but the interconnects are a product of Mellanox (now Nvidia) - so the solutions must be provided by both Mellanox and Huawei._ _The innovation of the solution will not be limited to the integration of individual elements._ _We will require the solution to provide access to simulators of quantum computers. That is, the solution requires the possession of such simulators or cooperation with a company that can provide such simulators. It should be emphasized that there are no such providers in Poland and only a few in Europe._ _As part of the execution of the contract, the contractor will be required to supply additional hardware components. E.g. interconnect components for the Topola and Enigma clusters will certainly be required (optical fibres, switches, network cards, network software)._ _According to our collective knowledge of the high-end market of High Performance Computing, Storage, Networking (HPC, Networking, Storage, Cloud Solutions, System Management, File Systems, Storage Systems, Containerization, Queuing Systems, etc. etc.) there IS NO SUPPLIER that can deliver all the components specified in our order. We expect that even the strongest players in this market will be forced to form their own consortia consisting of several sub-suppliers and sub-contractors._ _7) the works, supplies or services include design or innovative solutions;_ _Services include design and innovative solutions, combining functionalities for which no integrated solution currently exists. Due to the innovative nature of the solution, its careful pre-design in close co-operation with contractors based on the components offered by them and taking into account the ICM's hardware is necessary. We emphasise that in the case of the current ICM project the whole solution is characterised by great complexity and a very high scale of difficulty, as the solution has to include a lot of new elements and at the same time it has to fit into the existing infrastructure._ _8. 
_8) a contract may not be awarded without prior negotiations because of special circumstances with regard to its nature, its complexity or its legal or financial conditions, or because of the risks attaching to the works, supplies or services;_

_The order requires prior negotiations with the contractors for technical reasons: it is necessary to ensure full compatibility of the delivered modules with each other and with the ICM hardware at the level of hardware, software and functionalities, ensuring full integration with external systems. The ICM also reserves the right to compose a complete, integrated solution according to its own requirements and needs, defined during the dialogue, and not to accept the ready-made, out-of-the-box solutions usually offered by suppliers._

_9) where the contracting authority cannot describe the subject-matter of the contract with sufficient precision by reference to a particular standard, a European Technical Assessment as referred to in Article 30(1)(2)(c), a Common Technical Specification as referred to in Article 30(1)(2)(d), or a technical reference._

_It is clear that elements of the contract will not be ready-made solutions. Any person reading the eligibility criteria and other documents should be someone who is defined as "a person skilled in the art". Specialists with extensive knowledge of the topics included in this project are fully capable of assessing the project and can guarantee that the supplier is capable of undertaking such a project._

_It is not possible to describe the subject matter of the contract in accordance with Article 30(1)(2)(d), because the order involves the integration of dozens of elements, and each of them is non-standard._

_Due to the technical and conceptual complexity of the contract, as well as the lack of appropriate standards in many areas, it is currently not possible to describe the subject matter of the contract taking into account the technical and legal provisions referred to in Article 30(1)(2)(d) of the PPL, i.e. common technical specifications, understood as technical specifications in the field of ICT products defined in accordance with Art. 13 and Art. 14 of Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European Standardization, amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council, and repealing Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council (Official Journal of the European Union L 316 of 14.11.2012, p. 12)._"

The selection criteria for Competitive Dialogue partners were created; they are presented in Appendix 1. The call for Competitive Dialogue was announced in January 2021. Three candidates were selected at the end of January, and from early February till July we scheduled weekly one- to two-hour sessions with all three potential suppliers. These sessions were followed by internal discussions, which led to new sets of questions and problems communicated to the Dialogue partners later in the week. This process lasted till early July 2021.
## VII Educating Community, Academics and Students and University Officials

ICM was the organiser of four editions of the "Supercomputing Frontiers Europe" series of conferences in the years 2018-2021 [30]. The 2021 edition contained a special session on Cloud Computing and HPC, with invited talks from UberCloud, Rescale, Ronin and Argonne National Labs [31-35]. ICM also organised a series of twenty-one "Virtual ICM Seminars in Computer and Computational Science". As part of the series there was a special lecture delivered by Valerie E. Polichar, Sr. Director & SATO, Academic Technology Services IT Services, University of California San Diego, entitled "Creating a Technology Vision: Planning for the Next Generation of Research Support" [36], which summarised over ten years of experience at UCSD after the creation and implementation of the "UCSD Blueprint for the Digital University" [37,38]. ICM attempted to emulate this process, to deliver goals similar to those achieved at UCSD, and to expand computing resources far beyond traditional HPC users.

## VIII Description of the Subject Matter of the Contract

The document "DESCRIPTION OF THE SUBJECT MATTER OF THE CONTRACT" is the crux of the present communication and is included in full in Appendix 2. The A-CHOICeM project has unfortunately never been realised, and I believe it is in the public interest of academic centres in Poland and outside to have access to the results of this time-consuming and serious process and its outcome, even if these are only accumulated wisdom and experience, and knowledge of the then state-of-the-art in HPC Cloud architecture.

## IX Conclusion and Lessons Learned

The A-CHOICeM project has never been completed. Our ICM team completed the technical specification document, which is the essential part of the current report, but after the author's resignation from employment at the University of Warsaw at the end of 2021 the project was discontinued. During two years of intense engagement in this project and beyond, we have seen many of our ideas realized in various projects and solutions - we were following the right path. It is hoped that the presented experiences, specifications and documents will be of some help in other academic and HPC centres, and might prove useful even for public cloud providers of academic cloud services and developers of cloud technologies.

## Appendix 1 Selection Criteria for Competitive Dialogue

## Appendix 2 Description of the Subject Matter of the Contract

The Order concerns the implementation of cloud computing using the existing ICM infrastructure [hereinafter also referred to as the "Project"] and includes:

1. Implementation of necessary software solutions
2. Supplementation and upgrade of existing ICM equipment
3. Creation of documentation
4. Delivery of training
5. Lifetime support

_DESCRIPTION OF THE SUBJECT MATTER OF THE CONTRACT_

_Part 1: Purpose of the project_

_Part 2: Requirements_

1. User access and management portal for system administrators
2. Security
3. Virtualization, containerization and multi-cloud and hybrid cloud functionality
4. Application sub-portals (areas)
5. Architecture and Cloud storage
6. Network architecture
7. Cloud architecture and services
8. Integration
9. Documentation
10. Training
11. Support & Warranty
12. Implementation and payment schedule
13. Manner of acceptance of the order
_(Attachment 1 - The Contracting Authority's Available Infrastructure)_

## Part 1 Purpose of the Project

The intention of this project is to create an easily accessible and easy-to-use Academic Cloud Computing ICM (A-CHOICeM) service for the academic and scientific community in Warsaw and Poland. The order includes the creation of an unprecedented, extremely complex and comprehensive IT solution involving the integration of components from the following areas:

1. easy, convenient and secure access to vast IT resources,
2. federalization of access for a huge number of users from different universities and scientific organizations from all over Poland,
3. HPC solutions with simultaneous provision of these resources in the Cloud Computing modality (including virtual machines and containerization),
4. combination of Cloud storage and Object storage solutions with Cloud Computing, along with integration of batch queuing systems (in HPC) with interactive access, containerization and virtualization,
5. storage - integrating data storage resources of various types while making the A-CHOICeM cloud available to users.

Expected features of the A-CHOICeM cloud:

1. Hybridity - the cloud provides equally easy capabilities to create virtual computing resources and to access large-scale High Performance Computing resources, by integrating the solution with the SLURM queueing system.
2. Multi-Cloud - the A-CHOICeM user portal provides access to public clouds such as Azure, AWS, Google Cloud, IBM, GreenLake, etc.
3. Cloud storage - available through a portal, allowing easy creation of storage resources and self-service management of this data, according to specific usage policies.
4. Containerization - the ability to use and create popular containers such as Docker and Singularity.
5. Universal and easy access for research workers, teachers and students of Polish universities and research institutes.
6. Access security and security of stored data and user accounts.
7. Well-designed and ready-to-use sub-portals for popular research areas, e.g. quantum chemistry, material sciences, bioinformatics, software development and mathematical modeling (e.g. R, Python, Julia, Mathematica), structural engineering, and computational fluid dynamics. Sub-portals must be equipped with, or ready to integrate, the most popular codes, tools, and frameworks in these fields.
8. The A-CHOICeM cloud provides access to state-of-the-art portals/services and tools (both publicly open and, where possible, proprietary) in the fields of Artificial Intelligence (AI) and Quantum Computing.
9. The A-CHOICeM cloud is built using components based on open licenses and allows further expansion and scalability to additional local and remote hardware resources and other clouds.

A-CHOICeM is intended to be a solution that will provide access to a much more diverse set of ICM computing resources and allow ICM to open up the entire spectrum of scientific and educational computing and data storage needs of the academic and scientific community to a significantly larger set of users. This project aims to create a solution that enables easy access to, and self-management of, diverse infrastructure - from a single virtual machine based on single computing cores, to the computing power of the supercomputers available at ICM, to the diverse resources available in multiple public clouds.
## Part 2: Requirements

The order for the design and implementation of the Academic Computing Cloud ICM - "A-CHOICeM" - includes the delivery of a technological solution consisting of components that meet the following requirements.

### User access and management portal for system administrators

The primary requirement for the A-CHOICeM access portal must be ease of use and convenience for both users and system administrators. Federated access must be provided for people from the academic and scientific community, from Warsaw (e.g. UW, PW, WUM, WAT, SGH, SGGW, etc.) as well as from all over Poland.

_Users and usage_

Cloud users will be:

1. authorised researchers or students using the cloud's computing resources and the software provided therein [**end users**] (with different levels of authorization), and
2. ICM administrators with different levels of authority who manage these resources [**administrators**].

Additionally, depending on the permission level, an administrator will be able to define a catalogue of resources and services available to a given end user or user group. Permissions can be combined into permission groups and assigned to users or user groups. Depending on the role assigned by the administrator, an end user may have a different level of authority to share resources or assign roles to other users. For example, a supervisor of a group of users from a university, department, or institute representing that unit will have the authority to share the computing and data storage resources assigned to that unit. In addition, projects, resources and user groups can be defined in the system for a specific period of time. For example, a supervisor nominated by the administrator for a group of students taking part in specific university classes will be able to grant them access to virtual machines and containerized computing environments including specific software, according to appropriately defined access policies.

### Portal functionalities and how to access them

End-user access to cloud resources will be realized through:

* **web interface** (management of the user's cloud resources, access, and monitoring of their usage),
* **API** (an illustrative usage sketch follows the feature list below),
* direct access to user virtual machines, allowing authorized users to independently configure virtual machines within the resources allocated by the administrator and monitor the use of those resources,
* access to disk resources.

The minimum required user interface features include:

1. user authentication,
2. user profile management,
3. information on the resources allocated and the extent to which they are used,
4. communication with the staff (reporting problems or questions),
5. creation/activation/deletion of virtual machines with user-defined parameters,
6. creating/deleting virtual networks,
7. assigning/releasing virtual machine IP addresses,
8. assigning/releasing volumes for virtual machines,
9. S3 resource creation,
10. creating/activating/deleting containers,
11. integrated graphical access to disk resources from various file systems (e.g. S3, Lustre),
12. graphical access to HPC systems,
13. access to integrated public cloud resources through SSO,
14. access to a code repository (recipes for building containers),
15. a container image repository (general, and private for user groups),
16. access to a service that builds and deposits containers based on recipes from the code repository,
17. UI compliance with the WCAG 2.0 standard.
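The design of the portal API is left to the bidders. Purely as a rough illustration of the intended user experience, the following minimal sketch shows how the features above might be exercised programmatically; the base URL, endpoint paths, JSON field names and the `ACHOICEM_TOKEN` environment variable are hypothetical assumptions, not part of this specification.

```python
import os
import requests

BASE = "https://cloud.icm.example/api/v1"  # hypothetical portal endpoint
# Bearer token obtained beforehand via the SSO/2FA flow (hypothetical variable name).
headers = {"Authorization": f"Bearer {os.environ['ACHOICEM_TOKEN']}"}

# Feature 3: information on allocated resources and their current usage.
usage = requests.get(f"{BASE}/me/resources", headers=headers).json()

# Feature 5: create a virtual machine with user-defined parameters.
vm = requests.post(
    f"{BASE}/vms",
    headers=headers,
    json={"image": "ubuntu-22.04", "vcpus": 4, "ram_gb": 8, "volume_gb": 50},
).json()

print(usage, vm["id"])
```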
The source code and corresponding license of the web interface software must be available to the contracting authority and documented in such a way as to allow for possible future self-modification by the contracting authority. The web interface must be suitable for use on both mobile devices and desktop computers.

Functionality available to administrators includes all end-user interface features, plus:

* management of users, roles, institutions, projects, mappings, and groups/categories/hierarchies of users,
* software management,
* management of policies and cloud resources (computing cores, storage, network, limits per user and per user group, provisioning period),
* accounting for the consumption of resources by users,
* automatic creation of reports summarizing consumption at the user, group and institution level,
* monitoring of cloud resources,
* managing end-user classes, account expiration times, and resource limits,
* license management.

### Security

The ICM cloud infrastructure must meet the following security requirements for the cloud system and stored data:

1. Only authorized users receive access to cloud resources.
2. User authentication is based on an SSO mechanism, using 2FA. The implemented solution must allow integration with various external sources of data about users (PIONIER.Id, CAS, e-mail addresses in authorized institutions). _The 2FA mechanism used must be as universal as possible, available to the greatest possible number of users, without the need to install additional applications or use dedicated hardware devices (tokens) on the user's side (preferred are, e.g., one-time SMS codes and/or one-time codes sent to the verified email address defined in the user's account). At the same time, the 2FA mechanism must be able to cooperate with all implemented mechanisms and sources of user authorization._
3. Hierarchical separation of access to resources. _By hierarchical access separation we mean the ability to configure permissions to system resources and data so that:_
   * _full separation of access to resources is ensured for individual tenants,_
   * _within the resources allocated to a tenant, it is possible to configure separation of access to resources for particular groups of users,_
   * _within resources allocated to groups of users, it is possible to configure separation of access to resources for particular users,_
   * _in both cases (intra-tenant and group), it must be possible to share certain resources._
4. The services made available by the cloud computing infrastructure on the Internet must be encrypted.
5. The infrastructure should provide event monitoring capabilities for at least the following:
   1. _availability,_
   2. _use of resources,_
   3. _parameters that help predict the possibility of failure (e.g. S.M.A.R.T.),_
   4. _events resulting from software errors (e.g. unexpected termination of an application)._
6. Central logging of user activity should include at least the following:
   * _enabling/disabling/creating infrastructure elements (virtual machines, containers, virtual networks, assigned disk resources, etc.),_
   * _assigned IP addresses,_
   * _allocated licenses._
7. The environment must provide the possibility to update the software used (OpenStack, OS, system images for virtual machines, application software, etc.) (see point 11).
8. The infrastructure must provide automatic zeroing of freed storage and freed memory when a virtual machine is deleted. This should happen completely automatically, without any intervention from users or administrators.
9. The infrastructure must provide backup capabilities at various levels, at least to the following extent:
   * _copies of the entire environment configuration (with rollback capability),_
   * _copies of the portal along with its configuration,_
   * _copies of user data,_
   * _backup of the allocated storage on the user/tenant side._
10. All infrastructure components having a direct impact on business continuity and availability must be redundant.

### Virtualization, containerization and multi-cloud and hybrid cloud functionality

1. The portal needs to leverage available resources by accessing virtual machines, containerization, and SLURM-managed HPC clusters. For virtual machines and containers, the solution will also enable defining the total budget of resources available for use. In particular, the solution must allow defining:

* _the total number of resource-hours available to the user/group (in particular CPU/GPU),_
* _the period of validity of the allocated budget,_
* _the person responsible for a given pool of resources (project manager)._

The solution will allow access to the complete billing data (including the budget) through a documented and secure REST API. Access will be possible both to the user's own data (users themselves can check what budget they have and how many resources they used in a given period) and to the full billing data of all users (with appropriate permissions). It must also be possible to fully control available budgets from the same API by an external system (with appropriate permissions). The solution must also have the ability to configure a basic budget available to all users ("free tier"). The solution must collect and make available data on budget usage with a minimum of hourly resolution. The user must be informed about the approaching end of the budget.

The solution will also provide limit support (per user, per user group), aggregated across all (virtual) machines:

* _the number of simultaneously operating machines,_
* _the total number of occupied cores,_
* _the number of CPUs/GPUs/cores used by one machine,_
* _the amount of RAM used,_
* _public IP addresses,_
* _storage resources (GB, IOPS)._

The billing system will collect data from both virtual machines and containers running on a central containerization cluster. All configuration items (including e.g. Dockerfiles, Ansible scripts, or scripts run in containers or by domain applications) provided by the user should be downloadable from / stored in an integrated private Git repository, integrated with SSO. Wherever it is necessary to store VM or container images, the solution will provide an appropriate mechanism (e.g. an image repository) that will use the S3 resource indicated by the customer as the storage location.

2. The portal should realize access to resources and applications in three different variants:

1. Access to virtual machines

Within the portal provided to the users, the end user must be able to run a virtual machine using one of the images provided in the catalog, imported from a public repository, or provided on their own. The predefined images in the library must include the current stable releases of Ubuntu, CentOS, Rocky Linux, Debian and Alpine, and additionally CentOS 7. In addition, the portal will allow the creation of snapshots and copies of the virtual machine.
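Since the cloud is to be based on OpenStack (see section 2.5.4), the VM lifecycle just described maps naturally onto the standard OpenStack SDK. The following is a minimal sketch only, assuming a suitable `clouds.yaml` profile; the profile, image, flavor and network names are illustrative assumptions, not part of the specification.

```python
import openstack

# Connect using a clouds.yaml profile; the profile name is illustrative.
conn = openstack.connect(cloud="a-choicem")

# Launch a VM from one of the predefined catalogue images.
server = conn.create_server(
    name="demo-vm",
    image="Ubuntu 22.04",   # catalogue image name (assumed)
    flavor="m1.small",      # flavor name (assumed)
    network="user-net",     # user's virtual network (assumed)
    wait=True,
)

# Snapshot the running machine back into the image catalogue.
snapshot = conn.create_image_snapshot("demo-vm-snapshot", server, wait=True)
print(server.id, snapshot.id)
```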
Images provided or imported by the user are included in the total storage taken up by the user. The portal must make available to the user:

* _SSH access parameters to their own virtual machines,_
* _VNC access through a web browser to their own virtual machines (virtual desktop),_
* _the ability to specify parameters for predefined virtual machines,_
* _control of the machines: start, shutdown, reset,_
* _e-mail notifications about upcoming time limits or forced release of resources._

2. Access to a container cluster

The solution will create a centrally managed containerization cluster, allowing users to run images from a local repository, import them from external registries, and build them (by pointing to a public Git repository or a user resource). The local registry (e.g. Harbor) should distinguish images provided and supported by the ICM. Images originating from external repositories must be downloaded through the local repository and subjected to automated security checks before being made available to run. Containers running within the central Kubernetes cluster must not originate from sources other than the central registry, or external registries accessed via the central registry proxy. The user should be able to choose whether they use their own Kubernetes cluster (on VMs within their own budget) or use shared resources to run the container.

3. Access to an HPC cluster

Access to resources managed by the SLURM Workload Manager within the Okeanos cluster and the Topola cluster. In this variant, the selected configuration uses user-provided cluster access parameters. The scope of access, limits and billing shall be handled by an independent system. The delivered solution should support the use of HPC cluster access as an element of a pipeline: a job running on the cluster passes its result to the user's virtual machine, on which supervised post-processing of the results is performed.

3. Application usage models

Regardless of the way resources are accessed, for each Application defined in the catalogue we distinguish one of two basic usage models: task (batch mode) and service (interactive application). A given piece of software may be made available in different variants, but these are treated as two separate Applications (e.g. a Julia script (task) and a Jupyter Notebook (service)).

Task application / batch mode

Task-type applications are used to process some predefined task and consume resources until the task completes or fails. Predefined task applications are selected by the user from a directory, or are self-created based on generic templates (e.g. an Application wrapping a Python script). The user must be able to share their Applications with other users (with a group, with specified users, or with the whole Centre). The user configures a set of parameters necessary to run the task, including an indication of the disk resources (files, directories) that are to be transferred to the task. The task is then run using the resources defined in the Application's definition (VM, container in the central or the user's own environment, HPC).

Generic applications that serve as examples and templates for users' own projects must contain examples of Task-type applications running on all available resource types. Task-type applications may be combined by the user into a pipeline, where the output (e.g. a directory or tar archive) is passed as input to the next task in the pipeline, with dependencies and restarting defined similarly to e.g. GitLab CI/CD Pipelines or Argo Workflows.
As the final element of such a workflow, it should be possible to attach a service application (e.g. a virtual desktop for pre-/post-processing of data). As part of the initial installation, Applications must be provided that allow running tasks and analyses using the packages described later in this document.

In task (batch) mode the user should be able to select the desired application from the category of interest and then upload a set of batch data to the running program. During task execution the user should be able to check the status of running applications, perform basic operations on them (including cancelling the task), follow the standard output of the program, and view the contents of generated output files. The whole process should take place with minimal user intervention; however, far-reaching control should be easily accessible to an advanced user, who in particular should be able to:

* _independently define how the application is invoked by the queue system running on the target node, i.e. directly provide the program name, options and invocation arguments on the command line;_
* _control the standard output, i.e. perform operations on the generated data using a pipeline mechanism, in particular filtering it for desired patterns using the grep tool, the sed stream editor, and other utilities available in the standard environment of POSIX-compliant systems;_
* _request an action to be performed after the application exits (correctly or not), in particular request an e-mail message to a defined address or the execution of a custom script written in a shell language (Bash)._

The task (batch) mode must also offer the ability to easily (drag & drop) define advanced workflows, including the execution order of several applications and their use of data generated in previous tasks, define conditional instructions based on the outcome of an application, etc.

An example workflow of electronic structure calculations - in density functional theory (DFT) - using the Quantum Espresso application, where each of the following steps requires the data generated in the previous step:

* _SCF cycle for the data and parameters set by the user in a batch file. The system should be able to verify that convergence has been achieved and that it is therefore possible to proceed to the next stage (a conditional instruction written in the shell language, defined by the user)._
* _Non-SCF cycle using the resulting data generated in the SCF cycle. Kohn-Sham states are computed for a given set of k-points in the reciprocal lattice._
* _Post-processing of results. When creating a workflow, the user defines which files are post-processed and how: a shell (Bash) instruction or script containing the program call, options, arguments, input and output data, as well as I/O redirections and pipelines._

Another typical usage scenario would be to run the application on batch data (uploaded by the user or downloaded from a designated disk resource) and, upon completion, run an interactive application in graphical mode to further process the generated data. The mechanism for creating and connecting workflow tasks should allow a wide range of configuration options based on the Bash shell language. Between steps it should be possible to define conditional instructions (as well as larger scripts) and make further execution dependent on them. The goal is far-reaching automation of work and the possibility to execute many dependent tasks after defining them once.
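On an HPC backend such a pipeline reduces to chained SLURM jobs. Below is a minimal sketch, assuming SLURM's `sbatch` is available and that `qe_scf.sh`, `qe_nscf.sh` and `qe_post.sh` are placeholder batch scripts for the three Quantum Espresso stages, with the convergence check done inside the SCF script itself:

```python
import subprocess

def sbatch(script: str, *deps: str) -> str:
    """Submit a batch script, optionally after successful completion of deps; return the job id."""
    cmd = ["sbatch", "--parsable"]
    if deps:
        # afterok: run only if all listed jobs finished without error.
        cmd.append("--dependency=afterok:" + ":".join(deps))
    cmd.append(script)
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    return out.strip().split(";")[0]

scf = sbatch("qe_scf.sh")          # pw.x SCF cycle; the script itself verifies convergence
nscf = sbatch("qe_nscf.sh", scf)   # non-SCF run on the converged charge density
post = sbatch("qe_post.sh", nscf)  # bands/DOS post-processing
print("pipeline submitted:", scf, nscf, post)
```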
The system should allow the creation of templates of such task pipelines for later reuse and modification, as well as the use of other popular workflow systems, e.g. Kepler (see also section 5.4: Integration).

### Service / interactive applications

These types of applications consume resources from startup until terminated by the user. This type of application must be able to share its own resources over the Internet or be restricted to an internal network (a shared VPN to the cloud for its users). The basic ways to provide access to such an Application are remote desktop access (VNC, WebVNC) and port access, e.g. HTTP/S. An interactive Application runs as a VM or as a Container. One Application can run more than one VM, e.g. a preconfigured Kubernetes cluster with an internally prepared JupyterHub environment. Depending on the prepared configuration, the Application should be able to use the user's disk resources or be safely isolated from them. Since termination of such an Application depends on user action, an automatic termination time must be defined for each Application, and the user must be informed about running Applications consuming their resources.

The proposed solution must include the following Service/Interactive Applications designed to run on private micro-clusters, automatically configured for the user:

* _Jupyter / JupyterLab / JupyterHub, optionally allowing work with user files. The predefined solution should include a version dedicated to work with students/trainees (sharing via a hash-link with a public IP). The application should contain installed Jupyter kernels for the languages Python, Julia and R. It must also allow choosing a worker container (e.g. with its own Java environment) in which the notebook will run (a standard JupyterHub option),_
* _RStudio, allowing work with user files,_
* _BinderHub, allowing work with public repositories that support it._

In addition, the solution must include VM images suitable for running pre-/post-processing tasks in fully graphical mode, based on current stable releases of the Ubuntu and CentOS operating systems. Based on such an image, the user should have VNC access to the application:

* _ParaView,_
* _the ANSYS, Matlab and Mathematica packages, within the licenses available at ICM or indicated by the user,_
* _a clean GNOME Desktop with access to user files and the ability to install software._

The above Applications may be bundled together as a "Virtual Desktop". The user should also be able to execute applications running in an emulated text terminal directly from the Portal (without having to set up a standalone SSH session).

5. Scenarios for working with applications

To better illustrate the requirements from the end user's point of view, A-CHOICeM must support the following usage scenarios.

### Training using JupyterHub

The lecturer (a registered user) wants to share interactive Jupyter notebooks with the material prepared for the training, by pointing a link to:

* _a private/public repository,_
* _a private/public Docker/Singularity image._

The lecturer can control the amount of shared computing resources per user (number of cores, RAM, workspace capacity, maximum duration of the task/session) and the maximum number of users. The resources consumed by trainees are counted within the lecturer's or trainees' project (depending on the selected option).
Trainees do not need to be A-CHOICeM users, and the lecturer has the option to select "authorization through a known link".

### Training using a Virtual Machine (VM) image

The lecturer prepares a VM image with the installed software and batch files necessary to conduct the classes. Trainees have the ability to clone the VM image prepared by the lecturer and run it within their account. Resources consumed by trainees are counted within the lecturer's or trainees' project (depending on the selected option).

### Open training

Open training means that no user registration is necessary. After the lecturer provides a magic link (single, or one per participant), participants are able to start a JupyterHub or BinderHub session. When creating an event (training), the lecturer sets the magic link's expiry date.

### File sharing

Users belonging to the same project can share files via a predefined directory.

### Status Quo

Users retain the ability to log in via SSH to the HPC cluster and submit jobs to the SLURM queue system, as they did before.

### Application sub-portals (areas)

1. The functionality of the Portal for end users will include running computational tasks within a predefined set of applications. Available applications should be categorized into groups representing research areas. The ability to easily manage categories and applications (add, delete, update) and how they are run should be within the purview of Portal administrators. The key categories and applications that should be among the services offered are listed below (some applications may appear in more than one category).
2. Availability of an application means the ability to run it in at least one of the access methods described above.
3. The listing of research areas and applications should be understood as a sample target offering. The portal should implement several typical and well-documented service delivery patterns, with the possibility to replicate and develop them in the future.
In particular, the solution must offer ready-made services to run the following applications (see also Application Usage Models):

* Quantum Espresso
* Gromacs
* Mathematica
* Python / TensorFlow / PyTorch
* ANSYS Fluent
* C/C++/Fortran programming environment (GNU, Intel, mathematical libraries)

#### 2.4.4 Target research areas and applications

_Materials calculations / physics / quantum chemistry:_ VASP, Wannier, SIESTA, ORCA, NWChem, NAMD, LAMMPS, Gromacs, Gaussian, Quantum Espresso, Elk, Dalton, CP2K, Abinit, CASTEP.

_Molecular dynamics:_ Gromacs, NAMD, LAMMPS, CHARMM, Desmond.

_Computer algebra / other:_ Mathematica, Julia, GNU Octave, Matlab, R, Python, Anaconda.

_CFD (Computational Fluid Dynamics):_ ANSYS Fluent; OpenFOAM, which comes in two versions, both with numerous users:

* _OpenFOAM [39] (ESI-OpenCFD), latest version v2106,_
* _OpenFOAM (the OpenFOAM Foundation), latest version v9, with the ThirdParty package._

_Machine Learning:_ TensorFlow, PyTorch.

_Programming languages:_ Julia, Python, Go, R, Java, GNU C/C++/Fortran, Intel C/C++/Fortran.

_Software development, compilers and libraries:_ GCC (GNU Compiler Collection), the Intel (C/C++/Fortran) development toolchain (MKL et al.), BLAS, LAPACK, ScaLAPACK, GSL, OpenMPI, MPICH, MVAPICH, NVIDIA CUDA, LLVM (Clang, Flang), CMake, Spack.

_Engineering:_ LS-Dyna.

_Other:_ Garuda, Taxila.

In addition to the applications available within the existing infrastructure, the Portal should offer the possibility of future integration with external service providers, such as:

* services functioning on-premise as part of a private cloud, or leading to an externally supported provider service,
* quantum computers and simulators such as IBM Q (access to the IBM Quantum Network), Quantum Inspire [40], Amazon Braket, and Forge (D-Wave, Google, IBM),
* Deep Learning services (e.g. AWS).

### Architecture

The whole solution is a so-called "bespoke" (made-to-order) one and must be strictly adjusted to the variety of computer and supercomputing equipment owned by ICM, consisting of approx. 600 servers, disk arrays of various technologies and brands with capacities of approx. 25 PB, and various file systems, from object-oriented storage to the high-performance Lustre system. The proposed solution should make maximum use of the existing resources listed in Annex 1 to the OPZ (the Description of the Subject Matter of the Contract). In the future this may involve the creation of a multi-stage plan, where the first stage is described in this document and other resources may be transformed into the cloud service in subsequent stages.

#### 2.5.1 Scale and use of existing ICM infrastructure

ICM Cloud Computing will provide users with virtual computing nodes based on the Ordering Party's existing infrastructure (computing nodes and storage). The cloud should allow for optimal use of all existing equipment of the Ordering Party, described in Annex 1 to the OPZ. The system should allow for future expansion with additional resources in the form of additional devices (servers, switches, storage). The implementation documentation should include requirements for hardware that will be compatible with the delivered solution and allow for seamless expansion. The possibility of a future increase of resources (number and type of nodes, disk space) as well as utilization (number of users) to at least 2000 nodes should be foreseen and documented.
In the current stage, the solution should use the available Enigma hardware (both for cloud computing and storage), with the possibility of connecting existing Okeanos system resources (via SLURM).

#### 2.5.2 Cloud storage

The Contractor shall implement a data storage resource sharing system ("Cloud storage") based on a single module of the Enigma cluster. The Cloud storage system should be organized as a hierarchical memory structure with the possibility of automatic data migration according to policies defined by administrators. File systems must be integrated and available to individual cloud nodes in accordance with their rights. The system must be fully integrated with the other disk resources of the Ordering Party (see section 5.4).

Technical conditions of the solution:

1. Portal access in accordance with the requirements described in Section 1.
2. Based on object-oriented data storage technologies, compatible with the Amazon S3 API.
3. Data redundancy and security:
   - choice of replication or erasure coding by object size,
   - support for different erasure coding profiles for different objects,
   - automatic or on-demand cluster rebalance, e.g. after adding or removing disks,
   - adding and removing disks and servers in a storage cluster without disrupting data sharing,
   - uninterrupted upgrades during cluster operation (rolling upgrades),
   - ability to create policies for stored data (life-cycle policy, WORM, etc.).
4. Data should be available through:
   - S3,
   - NFS,
   - a web portal share (an equivalent of Amazon Simple Cloud Storage),
   - Samba.
5. Selectable basic storage block size (data chunk).
6. Scalability:
   - all nodes in the cluster participate in data processing, ensuring performance proportional to the number of nodes involved,
   - hardware resources need not be homogeneous.
7. Authorization and authentication:
   - ability to integrate with existing identity management systems such as LDAP, AD, Linux PAM, token-based S3 and SSO (SAML 2.0, OpenID Connect),
   - the solution should also provide the ability to use a REST API to manage resources,
   - ability to delegate privilege management.
8. Accounting and control of resource consumption:
   - creating detailed reports on resource usage by users,
   - creating limits on maximum user disk space usage (quota), as well as limits on data transfer rates and storage time.
9. The solution should be able to use AES-256 data encryption (in flight and at rest).
10. The solution must proactively check the integrity of the data and fix any problems that are detected.
11. Management via API, console (CLI) and web.
12. Perpetual license to use existing cluster resources.

#### 2.5.3 Network architecture

The cloud will be launched on the existing network infrastructure (1G or 40G connections); at the same time, the cloud software must enable cooperation with network infrastructure of higher throughput (100G). The network infrastructure of the A-CHOICeM cloud must be implemented in such a way as to ensure efficient and reliable connections forming the access network for users, and the entire network for cloud management and access to storage resources. The network connection requirements have been grouped as "DATA network" and "MGMT network".

The DATA network is designed for:

1. access to/from the Internet for cloud resources,
2. access to/from the UW ICM network for cloud resources,
3. access to storage resources.

The DATA network must allow:

- connection to a single cloud node with a throughput of at least 40G,
- connection to the resources of the PIONIER network and the Internet with a capacity of at least 100G,
- creating virtual networks, assigning addresses from private and public address pools, dynamically and statically,
- connections to/from the Internet made by means of assigned public IP addresses,
- connections to the Internet realized as SNAT connections with the use of shared public IP addresses,
- connections from the Internet realized as DNAT connections using shared public IP addresses,
- accountability of resources assigned to users (in particular the use of shared IP addresses),
- network scaling (increasing IP addressing resources by additional pools).

The MGMT network is designed for connections to manage, monitor and configure cloud resources. The MGMT network must enable connections between cloud nodes with at least 1G bandwidth.

In addition, the architecture must meet the following requirements:

* DNAT and SNAT functionality must be implemented within the virtual resources of the cloud nodes.
* The DNAT function must be configurable by the user within the allocated limits.
* The Contractor is obliged to reconfigure the Cloud once if the Ordering Party upgrades the Cloud network infrastructure during the guarantee period.
* The solution should be able to define an SDN (Neutron SDN); compute nodes should have direct access to part of the network (L2) and indirect access to external networks through dedicated network nodes.
* Cloud network nodes must provide redundancy.

#### 2.5.4 Cloud architecture and services

1. The cloud will be based on the **OpenStack** environment. The entire system should be built using Open Source components to the greatest extent possible. In the case of components with licenses other than Open Source, the supplied license should allow for free updates (especially security updates) within 3 years from the moment of delivery, and allow for lifetime operation also in the case of lack of support.
2. OpenStack controller servers (controller nodes) will be placed on a virtualization cluster (undercloud) built with the use of HA technology, allowing for seamless migration of servers, adding and removing servers from the cluster, and backup and recovery of controllers, in a manner that does not disrupt the operation of the cloud solution.
3. The cloud deployment mechanism should be reproducible (the delivered installation "scenario" must allow the initial state of the cloud to be recreated from scratch, i.e. from bare metal to ready for use).
4. On the cluster of controllers, space should be provided for virtual machines supporting associated services (portals, template and configuration repositories, IdM, monitoring and logging, etc.); the size of this cluster should be planned accordingly.
5. The cloud environment being built should provide a minimum set of services consisting of:
   * adding/removing/managing servers in the cloud infrastructure (metal as a service),
   * services to manage instances, network, and user and project access on an IaaS platform,
   * storage components (image, volume, S3, etc.) for virtual machines, physical machines and containers, with support for multiple tiers, e.g. S3 built on disks of different types, locations or availability,
   * a networking service compliant with the OpenStack Networking API, providing management of networks, subnets, routers, load balancers, VPNs, firewalls and IP addressing,
   * a central client authentication, service discovery, and distributed access control service compliant with the OpenStack Identity API,
   * a service to collect and monitor system usage information; the service must collect information from OpenStack and other cloud components, and allow monitoring and accounting of resource usage for billing and administrative purposes,
   * a central system log collection service,
   * a service for managing quotas (limits on allocated resources) in a way that allows actions to be taken automatically when the indicated values are reached,
   * a cloud-based application orchestration service,
   * a resource reservation service.

All services should be constructed redundantly, ensuring high availability. The cloud will be deployed using the existing storage infrastructure described in Annex 1 to the OPZ. In addition, due to the functional requirements specified in this OPZ, it may be necessary to supply, install and configure other components in addition to those listed above. The Contractor must foresee and execute the delivery, installation and configuration of all necessary additional components within the scope of the order. These activities must be indicated and described in the offer.

#### 2.5.5 Integration

All portals creating the cloud solution should be integrated using a Single Sign-On (SSO) system implemented by the contractor, offering simultaneous support for different authentication mechanisms: at a minimum PIONIER.Id, CAS UW, LDAP ICM (Keycloak), and a one-time password sent to an email address. The last mechanism must be delivered during the implementation of the SSO system and must allow editing of the list of allowed mail domains to which the password will be sent.

#### 2.5.5.1 Integration with the Ordering Party's resources and services

The Portal must allow access to resources managed by the SLURM Workload Manager (the Topola and Okeanos systems). Access parameters will be requested from the user along with batch data through a generic template available in the Portal. The Portal should allow the creation, modification and management of custom templates, including features that allow easy customization to the requirements of the applications being run, e.g. the ability to perform multiple program runs on different batch data. After a task is defined, the system should automatically submit the task for execution under the control of the queuing system. The mechanisms of working with applications (models of their use) are described in section 3. The Portal should be designed to allow the execution of applications in batch mode (see section 3.1) and interactive mode (see section 3.2). Applications running in different workflows should be able to share disk space for further processing (pre-/post-processing). The solution should use disk resources that are part of the existing infrastructure (S3 storage). The user should be able to track running jobs, their status and parameters, preview generated output data, and control the execution of jobs directly from the Portal to the extent offered by SLURM control programs such as 'scontrol' and others. Advanced use of these tools should be available through the ability for the user to enter their own instructions, which will be redirected to the queue system.

Cloud storage integration requirements:

1. The solution resides on existing hardware; the contractor will have one full Enigma module to use.
2. The storage cloud should be integrated with the other elements of the cloud solution (same identities, authentication mechanism, and mechanism for granting privileges).
3. Desired integration of the existing S3 (Quantum) and Lustre solutions, to remove the walls between separate data storage resources (Quantum/Enigma/Tetyda), e.g. using data providers like OneData or DataCore (ex-Caringo).
4. The computational part can use data from the storage part and vice versa (e.g. it is possible to extract data from the storage part and, after generating a result, put this result back into the storage part).
5. The part responsible for metering/billing should be integrated and cover both the computing and the cloud storage part.

#### 5.4.2 Integration with external resources

The user portal should allow access to quantum computing, AI and ML portals, and bioinformatics portals such as Garuda and Taxila, as well as graph computing (Urika XC, Trovares, Jena). The portal must allow the user to move, at will, to popular public clouds: Azure, AWS, IBM, Google and others.

### Documentation

The bidder will provide documentation including:

1. Full editable documentation of the implementation, describing the configuration of servers, networks, storage, services and software. In particular, the documentation should include information about the network topology, addressing, connection schemes, configuration files of the deployed services and configured devices, versions of the software used, and passwords.
2. A description of the components used in the solution and their interrelationships.
3. The procedure used to implement the cloud.
4. A description of the basic procedures for using the cloud. In this respect, at a minimum, the following issues should be covered:
   * procedure for enabling and disabling a cloud cluster (both undercloud and overcloud),
   * backup and recovery procedures,
   * failover procedures - restoring services after a failure of any server or service,
   * fallback procedures - description of actions in case of a power supply failure or hardware failure of an infrastructure element,
   * cloud software update procedure,
   * portal reinstallation procedure,
   * procedure for updating SSL certificates,
   * procedure for expanding the cluster with new servers or switches,
   * cloud operations from an administrator's point of view, e.g. adding a new identity provider, changing service policies, adding new networks, setting quotas, establishing user hierarchies and assigning permissions to the local administrator, etc.,
   * instructions for creating and adding new applications and service templates to the catalog, with examples of ready-made and documented templates for provisioning virtual machines, Kubernetes clusters and applications,
   * description of the monitoring system and the central operation logging service, as well as a description of the basic operations related to these services (configuration and usage, e.g. architecture, backups, information extraction, etc.),
   * procedure to deal with an attack from an external/internal IP address,
   * procedure to be followed in the event of a break-in to an account (authorized user, administrator),
   * billing (REST API description, data format, export, import),
   * API description,
   * a practical cloud user's guide: instructions on how to perform basic operations in the cloud, addressed not necessarily to technical people (e.g. how to start a virtual machine or an application/service),
   * a diagram and description of how future possible expansion will take place.
### Training

Within 3 months of contract acceptance, the contractor will provide training to the contracting authority's administrators covering:

* the tools and components used, the assumptions made, the method used, the location of installation scripts, and adjusting them to the needs of the ordering party,
* ongoing maintenance, basic operations, software upgrades, emergency shutdown and start-up, restore from backup, hardware replacement, expansion, and network reconfiguration,
* a detailed discussion of the network architecture (tunneling, routing) in the system and end-user parts, and the mutual interactions between particular components (bridges, protocols),
* installation and updating of user applications and images, and privilege management.

The bid will include a training schedule.

### Warranty and support

1. All technical devices supplied under the contract will be covered by at least a 24-month manufacturer's warranty, provided at the place of installation, ensuring that in case of failure the device is replaced with an operational one within 48 hours of notification.
2. Furthermore, the Contractor shall provide the Ordering Party with a warranty for the delivered software for a period of 24 months from the date of order acceptance. During the warranty period, the Contractor shall ensure that the software operates in accordance with this OPZ and the supplied documentation. Licenses for the provided software are perpetual. In case of using any paid licenses or licenses with a limited validity period, the cost of these licenses for the following 48 months shall be presented in the offer.
3. The Contractor shall ensure the possibility of extending the warranty and support, at a price not higher than specified in the offer, for additional years, for a period of at least 48 months from the date of acceptance of the order.
4. During the warranty period, the software will be covered by the contractor's support, guaranteeing:
   * a response time to a report of no longer than 4 hours,
   * a time to fix a critical error of no longer than 48 hours,
   * a time to fix a non-critical error of no longer than 10 working days,
   * in addition, not less than 40 hours of specialist consultations per year for troubleshooting and software updates.
   * Moreover, during the support period, the contractor will perform a one-time reconfiguration of the network, free of charge, if the ordering party expands the network during this period (from 1GE to 40GE). This reconfiguration will be performed on an agreed date indicated by the ordering party at least 14 days in advance.
   * The contractor will support administrators in upgrading the OpenStack release to the next "Extended Maintenance" version.
   * The Contractor undertakes to support administrators in expanding the cloud with resources released from the Enigma module (in cooperation with administrators, the Contractor will expand the existing solution with new resources: servers and switches).

### Implementation and payment schedule

The functionalities described in points 1-8 of the OPZ will be implemented within a period not longer than 180 days from the conclusion of the agreement. The training will be delivered according to the training schedule, within a period not exceeding 180 days from the conclusion of the agreement.

### Manner of acceptance of the order

The execution of the order shall be confirmed by acceptance tests, as specified in the Annex to the contract.
## Acknowledgment

The ICM team involved in this work consisted of: Marek Michalewicz (project initiator and general project manager), Wojciech Sylwestrzak (project management), Marzanna Zieleniewska (public procurement), Jaroslaw Skomial (network and cloud architecture), Grzegorz Bakalarski, Rafal Maszkowski, Miroslaw Nazaruk, Robert Paciorek, Marcin Semeniuk, Sebastian Tymkow (hardware and storage), Michal Dzikowski, Grzegorz Gruszczynski, Michal Hermanowicz, Maciej Szpindler (software and cloud architecture), Joanna Jedraszczyk (user aspects), Krzysztof Mlynarski (cybersecurity), and Robert Sot (University procedures, regulations and relationships). These colleagues contributed to the technical specifications of this project. The author believes that difficult and negative experiences in research and academic life should also be reported, for the benefit of openness, honesty, integrity and learning. The author wishes to express his deep appreciation and gratitude for the involvement in this project and for the individual contributions of all the above-mentioned colleagues.
2302.03068
Evaluating Self-Supervised Learning via Risk Decomposition
Self-supervised learning (SSL) pipelines differ in many design choices such as the architecture, augmentations, or pretraining data. Yet SSL is typically evaluated using a single metric: linear probing on ImageNet. This does not provide much insight into why or when a model is better, nor how to improve it. To address this, we propose an SSL risk decomposition, which generalizes the classical supervised approximation-estimation decomposition by considering errors arising from the representation learning step. Our decomposition consists of four error components: approximation, representation usability, probe generalization, and encoder generalization. We provide efficient estimators for each component and use them to analyze the effect of 30 design choices on 169 SSL vision models evaluated on ImageNet. Our analysis gives valuable insights for designing and using SSL models. For example, it highlights the main sources of error and shows how to improve SSL in specific settings (full- vs few-shot) by trading off error components. All results and pretrained models are at https://github.com/YannDubs/SSL-Risk-Decomposition.
Yann Dubois, Tatsunori Hashimoto, Percy Liang
2023-02-06T19:09:00Z
http://arxiv.org/abs/2302.03068v3
# Evaluating Self-Supervised Learning via Risk Decomposition

###### Abstract

Self-supervised learning (SSL) pipelines differ in many design choices such as the architecture, augmentations, or pretraining data. Yet SSL is typically evaluated using a single metric: linear probing on ImageNet. This does not provide much insight into why or when a model is better, nor how to improve it. To address this, we propose an SSL risk decomposition, which generalizes the classical supervised approximation-estimation decomposition by considering errors arising from the representation learning step. Our decomposition consists of four error components: approximation, representation usability, probe generalization, and encoder generalization. We provide efficient estimators for each component and use them to analyze the effect of 30 design choices on \(169\) SSL vision models evaluated on ImageNet. Our analysis gives valuable insights for designing and using SSL models. For example, it highlights the main sources of error and shows how to improve SSL in specific settings (full- vs few-shot) by trading off error components. All results and pretrained models are at github.com/YannDubs/SSL-Risk-Decomposition

## 1 Introduction

Self-supervised learning (SSL) is a popular approach for pretraining an encoder from minimal supervision, such that linear probes trained on the encoder's representation perform well on downstream tasks. SSL pipelines differ in many design choices, such as the objective (Chen et al., 2020; He et al., 2022), architecture (Caron et al., 2021; Bardes et al., 2022), augmentations (Tian et al., 2020; Dubois et al., 2022) or pretraining data. Yet SSL models are typically evaluated using a single metric: linear probing on ImageNet. This is convenient for leaderboards but does not provide much insight into why or when a model is better, nor how to improve it. What are the major sources of errors in current SSL methods? Are there tradeoffs between SSL models across different settings (e.g. full- vs few-shot probing)? How does each design choice affect the SSL model? Those are difficult to answer using a single metric.

In supervised learning, one can get more fine-grained insights using the estimation/approximation (or bias/variance) risk decomposition, which is estimated using the training and validation errors. For example, models with low training error and high generalization gap often perform better in large-data regimes and can be improved via regularization. In this paper, we generalize this classical decomposition to SSL. Our decomposition consists of four sources of errors:

1. **approximation** errors due to the encoder's architecture not having the capacity to perform the task;
2. **representation usability** errors due to SSL not making representations linearly separable even with infinite data;
3. **probe generalization** errors due to finite training data;
4. **encoder generalization** errors due to pretraining the encoder on finite data.

We further provide consistent and computationally efficient estimators for each risk component, akin to the training and validation errors in supervised learning. Using those estimators, we analyze \(169\) pretrained SSL models and the effect of \(30\) design choices. These results provide insights into the state of the field, help understand design choices, and suggest which SSL encoder to choose in various settings.

Figure 1: No model is uniformly better over risk components. "full-shot" axis shows linear probing on ImageNet.
Other axes show normalized risk components. Higher is better. Top left (blue) shows average over all 169 models. Our analysis highlights that the most important source of error used to be the representation usability but, since SimCLR, it is now the probe generalization. Furthermore, we show that some design choices (e.g. large projection heads, ViT encoders) improve all error components simultaneously. But others (e.g. representations' dimensionality or SSL objective) trade off components and thus only help in specific settings. For example, Fig. 1 shows that SwAV RN50w4 gives more usable representations (bottom left) than MSN ViT-L16 (Assran et al., 2022) but induces a worse probe generalization (bottom right). This results in the former being better in full-shot probing (76% vs 74% accuracy) but worse in 3-shot (37% vs 63%). In summary, we: * provide an SSL risk decomposition with an efficient estimator for each error component; * show that the main source of error for modern SSL is the generalization error of linear probes; * highlight a tradeoff between usability and probe generalization, which leads to a few- vs full-shot tradeoff; * analyze how \(30\) design choices affect the risk components and full-/few-shot performance of \(169\) SSL models. ## 2 Supervised risk decomposition In supervised learning, one learns a predictor \(f_{S}\) from a hypothesis class \(\mathcal{F}\) using a finite set of supervised samples \(S\). The goal is for the predictor to achieve low population risk \(\mathrm{R}_{S}\), which can be evaluated using a test set. When designing models, it is nevertheless typical to consider both the training performance and the generalization gap (the difference between validation and training performance). This is useful to understand which component of the pipeline to improve (regularization, architecture, etc) and which model should be favored depending on the training size \(|S|\). The training performance and generalization gap are respectively estimators of the _approximation error_ and the _estimation error_ from the supervised risk decomposition (Barron, 1994; Shalev-Shwartz and Ben-David, 2014). 1 The approximation error \(\mathrm{R}_{\mathcal{F}}\) is the error that a predictor \(f_{\mathcal{F}}\) trained on infinite data incurs, i.e., the error due to the choice of a constrained family \(\mathcal{F}\). The estimation error is the error due to training on finite samples, i.e., \(\mathrm{R}_{S}-\mathrm{R}_{\mathcal{F}}\). As seen in Fig. 2, the decomposition arises by considering the difference of risk incurred in settings of increasing expected risk. Footnote 1: For conciseness, we assume in the main paper that the irreducible error is 0, as it is independent of any design choice. In appendices we instead decompose the excess risk. Formally, we learn a predictor \(f_{S}:=\mathrm{A}_{\mathcal{F}}(\hat{p}_{S})\) from a family \(\mathcal{F}\subseteq\{f:\mathcal{X}\to\mathcal{Y}\}\) using an algorithm \(\mathrm{A}_{\mathcal{F}}\) (e.g. ERM) on an empirical distribution \(\hat{p}_{S}\) induced by a training set \(S\stackrel{{\text{iid}}}{{\sim}}p_{\text{\tiny sup}}(X,Y)\). Denote by \(\mathrm{R}(f):=\mathbb{E}_{p_{\text{\tiny sup}}}[\ell(Y,f(X))]\) the risk w.r.t. a desired loss \(\ell\). To derive the decomposition we order the two risks \(\mathrm{R}_{S}:=\mathrm{R}(f_{S})\), \(\mathrm{R}_{\mathcal{F}}:=\inf_{f\in\mathcal{F}}\mathrm{R}(f)\) and use a telescoping sum. Details at Appx. A.1. 
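As a concrete illustration (a minimal sketch on toy data, not from the paper), the supervised components can be estimated from the training error and the train-test gap:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy data standing in for (X, Y) ~ p_sup; the split mimics S and a test set.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

f_S = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # f_S = A_F(p̂_S)

train_err = 1 - f_S.score(X_tr, y_tr)   # coarse estimate of R_F (approximation)
test_err = 1 - f_S.score(X_te, y_te)    # estimate of R_S (total risk)
estimation_err = test_err - train_err   # estimate of R_S - R_F (estimation)
print(f"approx≈{train_err:.3f}  estimation≈{estimation_err:.3f}  risk≈{test_err:.3f}")
```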
## 3 SSL risk decomposition Our goal is to derive a risk decomposition for representation learning that allows better development and understanding of SSL. SSL pipelines consist of two models: an encoder \(\phi\) and a probe \(f\). The probe is trained in a supervised fashion and, following Sec. 2, it is useful to consider the errors that arise from using a constrained family \(\mathcal{F}\) and finite data \(S\). The difference with Sec. 2 is that the probe does not predict from inputs \(X\) but from their representations \(\phi(X)\). As a result, errors also arise from the encoder \(\phi\in\Phi\), which is pretrained from a family \(\Phi\) using an SSL algorithm \(\mathrm{A}_{\Phi}\) and finite unsupervised data \(U\stackrel{{\text{iid}}}{{\sim}}p_{\text{\tiny un}}\). The errors can thus come from each of the probe's limitations (constrained \(\mathcal{F}\), finite \(S\)) as well as each of the encoder's limitations (constrained \(\Phi\), SSL algorithm \(\mathrm{A}_{\Phi}\), finite \(U\)). We now give an overview of each error component, which we formalize later. The **approximation error** measures errors due to the architecture of the encoder \(\Phi\) (e.g. ResNet50) and probe \(\mathcal{F}\) (e.g. linear) being too constrained to perform even the supervised task. Intuitively, it decreases with the capacity of \(\Phi\), \(\mathcal{F}\). Figure 3: Our SSL decomposition is a path between settings of increasing expected risk. Columns show probe’s limitations (constrained \(\mathcal{F}\), finite supervised data \(S\)) as in Fig. 2. Rows show encoder’s limitations (constrained \(\Phi\), SSL algorithm \(\mathrm{A}_{\Phi}\), finite unlabeled data \(U\)). Risk components (colored) are the differences between risks in two settings. Figure 2: The risk decomposition is a path between settings of increasing expected risk for training the probe: \(0\to\mathrm{R}_{\mathcal{F}}\) (constrained family \(\mathcal{F}\)) \(\to\mathrm{R}_{S}\) (finite supervised data). \[\underbrace{\mathrm{R}_{U,S}}_{\text{Risk}}=\underbrace{\mathrm{R}_{U,S}- \mathrm{R}_{A,S}}_{\text{encoder generalization}}+\underbrace{\mathrm{R}_{A,S}- \mathrm{R}_{A,\mathcal{F}}}_{\text{probe generalization}}+\underbrace{\mathrm{R}_{A, \mathcal{F}}-\mathrm{R}_{\Phi,\mathcal{F}}}_{\text{representation usability}}+\underbrace{ \mathrm{R}_{\Phi,\mathcal{F}}}_{\text{approximation}} \tag{1}\] The **representation usability error** measures errors due to learning representations via an SSL pipeline \(\mathrm{A}_{\Phi},p_{\text{\tiny un}}\), rather than supervised learning. Intuitively, it is small if the SSL algorithm ensures that representations retain information that is usable by probes \(\mathcal{F}\), e.g., linearly separable classes. The **probe generalization error** measures the drop in performance due to training the probe on finite samples \(S\) instead of \(p_{\text{\tiny sup}}\). Intuitively, it is small if: (i) the number of training samples \(|S|\) is large, or (ii) representations ensure that downstream probes are sample efficient, e.g., by minimizing the margin between same-class examples. The **encoder generalization error** measures the drop in performance due to pretraining the encoder on finite samples \(U\) compared to the population \(p_{\text{\tiny un}}\). Intuitively, it is small if: (i) \(\mathrm{A}_{\Phi}\) makes pretraining sample efficient, or (ii) there are many pretraining examples \(|U|\). To derive those risk components we follow Sec. 
2 and take the difference in risk between settings of increasing expected risk for the encoder \((\Phi,\mathrm{A}_{\Phi},U)\) and probe \((\mathcal{F},S)\). This gives our SSL risk decomposition Eq. (1), which we illustrate in Fig. 3 as a path through the matrix \((\Phi,\mathrm{A}_{\Phi},U)\times(\mathcal{F},S)\). Each cell corresponds to the risk incurred for a specific limitation for the encoder (row and \(1^{\text{st}}\) subscript) and the probe (column and \(2^{\text{nd}}\) subscript). Formally: * \(\mathbf{R}_{\Phi,\mathcal{F}}:=\inf_{f\in\mathcal{F}}\inf_{\phi\in\Phi}\mathrm{ R}(f\circ\phi)\) is the best possible risk for encoders in \(\Phi\) and probes in \(\mathcal{F}\). * \(\mathbf{R}_{A,\mathcal{F}}:=\inf_{f\in\mathcal{F}}\mathrm{R}(f\circ\phi_{A})\) is the risk of the best probe in \(\mathcal{F}\) and an encoder \(\phi_{A}:=\mathrm{A}_{\Phi}(p_{\text{\tiny un}})\in\Phi\) pretrained using the desired SSL algorithm and the population distribution. * \(\mathbf{R}_{A,\mathcal{S}}:=\mathrm{R}(f_{\phi_{A}(S)}\circ\phi_{A})\) is the risk incurred by the same encoder but using a probe trained from finite samples \(f_{\phi_{A}(S)}:=\mathrm{A}_{\mathcal{F}}(\hat{p}_{\phi_{A}(S)})\), where \(\phi_{A}(S):=\{(\phi_{A}(x),y)\,|\,(x,y)\in S\}\) is the represented training set. * \(\mathbf{R}_{U,\mathcal{S}}:=\mathrm{R}(f_{\phi_{U}(S)}\circ\phi_{U})\) is the risk when the probe and encoder are trained from finite samples \(\phi_{U}:=\mathrm{A}_{\Phi}(\hat{p}_{U})\). Our decomposition (Eq. (1)) corresponds to the specific path \(0\to\mathrm{R}_{\Phi,\mathcal{F}}\to\mathrm{R}_{A,\mathcal{F}}\to\mathrm{R}_{A,\mathcal{S}}\to\mathrm{R}_{U,S}\) in Fig. 3. Considering different paths through the matrix would give different decompositions. In Appx. A.2, we provide all other decompositions and show that those would be harder to estimate. ## 4 Estimating risk components for SSL Our goal is to compare pretrained SSL models using our decomposition. We would thus like estimators of each risk component that are simple, computationally efficient, consistent, and applicable in the standard SSL ImageNet setting. Compared to supervised learning, the main new challenge for estimating our risk components is that pretraining additional SSL encoders is computationally prohibitive, so we want each of our estimators to use the same SSL encoder. This is a challenge because our risk components are defined using three different encoders (\(\phi,\phi_{A},\phi_{U}\)). Our key insight is that we can estimate risk components by changing the training and evaluation set of the probe using the same pretrained SSL encoder. In the following, we illustrate this for the standard ImageNet SSL setting where the metric comes from pretraining encoders and training probes on the _same_ inputs \(S_{\text{tr}}\), and evaluating them on \(i.i.d.\) examples \(S_{\text{te}}\). As a result, we can estimate risk components by training and evaluating probes on specific partitions of \(S_{\text{tr}}\cup S_{\text{te}}\) as summarized in Table 1. We now provide the intuition behind each estimator. For formal derivations, properties, and pseudocode see Appx. B. As a reminder, the encoder is always pretrained on \(S_{\text{tr}}\). * \(\mathbf{\hat{R}}_{U,S}\): We need to estimate the risk when both the encoder and the probe are trained on finite data. They should thus both be evaluated on unseen data. 
We do so by training the probe on \(S_{\text{tr}}\) and evaluating it on \(S_{\text{te}}\), i.e., we use the standard SSL metric. As \(S_{\text{te}}\) is disjoint from both the encoder's and probe's (pre)training set \(S_{\text{tr}}\), this ensures that both models are evaluated on unseen data. * \(\mathbf{\hat{R}}_{\mathbf{A},\mathcal{S}}\): We need to estimate the risk when the probe is trained on finite samples but the encoder is pretrained on the population. To do so we use \(S_{\text{tr}}\) as a plug-in estimate for the population data, which we split into a training set \(S_{\text{sub}}\subset S_{\text{tr}}\) and testing set \(S_{\text{tr}}\setminus S_{\text{sub}}\) for the probe. This ensures that the probe is evaluated on unseen data but not the encoder. * \(\mathbf{\hat{R}}_{\mathbf{A},\mathcal{F}}\): We need to estimate the SSL risk when both the encoder and the probe are (pre)trained on the population distribution. We do so by using the _same_ pretraining, training, and evaluating set \(S_{\text{tr}}\), which ensures that the encoder and probe are evaluated on data they were trained on. \(\hat{\mathrm{R}}_{A,\mathcal{F}}\) is thus the training error of the probe used for standard evaluation. * \(\mathbf{\hat{R}}_{\Phi,\mathcal{F}}\): We need to estimate the risk of the best possible predictor in the composed family \(\mathcal{F}\circ\Phi\), without considering SSL or finite samples. We do so using the _training_ error of a supervised model with architecture \(\mathcal{F}\circ\Phi\), e.g., a ResNet50 on ImageNet. Our estimators are simple and computationally efficient as they do not require retraining any other SSL encoder. Under mild assumptions, they are all consistent but can be very biased on small datasets. This is similar to how supervised training and testing errors coarsely estimate \(\mathrm{R}_{\mathcal{F}}\) and \(\mathrm{R}_{S}\). ## 5 Experimental results In the following, we use our risk decomposition to answer the three motivating questions from Sec. 1: What are the major sources of errors in current SSL? Are there tradeoffs affecting which models to prefer in certain settings? How does each design choice affect the SSL model? To do so we analyze \(169\) SSL pretrained encoders, across \(28\) objectives, \(20\) architectures, and \(7\) years. For each model, we collected 30 design choices or hyperparameters, estimated our error components, and evaluated the ImageNet test performance of well-tuned linear probes trained on different subsets of ImageNet (\(100\%\), 30-shot, \(1\%\), 5-shot, 3-shot). In our pursuit of addressing our motivating questions, we thus provide the most comprehensive benchmarking of self-supervised learning models to date. We highlight the best-performing models in various settings in Table 2, which we will refer to throughout the section. We also provide a simple torch.hub API at github.com/YannDubs/SSL-Risk-Decomposition to load all pretrained encoders, metadata, and results. For experimental details see Appx. C, for raw results see Appx. E, and for extended analysis see Apps. D and F. ### 5.1 Major sources of errors In this section, we aim to understand the main sources of errors in current SSL, and how this might change over time. Identifying important sources of errors is potentially useful to understand what research to prioritize. Fig. 4 shows how error components have changed over time. We now discuss each of them in detail. 
**Usability drove improvements.** We see that usability used to be the largest source of error but it has improved steadily between 2016-2019. In Appx. F.3 we show that those improvements were mostly driven by the use and advances in contrastive learning. **Probe generalization is now key.** We see that probe generalization is now the largest source of error, which suggests that it should be prioritized. For example, since 2019, the field has been able to improve overall performance by significantly improving this source of error. **Encoder generalization is small and constant.** We see that the encoder generalization has been relatively small over time but might become important in the near future. \begin{table} \begin{tabular}{l c c c c} \hline \hline & & \multicolumn{3}{c}{Dataset} \\ \cline{3-5} Estimator & Encoder & Pretrain & Train & Eval \\ \hline \(\hat{\mathrm{R}}_{U,S}\) & \(\phi_{U}\) & \(S_{\mathrm{tr}}\) & \(S_{\mathrm{tr}}\) & \(S_{\mathrm{te}}\) \\ \(\hat{\mathrm{R}}_{A,S}\) & \(\phi_{U}\) & \(S_{\mathrm{tr}}\) & \(S_{\mathrm{sub}}\) & \(S_{\mathrm{tr}}\setminus S_{\mathrm{sub}}\) \\ \(\hat{\mathrm{R}}_{A,\mathcal{F}}\) & \(\phi_{U}\) & \(S_{\mathrm{tr}}\) & \(S_{\mathrm{tr}}\) & \(S_{\mathrm{tr}}\) \\ \(\hat{\mathrm{R}}_{\Phi,\mathcal{F}}\) & \(\phi_{\text{\tiny sup}}\) & \(S_{\mathrm{tr}}\) & \(S_{\mathrm{tr}}\) & \(S_{\mathrm{tr}}\) \\ \hline \hline \end{tabular} \end{table} Table 1: We estimate risk components of an encoder \(\phi_{U}\in\Phi\) pretrained on ImageNet’s train set \(S_{\mathrm{tr}}\), by training and evaluating probes on different partitions of ImageNet’s train \(S_{\mathrm{tr}}\) and test set \(S_{\mathrm{te}}\). \(S_{\mathrm{sub}}\subset S_{\mathrm{tr}}\) is a small training subset. \(\phi_{\text{\tiny sup}}\in\Phi\) is a supervised encoder of the same family. \begin{table} \begin{tabular}{l l c c c c} \hline \hline & & & \multicolumn{3}{c}{ImageNet probe acc.} \\ \cline{4-6} Obj. & Arch. & Param. & 100\% & 1\% & 3-shot \\ \hline MoCo-v3 & RN50 & 24M & 73.7 & 55.5 & 40.4 \\ DINO & RN50 & 24M & 74.2 & 52.9 & 35.9 \\ \hline SwAV & RN50w4 & 375M & 76.2 & 56.2 & 36.9 \\ VICRegL & CnvNxt-B & 85M & 74.8 & 64.3 & 56.3 \\ \hline MUGS & ViT-S16 & 22M & 77.3 & 62.9 & 49.6 \\ MSN & ViT-S16 & 22M & 76.1 & 67.5 & 60.4 \\ \hline MSN & ViT-B4 & 86M & 80.1 & 75.1 & 69.3 \\ MUGS & ViT-L16 & 303M & 80.9 & 74.0 & 68.5 \\ MSN & ViT-L7 & 303M & 79.9 & 74.9 & **69.8** \\ \hline CLIP & ViT-L14 & 304M & **85.0** & 75.2 & 62.9 \\ OpenCLIP & ViT-H14 & 632M & 84.4 & **75.8** & 63.7 \\ \hline \hline \end{tabular} \end{table} Table 2: Best performing models for ImageNet linear probing. The first 4 categories of rows show models pretrained on ImageNet-1K of various architectures (RN50, any CNN, ViT-S/16, any ViT). The last category allows any data and architecture. Underlined results are best in their category, bolded ones are best overall. Duplicate rows are removed. Figure 4: The major SSL improvements came from usability, but probe generalization is now the largest source of error. The plot shows risk components of the best ImageNet-pretrained model published in a given year. Lower is better. In Appx. F.3 we show similar trends for the average models. The fact that the generalization error is smaller for the encoder than the probe is surprising. Indeed, they are both (pre)trained on the same data (ImageNet's training set) but the encoder is more "complex" than a regularized linear probe. This requires further analysis but could be due to overparametrization (Belkin et al., 2019; Yang et al., 2020). 
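As a concrete illustration of Table 1 and Eq. (1), the following minimal sketch estimates the four risks with linear probes and differences them into components. It is an illustration only: the random features are stand-ins for precomputed encoder representations \(\phi_{U}(x)\), and \(\hat{\mathrm{R}}_{\Phi,\mathcal{F}}\) is a hand-set placeholder since it comes from supervised training rather than probing.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def probe_error(Z_fit, y_fit, Z_eval, y_eval):
    """Train a linear probe on representations and return its eval error."""
    probe = LogisticRegression(max_iter=2000).fit(Z_fit, y_fit)
    return 1.0 - probe.score(Z_eval, y_eval)

# Random stand-ins for phi_U(x) on ImageNet's train/test sets; in practice
# these would be precomputed with the pretrained SSL encoder.
rng = np.random.default_rng(0)
Z_tr, y_tr = rng.normal(size=(1000, 32)), rng.integers(0, 10, size=1000)
Z_te, y_te = rng.normal(size=(200, 32)), rng.integers(0, 10, size=200)
sub = rng.choice(len(Z_tr), size=len(Z_tr) // 10, replace=False)
rest = np.setdiff1d(np.arange(len(Z_tr)), sub)

R_US = probe_error(Z_tr, y_tr, Z_te, y_te)                        # train S_tr, eval S_te
R_AS = probe_error(Z_tr[sub], y_tr[sub], Z_tr[rest], y_tr[rest])  # train S_sub, eval S_tr \ S_sub
R_AF = probe_error(Z_tr, y_tr, Z_tr, y_tr)                        # probe training error
R_PhiF = 0.02  # placeholder: training error of a supervised model of the same architecture

components = {
    "encoder generalization": R_US - R_AS,
    "probe generalization": R_AS - R_AF,
    "representation usability": R_AF - R_PhiF,
    "approximation": R_PhiF,
}  # the components telescope back to the total risk R_US, as in Eq. (1)
```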
**Approximation error is negligible.** Unsurprisingly, current encoders have the capacity to perform the desired task. For the rest of the paper, we focus on the most common sources of errors: usability and probe generalization. ### 5.2 Tradeoffs affecting performance in various settings In this section, we first show that our estimators of usability and probe generalization are useful to choose which models to prefer in full- or few-shot settings. We then highlight a tradeoff between those two components that directly translates to a tradeoff between full- and few-shot performance. #### 5.2.1 Predicting performance across settings Our risk decomposition isolates generalization errors, and should by construction give insights into which models to favor in full- vs few-shot settings. Let us test whether this is also the case when using our simple estimators. As a reminder, error components are estimated on all of ImageNet, but we analyze the performance of probes trained on varying numbers of training samples (\(100\%\), \(1\%\) and 30-, 5-, 3-shot). **Probe generalization signals sample efficiency.** Intuitively, models with low probe generalization error perform better in few-shot settings (less variance) while those with low usability error perform better in full-shot settings (less bias). Fig. 5a shows that, indeed, the best encoders in few-shot regimes have smaller probe generalization errors. Can we use this relation to predict performance across settings? **Error components predict performance across settings.** In Appx. F.4 we propose a simple 2-parameter scaling law that fits the performance of all 169 models as a function of estimated error components and the number of training samples \(|S|\) (see Fig. 5b). We show that it performs significantly better than standard scaling laws (Kaplan et al., 2020; Rosenfeld, 2021) both in held-out settings (test \(R^{2}=0.94\)) and held-out encoders (test \(R^{2}=0.96\) when holding out contrastive encoders). While the scaling law will not save much compute (probes are efficient to train), it is a useful validation of our risk decomposition and estimators. #### 5.2.2 Tradeoffs One advantage of the supervised risk decomposition is that it highlights a tradeoff between approximation/estimation. Although this tradeoff does not always hold (Neal et al., 2018; Yang et al., 2020; Dar et al., 2021), it is a useful conceptual framework for developing models. For example, it suggests that high-capacity predictors perform better when there is plenty of training data and can benefit from regularization. In Appx. A.5 we derive three corresponding tradeoffs in SSL. Two of those are not insightful as they depend on the negligible approximation error. More interestingly, we derive a usability/probe generalization (U/P) tradeoff. This corresponds to the standard approximation/estimation tradeoff, but the gains in capacity come from changing the data (via encoding) rather than the predictor's family \(\mathcal{F}\). As an illustration, constant representations lead to probes that perform badly on training (high usability error) but have zero generalization error. In contrast, if the representations are one-hot encodings of inputs, then linear probes can achieve perfect training performance (usability) but will not generalize. **Usability/probe generalization tradeoff.** Similarly to approximation/estimation, U/P is not an exact tradeoff but suggests that decreasing one tends to increase the other. This can be seen in Fig. 
4: between 2016-2019 the usability error decreased at the expense of probe generalization, and vice-versa since 2019. This can also be seen in Fig. 6: at every point in time, the best models seem to form a tradeoff curve. Figure 5: Our estimated risk components are tightly related to performance in different settings. (a) Usability error of the best \(20\%\) of models increases as the number of training samples decreases, while probe generalization error decreases. (b) The performance predicted by our scaling law (x-axis) is close to the true performance (y-axis) for all data settings. Figure 6: Usability vs probe generalization tradeoff for the best 20% of models for each year (color). Models differ in many design choices (e.g. objective, architecture, epochs). **Full-/few-shot tradeoff.** Given the relation between usability/probe generalization and performance in different settings (Sec. 5.2.1), we expect the U/P tradeoff to translate into a full-/few-shot tradeoff. Table 2 shows that, indeed, the best models in full-shot (100%) settings are never the best ones in 3-shot. This is true for the 5 considered categories. Fig. 5 suggests that this is indeed driven by the U/P tradeoff. ### 5.3 Analysing design choices In this section, we analyze the impact of important SSL design choices on risk components and the performance in full- and 3-shot settings. Table 3 summarizes our findings. We use the following three methods to analyze our results: * **Controlled analysis (CA).** Whenever possible we analyze the effect of a design choice while fixing others. To do so quantitatively, we fit a linear model from the current (possibly log-transformed) design choice to the metric: \(\textit{metric}=\alpha\cdot\textit{parameter}+\beta^{T}\mathds{1}[\textit{model}]\), where \(\mathds{1}[\textit{model}]\) is a one-hot encoding of the value of all other design choices. The downside is that we can only apply CA if we have encoders that only differ in the desired design choice. * **XGBoost+SHAP.** For each risk component and metric, we train one XGBoost model (Chen & Guestrin, 2016) using all design choices and potential confounders (e.g. year). We then perform feature selection to avoid feature redundancy. Finally, we analyze the SHAP value (Lundberg & Lee, 2017) of the desired design choice. The main disadvantage of XGBoost+SHAP is that there might be other confounders we did not consider. * **Global linear analysis (GLA).** For each metric and design choice, we train a linear model from all metadata that we think are either important to predict the metric or may be confounders. The downsides of GLA are that it depends on our incomplete "expert knowledge" of how variables interact, and it makes a linearity assumption. In the main paper, we focus on results from SHAP and qualitative CA, but write "(GLA p-value)" or "(CA p-value)" to show that the other analyses give consistent conclusions. Although different analyses with consistent conclusions mitigate issues with the overall analysis, they do not imply any causal conclusions. For more methodological details see Appx. C.4. For extended analysis of all results see Appx. D. #### 5.3.1 Dimensionality **Increasing dimensionality improves usability at the expense of probe generalization.** Fig. 7 shows that increasing dimensionality improves usability but worsens probe generalization, which in turn worsens few-shot performance (Sec. 5.2.1). This is further supported by our linear model in the global and controlled setting (GLA/CA p-values \(<\)1e-9). In Appx. 
D.1 we show that what matters is the effective dimensionality (rank) of the representation. The effect of dimensionality can be intuitively understood by the fact that the capacity of linear classifiers depends on the input dimension \(d\) (Vapnik & Chervonenkis, 1971), so increasing \(d\) may improve performance but cause overfitting. For a formal explanation see Dubois et al. (2022). Figure 8: The representation’s dimensionality trades off probe generalization and usability. Colors indicate representations from the same ViT. We concatenate CLS tokens from different blocks to vary the dimensionality (dot size). \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \# dim. \(\downarrow\) & \# views \(\uparrow\) & ViT & \# param. \(\uparrow\) & MLP proj. & generative SSL & \# epoch \(\uparrow\) & Adam \\ \hline Usability error & \(\uparrow\) & \(\downarrow\) & & \(\downarrow\) & \(\downarrow\) & \(\uparrow\) & \(\downarrow\) & \(\downarrow\) \\ Probe gen. error & \(\downarrow\) & \(\downarrow\) & \(\downarrow\) & & \(\downarrow\) & \(\downarrow\) & \(\downarrow\) & \(\downarrow\) \\ Full-shot error & \(\uparrow\) & \(\downarrow\) & \(\downarrow\) & \(\downarrow\) & \(\downarrow\) & \(\uparrow\) & \(\downarrow\) & \(\downarrow\) \\ 3-shot error & \(\downarrow\) & \(\downarrow\) & \(\downarrow\) & \(\downarrow\) & \(\downarrow\) & \(\downarrow\) & \(\downarrow\) & \(\downarrow\) \\ \hline \hline \end{tabular} \end{table} Table 3: Effect of design choices on error components and full-/3-shot. \(\Downarrow\): much better, \(\downarrow\): better, \(\uparrow\): worse, \(\Uparrow\): much worse. Figure 7: Impact of the representation’s dimensionality (color) on the usability error, probe generalization error, and full-/3-shot linear probing. Impact is measured by SHAP values (x-axis). Lower is better as it decreases the risk. **Moving along the U/P tradeoff without retraining.** Appx. D.1 suggests that dimensionality might be a simple way to move along the U/P tradeoff. To test this, we take ViT encoders and concatenate CLS tokens from different blocks to increase dimensionality. Fig. 8 shows that this method allows trading off usability and probe generalization. **Improving performance without retraining.** Fig. 8 and Sec. 5.2 suggest that we can extract representations of different dimensionalities from the same encoder to improve performance in desired settings. Indeed, Table 4 shows that we can improve few-shot performance by decreasing dimensionalities. Extracting lower-dimensional representations from the OpenCLIP model even achieves the best overall performance for 1% as seen in Tables 2 and 4. This explains why previous works, e.g. (Caron et al., 2021), showed full-shot improvement when concatenating outputs of ViT blocks, namely, they were increasing the dimensionality. #### 5.3.2 Data and augmentations We now analyze the effect of the number of augmentations. We focus on multi-crops given that we have many pretrained models that only differ in this augmentation. **Augmentations improve usability and probe generalization.** A priori, one might think that using more augmentations improves generalization by acting as a regularizer. Fig. 9a shows that increasing the number of multi-crops actually mostly improves usability -- although it can also help probe generalization. Fig. 9b shows similar results when controlling for confounders. 
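The CLS-token concatenation used to vary dimensionality can be reproduced without retraining. The sketch below is our illustration (not the paper's code), assuming a timm-style ViT that exposes an iterable `blocks` attribute whose block outputs are token tensors of shape (batch, tokens, dim) with the CLS token at index 0:

```python
import torch

@torch.no_grad()
def concat_cls_tokens(vit, images, block_indices):
    """Collect the CLS token after selected transformer blocks and
    concatenate them into one higher-dimensional representation."""
    cls_tokens = []
    hooks = [
        vit.blocks[i].register_forward_hook(
            lambda _module, _inputs, out: cls_tokens.append(out[:, 0])
        )
        for i in block_indices
    ]
    try:
        vit(images)  # forward pass only to trigger the hooks
    finally:
        for h in hooks:
            h.remove()
    return torch.cat(cls_tokens, dim=-1)  # (batch, dim * len(block_indices))
```

Fewer blocks give a lower-dimensional (better-generalizing) probe input; more blocks give a higher-dimensional (more usable) one.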
Increasing the number of multi-crops thus overcomes the U/P tradeoff, which improves both full- and few-shot performance (Fig. 9a). In Appx. D.2 we show similar results for other augmentations. Strengthening augmentations intuitively improves probe generalization by increasing the invariance of the SSL encoder, which will retain less information that probes can overfit to (Tsai et al., 2021; Tian et al., 2020; Federici et al., 2020; Mitrovic et al., 2021; Wu et al., 2021; Ruan et al., 2022). The beneficial impact that augmentations have on usability is less obvious but has been suggested by Dubois et al. (2022). Specifically, they prove that stronger augmentations decrease the number of potential tasks and thus the required capacity of probes. Strengthening augmentations thus has a similar impact on usability as increasing the probe's capacity by increasing dimensionality (Fig. 7). **Additional pretraining data can worsen generalization.** In Appx. D.2 we show that pretraining on ImageNet-22K, instead of its subset ImageNet-1K, worsens the encoder's and probe's generalization but can improve usability. #### 5.3.3 Architecture We now analyze the impact of the encoder's architecture. **ViTs improve probe generalization.** Fig. 10a shows that ViTs are significantly better than CNNs at probe generalization \begin{table} \begin{tabular}{l l c c c c c} \hline \hline Ours & Obj. & ViT & Dim. & 100\% & 1\% & 3-shot \\ \hline ✗ & MUGS & S16 & 1536 & **77.3** & 62.9 & 49.6 \\ ✓ & MUGS & S16 & 384 & 77.0 & **66.6** & **57.9** \\ \hline ✗ & OpenCLIP & H14 & 1280 & **84.4** & 75.8 & 63.7 \\ ✓ & OpenCLIP & H14 & 1024 & 84.3 & **76.5** & **65.5** \\ \hline \hline \end{tabular} \end{table} Table 4: We improve few-shot performance by using representations from layers of smaller dimensionalities (“ours”). Figure 10: Impact of the (a) architecture’s family, and (b) number of parameters (color) on risk components and aggregated full- or few-shot risk. Lower SHAP values (x-axis) are better as the y-axes are errors. Figure 9: Effect of the number of multi-crops on usability and probe generalization error, (a) when considering all models; and (b) when all other hyperparameters are constant. 
#### 5.3.4 Objective We now analyze the effect that the objective has on the representation. To simplify the analysis we aggregate all (28) objectives into 6 types (x-axis of Fig. 12). **Generative and transformation-predicting objectives suffer from high usability error.** Fig. 12 shows that representations learned using objectives that are generative (e.g. MAE or BEiT) or predict the data augmentation (e.g. RotNet or LocNet) are less usable (GLA p-value=3e-4). The other objectives give similar usability, with a slight edge for clustering objectives (e.g. DISSL, DINO, or SwAV). The lack of usability explains why generative encoders such as MAE do not give a good linear probing performance, despite their strong fine-tuning performance (He et al., 2022). Intuitively, generative objectives preserve all information about the input but do not ensure that this information is usable by linear probes (Xu et al., 2020; Dubois et al., 2020). In comparison, contrastive objectives ensure linear usability because they maximize dot-product similarity (Saunshi et al., 2019; Tosh et al., 2021; HaoChen et al., 2021). More generally, Dubois et al. (2022) shows that many existing SSL losses explicitly optimize for usability. **The exact objective has little impact.** Fig. 13 compares different clustering objectives and shows that the impact of the exact objective is relatively minor. For example, the impact on the aggregated risk is at most \(1\) percentage point. This suggests that one should choose a simple and easy-to-tune objective and focus on other components. ## 6 Related work **Risk decomposition.** The estimation/approximation or the bias/variance decomposition has been very useful for practitioners and theoreticians to focus on specific risk components (Kohavi and Wolpert, 1996; Domingos, 2000; Valentini and Dietterich, 2004). Such decomposition has nevertheless rarely been extended beyond classical supervised learning. Notable exceptions include (Wu et al., 2020) and (Zhou et al., 2022) in the context of domain adaptation and federated learning respectively. To our knowledge, we are the first to provide an exact decomposition for SSL, but some theoretical works, e.g., Bansal et al. (2021), have decomposed bounds on the risk (rather than the risk). **Benchmarking SSL.** One of our secondary contributions is a thorough benchmark of many SSL models (5 settings, 30 design choices, 28 objective, and 169 models). There have been previous SSL benchmarks but those are either much Figure 11: Effect of the projection head on usability and probe generalization error, when all other hyperparameters are kept the same. Each color shows a specific model. Figure 12: Impact of objective type on usability. Each bar shows the average usability error for all encoders pretrained with that type of SSL objective. Type details in Appx. C.4. Figure 13: Comparison between clustering objectives. smaller or use a different evaluation pipeline for each model. For example, Goyal et al. (2019) provides a thorough but small benchmark (3 design choices and 2 objectives). While Goyal et al. (2021) and Contributors (2021) evaluate more models (66 and 22 respectively) but use different evaluation pipelines as their goal is to replicate previous work rather than to provide a fair benchmarking. **Understanding SSL.** There is a growing literature of work that tries to explain the effect of specific SSL design choices, e.g. projections heads Gupta et al. (2022); Appalaraju et al. (2020); Jing et al. (2022) or augmentations Tsai et al. 
(2021); Tian et al. (2020); Federici et al. (2020); Mitrovic et al. (2021); Wu et al. (2021); Dubois et al. (2021), or provide a conceptual framework to think about design choices Dubois et al. (2022). Sometimes those explanations agree with one another, but other times they are orthogonal or even in contradiction. Our work does not provide explanations but rather a new tool to empirically verify previous hypotheses and suggest new ones. For example, in Sec. 5.3 we highlight previous explanations that are supported by our empirical results. ## 7 Summary and outlook We present an SSL risk decomposition to provide a fine-grained understanding of the types of errors made by a linear probe predicting from SSL representations. Our risk decomposition generalizes the supervised approximation/estimation decomposition by considering errors arising from the representation learning process. We provide consistent and computationally efficient estimators for each risk component, akin to the training and validation errors in supervised learning. Using those estimators, we analyze \(169\) pretrained SSL models and the effect of \(30\) design choices. Our findings suggest that the two primary sources of errors are the usability of the representation, resulting from linear separability issues, and the probe's generalization error, due to finite training data. Furthermore, we show that there is often a tradeoff between these two sources of errors, which translates into a performance tradeoff between few- and full-shot probing. Some design choices, such as the dimensionality of the representation and the SSL objective, can control this tradeoff and thus improve performance in certain settings at the expense of others. Meanwhile, other choices, such as the use of large projection heads and ViT encoders, overcome the tradeoff and thus improve performance in all settings. Our risk decomposition and in particular our estimators have limitations that should be addressed to improve their applicability. Most notably, they require the probe's training data to be a subset of the encoder's pretraining data, limiting their application in common out-of-distribution settings. We hope that our findings will inspire further research in this direction, and, more generally, the use of risk decompositions for analyzing sources of errors in machine learning. ## Acknowledgements We thank Rohan Taori, Niladri Chatterji, Shibani Santurkar, and Ananya Kumar for helpful feedback. YD is supported by a Knights-Hennessy Scholarship. The work is supported by an Open Philanthropy Project Award.
2306.07921
Continuous Cost Aggregation for Dual-Pixel Disparity Extraction
Recent works have shown that depth information can be obtained from Dual-Pixel (DP) sensors. A DP arrangement provides two views in a single shot, thus resembling a stereo image pair with a tiny baseline. However, the different point spread function (PSF) per view, as well as the small disparity range, makes the use of typical stereo matching algorithms problematic. To address the above shortcomings, we propose a Continuous Cost Aggregation (CCA) scheme within a semi-global matching framework that is able to provide accurate continuous disparities from DP images. The proposed algorithm fits parabolas to matching costs and aggregates parabola coefficients along image paths. The aggregation step is performed subject to a quadratic constraint that not only enforces the disparity smoothness but also maintains the quadratic form of the total costs. This gives rise to an inherently efficient disparity propagation scheme with a pixel-wise minimization in closed-form. Furthermore, the continuous form allows for a robust multi-scale aggregation that better compensates for the varying PSF. Experiments on DP data from both DSLR and phone cameras show that the proposed scheme attains state-of-the-art performance in DP disparity estimation.
Sagi Monin, Sagi Katz, Georgios Evangelidis
2023-06-13T17:26:50Z
http://arxiv.org/abs/2306.07921v1
# Continuous Cost Aggregation for Dual-Pixel Disparity Extraction ###### Abstract Recent works have shown that depth information can be obtained from Dual-Pixel (DP) sensors. A DP arrangement provides two views in a single shot, thus resembling a stereo image pair with a tiny baseline. However, the different point spread function (PSF) per view, as well as the small disparity range, makes the use of typical stereo matching algorithms problematic. To address the above shortcomings, we propose a Continuous Cost Aggregation (CCA) scheme within a semi-global matching framework that is able to provide accurate continuous disparities from DP images. The proposed algorithm fits parabolas to matching costs and aggregates parabola coefficients along image paths. The aggregation step is performed subject to a quadratic constraint that not only enforces the disparity smoothness but also maintains the quadratic form of the total costs. This gives rise to an inherently efficient disparity propagation scheme with a pixel-wise minimization in closed-form. Furthermore, the continuous form allows for a robust multi-scale aggregation that better compensates for the varying PSF. Experiments on DP data from both DSLR and phone cameras show that the proposed scheme attains state-of-the-art performance in DP disparity estimation. + Footnote \(\dagger\): This work was done as part of Sagi Monin’s internship at Snap Inc. \(\star\): This work was done as part of Sagi Katz’s work at Snap Inc. ## 1 Introduction Depth is an important cue for disparate applications such as recognition, navigation, and graphic manipulation. The respective sensors typically use stereo, structured light, or time-of-flight techniques [36, 21] for depth estimation. These techniques require special hardware, and their form factors are not easily miniaturized. With simpler hardware, deep neural networks (DNNs) obtain depth from monocular images [30], but still show sub-optimal results in comparison to depth sensors. Interestingly, dual-pixel (DP) sensors, which are becoming increasingly common on modern cameras, have been recently employed for depth estimation [43, 13]. Although these sensors were originally designed to assist with autofocus, the resemblance to a tiny-baseline stereo view, as well as being inherently rectified, made them attractive for depth computation. However, a DP image is not strictly equivalent to a stereo image pair [33], and the use of off-the-shelf stereo algorithms on DP images seems to be inadequate (Fig. 1). This is because DP sensors have a split-pixel arrangement, whereby two sub-aperture views from opposing directions are delivered. As a result, the point spread function (PSF) is different per view and depth does not correspond to pure shifts (Fig. 2). In addition, even if high-resolution DP images are provided, processing on edge devices [43] requires working on lower-resolution DP data. This, along with the tiny baseline, implies very small disparity values (Fig. 1). Therefore, unlike traditional stereo algorithms, sub-pixel disparity estimation becomes a prerequisite. To address the above-mentioned shortcomings, we propose a continuous cost optimization in a semi-global manner. Figure 1: SGM is a widely-used stereo disparity estimation method but it underperforms on DP images, in particular when the disparities are quite small. Instead, the proposed Continuous Cost Aggregation (CCA) algorithm attains state-of-the-art performance in DP disparity estimation. 
Results from DSLR (_top_) and Phone cameras (_bottom_) are shown; CCA estimates disparities in the range \([-12,6]\) (DSLR) and \([-1.3,0.5]\) (Phone), respectively. Inspired by Semi-Global Matching (SGM) [18], we adopt the pathwise optimization from multiple directions through the image. While SGM aggregates cost at integer disparities, our algorithm aggregates the coefficients of parabolas that represent costs of continuous disparities, thus easily modelling small and subpixel disparities. This aggregation step is performed subject to a constraint that enforces smoothness between successive pixels along the paths. By utilizing a quadratic constraint, the total cost remains a parabola, which gives rise to a very efficient disparity propagation strategy and makes the minimization very simple. The inherent efficiency of the proposed algorithm arises from cost aggregation over a 2D image space, unlike the aggregation of SGM over the 3D image-disparity space. In addition, the continuous scheme favors a multi-scale approach that fuses disparity estimation from multiple resolutions, thus better dealing with varying PSFs and improving disparity estimation in blurred regions (Fig. 2). Our contributions are summarized as follows: 1. We propose a Continuous Cost Aggregation (CCA) scheme that results in an efficient semi-global solution to extract disparity from DP images. 2. The proposed algorithm includes a multi-scale disparity fusion that is able to rectify errors from previous scale levels. 3. Our algorithm attains state-of-the-art (SOTA) performance in DP disparity estimation. Unlike learning-based approaches, the algorithm does not depend on training data and attains satisfactory performance across diverse data-sets. It is noteworthy that CCA has the potential to provide disparities from traditional stereo images. Despite our focus on DP stereo, we include an experiment and show that CCA performs similarly to SGM in standard large-baseline stereo matching, with lower space and time complexity. ## 2 Related Work **Depth from stereo** Early works on depth estimation relied on two-view geometry and stereo matching [38, 15], aiming to estimate the per-pixel disparity between two images. A large variety of methods has been proposed, including local and global algorithms. While local algorithms work on a per-pixel basis and typically solve an optimization problem per pixel, global algorithms impose dependencies between neighboring pixels, e.g., Markov Random Fields (MRF) [7], thus solving a single yet large optimization problem. As a result, global algorithms generally outperform local algorithms at the expense of high computational cost. To obtain continuous disparities, stereo algorithms either fit a parabola around the lowest-cost disparity [38, 18] or integrate interpolation kernels into the cost function [32], with parabola fitting as the dominant case. As mentioned, our algorithm is related to the SGM [18] approach that mitigates the above-mentioned performance limitations. SGM imposes a smoothness term along straight paths, thus resulting in an efficient algorithm. The algorithm has four main steps: cost-volume calculation, cost aggregation, winner-take-all optimization, and a sub-pixel refinement step. Based on SGM, many variants have been proposed: incorporating DNNs for the cost-volume [9, 42, 41], updating the aggregation step [11, 39], hardware implementations for edge devices [35, 40, 5], and memory-efficient adaptations [19, 24]. 
**Depth from a single image sensor** Interestingly, depth estimation is not limited to stereo or multiple cameras, and monocular depth estimation has also been investigated. Figure 2: A simple illustration of the DP image formation (_left_). Unlike in-focus scene points, out-of-focus points will be projected to different pixels at the two views, hence the disparity. Based on the DP image model, the two views have different PSF, here represented as the 2D kernels of [33] (_middle_). The higher the resolution is, the more different the PSFs are, and the less the model resembles a stereo image pair. As seen, the relation between the intensity profiles of the two LR views of a cross-section is better explained by a shift, compared to the HR profiles (_right_). For a more detailed study on DP image formation, we refer the reader to [33, 43, 13]. Traditional and early methods mainly relied on inverting a depth-dependent PSF [10, 25, 14]. More recent work, however, relies on deep learning and trains DNNs to estimate depth [12, 27]. In this context, significant progress has been made in estimating depth from a single RGB image and plenty of methods have been proposed; for a detailed review, we refer the reader to [30, 26]. More similar to the problem in question, DP images have also been used for depth estimation. In [43], a classic local stereo matching method is combined with an up-sampling bilateral filter. Follow-up efforts used DNNs for depth estimation; [13] explicitly modeled the affine relationship between inverse depth and disparity and trained a network to estimate depth up to this ambiguity. In [23] a DNN used for stereo estimation is fine-tuned for DP using a self-supervised approach; [46] fed a DNN with fused data from stereo and DP cameras in order to exploit the complementary errors of depth estimation from each sensor alone. The authors of [33] suggested modeling the different defocus blur in each of the left and right images to estimate depth. Despite the good performance, their method is slow, as it requires solving an optimization problem for each patch in the image. The above methods have also emphasized that, although DP data is available from both smartphones and DSLR cameras, the difference in optical aberrations and depth-of-field (DOF) between devices leads to severe differences in the performance of the algorithms. It is noteworthy that DP sensors have also been used for other applications, such as all-in-focus image generation [45, 3, 1, 44], reflection removal [34], improved auto-focusing [17], 3D face reconstruction [22] and multi-view motion synthesis [2]. A large part of these algorithms rely on DNNs, thus requiring a large amount of data with ground-truth. To this end, DP image simulators have been proposed [31, 4]. ## 3 Continuous Cost Aggregation The main pipeline of CCA is summarized in Fig. 3. In what follows, we describe the process along with implementation details such as the confidence score modification of the parabolas and the cost aggregation over multiple scales. ### 3.1 Continuous Cost Functions To start with, we compute the discrete cost \(C_{int}(p,d)\) for all integer disparity values \(d\) and pixels \(p\). Note that in the case of DP, the disparity range is relatively small. Any standard matching cost that can be smoothly interpolated (e.g., sum of absolute differences (SAD), normalized cross-correlation (NCC)) can be used at this step of CCA. 
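The discrete matching-cost step can be sketched as follows. This is an illustrative SAD implementation only; the window size, border handling, and shift convention are our assumptions, not details specified by the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sad_cost_volume(left, right, d_min, d_max, win=5):
    """Discrete matching cost C_int(p, d) via window-averaged absolute
    differences, for every integer disparity in [d_min, d_max]."""
    disparities = np.arange(d_min, d_max + 1)
    cost = np.empty((len(disparities), *left.shape))
    for i, d in enumerate(disparities):
        shifted = np.roll(right, d, axis=1)  # horizontal shift by d pixels
        cost[i] = uniform_filter(np.abs(left - shifted), size=win)
    return disparities, cost
```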
Then, for every pixel, we initialize a parabola around the integer disparity that corresponds to the minimum cost value. If we assume that \(d^{0}\) is the integer disparity of the minimum cost for pixel \(p\), we fit a parabola to the costs around \(d^{0}\) to define a _continuous_ cost: \[C_{p,d^{0}}(\Delta d)=a_{p}\Delta d^{2}+b_{p}\Delta d+c_{p}, \tag{1}\] with \(\Delta d\) being the continuous offset from the integer disparity \(d^{0}\). The coefficients \(\{a_{p},b_{p},c_{p}\}\) can be easily found from the costs of three adjacent disparities: \[a_{p}=\frac{C_{int}(p,d^{0}+1)+C_{int}(p,d^{0}-1)-2C_{int}(p,d^{ 0})}{2},\] \[b_{p}=\frac{C_{int}(p,d^{0}+1)-C_{int}(p,d^{0}-1)}{2},\] \[c_{p}=C_{int}(p,d^{0}).\] By considering the total continuous disparity \(d:=d^{0}+\Delta d\), we can rewrite Eq.(1) as: \[C_{p,d^{0}}(d-d^{0})=a_{p}(d-d^{0})^{2}+b_{p}(d-d^{0})+c_{p}.\] Recall that \(d^{0}\) is a constant offset and the quadratic cost can be defined as: \[C_{p}(d)=\alpha_{p}d^{2}+\beta_{p}\;d+\gamma_{p}, \tag{2}\] where the new coefficients \(\{\alpha_{p},\beta_{p},\gamma_{p}\}\) are defined as: \[\alpha_{p} = a_{p},\] \[\beta_{p} = b_{p}-2a_{p}d^{0},\] \[\gamma_{p} = c_{p}+a_{p}(d^{0})^{2}-b_{p}d^{0}.\] Although these functions are low-order polynomials and cannot represent all the richness of the discrete cost volume, they have the advantage of inherently containing a non-integer estimation of the best local disparity, as well as its confidence. The optimal local disparity corresponds to the parabola minimizer: \[d_{p}^{optimal}=-\frac{\beta_{p}}{2\alpha_{p}},\] and the confidence is represented by the value of \(\alpha_{p}\), which is proportional to the curvature. Figure 3: After the initial cost calculation, CCA initializes a parabola around the integer disparity that corresponds to the minimum cost, thus representing a continuous cost per pixel. Then, along 1D paths, parabola coefficients are aggregated subject to a smoothness constraint. The total cost of all the paths is finally minimized to provide a semi-globally smoothed disparity map. Note that the minimization has a simple closed-form solution since the total cost function per pixel remains a parabola. Intuitively, pixels with a low curvature value have a rather flat cost function, and therefore, their confidence should be low. Although the parabolas are uniquely defined by three coefficients, neither the minimizer nor the confidence is affected by the third coefficient, and its computation can be skipped. While we adopt a quadratic function for cost interpolation, different interpolants can likewise be used. However, the quadratic function is convex and attains a unique minimum. These properties, together with its closure under addition, simplify the cost aggregation step significantly (see Sec. 3.3). In addition, any local algorithm that provides sub-pixel disparity estimation [29, 32] can be used to calculate \(\Delta d\) and re-center the _initial_ parabola at a better position before the aggregation step, _e.g._, by keeping \(\alpha_{p}\) fixed and updating \(\beta_{p}\) using a better value of \(d_{p}^{optimal}\). ### 3.2 Confidence Score As mentioned, a confidence measure can be deduced from the curvature of the parabolas. However, it does not take into account other local minima. Therefore, while a parabola makes a good representation of the local cost function, it can only represent a single minimum. 
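The closed-form parabola fit above translates directly into a few vectorized lines. The sketch below (our illustration, with clipped borders and a small guard on \(a_{p}\) as assumptions) returns the coefficients of Eq. (1) and the sub-pixel minimizer \(d^{0}-b_{p}/(2a_{p})\):

```python
import numpy as np

def fit_parabolas(disparities, cost):
    """Fit a parabola to the costs around the integer minimum of each pixel.

    `cost` has shape (D, H, W) as returned by the cost-volume step; the
    argmin index is clipped to the interior so three neighbors always exist.
    """
    D, H, W = cost.shape
    i0 = np.clip(cost.argmin(axis=0), 1, D - 2)  # index of d^0 per pixel
    rows, cols = np.indices((H, W))
    c_m = cost[i0 - 1, rows, cols]   # C_int(p, d^0 - 1)
    c_0 = cost[i0, rows, cols]       # C_int(p, d^0)
    c_p = cost[i0 + 1, rows, cols]   # C_int(p, d^0 + 1)
    a = (c_p + c_m - 2 * c_0) / 2
    b = (c_p - c_m) / 2
    d0 = disparities[i0]
    d_opt = d0 - b / (2 * np.maximum(a, 1e-8))  # sub-pixel minimizer
    return a, b, c_0, d_opt
```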
In some cases, especially in noisy regions or those with repetitive patterns, the real disparity value might not receive the minimum matching cost value. To mitigate this, we assume that the correct disparity value might not always get the minimum cost value, but instead it gets a different value that is close to the minimum. Thus, while computing the parabola coefficients, we reduce the confidence using the second-lowest cost. If this cost is attained at a disparity \(d^{1}\) that is not adjacent to \(d^{0}\), \(|d^{1}-d^{0}|>1\), we scale the parabola by the factor \[S_{confidence}=\max\left(\min\left(\frac{1-q}{1-T_{q}},1\right),\epsilon\right) ^{2}, \tag{3}\] where \(q=C_{int}(p,d^{0})/C_{int}(p,d^{1})\) is the cost ratio, and \(T_{q}\) is the ratio threshold below which \(S_{confidence}=1\) (no scaling). As can be seen in Eq.(3), the scale factor gets lower as the second-lowest cost value approaches the lowest one. Note that such a per-pixel scaling reduces the confidence of wide parabolas without changing their minimum. ### 3.3 Cost Aggregation As in the SGM approach, we aggregate the costs along multiple 1D paths through the image. For a single path, our cost function \(L_{p}(d)\) includes both a local term and a quadratic regularizer (smoothness term): \[L_{p}(d)=C_{p}(d)+P_{adapt}(d-m_{p-1})^{2}, \tag{4}\] where \(m_{p-1}=\operatorname*{argmin}_{d}L_{p-1}(d)\) is the minimizer of the previous parabola in the path, and \(P_{adapt}\) is an adaptive scale defined below. If we substitute Eq.(2) into Eq.(4), the aggregated cost can be rewritten as a new parabola with coefficients \(\{\operatorname{A}_{p},\operatorname{B}_{p},\Gamma_{p}\}\), which attains its minimum at \(-\operatorname{B}_{p}/(2\operatorname{A}_{p})\) and has a confidence parameter \(\operatorname{A}_{p}\): \[L_{p}(d)=\operatorname{A}_{p}d^{2}+\operatorname{B}_{p}d+\Gamma_{p},\] where \[\operatorname{A}_{p}=\alpha_{p}+P_{adapt},\] \[\operatorname{B}_{p}=\beta_{p}+P_{adapt}\cdot\tfrac{ \operatorname{B}_{p-1}}{\operatorname{A}_{p-1}},\] \[\Gamma_{p}=\gamma_{p}+P_{adapt}\cdot\left(\tfrac{\operatorname{B} _{p-1}}{2\operatorname{A}_{p-1}}\right)^{2}\] and \(m_{p-1}=-\operatorname{B}_{p-1}/(2\operatorname{A}_{p-1})\). The adaptive weight of the smoothness term in Eq.(4) is defined by \(P_{adapt}=P\cdot\operatorname{A}_{p-1}\cdot e^{\left(-(I_{p}-I_{p-1})^{2}/ \sigma^{2}\right)}\), which includes the user-defined parameters \(P\) and \(\sigma\), the confidence of the previous aggregated parabola \(\operatorname{A}_{p-1}\), and a gradient-based exponential factor - similar to the one used in [18] - that becomes smaller over edges of the input image \(I\). Recall that the constant term \(\Gamma_{p}\) does not affect the minimizer, thus only the \(\operatorname{A}\) and \(\operatorname{B}\) coefficients really need to be updated within a very simple propagation equation. Finally, to define the total cost for pixel \(p\), we sum the aggregated costs of all the paths \(r\) that end in pixel \(p\): \[S_{p}(d)=\sum_{r}L^{r}_{p}(d),\] with the sum of parabolas \(S_{p}(d)\) being in fact a parabola itself. Therefore, the final continuous disparity value for pixel \(p\) is obtained by the minimizer of \(S_{p}(d)\): \[disparity=-\frac{\sum_{r}\operatorname{B}^{r}_{p}}{2\sum_{r}\operatorname{A}^{r}_{p}}. \tag{5}\] As seen in Eq. (5), an instability may occur if the denominator approaches \(0\). Although the chances of this happening are low due to the summation over different paths, it is still important to mitigate this issue. 
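Because only two coefficients propagate per pixel, one aggregation pass is a short recursion. The following sketch runs a single left-to-right pass; the parameter values are illustrative, and in the full algorithm one such pass is run per path direction and the resulting \((\operatorname{A},\operatorname{B})\) maps are summed before applying Eq. (5):

```python
import numpy as np

def aggregate_left_to_right(alpha, beta, image, P=0.5, sigma=10.0):
    """One aggregation pass: A_p = alpha_p + P_adapt and
    B_p = beta_p + P_adapt * B_{p-1} / A_{p-1}, with P_adapt shrinking
    over image edges. P and sigma here are illustrative values."""
    alpha = np.maximum(alpha, 1e-6)  # tiny-curvature parabolas are invalidated
    A = np.empty_like(alpha)
    B = np.empty_like(beta)
    A[:, 0], B[:, 0] = alpha[:, 0], beta[:, 0]
    for x in range(1, alpha.shape[1]):
        grad = image[:, x] - image[:, x - 1]
        P_adapt = P * A[:, x - 1] * np.exp(-(grad ** 2) / sigma ** 2)
        A[:, x] = alpha[:, x] + P_adapt
        B[:, x] = beta[:, x] + P_adapt * B[:, x - 1] / A[:, x - 1]
    return A, B

# For a single path the disparity is -B / (2 * A); summing the (A, B) maps
# over all path directions before dividing gives the semi-global result of Eq. (5).
```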
To that end, for every pixel, we invalidate the parabola if the coefficient \(\alpha_{p}\) is below a small threshold \(T_{a}\): \[\{\alpha_{p},\beta_{p},\gamma_{p}\}=\begin{cases}\{\epsilon,0,0\},&\text{if } \alpha_{p}<T_{a}\\ \{\alpha_{p},\beta_{p},\gamma_{p}\},&\text{otherwise}.\end{cases}\] Using \(\epsilon\) and \(0\) values means that the aggregated parabola will receive its minimum from the previous pixels in the path. It is important to clarify here that the parabola aggregation can significantly change the initial integer disparity around which the parabola is initialized. As such, CCA should not be seen as a sub-pixel refinement of the initial integer disparity. As discussed, the cost aggregation is performed along straight paths, which may result in artifacts known as streaks [11]. To alleviate this issue, one may optionally apply several iterations of aggregation to make the smoothness requirement stronger. An iterative cost aggregation step is discussed in the appendix. ### 3.4 Cost Aggregation Over Scales A multi-scale approach is beneficial for stabilizing the disparity estimation, as well as for potential acceleration due to the reduced number of pixels and the shrinkage of the disparity range. In the specific case of DP cameras, a multi-scale approach is also beneficial to resolve depth in the presence of blur that may destabilize the disparity in fine levels of the pyramid. Given a continuous disparity result at a coarse scale \(s\), the question is how to use it to compute the disparity at the finer scale \(s-1\). A simple solution is to upscale the coarse-level disparity and compute the nearest integer disparity at the finer level. Then, the fine-level parabolas can be fit around the up-scaled values. Although this approach is very efficient, errors in coarse levels seep through to finer scales without any way to correct them. Instead, we use a less strict approach where the fine-scale solution is more affected by the coarse-scale solution in less confident regions, while the latter has only a minor effect in high-confidence regions. For this, we bring the aggregated parabolas \(\{\mathrm{A}_{p,s},\mathrm{B}_{p,s}\}\) from the coarse scale \(s\) to the finer scale \(s-1\) by upscaling and updating the coefficient maps of scale \(s\) to match the map size of scale \(s-1\), using bilinear interpolation: \[\mathrm{A}_{p,s-1}^{prior}=upsample\left(\mathrm{A}_{p,s}\right),\] \[\mathrm{B}_{p,s-1}^{prior}=upsample\left(\mathrm{B}_{p,s}\cdot F \right),\] where \(F\) is the scale factor between scale \(s-1\) and \(s\). Finally, these maps are weighted by a factor \(w\) and added to the per-pixel coefficient maps of scale \(s-1\), in order to calculate the coefficients of Eq. (2): \[\alpha_{p,s-1}^{\mathit{with\ prior}}=\alpha_{p,s-1}+w\cdot\mathrm{A}_{p,s-1}^{prior},\] \[\beta_{p,s-1}^{\mathit{with\ prior}}=\beta_{p,s-1}+w\cdot\mathrm{B }_{p,s-1}^{prior}.\] Such a weighting has the potential to shift and stabilize "weak" parabolas in weakly textured or blurred regions. Interestingly, any kind of prior might be added in a similar way. For instance, it is possible to aggregate the parabolas over time, where \(w\) can be controlled by the similarity of the current pixel to the corresponding pixel from a previous frame. In order to accelerate the computation, our multi-scale solution works by finding the range \([d_{MIN},d_{MAX}]\) of the expected disparity of the current scale according to the previous scale. 
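A sketch of this coarse-to-fine prior, assuming OpenCV for the bilinear upsampling; the blending weight is an illustrative value, not one taken from the paper:

```python
import cv2

def add_coarse_prior(alpha_fine, beta_fine, A_coarse, B_coarse, F=2.0, weight=0.25):
    """Blend the aggregated coarse-scale parabolas into the finer scale:
    the (A, B) maps are bilinearly upsampled, B is multiplied by the scale
    factor F (disparities grow by F), and both are added with a weight."""
    h, w = alpha_fine.shape
    A_prior = cv2.resize(A_coarse, (w, h), interpolation=cv2.INTER_LINEAR)
    B_prior = cv2.resize(B_coarse * F, (w, h), interpolation=cv2.INTER_LINEAR)
    return alpha_fine + weight * A_prior, beta_fine + weight * B_prior
```

Where the fine-scale curvature (confidence) is large, the prior barely moves the minimizer; where it is small, the upsampled coarse parabola dominates, which is exactly the intended stabilizing behavior.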
Then, the matching costs for the current scale are computed only for the expected integer disparity values in the range \([d_{MIN}-1,d_{MAX}+1]\). This approach assumes that the globally detected disparity range is a good representation of the possible disparity range, while still avoiding making early decisions at the pixel level. ### Complexity Analysis In order to derive the complexity of CCA, we consider a single scale of disparity computation. Computing the integer disparity corresponding to the lowest cost and then fitting the initial parabolas requires \(O(WHD)\) operations, where \(W\) and \(H\) are the width and height of the image and \(D\) is the number of disparity levels. In the cost aggregation step, we need to propagate two coefficients over the entire image and accumulate the results, which requires \(O(WH)\) operations for every path. Therefore, assuming the number of paths \(R\), the total complexity of the aggregation is \(O(WHR)\). This is in contrast to SGM, where the cost aggregation complexity is \(O(WHDR)\), which is considerably higher. The final computation of the optimal disparity can be done as part of the last path computation and therefore adds no additional computational complexity. Hence, the time complexity of CCA is \(O(WHD+WHR)\). Since CCA does not need to revisit the entire cost volume and maintains only two scalars per pixel, the space complexity does not depend on the number of disparities and is \(O(WH)\). ## 4 Experiments To evaluate the performance of the proposed algorithm and compare it to SOTA in DP disparity estimation, we conduct experiments using DP data from both DSLR and phone cameras. In addition, we compare the performance of CCA and SGM on standard stereo data-sets. ### Dual Pixel Cameras Openly accessible DP data exists for Canon DSLR cameras [33] and Google Pixel phones [13], with available datasets for both cameras. As DP sensors capture both views in a single shot, the two captured images share the same exposure time and gain, which should result in images with similar intensities. However, while DSLR left-right DP images have similar intensity levels, left-right sub-pixels of phone cameras have different properties (e.g., due to a lower lens/sensor quality). This leads us to use different preprocessing steps in order to compute the disparity maps. In the next sections, we evaluate our algorithm on these two different data-sets and compare to SOTA algorithms. The hyper-parameters used for the experiments below and additional experimental results with different parameter configurations, such as the number of iterations, are provided in the appendix. #### 4.1.1 DSLR Captured Images In DSLR cameras the two resulting DP images are color images, as each pixel on the sensor is divided into two adjacent pixels. In addition, the high quality sensor and lens system result in images with similar intensity levels and few aberrations. The large aperture results in rather large disparities between the two views. The good image quality means that neither additional pre-processing nor a robust cost function is needed. We calculate costs using SAD and we shift the initial parabola using the subpixel estimation of ENCC [32], since it better initializes the aggregation step (see supplementary material). As out-of-focus regions cause an increase in both disparity and blur of the image, we used three levels in the multi-scale aggregation.
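Since the DSLR experiments use three pyramid levels, it may help to make the scale-prior propagation of Sec. 3.4 concrete. The following is a minimal sketch; the bilinear upsampling via `scipy.ndimage.zoom`, the default weight \(w\), and the shape alignment are our assumptions, not details from the paper.

```python
import numpy as np
from scipy.ndimage import zoom  # order=1 -> bilinear interpolation

def add_coarse_prior(alpha_fine, beta_fine, A_coarse, B_coarse, F=2.0, w=0.5):
    """Blend aggregated coarse-scale parabolas {A_s, B_s} into the fitted
    fine-scale coefficients of Eq. (2), following Sec. 3.4."""
    A_prior = zoom(A_coarse, F, order=1)        # upsample confidence map
    B_prior = zoom(B_coarse * F, F, order=1)    # B scales with disparity (x F)
    # align shapes (zoom may differ by a pixel from the fine-scale maps)
    h, wd = alpha_fine.shape
    A_prior, B_prior = A_prior[:h, :wd], B_prior[:h, :wd]
    return alpha_fine + w * A_prior, beta_fine + w * B_prior
```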
In all the experiments, we evaluate the performance of our algorithm with and without a post-processing filter, by adopting the filtering pipeline of [33]. We evaluate our method on a data-set from [33], being the data captured with a Canon DSLR camera; two subsets are available, with and without ground-truth (GT), respectively; we refer to these as the DSLR-A and DSLR-B datasets. We compare our results to the following DP disparity estimation algorithms: DPdisp [33], which is an optimization method that models the different PSFs of the two views, SDoF [43], which uses a classic local disparity estimation (results are adopted from its implementation in [33]), and DPE [31], which is a learning based approach. The results of DPE on the DSLR-A data-set are provided by the authors of [31] using an improved fine-tuned model, whereas the results on the DSLR-B data-set are obtained by the publicly available model. As pointed out in [13], disparity or inverse depth results can be recovered up to an unknown affine ambiguity. Therefore, we consider the affine-invariant error metrics of [33] to conduct a quantitative evaluation. Table 1 compares the performance of the algorithms on the full DSLR-A data-set. As seen, CCA outperforms the other algorithms, even without post-processing. Fig. 4 and Fig. 5 qualitatively compare the algorithms by showing results on image examples from the DSLR-A and DSLR-B data-sets, respectively. As seen in these figures, CCA significantly outperforms SDoF [43] and DPE [31], which suffer from strong artifacts. In comparison to DPdisp [33], CCA performs better on object boundaries (e.g., the horns of the action figure, the toy car and the mug handle in Fig. 4). Note that all the images in Fig. 4 have undergone an affine transform to best fit GT. #### 4.1.2 Phone Captured Images The second data-set is from [13], which provides data captured by Google Pixel 2 and 3 phones with GT inverse depth-maps. Unlike DSLR images, this data-set suffers from more aberrations and noise, and only captures green-channel data. In addition, phone cameras have a large depth of field and a small aperture, resulting in small sub-pixel disparities. Therefore, we pre-process the images. Since we do not have access to calibration data or devices for a proper vignetting calibration [44], we compensate for vignetting by using a low-pass filter (LPF), i.e., \(I_{L}^{calib}=I_{L}\cdot(LPF(I_{R})/LPF(I_{L}))\). We then remove local photometric distortions using a subtraction bilateral filter [20]. Because of the low image quality, we do not use ENCC for the initial subpixel offset. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Method & AI(1) & AI(2) & \(1-|\rho_{s}|\) & Geometric \\ & & & & Mean \\ \hline SDoF [43] & 0.087 & 0.129 & 0.291 & 0.144 \\ \hline DPdisp [33] & 0.047 & 0.074 & 0.082 & 0.065 \\ \hline DPE [31] & 0.061 & 0.098 & 0.103 & 0.110 \\ \hline CCA & \(\mathbf{0.041}\) & \(\mathbf{0.068}\) & \(\mathbf{0.061}\) & \(\mathbf{0.053}\) \\ \hline CCA \(+\) filter & \(\mathbf{0.036}\) & \(\mathbf{0.061}\) & \(\mathbf{0.049}\) & \(\mathbf{0.048}\) \\ \hline \end{tabular} \end{table} Table 1: Quantitative evaluation on the DSLR data-set [33] using the error metrics of [33]; the right-most column shows the geometric mean of all the metrics. Figure 4: Results from DSLR examples: (a) Input image, (b) ground truth (GT), (c) SDoF [43] (as implemented in [33]), (d) DPdisp [33], (e) DPE [31] (improved model, images provided by the author of [33]), (f) our method, (g) our method with filter. All the images have undergone an affine transform to best fit the GT.
Instead, we use the histogram-equalized kernel of [29], which provides a less-biased initial continuous disparity (see supplementary material). In addition to vignetting, the images suffer from field-curvature aberration, resulting in radially varying disparity. Unlike [43], we are unable to calibrate these aberrations since we do not have calibration data. We compare our results to the original implementation of SDoF [43] (results and images are taken from [13]) in Fig. 6. We also tested DPdisp [33] on this dataset; however, it under-performs on mobile devices as it is tailored for DSLR data, as discussed in [33]. Hence, we do not include DPdisp [33] in our comparison. One can observe the above mentioned radial distortion in the results of our algorithm in the second column. We use the error metrics suggested in [13], which consider confidence in the GT inverse depth. In Table 2 we show a quantitative comparison between the algorithms. Even without radial disparity calibration, our algorithm slightly outperforms SDoF. Note that [13] provides better results than our algorithm. We do not include this algorithm in the comparison since its neural network is able to compensate for severe aberrations without additional calibration. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Method & AI(1) & AI(2) & \(1-|\rho_{s}|\) & Geometric \\ & & & & Mean \\ \hline SDoF [43] & 0.027 & 0.037 & 0.236 & 0.063 \\ \hline CCA & \(\mathbf{0.026}\) & \(\mathbf{0.036}\) & \(\mathbf{0.225}\) & \(\mathbf{0.059}\) \\ \hline CCA \(+\) filter & \(\mathbf{0.025}\) & \(\mathbf{0.035}\) & \(\mathbf{0.217}\) & \(\mathbf{0.057}\) \\ \hline \end{tabular} \end{table} Table 2: Accuracy of different methods on the Google-Pixel dataset [13]. Lower is better. The right-most column shows the geometric mean of all the metrics. Figure 5: Qualitative results from examples of the DSLR data-set: (a) Input image, (b) SDoF [43] (as implemented in [33]), (c) DPdisp [33], (d) DPE [31] (publicly available model), (e) our method, (f) our method with filter. Figure 6: Disparity examples from the Google Pixel DP data-set. CCA performs similarly to SDoF [43] on small disparities. This network, however, learns from data collected from a specific phone and is very unlikely to perform well on quite different devices, such as a Canon DSLR. The low generalizability, in particular between DSLR and mobile devices, is discussed in [33]. Adapting data from different DP devices and synthetic data is ongoing research [4, 31]; however, it has not been demonstrated successfully for disparity estimation. One could argue that per-device training is applicable; however, acquiring GT depth data for DP devices requires complicated setups [13, 22] that may not be available. Instead, our method is adaptable to any DP device. We provide additional results in the supplementary material, including results from [13]. Our implementation of the CCA algorithm consists of non-optimized Matlab code. However, as discussed in Sec. 3.5, CCA has a lower complexity than SGM, whose real-time execution has been demonstrated by several implementations [8, 28]. A time comparison here would not be fair because only [33] and [31] provide their code, and both infer disparities on CPU in minutes. ### Standard Stereo Cameras To complete the evaluation of our algorithm, we test its performance on typical stereo data-sets (no DP). To adapt CCA to a standard stereo algorithm we modify two steps of the algorithm, the confidence score (Sec.
3.2) and the cost aggregation step (Sec. 3.3). Our original confidence score reduces confidence in regions with more than one local minimum. For DP images the disparity range is rather small, so considering a single minimum is enough. For wide baseline stereo, as the disparity range increases we need to consider several minima, so we adapt our confidence score to consider up to five minima. In addition, we use a left-right consistency check to reduce confidence in non-consistent regions (e.g. occlusions) and speckle detection to reduce confidence in regions with small blocks of disparity. As for the aggregation step, we use a threshold \(T\) on the difference between the current pixel disparity \(d\) and the minimizer \(m_{p-1}\) of the previous parabola in the path. If \(|d-m_{p-1}|<T\) we use the original aggregation, as presented in Sec. 3.3. If \(|d-m_{p-1}|\geq T\), we assign \(d=m_{p-1}\) if the difference between the curvatures is large enough and no edge is detected between the pixels (see supplementary for more details of the algorithm adaptation). We compare our algorithm to SGM with sub-pixel refinement using parabola fitting1. Footnote 1: SGM implementation from: [https://github.com/kobybibas/semi_global_matching](https://github.com/kobybibas/semi_global_matching) To ensure an unbiased comparison we use the same cost function for both algorithms and compare both methods with and without a common post-processing filter. We test both algorithms on the Middlebury data-set [37] at \(1/4\) of the original resolution. In Table 3 we report the errors of both algorithms on non-occluded areas over the full data-set. Two representative examples are shown in Fig. 7. Both algorithms perform roughly the same, with SGM slightly out-performing CCA, mainly in smooth textureless areas such as the wall in the second example of Fig. 7. The large RMSE value of SGM before filtering is due to the many speckles of large disparities at the boundary of occluded pixels, which are usually filtered out after post-processing (see the first example of Fig. 7). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Method & Bad px & Bad px & Bad px & RMSE \\ & 0.5 [\%] & 1 [\%] & 2 [\%] & \\ \hline SGM & 26.1 & 17.2 & 12.2 & 9.9 \\ \hline CCA & 26.2 & 18.3 & 13.2 & 5.2 \\ \hline SGM \(+\) filter & \(\mathbf{23.5}\) & \(\mathbf{15.2}\) & \(\mathbf{10.5}\) & \(\mathbf{4.04}\) \\ \hline CCA \(+\) filter & 24.6 & 16.7 & 11.6 & \(\mathbf{4.04}\) \\ \hline \end{tabular} \end{table} Table 3: Comparison of SGM and CCA performance on the Middlebury dataset (non-occluded areas). Figure 7: SGM and CCA results on the Middlebury dataset. Both algorithms perform similarly; however, CCA has lower space- and time-complexity as it does not require storing and aggregating the entire calculated cost-volume. ## 5 Conclusions This work proposes a continuous disparity estimation algorithm for DP stereo. The proposed algorithm (CCA) uses parabolas to represent continuous costs and computes the total cost by aggregating the parabola coefficients. This results in a quadratic total cost, whose minimization is simple and fast and directly provides continuous disparities. The multi-scale extension of this aggregation makes the solution robust to the varying PSF of DP images. Our evaluation shows that CCA attains SOTA performance on DP data from both DSLR and phone cameras. We also show that CCA performs similarly to SGM on standard stereo images, while being more memory and computationally efficient.
To further improve the performance of CCA, future research will include better priors for multi-scale aggregation, improved cost calculation (e.g. deep features) and confidence scores, and higher-order polynomials for continuous cost representation.
2305.18389
AnoRand: A Semi Supervised Deep Learning Anomaly Detection Method by Random Labeling
Anomaly detection, or more generally outlier detection, is one of the most popular and challenging subjects in theoretical and applied machine learning. The main challenge is that in general we have access to very few labeled data, or no labels at all. In this paper, we present a new semi-supervised anomaly detection method called \textbf{AnoRand} by combining a deep learning architecture with random synthetic label generation. The proposed architecture has two building blocks: (1) a noise detection (ND) block composed of feed forward perceptrons and (2) an autoencoder (AE) block. The main idea of this new architecture is to learn one class (e.g. the majority class in case of anomaly detection) as well as possible by taking advantage of the ability of auto encoders to represent data in a latent space and the ability of Feed Forward Perceptrons (FFP) to learn one class when the data is highly imbalanced. First, we create synthetic anomalies by randomly disturbing (adding noise to) a few samples (e.g. 2\%) from the training set. Second, we use the normal and the synthetic samples as input to our model. We compared the performance of the proposed method to 17 state-of-the-art unsupervised anomaly detection methods on synthetic datasets and 57 real-world datasets. Our results show that this new method generally outperforms most of the state-of-the-art methods and has the best performance (AUC ROC and AUC PR) on the vast majority of reference datasets. We also tested our method in a supervised way by using the actual labels to train the model. The results show that it has very good performance compared to most state-of-the-art supervised algorithms.
Mansour Zoubeirou A Mayaki, Michel Riveill
2023-05-28T10:53:34Z
http://arxiv.org/abs/2305.18389v1
# AnoRand: A Semi Supervised Deep Learning Anomaly Detection Method by Random Labeling ###### Abstract Anomaly detection, or more generally outlier detection, is one of the most popular and challenging subjects in theoretical and applied machine learning. The main challenge is that in general we have access to very few labeled data, or no labels at all. In this paper, we present a new semi-supervised anomaly detection method called **AnoRand** by combining a deep learning architecture with random synthetic label generation. The proposed architecture has two building blocks: (1) a noise detection (ND) block composed of feed forward perceptrons and (2) an autoencoder (AE) block. The main idea of this new architecture is to learn one class (e.g. the majority class in case of anomaly detection) as well as possible by taking advantage of the ability of auto encoders to represent data in a latent space and the ability of Feed Forward Perceptrons (FFP) to learn one class when the data is highly imbalanced. First, we create synthetic anomalies by randomly disturbing (adding noise to) a few samples (e.g. 2%) from the training set. Second, we use the normal and the synthetic samples as input to our model. We compared the performance of the proposed method to 17 state-of-the-art unsupervised anomaly detection methods on synthetic datasets and 57 real-world datasets. Our results show that this new method generally outperforms most of the state-of-the-art methods and has the best performance (AUC ROC and AUC PR) on the vast majority of reference datasets. We also tested our method in a supervised way by using the actual labels to train the model. The results show that it has very good performance compared to most state-of-the-art supervised algorithms. ## 1 Introduction Anomaly detection is one of the most exciting and challenging subjects in theoretical and applied machine learning. Nowadays, deep learning methods are increasingly used for anomaly detection. These methods can be categorized into two main families: supervised methods and unsupervised methods. Supervised anomaly detection methods are used when we have access to labeled data, and unsupervised or semi-supervised methods when there is no labeled data or very little of it. One of the main challenges in anomaly detection, or more generally outlier detection, is that one does not have enough samples labeled as anomalous. In this situation, most of the classical machine learning methods fail to learn the minority (anomaly) class, which is most of the time the class of interest. Another challenge is that the labels are often not accurate. Some samples labeled anomalous may not be actual anomalies, and vice versa. Unsupervised methods have also gained popularity due to the fact that they don't require labeled data. These methods model the distribution of normal samples and then identify anomalous ones by finding outliers. The main limitation of unsupervised methods is that they assume that anomalies are located in low-density regions, and therefore their performance is highly dependent on the alignment between this assumption and the underlying anomaly type. To detect anomalies and outliers in a semi-supervised way, we propose a method that combines a deep auto encoder and feed forward perceptrons (FFP). This new method, which we call **AnoRand**, jointly optimizes the deep autoencoder and the FFP model in an end-to-end neural network fashion. AnoRand has two building blocks: a Feed Forward Perceptron block and an autoencoder block.
The inspiration for this method comes from the fact that when dealing with imbalanced data, supervised algorithms tend to learn only the majority class. The main idea is to learn one class (e.g. the majority class in case of anomaly detection) as well as possible by taking advantage of the ability of auto encoders to represent data in a latent space and the ability of Feed Forward Perceptrons (FFP) to learn one class when the data is highly imbalanced. Indeed, Anand et al. [2] showed that in a classification problem, the norm of the gradient vector of a class depends on the size of the class. Thus, in case of imbalanced classes, the majority class contributes more to updating the model parameters during gradient descent, and in the end the model only learns the majority class. In our method, the FFP block has the role of informing and reinforcing the capacity of the autoencoder block to embed the normal samples. Our method is performed in two steps: (1) we first create synthetic anomalies by randomly adding noise to a few samples from the training data; (2) we then train our deep learning model in a supervised manner with the newly labeled data. We compared the performance of the proposed method to 17 state-of-the-art unsupervised anomaly detection methods on synthetic data sets and 57 real-world data sets from the ADBench benchmark paper [14]. Our results show that this new method generally outperforms most of the state-of-the-art methods and has the best performance (AUC ROC and AUC PR) on the vast majority of reference datasets. In particular, AnoRand outperforms Deep auto encoder, Variational auto encoder and MLP even though they have the same kind of building blocks. We also tested our method (AnoRand) in a supervised way by using the actual labels to train the model instead of creating synthetic labels as we did in the semi-supervised case. The results show that it has very good performance compared to most state-of-the-art supervised algorithms. Our results also show that classical methods such as SVM, CatBoost and LGB tend to have better performance than most deep learning based algorithms. The main contributions of our paper are: * We propose a new anomaly detection method called AnoRand that does not require any assumption about the shape of the decision boundary between the normal samples and anomalous ones. * We learn a decision boundary using only the normal samples and noisy versions of a few of them. * We compared AnoRand's performance to 17 state-of-the-art unsupervised algorithms on 57 real-world anomaly detection benchmark datasets. The results show that it has state-of-the-art performance on most of them. * We tested AnoRand in a supervised way by using the actual labels instead of creating synthetic labels. The results show that it achieves state-of-the-art performance. ## 2 Related works **Anomaly detection.** In data science and machine learning, anomaly detection or outlier detection is generally defined as the identification of rare events or samples which deviate significantly from the majority of the data and do not conform to a well defined notion of normal behaviour. There are mainly four types of anomalies: (i) Local anomalies, which deviate from their local neighborhoods. (ii) Global anomalies, which differ more strongly from the normal data and are generated from a uniform distribution whose boundaries are defined as the min and max of an input feature.
(iii) Dependency anomalies refer to samples that do not follow the dependency structure which normal data follow. (iv) Clustered anomalies, also known as group anomalies, exhibit similar characteristics. In the literature, there are three groups of anomaly detection algorithms: supervised, semi-supervised and unsupervised algorithms. ### Anomaly Detection Algorithms **Unsupervised algorithms.** Unsupervised methods make the assumption that anomalies are located in low-density regions. They are effective when the input data and the algorithm's assumption(s) match. These algorithms can be grouped into 2 categories: classical algorithms and deep learning based algorithms. (i) Classical algorithms include distribution and distance based unsupervised methods. The most popular algorithms include: K Nearest Neighbors (KNN) [28]; Isolation Forest (IForest) [22]; One-class SVM (OCSVM) [33]; the Empirical-Cumulative-distribution-based Outlier Detection (ECOD) [21]. There are other unsupervised methods in this category, such as: Local Outlier Factor (LOF) [5], Cluster-based Local Outlier Factor (CBLOF) [16], Connectivity-Based Outlier Factor (COF) [36], Histogram-based outlier detection (HBOS) [12], Subspace Outlier Detection (SOD) [19], Copula Based Outlier Detector (COPOD) [20], Principal Component Analysis (PCA) [35], and the Lightweight on-line detector of anomalies (LODA) [26]. More details and python implementations of these algorithms can be found in Han et al. [14] and the python package PyOD [40]. (ii) Deep learning based algorithms include algorithms that use deep learning representations to cluster data into homogeneous classes. Deep Support Vector Data Description (DeepSVDD) [30] uses the idea of OCSVM by training a neural network to learn a transformation that minimizes the volume of a hyper-sphere in the output space that encloses the samples of one class. All the samples that are far from the center of the hyper-sphere are labeled as anomalies. Deep Autoencoding Gaussian Mixture Model (DAGMM) [43] jointly optimizes a deep autoencoder and a Gaussian mixture model in the same learning loop. Auto encoders have also been used for anomaly detection by several authors [18; 41; 38; 34], who used the reconstruction error as the anomaly score. It has been shown [9; 37] that reconstruction based anomaly detection methods such as auto encoders lead to a high number of false alarms. Indeed, auto encoders tend to produce blurry outputs. This behavior can lead to a kind of smoothing or blurring of the anomalies such that they look like normal samples. **Fully supervised and Semi-supervised algorithms.** These methods are designed to use few labels during training. Fully supervised methods require full access to labeled data. They can be very effective in detecting known anomalies but may fail to detect unknown ones. Supervised methods may also perform poorly on imbalanced data. Indeed, when the classes are imbalanced, the algorithms tend to produce a high rate of false negatives. The algorithms are biased and fail to learn the minority class. To deal with this issue, there are two main approaches: the resampling approach, which consists in balancing the classes by adding or removing data from classes, and the approach which consists in modifying the learning algorithms so that they take into account the class imbalance. These algorithms include classical methods such as SVM and random forest [39; 3; 10; 4; 8] and deep learning based methods [1; 31; 23; 24; 42; 29; 13].
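For concreteness, the reconstruction-error scoring used by these autoencoder baselines amounts to only a few lines. This is a generic sketch; the `predict` interface is an assumption about the trained model object, not an API from any cited paper.

```python
import numpy as np

def reconstruction_score(autoencoder, X):
    """Generic autoencoder anomaly scoring: samples with a large
    reconstruction error are flagged as anomalies. `autoencoder` is
    assumed to expose predict(X) -> X_hat (e.g., a trained Keras model)."""
    X_hat = autoencoder.predict(X)
    return ((X - X_hat) ** 2).mean(axis=1)   # per-sample MSE as anomaly score
```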
## 3 AnoRand method and implementation details ### Proposed Architecture Let us first define \((x,y)=\{(x_{1},y_{1}),(x_{2},y_{2}),\ldots(x_{N},y_{N})\}\) in the machine learning framework, such that \(y_{i}\) is the target (label) and \(x_{i}\in R^{d}\) the feature vector of the \(i\)th sample. In this paper we propose to combine an autoencoder architecture with a fully connected architecture to detect outliers and anomalies. The proposed architecture has two building blocks: (1) a noise detection (ND) block composed of Feed Forward Perceptrons (FFP) and (2) an autoencoder (AE) block. We call this new method **AnoRand**. AnoRand jointly optimizes the deep autoencoder and the FFP model in an end-to-end neural network fashion. The joint optimization balances autoencoding reconstruction and helps the autoencoder escape from less attractive local optima and further reduce reconstruction errors. The main idea of this new method is to learn one class (e.g. the majority class in case of anomaly detection) as well as possible by taking advantage of the ability of auto encoders to represent data in a latent space and the ability of Feed Forward Perceptrons (FFP) to learn one class when the data is highly imbalanced. Indeed, Anand et al. [2] showed that in a highly imbalanced classification problem, the expectation of the total gradient is dominated by that of the majority class. The majority class therefore contributes more to updating the model parameters during gradient descent. Our method takes advantage of these limitations of FFP models. The AnoRand method is performed in two steps: First, we create synthetic anomalies by randomly disturbing (adding noise to) a few samples (e.g. 2%) from the training set. Second, we use the normal and the synthetic samples as input to our model. The synthetic label generation is described in subsection 3.2. In this architecture, the noise detection block has the role of informing and reinforcing the capacity of the autoencoder block to embed the normal samples. We call the FFP block a noise detection block because our experiments show that when the synthetic labels are generated using a high value of noise, the FFP block outputs high values for anomalies. Let us denote by \(z_{1}=F(x,\theta_{1})\) the mapping of the input features by the FFP and by \(z_{0}=E(x,\theta_{0})\) their mapping by the encoder of the autoencoder architecture, where \(\theta_{1}\) are the weights of the Feed Forward Perceptron (FFP) and \(\theta_{0}\) the weights of the autoencoder block. The final latent vector of the autoencoder block is defined as \(z=(z_{0},z_{1})=(E(x,\theta_{0}),F(x,\theta_{1}))\). The final model has one input and three outputs. The model takes as input the features X and some synthetic labels Y generated as described in subsection 3.2. The first output \(\hat{y}_{1}\) is the probability of the sample being a noisy sample estimated by the noise detection block, the second output \(\hat{x}\) is the reconstructed signal of the autoencoder block, and the third output \(\hat{y}_{0}\) is the predicted probability of the sample being an anomaly estimated by the AE block. Notice that \(\hat{y}_{0}\) is computed using the reconstruction error of the sample. We make the hypothesis that as the model learns the normal class, when the reconstruction error is high, the sample is more likely to be an anomaly. The final model prediction is a weighted sum of \(\hat{y}_{1}\) and \(\hat{y}_{0}\). The full network architecture is described in figure 1.
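To make the architecture concrete, below is a minimal PyTorch sketch with the layer sizes reported in Sec. 4 (FFP: 32-16; encoder: 32-16 with a 16-dimensional latent layer). The exact decoder shape and the mapping from reconstruction error to \(\hat{y}_{0}\) are not fully specified in the paper, so those parts are our assumptions.

```python
import torch
import torch.nn as nn

class AnoRand(nn.Module):
    def __init__(self, d_in):
        super().__init__()
        # (1) noise-detection (ND) block: feed-forward perceptron, 32-16
        self.ffp = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(),
                                 nn.Linear(32, 16), nn.ReLU())
        self.ffp_head = nn.Sequential(nn.Linear(16, 1), nn.Sigmoid())
        # (2) autoencoder (AE) block: encoder 32-16 plus 16-d latent layer
        self.encoder = nn.Sequential(nn.Linear(d_in, 32), nn.ReLU(),
                                     nn.Linear(32, 16), nn.ReLU(),
                                     nn.Linear(16, 16))
        # the decoder consumes the concatenated latent z = (z0, z1);
        # its mirrored 32-unit shape is an assumption
        self.decoder = nn.Sequential(nn.Linear(16 + 16, 32), nn.ReLU(),
                                     nn.Linear(32, d_in))

    def forward(self, x):
        z1 = self.ffp(x)                          # FFP embedding
        y1_hat = self.ffp_head(z1)                # P(noisy sample), ND block
        z0 = self.encoder(x)                      # encoder embedding
        x_hat = self.decoder(torch.cat([z0, z1], dim=1))  # reconstruction
        err = ((x - x_hat) ** 2).mean(dim=1, keepdim=True)
        y0_hat = 1.0 - torch.exp(-err)            # anomaly prob. from recon. error
        return y1_hat, x_hat, y0_hat
```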
### Synthetic label generation Let us define \(X_{0}\) as the samples from the normal class, labeled 0. To build our semi-supervised model, we created the synthetic anomalies (fake anomalous samples) as follows: we first selected randomly a few samples (for example 1% of \(X_{0}\)) from the normal class, which we denote as \(X_{0}^{0}\), and we refer to the remaining samples as \(X_{1}\). Secondly, we added some noise to \(X_{0}^{0}\) and labeled them as anomalous. The noisy samples are referred to as \(X_{0}^{1}\). The noisy samples can be generated by using Gaussian noise or other more sophisticated methods such as in Cai et al. [6]. In our experiments, we combined Gaussian noise with the synthetic minority over-sampling technique (SMOTE) introduced by Chawla et al. [7] to create these noisy samples. This technique consists in creating synthetic samples by using the k nearest neighbors of a sample in the minority class. For a given sample in the minority class, we randomly select k neighbors and generate one sample in the direction of each one. Each synthetic sample is generated by taking the difference between the feature vector (sample) under consideration and its nearest neighbor and multiplying this difference by a random number between 0 and 1. The obtained value is then added to the feature vector of the sample under consideration. In our experiments, we first labeled \(X_{0}^{0}\) as anomalous and then oversampled them using SMOTE up to 5% of the total training dataset. We then added Gaussian noise to \(X_{0}^{0}\). By combining the SMOTE technique with Gaussian noise we make sure that the synthetic anomalies are not very far from the normal samples in terms of distribution. This is important in real world applications because anomalies are just modified or "broken" versions of the normal samples. So the final training dataset \(X_{tr}=X_{1}+X_{0}^{0}+X_{0}^{1}\) is composed of 5% of "anomalous" samples. Figure 1: Proposed architecture ### Objective function The final loss of the model is composed of two parts: the cross entropy computed using the noise detection block and the cross entropy obtained from the autoencoder reconstruction error. The loss function is defined as follows: \[\mathcal{L}(\theta)=w\cdot\mathcal{L}(\theta_{1})+(1-w)\cdot\mathcal{L}(\theta_{0}) \tag{1}\] \[\mathcal{L}(\theta_{0})=\sum_{i=1}^{N}\mathcal{L}(z(x_{i},\theta_{0}),y_{i})=-\sum_{i=1}^{N}\left[y_{i}\log(y_{0}^{i})+(1-y_{i})\log(1-y_{0}^{i})\right] \tag{2}\] \[\mathcal{L}(\theta_{1})=\sum_{i=1}^{N}\mathcal{L}(z(x_{i},\theta_{1}),y_{i})=-\sum_{i=1}^{N}\left[y_{i}\log(y_{1}^{i})+(1-y_{i})\log(1-y_{1}^{i})\right] \tag{3}\] where \(\theta=(\theta_{0},\theta_{1})\) and \(w\leq 1\). Here \(y_{0}^{i}=z(x_{i},\theta_{0})\) is the estimated probability of sample \(i\) being an anomaly, computed by the AE block from the reconstruction error, and \(y_{1}^{i}=z(x_{i},\theta_{1})\) is the estimated probability of sample \(i\) being a noisy sample, estimated by the ND block. Recall that \(\theta_{0}\) and \(\theta_{1}\) are respectively adjustable parameters of the AE block and the ND block. \(\mathcal{L}(\theta_{0})\) is the cross entropy of the AE (autoencoder) block and \(\mathcal{L}(\theta_{1})\) is the cross entropy of the ND (noise detector) block. N is the total number of samples. The final loss function is a weighted sum of the cross entropy losses. Note that \(\mathcal{L}(\theta_{0})\) is computed using the reconstruction error. We look for the optimal value of \(w\) by varying it between 0 and 1.
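A sketch of this weighted objective, reusing the model outputs from the architecture sketch above (PyTorch); the clamping is a numerical safeguard we add, not part of Eqs. (1)-(3):

```python
import torch.nn.functional as F

def anorand_loss(y1_hat, y0_hat, y, w=0.2):
    """Weighted sum of the two binary cross-entropies, Eqs. (1)-(3);
    w = 0.2 is the value found empirically in Sec. 4."""
    y0 = y0_hat.squeeze(1).clamp(1e-7, 1 - 1e-7)   # keep log() finite
    y1 = y1_hat.squeeze(1).clamp(1e-7, 1 - 1e-7)
    loss_nd = F.binary_cross_entropy(y1, y)        # L(theta_1), ND block
    loss_ae = F.binary_cross_entropy(y0, y)        # L(theta_0), AE block
    return w * loss_nd + (1 - w) * loss_ae
```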
The higher \(w\) is, the greater the role of the ND block in the final prediction. In all our experiments, we set empirically \(w=0.2\) (see Section 4). To train our model, we try to find a configuration of the model parameters that minimizes the training loss \[\hat{\theta}=\operatorname*{arg\,min}_{\theta}\mathcal{L}(\theta)=\operatorname*{arg\,min}_{\theta}\sum_{i=1}^{N}\mathcal{L}(z(x_{i},\theta),y_{i}) \tag{4}\] Note that \(\theta_{0}\) and \(\theta_{1}\) are not mutually independent or exclusive. Indeed, \(\mathcal{L}(\theta_{0})\) and \(\mathcal{L}(\theta_{1})\) are computed separately, but during the back-propagation process, some parameters of the model are updated using the gradients of \(\mathcal{L}(\theta_{0})\) and \(\mathcal{L}(\theta_{1})\) with respective weights \(1-w\) and \(w\). ### Evaluation Metrics We evaluate the algorithms by using two widely used metrics: AUC ROC (Area Under the Receiver Operating Characteristic Curve) and AUC PR (Area Under the Precision-Recall Curve). The AUC PR shows precision values for corresponding recall values. It provides a model-wide evaluation like the AUC ROC plot. The AUC PR measures the entire two-dimensional area under the precision-recall curve (by integral calculations) from (0,0) to (1,1). In all upcoming experiments, we report these two metrics as performance metrics. The higher the values, the better the algorithm. To compare the algorithms, we will use the AUC PR metric instead of the AUC ROC because, as Saito et al. [32] showed in their study, the AUC ROC may not be well suited to highly imbalanced classes and can be misleading when applied in imbalanced classification scenarios, in which case the AUC PR should be used instead. ### Anomaly score To compute the model's final prediction, we combine the output of the noise detection block and the output of the autoencoder block. We call the FFP block a noise detection block because our experiments show that when the synthetic labels are generated using a high value of noise, the FFP block output (\(\hat{y}_{1}\)) takes high values for anomalies. This means that both \(\hat{y}_{0}\) and \(\hat{y}_{1}\) have high values for anomalies or outliers. So, to be sure that we catch all anomalies and outliers, we combine these two values to compute the final anomaly score. The final prediction is then a weighted sum of \(\hat{y}_{1}\) and \(\hat{y}_{0}\). Let us define \(\alpha\) as the weight of the autoencoder block output \(\hat{y}_{0}\) inside the weighted sum. We first compute the third quartile \(Q_{3}\) of each output and compute \(\alpha\) as follows: \[\alpha=\frac{Q_{3}^{1}}{Q_{3}^{0}+Q_{3}^{1}} \tag{5}\] where \(Q_{3}^{0}\) is the third quartile of the autoencoder block predicted probabilities \(\hat{y}_{0}\) and \(Q_{3}^{1}\) is the third quartile of the noise detection block predicted probabilities \(\hat{y}_{1}\). We use the third quartile \(Q_{3}\) just to make sure that we have a good estimate of the range of the predicted probabilities. One can use any other quantile or aggregating metric (mean, variance, etc.). The quantiles are more suited because they are less sensitive to extreme values.
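In code, this quartile-based weighting and the final combination (formalized in Eq. (6) below) look as follows (a NumPy sketch):

```python
import numpy as np

def anorand_score(y1_hat, y0_hat):
    """Combine ND and AE block outputs into the final anomaly score,
    following Eq. (5) and the weighted sum of Eq. (6)."""
    q3_1 = np.quantile(y1_hat, 0.75)   # Q3 of ND-block probabilities
    q3_0 = np.quantile(y0_hat, 0.75)   # Q3 of AE-block probabilities
    alpha = q3_1 / (q3_0 + q3_1)       # weight of the AE output, Eq. (5)
    return (1.0 - alpha) * y1_hat + alpha * y0_hat
```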
The final model prediction is defined as follows: \[\hat{y}=\hat{y}_{1}\cdot(1-\alpha)+\alpha\cdot\hat{y}_{0} \tag{6}\] ## 4 Experiment For all upcoming experiments, we set the hyper-parameters of our model as follows: the FFP block has 2 hidden layers with 32 and 16 neurons, respectively; the encoder has two hidden layers with 32 and 16 neurons, respectively; and the final latent layer has 16 neurons. We chose these values arbitrarily and did not do any further hyper-parameter optimization to seek the best parameters. We did not spend time on hyperparameter optimization because our goal was to show that the proposed method works well even with arbitrary hyperparameters. For the state-of-the-art algorithms we used their implementations in the Python Outlier Detection (PyOD) package [40]. We set their hyper-parameters to their default values. Our models are trained for 200 epochs on 1 GPU (NVIDIA GeForce, 8GB) with batch size 128. The learning rate is \(1\times 10^{-4}\). We compared the performance of our method to those of 18 baseline unsupervised algorithms, including: CBLOF [16], HBOS [12], KNN [28], IForest [22], LOF [5], OCSVM [33], PCA [35], COF [36], SOD [19], COPOD [20], ECOD [21], AutoEncoder [18], DeepSVDD [30], GMM [43] and LODA [26]. These unsupervised algorithms are readily available in the Python Outlier Detection (PyOD) package [40]. We also added a simple MLP classifier trained using the synthetic labels. **Data generation and splitting.** In these experiments, we simulated a classification dataset using the "make_classification" module from sklearn. The "make_classification" module creates clusters of samples normally distributed about the vertices of a hypercube and assigns an equal number of clusters to each class. It then introduces interdependence between the created features and adds various types of noise to the data. We generated a training set of 20000 samples with an imbalance rate of 5%. This means that the minority class represents 5% of the training dataset. Note that for each iteration in each experiment, we generated new samples by varying the random state parameter. **Choice of the optimal value for \(w\).** Recall that \(w\) is the weight assigned to the cross entropy of the noise detection block. We make the hypothesis that when \(w\) tends to 1, the influence of the autoencoder block tends to 0 and the final model is equivalent to a simple Feed Forward Perceptron (FFP) model. So by varying the weights, we expect to see the impact of each part of our architecture on the final model loss. For each value of \(w\in[0,1]\), we trained our model 10 times on 10 different samples and report its AUC PR in figure 2(a) and its AUC ROC in figure 2(b). The figures show that the model performance increases until 0.2 and decreases very fast when \(w\) is greater than 0.2. The boxplot at 0.2 shows that the model's AUC PR and AUC ROC are stable. Indeed, at this point, the interquartile ranges of the boxplots are small and there are fewer outliers. These results suggest that in the proposed architecture, the ND block positively contributes to the performance of the final model up to a certain level. The optimal value of \(w\) lies around 0.2. Figure 2: Performance metrics by varying w **Noise level when generating synthetic labels.** Recall that the first step of the proposed method is to generate synthetic labels by introducing some noise inside a very small subset of the normal samples, as explained in subsection 3.2. These noisy samples will then be considered as the abnormal samples during training.
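As a concrete illustration of this label-generation step (Sec. 3.2), here is a minimal sketch; the fractions follow the paper (a small seed fraction, about 5% synthetic anomalies overall), while `noise_std` anticipates the recommendation derived below that the Gaussian noise standard deviation exceed 0.52. The helper names are ours.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def make_synthetic_anomalies(X0, seed_frac=0.02, target_frac=0.05, k=5,
                             noise_std=0.6, rng=None):
    """Build the training set (X_tr, y): pick a few normal samples X0^0,
    oversample them SMOTE-style up to target_frac of the data, add
    Gaussian noise, and label the result as anomalous (1)."""
    rng = rng or np.random.default_rng(0)
    n, d = X0.shape
    idx = rng.choice(n, size=max(2, int(seed_frac * n)), replace=False)
    seeds = X0[idx]                       # X0^0, the disturbed subset
    X1 = np.delete(X0, idx, axis=0)       # remaining normal samples

    # SMOTE-style interpolation towards k nearest neighbours of each seed
    nn = NearestNeighbors(n_neighbors=min(k + 1, len(seeds))).fit(seeds)
    _, nbrs = nn.kneighbors(seeds)
    pool = [s for s in seeds]
    while len(pool) < int(target_frac * n):
        i = rng.integers(len(seeds))
        j = nbrs[i][rng.integers(1, nbrs.shape[1])]  # column 0 is the seed itself
        lam = rng.random()
        pool.append(seeds[i] + lam * (seeds[j] - seeds[i]))

    X01 = np.asarray(pool) + rng.normal(0.0, noise_std, (len(pool), d))
    X = np.vstack([X1, X01])
    y = np.concatenate([np.zeros(len(X1)), np.ones(len(X01))])
    return X, y
```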
In this subsection we evaluate the impact of the noise level on the model's final performance. The final goal is to find out if the amount of noise has an impact on the model's final prediction. Let us define the noise level as the standard deviation of the Gaussian distribution used to create the noisy samples. Figures 3(a) and 3(b) show the model's final performance according to the noise level used to generate the synthetic labels. In these experiments we trained 10 models for each noise level. These figures show that the model's performance increases with the noise level. When the noise level is less than 0.32, the AUC PR is stable and lies around 45%. Between 0.32 and 0.52, the performance increases rapidly but is very unstable. When the noise level is greater than 0.52, the model performance becomes more stable and the box plots become smaller in range. These experiments suggest that the value of the standard deviation of the Gaussian noise should be greater than 0.52 to obtain better performance. Figure 3: Performance metrics for varying noise level when generating labels ### Anomaly detection on synthetic datasets In this subsection, we compared the performance of our architecture to those of some state-of-the-art algorithms on synthetic datasets generated using the "make_classification" module from the python sklearn package. Recall that at this stage we select a small subset (2%) of the training data to create the synthetic labels (see subsection 3.2). We trained and tested each algorithm 10 times and report the AUC PR and training time in figure 4. At each iteration, we randomly select 2% of the samples from the training set and use them to create the synthetic labels for our model. By doing so we make sure that the model performance does not depend on the samples used during label generation. We can see from these two figures that our model outperforms all other models in terms of AUC PR. In particular, our model outperforms Deep auto encoder, Variational auto encoder and MLP even though they have the same kind of building blocks. We also note that deep learning based unsupervised methods like DeepSVDD and Autoencoder are surprisingly worse than classical methods. Figure 4(b) shows that our method takes more time to train compared to the other algorithms. Figure 4: Performance metrics on synthetic data set for unsupervised algorithms ### Unsupervised anomaly detection on real world datasets We compared the performance of our method (**AnoRand**) to those of some state-of-the-art unsupervised methods on the ADBench anomaly detection benchmark [14]. In their paper, Han et al. [14] compared the performances of 14 algorithms on 57 benchmark datasets. The datasets cover different fields including healthcare, security, and more. We grouped the datasets into four categories to make the comparison easy: NLP datasets, healthcare datasets, science datasets and datasets from other fields (documents, web, etc.). In their paper, the authors compared supervised, semi-supervised and unsupervised anomaly detection methods on these datasets. In our study, we only focus on the unsupervised algorithms of the benchmark. In Table 1 and figure 5, we report the algorithms' performance and their rankings on the ADBench real-world datasets. In figure 5(a), the boxplots show that our model has the best ranking among all its counterpart unsupervised algorithms. These results show that our model has the best overall ranking among the tested algorithms.
Indeed, figure 5(a) shows that AnoRand is ranked first (\(1^{st}\)) on 22 datasets, second (\(2^{nd}\)) on 5, third (\(3^{rd}\)) on 6 and fourth (\(4^{th}\)) on 4. The results also show that in situations where another algorithm outperforms ours, the performance gap is very small in most cases. **Results on image classification datasets.** This category includes datasets from computer vision (CV) and image classification datasets such as Mnist, CIFAR10 and MVTec. Pre-trained ResNet18 [15] models have been used to extract data embeddings from the original images. Table 1 shows the algorithms' performance on the image benchmark datasets. We can see that **AnoRand** outperforms all reference algorithms on 5 of the 10 benchmark datasets and has the second best performance on 2 other datasets. **Results on NLP datasets.** For NLP datasets, they used a BERT [11] model pre-trained on the BookCorpus and English Wikipedia to extract the token embeddings. Table 1 shows the algorithms' performance on 5 NLP benchmark datasets. On these datasets, **AnoRand** outperforms the other algorithms on the Speech and Imdb datasets. It is ranked third on the Amazon, Agnews and Yelp datasets. **Results on Healthcare datasets.** Table 1 shows the algorithms' performance on the 10 healthcare benchmark datasets. This table shows that **AnoRand** outperforms the reference algorithms on 3 datasets. It also has the best overall ranking. **Results on Other datasets.** In this category, we include all other datasets from other fields. On these datasets, **AnoRand** outperforms the other algorithms on 10 datasets, is ranked second on two and third on 2 others. AnoRand also has the best overall ranking. ### Supervised anomaly detection on real world datasets In this subsection we used our model as a fully supervised method on 29 real-world anomaly detection benchmark datasets. We then compare its performance to those of 16 state-of-the-art supervised anomaly detection algorithms. We used the same datasets as in the semi-supervised case, but this time we use the true labels to train our models. We compared the performance of our method to those of 16 baseline supervised algorithms [14] including: SVM, GANomaly [1], DeepSAD [31], REPEN [23], DevNet [25], PReNet [24], FEAWAD [42], XGBOD [39], LightGBM [17], CatBoost [27], Naive Bayes (NB) [3], Multi-layer Perceptron (MLP) [29], Random Forest (RF) [4], XGBoost [8], Residual Nets (ResNet) [13], FTTransformer [13], etc. We report the results of the supervised algorithms in Table 2 and the ranking in figure 5(b). These results show that in the supervised scenario, our method (AnoRand) and CatBoost have the best overall performance in terms of AUC PR and ranking. A deeper analysis of Table 2 reveals that on most of the datasets, when AnoRand is not ranked first, its ranking is in the top four. The results also show that, on most of the benchmark datasets, AnoRand outperforms its counterpart deep learning based methods. Table 2 shows that our method has better performance on big datasets such as the celeba, fraud, CIFAR10, census and cover datasets. Our results also show that classical methods such as SVM, CatBoost and LGB tend to have better performance than deep learning based algorithms. ## 5 Conclusion In this paper, we proposed a new semi-supervised anomaly detection method based on a deep autoencoder architecture. This new method, which we call **AnoRand**, jointly optimizes the deep autoencoder and the FFP model in an end-to-end neural network fashion.
Our method is performed in two steps: we first create synthetic anomalies by randomly adding noise to a few samples from the training data; secondly, we train our deep learning model in a supervised way with the newly labeled data. Our method takes advantage of the limitations of FFP models in the case of imbalanced classes and uses them to reinforce the autoencoder's capabilities. Our experimental results show that our method achieves state-of-the-art performance on synthetic datasets and 57 real-world datasets, significantly outperforming existing unsupervised alternatives. Moreover, on most of the benchmark datasets, whatever the category, AnoRand outperforms all its counterpart deep learning based methods. We also tested our method (AnoRand) in a supervised way by using the actual labels to train the model instead of creating synthetic labels as we did in the semi-supervised case. The results show that it has very good performance compared to most state-of-the-art supervised algorithms. The main limitation of our method is that the training takes longer than most of the state-of-the-art algorithms.
2310.04516
Vulnerability Analysis of Nonlinear Control Systems to Stealthy False Data Injection Attacks
In this work, we focus on analyzing vulnerability of nonlinear dynamical control systems to stealthy false data injection attacks on sensors. We start by defining the stealthiness notion in the most general form where an attack is considered stealthy if it would be undetected by any intrusion detector, i.e., any intrusion detector could not do better than a random guess. Depending on the level of attacker's knowledge about the plant model, controller, and the system states, two different attack models are considered. For each attack model, we derive the conditions for which the system will be vulnerable to stealthy impactful attacks, in addition to finding a methodology for designing such sequence of false data injection attacks. When the attacker has complete knowledge about the system, we show that if the closed loop system is incrementally exponentially stable while the open loop plant is incrementally unstable, then the system is vulnerable to stealthy yet impactful attacks on sensors. However, in the second attack model, with less knowledge about the system, additional conditions need to be satisfied and the level of stealthiness depends on the accuracy of attacker's knowledge about the system. We also consider the impact of stealthy attacks on state estimation, and show that if the closed loop control system including the estimator is incrementally stable, then the state estimation in the presence of attack converges to the attack free estimates. Finally, we illustrate our results on numerical case studies.
Amir Khazraei, Miroslav Pajic
2023-10-06T18:24:53Z
http://arxiv.org/abs/2310.04516v1
# Vulnerability Analysis of Nonlinear Control Systems to Stealthy False Data Injection Attacks ###### Abstract In this work, we focus on analyzing the vulnerability of nonlinear dynamical control systems to stealthy false-data injection attacks on sensors. We start by defining the _stealthiness_ notion in the most general form, where an attack is considered stealthy if it would be undetected by _any_ intrusion detector - i.e., any intrusion detector could not do better than a random guess. Depending on the level of the attacker's knowledge about the plant model, controller, and the system states, two different attack models are considered. For each attack model, we derive the conditions under which the system will be vulnerable to stealthy impactful attacks, in addition to providing a methodology for designing such sequences of false-data injection attacks. When the attacker has complete knowledge about the system, we show that if the closed-loop system is incrementally exponentially stable while the open-loop plant is incrementally unstable, then the system is vulnerable to stealthy yet impactful attacks on sensors. However, in the second attack model, with less knowledge about the system, additional conditions need to be satisfied and the level of stealthiness depends on the accuracy of the attacker's knowledge about the system. We also consider the impact of stealthy attacks on state estimation, and show that if the closed-loop control system including the estimator is incrementally stable, then the state estimation in the presence of attack converges to the attack-free estimates. Finally, we illustrate our results on numerical case-studies. ## I Introduction Control systems have been shown to be vulnerable to a wide range of cyber and physical attacks with severe consequences (e.g., [1]). As part of the control design and analysis, it is thus critical to identify early any vulnerability of the considered system to impactful attacks, especially the ones that are potentially stealthy to the deployed intrusion detection mechanisms. Depending on the resources available to the attacker, different types of stealthy impactful attacks have been proposed. For instance, for LTI systems with strictly proper transfer functions, by compromising the control input, the attacker can design effective stealthy attacks if the system has unstable invariant zeros; see, e.g., [2], where such an attack is referred to as a zero dynamics attack. However, when the transfer function is not strictly proper, the attacker needs to compromise both the plant's inputs and outputs. When the attacker compromises both the plant's actuation and sensing, e.g., [3] derives the conditions under which the system is vulnerable to stealthy attacks. Other types of attacks that target both input and output also exist for LTI systems, including replay attacks [4, 5] and covert attacks [6]. Specifically, the authors in [5] show that replay attacks can bypass the \(\chi^{2}\) intrusion detector (ID) for some class of LQG controllers in LTI systems and remain stealthy. On the other hand, sensor attacks, commonly referred to as false-data injection attacks, have drawn a great deal of attention. For example, the vulnerability of static state estimation to false data injection attacks in systems such as power grids was considered in [7]; by adding values that lie in the range of the observation matrix, such an attack can bypass \(\chi^{2}\) detectors and lead to incorrect state estimation.
However, for dynamical systems, merely inserting constant values in the range of the observation matrix would not be stealthy and effective; to remain stealthy, a false-data injection attack needs to evolve with some dynamics [8, 9, 10, 11, 12, 13]. Specifically, for linear time-invariant (LTI) systems with Gaussian noise, if measurements from all sensors can be compromised, the plant's (i.e., open-loop) instability is a sufficient condition for an attacker being able to significantly impact the system while remaining undetected (i.e., stealthy) by a particular type of residual-based (\(\chi^{2}\)) IDs [8, 9, 10, 13]. These works also show that to construct such attack sequences, the attacker needs to have complete knowledge about the system, including the dynamical model of the plant as well as the controller and Kalman filter gains. Yet, for LTI systems with bounded noise, the plant's instability is a necessary and sufficient condition for the existence of impactful, stealthy attacks when all sensors are compromised [11, 12]. All these results [2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14] only address LTI systems. Moreover, the notion of stealthiness is only characterized for a _specific type_ of the employed ID (e.g., \(\chi^{2}\)-based detectors or RSE detectors). The notion of attack stealthiness independent of the employed ID (i.e., remaining stealthy for _all_ existing/potential IDs) for LTI systems is studied in [14, 15, 16, 17, 18, 19]. One of the main differences of this work is that our notion of stealthiness, initially introduced in [20], is stronger than the one from [14, 15, 16, 17, 18, 19], where stealthiness depends on time; i.e., the notion from [14, 15, 16, 17, 18, 19] only guarantees that the attack stays undetected by any ID for a bounded time. In contrast, the notion of stealthiness in our work is independent of time, and the attack is guaranteed to be stealthy for all time steps after initiating the attack. Moreover, the performance degradation metric used in [15] is the error covariance of a Kalman filter estimator, as opposed to our work, where we assume the attacker's goal is to cause deviation in the trajectories of the states. To the best of our knowledge, no existing work provides _vulnerability analysis for systems with nonlinear dynamics, while considering general control and ID designs_, or provides generative models for such stealthy and impactful attacks. In [6], covert attacks are introduced as stealthy attacks that can target a potentially nonlinear system. However, the attacker needs to have perfect knowledge of the system's dynamics and be able to compromise _both_ the plant's inputs and outputs. Even more importantly, as the attack design is based on attacks on LTI systems, no guarantees are provided for the effectiveness and stealthiness of attacks on nonlinear systems. In [21], attack design for nonlinear systems with an arbitrary but fixed ID (the type of ID is known to the attacker) is framed as an optimization problem in a discrete Markov decision system; however, no analysis is provided to determine which classes of systems are vulnerable to stealthy impactful attacks. Moreover, the design is based on discretizing the continuous state space, where the complexity of the problem increases with the state dimension. On the other hand, we show that if the condition on the existence of impactful stealthy attacks holds, our generated attack sequence can bypass any ID, and the attacker does not need to know what type of ID is deployed.
More recently, [22] introduced stealthy attacks on a _specific class_ of nonlinear systems with residual-based IDs, but provided effective attacks only when _both_ the plant's inputs and outputs are compromised by the attacker. On the other hand, in this work, we assume the attacker can only compromise the plant's sensing data, and we consider systems with _general_ nonlinear dynamics. For systems with general nonlinear dynamics and residual-based IDs, machine learning-based methods to design stealthy attacks have been introduced (e.g., [23]), but without any theoretical analysis and guarantees regarding the impact of the stealthy attacks. ### _Paper Contribution and Organization_ To the best of our knowledge, this is the first work that considers the existence and design of _impactful_ sensor attacks on systems with general nonlinear dynamics such that the attacks are also _stealthy_ for _any_ deployed ID. The main contributions of this paper are summarized as follows. First, we introduce two different attack models depending on the level of system knowledge the attacker has. For each attack model, we provide conditions under which a nonlinear system is vulnerable to effective yet stealthy attacks, without limiting the analysis to any particular type of employed ID. Specifically, for the first attack model, with a higher level of system knowledge, we show that if the closed-loop control system is incrementally exponentially stable while the open-loop control system is incrementally unstable, then the system is vulnerable to impactful attacks that can remain stealthy from any ID. For the second attack model, which requires less knowledge about the system, additional conditions are imposed on the attacker; we show that if the closed-loop system is incrementally input-to-state stable and the open-loop system is incrementally input-to-state unstable, then the system is vulnerable to impactful attacks, and the level of stealthiness depends on the accuracy of the attacker's knowledge about the system. We also show that for LTI systems, if a certain subset of sensors is under attack, the closed-loop system is asymptotically stable, and the open-loop system is unstable, then the system is vulnerable to stealthy attacks independent of the deployed ID; this is a generalization of results from [8, 9, 10, 13] showing that such a condition is sufficient for stealthiness under only one class of IDs (\(\chi^{2}\)). Second, for each attack model, we provide a general method to generate a sequence of stealthy and impactful attack vectors. We show that as the dynamical model of the system becomes 'simpler' (e.g., moving from highly nonlinear to LTI), the attacker needs lower levels of system knowledge to generate the attack sequence. In an extreme case, for LTI systems, we show that the attacker only needs to have access to the state transition and the observation matrices, unlike [8, 9, 10, 13] that assume the attacker has access to the full plant model information as well as the controller and Kalman filter gains. Finally, we consider the impact of the proposed stealthy attacks on nonlinear state estimators. We show that if the closed-loop control system, which includes the dynamics of the plant and the estimator, is incrementally exponentially stable, then the state estimates in the presence of attacks converge exponentially to the attack-free estimates. The paper is organized as follows.
In Section II, we introduce preliminaries, whereas Section III presents the system and attack model, before formalizing the notion of stealthiness in Section IV. Section V provides sufficient conditions for the existence of impactful yet stealthy attacks. Section VI studies the impact of such attacks on state estimation. Finally, in Section VII, we illustrate our results on two case-studies, before providing concluding remarks in Section VIII. _Notation:_ We use \(\mathbb{R},\mathbb{Z},\mathbb{Z}_{\geq 0}\) to denote the sets of reals, integers and non-negative integers, respectively, and \(\mathbb{P}\) denotes probability. For a square matrix \(A\), \(\lambda_{max}(A)\) denotes the maximum eigenvalue, and if \(A\) is symmetric, \(A\succ 0\) denotes that the matrix is positive definite. For a vector \(x\in\mathbb{R}^{n}\), \(||x||_{p}\) denotes the \(p\)-norm of \(x\); when \(p\) is not specified, the 2-norm is implied. For a vector sequence, \(x_{0}:x_{t}\) denotes the set \(\{x_{0},x_{1},...,x_{t}\}\). A function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}^{p}\) is Lipschitz with constant \(L\) if for any \(x,y\in\mathbb{R}^{n}\) it holds that \(||f(x)-f(y)||\leq L||x-y||\). Finally, if \(\mathbf{P}\) and \(\mathbf{Q}\) are probability distributions relative to the Lebesgue measure, with densities \(\mathbf{p}\) and \(\mathbf{q}\), respectively, then the Kullback-Leibler (KL) divergence between \(\mathbf{P}\) and \(\mathbf{Q}\) is defined as \(KL(\mathbf{P},\mathbf{Q})=\int\mathbf{p}(x)\log\frac{\mathbf{p}(x)}{\mathbf{q}(x)}dx\). ## II Preliminaries Let \(\mathbb{X}\subseteq\mathbb{R}^{n}\) and \(\mathbb{D}\subseteq\mathbb{R}^{m}\). Consider a discrete-time nonlinear system with an exogenous input, modeled as \[x_{t+1}=f(x_{t},d_{t}),\quad x_{t}\in\mathbb{X},\ t\in\mathbb{Z}_{\geq 0}, \tag{1}\] where \(f:\mathbb{X}\times\mathbb{D}\rightarrow\mathbb{X}\) is continuous. We denote by \(x(t,\xi,d)\) the trajectory (i.e., the solution) of (1) at time \(t\), when the system has the initial condition \(\xi\) and is subject to the input sequence \(d=\{d_{0}:d_{t-1}\}\). The following definitions are derived from [24, 25, 26]. **Definition 1**: _The system (1) is incrementally exponentially stable (IES) in the set \(\mathbb{X}\subseteq\mathbb{R}^{n}\) if there exist \(\kappa>1\) and \(\lambda<1\) such that_ \[\|x(t,\xi_{1},d)-x(t,\xi_{2},d)\|\leq\kappa\|\xi_{1}-\xi_{2}\|\lambda^{t}, \tag{2}\] holds for all \(\xi_{1},\xi_{2}\in\mathbb{X}\), any \(d_{t}\in\mathbb{D}\), and \(t\in\mathbb{Z}_{\geq 0}\). When \(\mathbb{X}=\mathbb{R}^{n}\), the system is referred to as globally incrementally exponentially stable (GIES). **Definition 2**: _The system (1) is incrementally input to state stable (IISS) in the set \(\mathbb{X}\subseteq\mathbb{R}^{n}\) if there exist a function \(\gamma\in\mathcal{K}_{\infty}\), \(\kappa>1\), and \(\lambda<1\) such that_ \[\|x(t,\xi_{1},d^{1})-x(t,\xi_{2},d^{2})\|\leq\kappa\|\xi_{1}-\xi_{2}\|\lambda^{t}+\gamma(\|d^{1}-d^{2}\|_{\infty}), \tag{3}\] _holds for all \(\xi_{1},\xi_{2}\in\mathbb{X}\), any \(d_{t}^{1},d_{t}^{2}\in\mathbb{D}\), and \(t\in\mathbb{Z}_{\geq 0}\), where \(\|d^{1}-d^{2}\|_{\infty}=\sup_{t\geq 0}\|d_{t}^{1}-d_{t}^{2}\|_{\infty}\). When \(\mathbb{X}=\mathbb{R}^{n}\), the system is referred to as globally incrementally input to state stable (GIISS)._ By replacing \(d^{1}=d^{2}\) in (3), one can verify that if the system is IISS, then it is also IES.
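To make the incremental stability notions concrete, the following minimal Python sketch (ours, not part of the original development; the scalar map and the constants \(\kappa=1.1\), \(\lambda=0.5\) are illustrative assumptions) numerically checks the IES bound (2) along a pair of trajectories driven by the same input sequence:

```python
import numpy as np

# Minimal sketch: empirically checking the IES bound (2) for the scalar
# system x_{t+1} = 0.5*tanh(x_t) + d_t.  Since tanh is 1-Lipschitz, the
# difference of two trajectories driven by the same input contracts by at
# least a factor 0.5 per step, so (2) holds with kappa = 1.1, lam = 0.5.
def f(x, d):
    return 0.5 * np.tanh(x) + d

rng = np.random.default_rng(0)
T, kappa, lam = 50, 1.1, 0.5
xi1, xi2 = 2.0, -1.0                       # two initial conditions
d = rng.normal(size=T)                     # one shared input sequence

x1, x2 = xi1, xi2
for t in range(T):
    gap = abs(x1 - x2)                     # ||x(t, xi1, d) - x(t, xi2, d)||
    bound = kappa * abs(xi1 - xi2) * lam**t
    assert gap <= bound + 1e-12, (t, gap, bound)
    x1, x2 = f(x1, d[t]), f(x2, d[t])
print("IES bound (2) verified along this trajectory pair")
```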
**Definition 3**: _The system (1) is incrementally unstable (IU) in the set \(\mathbb{X}\subseteq\mathbb{R}^{n}\) if for all \(\xi_{1}\in\mathbb{X}\) and any \(d_{t}\in\mathbb{D}\), there exists a \(\xi_{2}\) arbitrarily close to \(\xi_{1}\) such that for any \(M>0\),_ \[\|x(t,\xi_{1},d)-x(t,\xi_{2},d)\|\geq M, \tag{4}\] _holds for all \(t\geq t^{\prime}\), for some \(t^{\prime}\in\mathbb{Z}_{\geq 0}\)._ **Definition 4**: _The system (1) is incrementally input to state unstable (IISU) in the set \(\mathbb{X}\subseteq\mathbb{R}^{n}\) if for all \(\xi_{1}\in\mathbb{X}\) and any \(d_{t}^{1},d_{t}^{2}\in\mathbb{D}\) satisfying \(\|d_{t}^{1}-d_{t}^{2}\|\leq\delta\) for all \(t\in\mathbb{Z}_{\geq 0}\), there exists a \(\xi_{2}\) such that for any \(M>0\),_ \[\|x(t,\xi_{1},d^{1})-x(t,\xi_{2},d^{2})\|\geq M, \tag{5}\] _holds for all \(t\geq t^{\prime}\), for some \(t^{\prime}\in\mathbb{Z}_{\geq 0}\)._ Similarly, by replacing \(d^{1}=d^{2}\) in (5), one can verify that if the system is IISU, then it is also IU. Now, we present some properties of the Kullback-Leibler (KL) divergence, known as _monotonicity_ and the _chain-rule_ [27]. **Lemma 1**: _[_27_]_ **(Monotonicity):** _Let \(P_{X,Y}\) and \(Q_{X,Y}\) be two distributions for a pair of variables \(X\) and \(Y\), and let \(P_{X}\) and \(Q_{X}\) be the corresponding marginal distributions of \(X\). Then,_ \[KL(Q_{X}||P_{X})\leq KL(Q_{X,Y}||P_{X,Y}). \tag{6}\] **Lemma 2**: _[_27_]_ **(Chain rule):** _Let \(P_{X,Y}\) and \(Q_{X,Y}\) be two distributions for a pair of variables \(X\) and \(Y\). Then,_ \[KL(Q_{X,Y}||P_{X,Y})=KL(Q_{X}||P_{X})+KL(Q_{Y|X}||P_{Y|X}), \tag{7}\] _where \(KL(Q_{Y|X}||P_{Y|X})\) is defined as_ \[KL(Q_{Y|X}||P_{Y|X})=\mathbb{E}_{x\sim Q_{X}}\{KL(Q_{Y|X=x}||P_{Y|X=x})\}. \tag{8}\] **Lemma 3**: _[_27_]_ _Let \(P_{X}\) and \(Q_{X}\) be two Gaussian distributions with the same covariance \(\Sigma\) and means \(\mu_{P}\) and \(\mu_{Q}\), respectively. Then, it holds that_ \[KL(Q_{X}||P_{X})=\frac{1}{2}(\mu_{Q}-\mu_{P})^{T}\Sigma^{-1}(\mu_{Q}-\mu_{P}). \tag{9}\] **Lemma 4**: _Let \(Q_{X}\) be a distribution for a random variable \(X\) such that \(X\leq M\) holds for some \(M>0\). Then,_ \[\mathbb{E}_{Q_{X}}\{X\}\leq M. \tag{10}\] The proof directly follows from the definition of expectation and basic properties of the integral. ## III System and Attack Model In this section, we introduce the considered system and attack model, allowing us to formally capture the problem addressed in this work. We consider the setup from Fig. 1, where each of the components is modeled as follows. #### III-1 Plant We assume that the states of the system evolve following general nonlinear discrete-time dynamics that can be captured in the state-space form as \[\begin{split}x_{t+1}&=f(x_{t},u_{t})+w_{t},\\ y_{t}&=h(x_{t})+v_{t};\end{split} \tag{11}\] here, \(x\in\mathbb{R}^{n}\), \(u\in\mathbb{R}^{m}\), \(y\in\mathbb{R}^{p}\) are the state, input and output vectors of the plant, respectively. We assume that \(h\) is Lipschitz with a constant \(L_{h}\). The plant output vector captures measurements from the set of plant sensors \(\mathcal{S}\). Further, \(w\in\mathbb{R}^{n}\) and \(v\in\mathbb{R}^{p}\) are the process and measurement noises, which are assumed to be i.i.d. Gaussian with zero mean and covariance matrices \(\mathbf{R}_{w}\succ 0\) and \(\mathbf{R}_{v}\succ 0\), respectively. We also assume the system starts operating at time \(-T\) with initial state \(x_{-T}\).
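As a minimal illustration of the plant model (11), the following Python sketch (ours; the particular \(f\), \(h\), and the noise covariances are assumptions chosen for illustration, not the paper's case studies) implements one step of the noisy nonlinear dynamics:

```python
import numpy as np

# Minimal sketch of the plant model (11); f, h, R_w and R_v below are
# illustrative assumptions.
rng = np.random.default_rng(0)
Rw, Rv = 0.01 * np.eye(2), 0.01 * np.eye(1)    # process/measurement covariances

def f(x, u):                                   # nonlinear state transition
    return np.array([x[0] + 0.1 * x[1],
                     x[1] + 0.1 * np.sin(x[0]) + 0.1 * u])

def h(x):                                      # Lipschitz output map, L_h = 1
    return x[:1]

def plant_step(x, u):
    """One step of (11): returns (x_{t+1}, y_t)."""
    w = rng.multivariate_normal(np.zeros(2), Rw)
    v = rng.multivariate_normal(np.zeros(1), Rv)
    return f(x, u) + w, h(x) + v
```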
As we show later, it will be useful to consider the input-to-state relation of the dynamics (11); if we define \(U=\begin{bmatrix}u^{T}&w^{T}\end{bmatrix}^{T}\), the first equation in (11) becomes \[x_{t+1}=f_{u}(x_{t},U_{t}). \tag{12}\]

Fig. 1: Control system architecture considered in this work, in the presence of network-based attacks on sensing data.

#### III-2 Control Unit The controller, illustrated in Fig. 1, is equipped with a feedback controller in the most general form, as well as an intrusion detector (ID). In what follows, we provide more details on the controller design; the intrusion detector will be discussed after introducing the attack model. _Controller:_ Due to their robustness to uncertainties, closed-loop controllers are utilized in most control systems. In the most general form, a feedback controller can be captured in the state-space form as \[\begin{split}\mathcal{X}_{t}&=f_{c}(\mathcal{X}_{t-1},y_{t}^{c}),\\ u_{t}&=h_{c}(\mathcal{X}_{t},y_{t}^{c}),\end{split} \tag{13}\] where \(\mathcal{X}\) is the internal state of the controller, and \(y^{c}\) captures the sensor measurements received by the controller. Thus, without malicious activity, it holds that \(y^{c}=y\).1 Note that the control model (13) is general, capturing for instance nonlinear filtering followed by a classic nonlinear controller (e.g., \(f_{c}\) can model an extended Kalman filter and \(h_{c}\) any full-state feedback controller). Here, we assume that the function \(f_{c}\) is Lipschitz with a constant \(L_{f_{c}}\). We also assume \(\mathcal{X}_{-T}\) is obtained deterministically by the system operator.2 Footnote 2: The results hold even when such an initial condition is chosen randomly; we discuss this further in the proof of Theorem 2. The dynamics of the closed-loop system can be captured as \[\mathbf{X}_{t+1}=F(\mathbf{X}_{t},\mathbf{W}_{t}), \tag{14}\] with the full state of the closed-loop system \(\mathbf{X}_{t}\stackrel{{\Delta}}{{=}}\begin{bmatrix}x_{t}^{T}&\mathcal{X}_{t}^{T}\end{bmatrix}^{T}\), and exogenous disturbances \(\mathbf{W}_{t}\stackrel{{\Delta}}{{=}}\begin{bmatrix}w_{t}^{T}&v_{t+1}^{T}&v_{t}^{T}\end{bmatrix}^{T}\). Therefore, the functions \(f_{c}\) and \(h_{c}\) are designed such that the closed-loop system states satisfy some desired properties. #### III-3 Attack Model We consider a sensor attack model where, for sensors from a set \(\mathcal{K}\subseteq\mathcal{S}\), the information delivered to the controller differs from the non-compromised sensor measurements. The attacker can achieve this via, e.g., noninvasive attacks such as sensor spoofing (e.g., [28]), or by compromising the information flow from the sensors in \(\mathcal{K}\) to the controller (e.g., as in network-based attacks [29, 30]). In either case, the attacker can launch false-data injection attacks, inserting a desired value instead of the current measurement of a compromised sensor.3 Footnote 3: We refer to sensors from \(\mathcal{K}\) as compromised, even if a sensor itself is not directly compromised but its measurements may be altered due to, e.g., network-based attacks.
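The following minimal Python sketch (ours; the concrete \(f\), \(h\), \(f_{c}\), \(h_{c}\) and noise levels are illustrative assumptions) shows how the closed loop (13)-(14) can be simulated, with a hook where the sensor attacks modeled next enter the loop:

```python
import numpy as np

# Minimal sketch of one step of the closed loop (13)-(14), with an injection
# point `a` for false data on the sensors.  All concrete functions and noise
# levels are illustrative assumptions.
rng = np.random.default_rng(2)
f   = lambda x, u: 1.2 * x + u                 # plant transition of (11)
h   = lambda x: x                              # Lipschitz output map, L_h = 1
f_c = lambda chi, yc: 0.5 * chi + 0.1 * yc     # controller state update
h_c = lambda chi, yc: -0.4 * chi - 1.0 * yc    # feedback law

def closed_loop_step(x, chi, a=0.0):
    """One step of (14); a != 0 models a false-data injection on the sensors."""
    v = 0.05 * rng.normal()
    yc = h(x) + v + a                          # measurement the controller sees
    chi = f_c(chi, yc)                         # controller update, (13)
    u = h_c(chi, yc)
    w = 0.05 * rng.normal()
    return f(x, u) + w, chi

x, chi = 1.0, 0.0
for t in range(30):                            # attack-free operation (a = 0)
    x, chi = closed_loop_step(x, chi)
```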
Thus, assuming that the attack starts at time \(t=0\), the sensor measurements delivered to the controller for \(t\in\mathbb{Z}_{\geq 0}\) can be modeled as \[y_{t}^{c,a}=y_{t}^{a}+a_{t}; \tag{15}\] here, \(a_{t}\in\mathbb{R}^{p}\) denotes the attack signal injected by the attacker at time \(t\) via the compromised sensors from \(\mathcal{K}\), and \(y_{t}^{a}\) is the true sensing information (i.e., before the attack is injected at time \(t\)); we use the superscript \(a\) to differentiate all signals of the attacked system. In the rest of the paper, we assume \(\mathcal{K}=\mathcal{S}\); for some systems, we will discuss how the results can be generalized to the case when \(\mathcal{K}\subset\mathcal{S}\). Now, with different levels of runtime knowledge about the plant and its states, we consider two different attack models as follows. **Attack model I:** The attacker has perfect knowledge about the system states and the control input, as well as the functions \(f\) and \(h\) in (11). **Attack model II:** The attacker has only some imperfect knowledge about the system states. The attacker imperfectly reconstructs the states by using either the system's sensor measurements or its own set of external sensors. Moreover, the attacker has knowledge about the controller functions \(f_{c}\) and \(h_{c}\) in (13) and the functions \(f\) and \(h\) in (11), but does not have access to the control input \(u_{t}\). Consider the system with the plant (11), the controller (13), and an ID that we define in the next subsection. We denote such a system with _attack model I_ and _attack model II_ as \(\Sigma_{\text{I}}\) and \(\Sigma_{\text{II}}\), respectively. Note that since the controller uses the received sensing information to compute the input \(u_{t}\), the compromised sensor values affect the evolution of the system and controller states. Hence, we add the superscript \(a\) to denote any signal obtained from a compromised system; e.g., \(y_{t}^{a}\) denotes the before-attack sensor measurements when the system is under attack in (15), and we denote the closed-loop plant and controller state when the system is compromised as \(\mathbf{X}^{a}\stackrel{{\Delta}}{{=}}\begin{bmatrix}x^{a}\\ \mathcal{X}^{a}\end{bmatrix}\). Since the attack starts at time zero, it takes one time step to affect the states and the actual output; therefore, we have \(y_{0}^{a}=y_{0}\) and \(x_{0}^{a}=x_{0}\). As discussed in the attack models, in this work we consider the threat model commonly adopted in the majority of existing stealthy attack designs, e.g., [4, 6, 8, 9, 11], where the attacker has full knowledge of the system, its dynamics and the employed architecture. In addition, the attacker has the required computational power to calculate suitable attack signals to be injected, while planning ahead as needed. Finally, the _attack goal_ is to design an attack signal \(a_{t}\), \(t\in\mathbb{Z}_{\geq 0}\), such that it always remains _stealthy_ - i.e., undetected by an employed ID - while _maximizing control performance degradation_. The notions of _stealthiness_ and _control performance degradation_ depend on the employed control architecture, and thus will be formally defined after both the controller and ID have been introduced. #### III-4 Intrusion Detector (ID) To detect system attacks (and anomalies), we assume that an ID is employed, analyzing the received sensor measurements.
Specifically, by defining \(Y\stackrel{{\Delta}}{{=}}y^{c}\), as well as \(Y^{a}\stackrel{{\Delta}}{{=}}y^{c,a}\) when the system is under attack, we assume that the ID has access to the sequence of values \(Y_{-T}:Y_{t}\) (all observations since the system started operating until time \(t\))4 and solves the binary hypothesis testing problem Footnote 4: It should be noted that \(-T\) in this notation is only used to denote the initial time of the system. \[H_{0}\text{: normal condition (the ID receives }Y_{-T}:Y_{t}),\] \[H_{1}\text{: abnormal behaviour (the ID receives }Y_{-T}^{-1},Y_{0}^{a}:Y_{t}^{a}),\] where we denote \(Y_{-T}^{-1}=Y_{-T}:Y_{-1}\). Given a sequence of received data denoted by \(\bar{Y}^{t}=\bar{Y}_{-T}:\bar{Y}_{t}\), it is either extracted from the \(H_{0}\) hypothesis, with the joint distribution denoted as \(\mathbf{P}(Y_{-T}:Y_{t})\), or from the alternative hypothesis, with a joint (_unknown_) distribution denoted by \(\mathbf{Q}(Y_{-T}^{-1},Y_{0}^{a}:Y_{t}^{a})\);6 note that the joint distribution \(\mathbf{Q}\) is controlled by the injected attack signal and is thus unknown. Footnote 5: Since the attack starts at \(t=0\), we do not use the superscript \(a\) for the system evolution for \(t<0\), as the trajectories of the non-compromised and compromised systems do not differ before the attack starts. Footnote 6: With some abuse of the notation, we just use \(\mathbf{P}\) or \(\mathbf{Q}\) to refer to these joint distributions. We define the intrusion detector \(D\) as the mapping \[D:\bar{Y}^{t}\rightarrow\{0,1\}, \tag{16}\] where the output \(0\) is associated with \(H_{0}\) and the output \(1\) with \(H_{1}\). It should be noted that an ID in the above form is general, as there is no assumption on the mapping \(D\), and the output measurements are the only information accessible to the system for detecting abnormalities; any other signal, including \(\mathcal{X}\) (and consequently the control input \(u_{t}\)), can be obtained from the sequence of sensor measurements \(\bar{Y}^{t}\), and such a mapping can be captured within \(D\). Let us define \(p_{t}^{TD}(D)=\mathbb{P}(D(\bar{Y}^{t})=1|\bar{Y}^{t}\sim\mathbf{Q})\) as the probability of true detection, and \(p_{t}^{FA}(D)=\mathbb{P}(D(\bar{Y}^{t})=1|\bar{Y}^{t}\sim\mathbf{P})\) as the probability of false alarm for the detector \(D\). Let us also consider the random-guess-based ID (denoted by \(D_{RG}\)). For such a random-guess-based ID, we have \[p^{FA}(D_{RG})\stackrel{{\text{(a)}}}{{=}}\mathbb{P}(D_{RG}(\bar{Y}^{t})=1|\bar{Y}^{t}\sim\mathbf{P})\stackrel{{\text{(b)}}}{{=}}\mathbb{P}(D_{RG}(\bar{Y}^{t})=1)\stackrel{{\text{(c)}}}{{=}}\mathbb{P}(D_{RG}(\bar{Y}^{t})=1|\bar{Y}^{t}\sim\mathbf{Q})\stackrel{{\text{(d)}}}{{=}}p^{TD}(D_{RG}),\] where (a) and (d) hold by the definitions of the true detection and false alarm probabilities, and (b) and (c) hold because, for a random guess, the detection is independent of the distribution of the observations. Hence, for random-guess detectors, the probabilities of true detection and false alarm are equal. **Definition 5**: _An ID (defined by \(D\)) is better than a random-guess ID (defined by \(D_{RG}\)) if \(p^{FA}(D)<p^{TD}(D)\)._ **Remark 1**: _Note that although we assumed the ID has access to the measurement data from time \(-T\) to the current time \(t\), this does not mean that the ID should process each observation individually at each time step; the ID can combine all observations up to the current time step to decide whether the system is in normal condition._
We now formalize the notion of stealthy attacks. ## IV Formalizing Stealthiness and Attack Objectives This section captures the conditions under which an attack sequence is stealthy from _any_ ID. Specifically, we define an attack to be strictly stealthy if _there exists no detector that can perform better than a random guess between the two hypotheses_ (Definition 5). However, reaching such stealthiness guarantees may not be possible in general. Thus, we also define the notion of \(\epsilon\)_-stealthiness_, which, as we will show later, is attainable for a large class of nonlinear systems. Formally, we define the notions of _strict stealthiness_ and \(\epsilon\)_-stealthiness_ as follows. **Definition 6**: _Consider the system from (11). An attack sequence, denoted by \(\{a_{0},a_{1},...\}\), is **strictly stealthy** if there exists no detector for which \(p_{t}^{TD}-p_{t}^{FA}>0\) holds, for any \(t\geq 0\). An attack is \(\epsilon\)**-stealthy** if, for a given \(\epsilon>0\), there exists no detector such that \(p_{t}^{TD}-p_{t}^{FA}>\epsilon\) holds, for any \(t\geq 0\)._ The following theorem uses the Neyman-Pearson lemma to capture the conditions under which the received sensor measurements satisfy the stealthiness condition in Definition 6. **Theorem 1** ([20, 31]): _An attack sequence is_ * _strictly stealthy if and only if_ \(KL\big{(}\mathbf{Q}(Y_{-T}^{-1},Y_{0}^{a}:Y_{t}^{a})||\mathbf{P}(Y_{-T}:Y_{t})\big{)}=0\) _for all_ \(t\in\mathbb{Z}_{\geq 0}\)_;_ * \(\epsilon\)_-stealthy if the observation sequence_ \(Y_{0}^{a}:Y_{t}^{a}\) _satisfies_ \[KL\big{(}\mathbf{Q}(Y_{-T}^{-1},Y_{0}^{a}:Y_{t}^{a})||\mathbf{P}(Y_{-T}:Y_{t})\big{)}\leq\log(\frac{1}{1-\epsilon^{2}}).\] **Remark 2**: _The \(\epsilon\)-stealthiness condition from [16] requires_ \[\lim_{t\to\infty}\frac{KL\big{(}\mathbf{Q}(Y_{0}^{a}:Y_{t}^{a})||\mathbf{P}(Y_{0}:Y_{t})\big{)}}{t}\leq\epsilon.\] This allows the KL divergence to increase linearly over time for any \(\epsilon>0\); as a result, after a large-enough time period, the attack will be detected. On the other hand, our definition of \(\epsilon\)-stealthiness only depends on \(\epsilon\) and is fixed for any time \(t\); thus, it introduces a stronger notion of attack stealthiness. _Formalizing the Attack Goal:_ The attacker intends to _maximize_ control performance degradation. Specifically, the attack goal is to cause deviation in the system's trajectory. In other words, if we assume the attack starts at \(t=0\), and denote the states of the attack-free and under-attack systems as \(x_{t}\) and \(x_{t}^{a}\), respectively, for \(t\in\mathbb{Z}_{\geq 0}\), then the attack objective is to achieve \[\|x_{t^{\prime}}^{a}-x_{t^{\prime}}\|\geq\alpha \tag{17}\] for some \(t^{\prime}\in\mathbb{Z}_{\geq 0}\). In other words, the attacker wants to cause deviation in the trajectory of the states with respect to the system's own desired unattacked trajectory. Moreover, the attacker wants _to remain stealthy (i.e., undetected by the intrusion detector)_, as formalized below. **Definition 7**: _An attack sequence is referred to as an \((\epsilon,\alpha)\)-successful attack if there exists \(t^{\prime}\in\mathbb{Z}_{\geq 0}\) such that \(\|x_{t^{\prime}}^{a}-x_{t^{\prime}}\|\geq\alpha\) and the attack is \(\epsilon\)-stealthy for all \(t\in\mathbb{Z}_{\geq 0}\). When such a sequence exists for a system, the system is called \((\epsilon,\alpha)\)-attackable. When the system is \((\epsilon,\alpha)\)-attackable for arbitrarily large \(\alpha\), the system is referred to as perfectly attackable._
Now, the problem considered in this work can be formalized as capturing the potential impact of stealthy attacks on the considered system. Specifically, in the next section, we derive conditions for the existence of a _stealthy_ yet _effective_ attack sequence \(a_{0},a_{1},...\) resulting in \(\|x_{t}^{a}-x_{t}\|\geq\alpha\) for some \(t\in\mathbb{Z}_{\geq 0}\) - i.e., we find conditions for the system to be \((\epsilon,\alpha)\)-attackable. Here, for an attack to be stealthy, we focus on the \(\epsilon\)-stealthiness notion; i.e., even the best ID could only improve the detection probability by \(\epsilon\) compared to the random-guess baseline detector. ## V Vulnerability Analysis of Nonlinear Systems to Stealthy Attacks In this section, we derive the conditions under which the nonlinear system (11) with closed-loop dynamics (14) is vulnerable to the effective stealthy attacks formally defined in Section IV. ### _Vulnerability Analysis of Nonlinear Systems \(\Sigma_{I}\)_ First, we derive the conditions under which the system \(\Sigma_{\text{I}}\) is vulnerable to stealthy attacks. **Theorem 2**: _The system \(\Sigma_{\text{I}}\) is \((\epsilon,\alpha)\)-attackable for arbitrarily small \(\epsilon\) and arbitrarily large \(\alpha\), if the closed-loop system (14) is IES and the system (12) is IU._ Assume that the trajectory of the system and controller states for \(t\in\mathbb{Z}_{<0}\) is denoted by \(\mathbf{X}_{-T}:\mathbf{X}_{-1}\). Following the attack start at \(t=0\), let us consider the evolution of the system with and without attacks during \(t\in\mathbb{Z}_{\geq 0}\). For the system under attack, starting at time zero, the trajectory \(\mathbf{X}_{0}^{a}:\mathbf{X}_{t}^{a}\) of the system and controller states is governed by \[\begin{split}x_{t+1}^{a}&=f(x_{t}^{a},u_{t}^{a})+w_{t}^{a},\quad y_{t}^{c,a}=h(x_{t}^{a})+v_{t}^{a}+a_{t},\\ \mathcal{X}_{t}^{a}&=f_{c}(\mathcal{X}_{t-1}^{a},y_{t}^{c,a}),\quad u_{t}^{a}=h_{c}(\mathcal{X}_{t}^{a},y_{t}^{c,a}).\end{split} \tag{18}\] On the other hand, if the system were not under attack during \(t\in\mathbb{Z}_{\geq 0}\), we denote the plant and controller state evolution by \(\mathbf{X}_{0}:\mathbf{X}_{t}\). Hence, it is a continuation of the system trajectories \(\mathbf{X}_{-T}:\mathbf{X}_{-1}\) if hypothetically no data-injection attack occurs during \(t\in\mathbb{Z}_{\geq 0}\). Since the system and measurement noises are independent of the state, we can assume that \(w_{t}^{a}=w_{t}\) and \(v_{t}^{a}=v_{t}\). In this case, the dynamics of the plant and controller state evolution satisfies \[\begin{split}x_{t+1}&=f(x_{t},u_{t})+w_{t},\quad y_{t}^{c}=h(x_{t})+v_{t},\\ \mathcal{X}_{t}&=f_{c}(\mathcal{X}_{t-1},y_{t}^{c}),\quad u_{t}=h_{c}(\mathcal{X}_{t},y_{t}^{c}),\end{split} \tag{19}\] captured in the compact form (14), with \(\mathbf{X}_{0}=\begin{bmatrix}x_{0}\\ \mathcal{X}_{0}\end{bmatrix}\). Now, consider the sequence of attack vectors injected into the system (18), constructed by the attacker using the dynamics \[\begin{split}s_{t+1}&=f(x_{t}^{a},u_{t}^{a})-f(x_{t}^{a}-s_{t},u_{t}^{a}),\\ a_{t}&=h(x_{t}^{a}-s_{t})-h(x_{t}^{a}),\end{split} \tag{20}\] for \(t\in\mathbb{Z}_{\geq 0}\), and with some arbitrarily chosen nonzero initial value \(s_{0}\). By injecting the above attack sequence into the sensor measurements, we can verify that \(y_{t}^{c,a}=h(x_{t}^{a})+v_{t}+a_{t}=h(x_{t}^{a}-s_{t})+v_{t}\).
After defining7 Footnote 7: The superscript \(f\) stands for 'fake', as we will show later how the attacker fools the system into believing that \(x^{f}\) represents the system's state. \[x_{t}^{f}\triangleq x_{t}^{a}-s_{t}, \tag{21}\] and combining (20) with (18), the dynamics of \(x_{t}^{f}\) and the controller, as well as the corresponding input and output, satisfy \[\begin{split}x_{t+1}^{f}&=f(x_{t}^{f},u_{t}^{a})+w_{t},\quad y_{t}^{c,a}=h(x_{t}^{f})+v_{t},\\ \mathcal{X}_{t}^{a}&=f_{c}(\mathcal{X}_{t-1}^{a},y_{t}^{c,a}),\quad u_{t}^{a}=h_{c}(\mathcal{X}_{t}^{a},y_{t}^{c,a}),\end{split} \tag{22}\] with the initial condition \(x_{0}^{f}=x_{0}^{a}-s_{0}\). Now, if we define \(\mathbf{X}_{t}^{f}=\begin{bmatrix}x_{t}^{f}\\ \mathcal{X}_{t}^{a}\end{bmatrix}\), it holds that \[\mathbf{X}_{t+1}^{f}=F(\mathbf{X}_{t}^{f},\mathbf{W}_{t}), \tag{23}\] with \(\mathbf{X}_{0}^{f}=\begin{bmatrix}x_{0}^{f}\\ \mathcal{X}_{0}^{a}\end{bmatrix}\). Since we have \(x_{0}^{a}=x_{0}\), it holds that \(\mathbf{X}_{0}-\mathbf{X}_{0}^{f}=\begin{bmatrix}s_{0}\\ \mathcal{X}_{0}-\mathcal{X}_{0}^{a}\end{bmatrix}\). Since the functions \(f_{c}\) and \(h\) are Lipschitz with constants \(L_{f_{c}}\) and \(L_{h}\), respectively, we have \[\begin{split}\|\mathcal{X}_{0}-\mathcal{X}_{0}^{a}\|&\leq\|f_{c}(\mathcal{X}_{-1},y_{0}^{c})-f_{c}(\mathcal{X}_{-1},y_{0}^{c,a})\|\leq L_{f_{c}}\|y_{0}^{c}-y_{0}^{c,a}\|\\ &\leq L_{f_{c}}\|h(x_{0})+v_{0}-h(x_{0}-s_{0})-v_{0}\|\leq L_{f_{c}}L_{h}\|s_{0}\|.\end{split}\] Therefore, we get \(\|\mathbf{X}_{0}-\mathbf{X}_{0}^{f}\|\leq(1+L_{f_{c}}L_{h})\|s_{0}\|\). On the other hand, since both (23) and (14) are governed by the same function \(F\) with the same argument \(\mathbf{W}_{t}\), and the closed-loop system is IES, it follows that \[\begin{split}\|\mathbf{X}(t,\mathbf{X}_{0},\mathbf{W})-\mathbf{X}^{f}(t,\mathbf{X}_{0}^{f},\mathbf{W})\|&\leq\kappa\|\mathbf{X}_{0}-\mathbf{X}_{0}^{f}\|\lambda^{t}\\ &\leq\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|\lambda^{t},\end{split} \tag{24}\] for some nonnegative \(\lambda<1\). Therefore, the trajectories of \(\mathbf{X}\) (i.e., the system without attack) and \(\mathbf{X}^{f}\) converge to each other exponentially fast. Now, by defining \(\mathbf{Z}_{t}=\begin{bmatrix}x_{t}\\ y_{t}^{c}\end{bmatrix}\) and \(\mathbf{Z}_{t}^{f}=\begin{bmatrix}x_{t}^{f}\\ y_{t}^{c,a}\end{bmatrix}\), it holds that \[\begin{split}KL\big{(}&\mathbf{Q}(Y_{-T}^{-1},Y_{0}^{a}:Y_{t}^{a})||\mathbf{P}(Y_{-T}:Y_{t})\big{)}\\ &\leq KL\big{(}\mathbf{Q}(\mathbf{Z}_{-T}^{-1},\mathbf{Z}_{0}^{f}:\mathbf{Z}_{t}^{f})||\mathbf{P}(\mathbf{Z}_{-T}:\mathbf{Z}_{t})\big{)},\end{split} \tag{25}\] where we applied the monotonicity property of the KL-divergence from Lemma 1 to get the above inequality.
Then, we apply the chain-rule property of the KL-divergence on the right-hand side of (25) to obtain \[\begin{split}KL\big{(}&\mathbf{Q}(\mathbf{Z}_{-T}^{-1},\mathbf{Z}_{0}^{f}:\mathbf{Z}_{t}^{f})||\mathbf{P}(\mathbf{Z}_{-T}^{-1},\mathbf{Z}_{0}:\mathbf{Z}_{t})\big{)}\\ =&KL\big{(}\mathbf{Q}(\mathbf{Z}_{-T}^{-1})||\mathbf{P}(\mathbf{Z}_{-T}^{-1})\big{)}+KL\big{(}\mathbf{Q}(\mathbf{Z}_{0}^{f}:\mathbf{Z}_{t}^{f}|\mathbf{Z}_{-T}^{-1})||\mathbf{P}(\mathbf{Z}_{0}:\mathbf{Z}_{t}|\mathbf{Z}_{-T}^{-1})\big{)}\\ =&KL\big{(}\mathbf{Q}(\mathbf{Z}_{0}^{f}:\mathbf{Z}_{t}^{f}|\mathbf{Z}_{-T}^{-1})||\mathbf{P}(\mathbf{Z}_{0}:\mathbf{Z}_{t}|\mathbf{Z}_{-T}^{-1})\big{)};\end{split} \tag{26}\] here, we used the fact that the KL-divergence of two identical joint distributions (i.e., \(\mathbf{Q}(\mathbf{Z}_{-T}^{-1})\) and \(\mathbf{P}(\mathbf{Z}_{-T}^{-1})\), since the system is not under attack for \(t<0\)) is zero. Applying the chain-rule property of the KL-divergence to (26) yields \[\begin{split}KL\big{(}&\mathbf{Q}(\mathbf{Z}_{0}^{f}:\mathbf{Z}_{t}^{f}|\mathbf{Z}_{-T}^{-1})||\mathbf{P}(\mathbf{Z}_{0}:\mathbf{Z}_{t}|\mathbf{Z}_{-T}^{-1})\big{)}\\ =&\sum_{k=0}^{t}\Big{\{}KL\big{(}\mathbf{Q}(x_{k}^{f}|\mathbf{Z}_{-T}^{-1},\mathbf{Z}_{0}^{f}:\mathbf{Z}_{k-1}^{f})||\mathbf{P}(x_{k}|\mathbf{Z}_{-T}:\mathbf{Z}_{k-1})\big{)}\\ &+KL\big{(}\mathbf{Q}(y_{k}^{c,a}|x_{k}^{f},\mathbf{Z}_{-T}^{-1},\mathbf{Z}_{0}^{f}:\mathbf{Z}_{k-1}^{f})||\mathbf{P}(y_{k}|x_{k},\mathbf{Z}_{-T}:\mathbf{Z}_{k-1})\big{)}\Big{\}}.\end{split} \tag{27}\] Given \(\mathbf{Z}_{-T}:\mathbf{Z}_{k-1}\), the distribution of \(x_{k}\) is a Gaussian with mean \(f(x_{k-1},u_{k-1})\) and covariance \(\mathbf{R}_{w}\). Similarly, given \(\mathbf{Z}_{-T}^{-1},\mathbf{Z}_{0}^{f}:\mathbf{Z}_{k-1}^{f}\), the distribution of \(x_{k}^{f}\) is a Gaussian with mean \(f(x_{k-1}^{f},u_{k-1}^{a})\) and covariance \(\mathbf{R}_{w}\). Hence, using Lemmas 2, 3 and 4 together with (24), we obtain \[KL\big{(}\mathbf{Q}(x_{k}^{f}|\mathbf{Z}_{-T}^{-1},\mathbf{Z}_{0}^{f}:\mathbf{Z}_{k-1}^{f})||\mathbf{P}(x_{k}|\mathbf{Z}_{-T}:\mathbf{Z}_{k-1})\big{)}\leq\lambda_{max}(\mathbf{R}_{w}^{-1})\|x_{k}-x_{k}^{f}\|^{2}\leq\kappa^{2}(1+L_{f_{c}}L_{h})^{2}\|s_{0}\|^{2}\lambda^{2k}\lambda_{max}(\mathbf{R}_{w}^{-1}). \tag{28}\] Now, it holds that \(\mathbf{Q}(y_{k}^{c,a}|x_{k}^{f},\mathbf{Z}_{-T}:\mathbf{Z}_{k-1}^{f})=\mathbf{Q}(y_{k}^{c,a}|x_{k}^{f})\) and \(\mathbf{P}(y_{k}|x_{k},\mathbf{Z}_{-T}:\mathbf{Z}_{k-1})=\mathbf{P}(y_{k}|x_{k})\); also, from (19) and (22) it holds that, given \(x_{k}\) and \(x_{k}^{f}\), \(\mathbf{P}(y_{k}|x_{k})\) and \(\mathbf{Q}(y_{k}^{c,a}|x_{k}^{f})\) are both Gaussian with means \(h(x_{k})\) and \(h(x_{k}^{f})\), respectively, and covariance \(\mathbf{R}_{v}\). Thus, it follows that \[\begin{split}KL\big{(}&\mathbf{Q}(y_{k}^{c,a}|x_{k}^{f})\|\mathbf{P}(y_{k}|x_{k})\big{)}\\ &\leq\mathbb{E}_{\mathbf{Q}(x_{k}^{f})}\big{\{}\big{(}h(x_{k})-h(x_{k}^{f})\big{)}^{T}\mathbf{R}_{v}^{-1}\big{(}h(x_{k})-h(x_{k}^{f})\big{)}\big{\}}\\ &\leq L_{h}^{2}(x_{k}-x_{k}^{f})^{T}\mathbf{R}_{v}^{-1}(x_{k}-x_{k}^{f})\\ &\leq L_{h}^{2}(1+L_{f_{c}}L_{h})^{2}\kappa^{2}\|s_{0}\|^{2}\lambda^{2k}\lambda_{max}(\mathbf{R}_{v}^{-1}),\end{split} \tag{29}\] where we again used Lemmas 2, 3 and 4 to get the above inequality. Combining (25)-(29) results in \[KL\big{(}\mathbf{Q}(Y_{-T}^{-1},Y_{0}^{a}:Y_{t}^{a})\|\mathbf{P}(Y_{-T}:Y_{t})\big{)}\leq\frac{\kappa^{2}(1+L_{f_{c}}L_{h})^{2}\|s_{0}\|^{2}}{1-\lambda^{2}}\big{(}\lambda_{max}(\mathbf{R}_{w}^{-1})+L_{h}^{2}\lambda_{max}(\mathbf{R}_{v}^{-1})\big{)}\stackrel{{\Delta}}{{=}}b_{\epsilon}. \tag{30}\] Finally, with \(b_{\epsilon}\) defined as in (30) and using Theorem 1, the attack sequence defined in (20) satisfies the \(\epsilon\)-stealthiness condition with \(\epsilon=\sqrt{1-e^{-b_{\epsilon}}}\). We now show that the proposed attack sequence is effective; i.e., there exists \(t^{\prime}\in\mathbb{Z}_{\geq 0}\) such that \(\|x_{t^{\prime}}^{a}-x_{t^{\prime}}\|\geq\alpha\) for arbitrarily large \(\alpha\).
To achieve this, consider the two dynamics from (18) and (22) for any \(t\in\mathbb{Z}_{\geq 0}\), \[\begin{split}x_{t+1}^{a}&=f(x_{t}^{a},u_{t}^{a})+w_{t}=f_{u}(x_{t}^{a},U_{t}^{a}),\\ x_{t+1}^{f}&=f(x_{t}^{f},u_{t}^{a})+w_{t}=f_{u}(x_{t}^{f},U_{t}^{a}),\end{split} \tag{31}\] with \(U_{t}^{a}=\begin{bmatrix}u_{t}^{aT}&w_{t}^{T}\end{bmatrix}^{T}\), for \(t\in\mathbb{Z}_{\geq 0}\). Since we assumed that the open-loop system (12) is IU, it holds that for all \(x_{0}^{a}=x_{0}\), there exists a nonzero \(s_{0}\) such that for any \(M>0\), \[\|x^{a}(t,x_{0}^{a},U^{a})-x^{f}(t,x_{0}^{a}-s_{0},U^{a})\|\geq M \tag{32}\] holds for all \(t\geq t^{\prime}\), for some \(t^{\prime}\in\mathbb{Z}_{\geq 0}\). On the other hand, we showed in (24) that \(\|x(t,x_{0},U)-x^{f}(t,x_{0}^{a}-s_{0},U^{a})\|\leq\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|\lambda^{t}\). Combining this with (32) results in \[\begin{split}\|x^{a}(t,x_{0}^{a},U^{a})-x(t,x_{0},U)\|&\geq\|x^{a}(t,x_{0}^{a},U^{a})-x^{f}(t,x_{0}^{a}-s_{0},U^{a})\|\\ &\quad-\|x^{f}(t,x_{0}^{a}-s_{0},U^{a})-x(t,x_{0},U)\|\\ &\geq M-\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|\lambda^{t}\geq M-\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|.\end{split} \tag{33}\] Since \(M\) is a free parameter, we can choose it to satisfy \(M>\alpha+\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|\), for arbitrarily large \(\alpha\). Thus, the system is \((\epsilon,\alpha)\)-attackable. From (22), the false sensor measurements are generated by the evolution of \(x_{t}^{f}\). Therefore, intuitively, the attacker wants to deceive the system into believing that \(x_{t}^{f}\) is the actual state of the system, instead of \(x_{t}^{a}\). Since \(x_{t}^{f}\) and \(x_{t}\) (i.e., the system state if no attack occurs during \(t\in\mathbb{Z}_{\geq 0}\)) converge to each other exponentially fast, the system effectively believes that \(x_{t}\) is the (under attack) system state, while the actual state \(x_{t}^{a}\) becomes arbitrarily large (see Fig. 2). On the other hand, \(s_{t}=x_{t}^{a}-x_{t}^{f}\) denotes the deviation between the actual state \(x_{t}^{a}\) and the fake state \(x_{t}^{f}\), and since \(\|x_{t}^{f}-x_{t}\|\leq\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|\lambda^{t}\), we can derive \(\|x_{t}^{a}-x_{t}\|\geq\|s_{t}\|-\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|\) using the same procedure as in (33). Thus, for an \(s_{0}\) with a very small norm, \(s_{t}\) represents the deviation between the under-attack state \(x_{t}^{a}\) and the attack-free state \(x_{t}\). This shows that to have an impactful attack, the dynamics of \(s_{t}\) needs to be unstable. Moreover, the parameters \(\kappa\), \(\lambda\), \(L_{h}\), \(L_{f_{c}}\), \(\mathbf{R}_{w}\), and \(\mathbf{R}_{v}\) in (30) are constants that depend either on system properties (\(L_{h}\), \(\mathbf{R}_{w}\), and \(\mathbf{R}_{v}\)) or on the controller design (\(L_{f_{c}}\), \(\kappa\), \(\lambda\)). However, \(s_{0}\) is set by the attacker, and _it can be chosen arbitrarily small to make \(\epsilon\) arbitrarily close to zero_. Yet, \(s_{0}\) cannot be equal to zero; in that case, (32) would not hold - i.e., the attack would not be impactful.
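The following minimal Python sketch (ours, not one of the paper's case studies) illustrates Theorem 2 and this \(\epsilon\)-versus-\(s_{0}\) trade-off on an assumed scalar plant: the generator (20) drives the true state away while the measurements delivered to any ID remain statistically attack-free, and shrinking \(\|s_{0}\|\) shrinks the bound (30):

```python
import numpy as np

# Sketch of the attack generator (20) on an assumed scalar example: an
# open-loop unstable plant (a_pl > 1, hence incrementally unstable) under
# stabilizing static output feedback (closed loop IES).  For this linear
# f, h, (20) reduces to s_{t+1} = a_pl * s_t and a_t = -s_t.
rng = np.random.default_rng(0)
a_pl, k, T, s0 = 1.2, 1.0, 60, 1e-4
f = lambda x, u: a_pl * x + u
h = lambda x: x                                  # L_h = 1

x = xa = 0.0                                     # x_0^a = x_0
s = s0
for t in range(T):
    w, v = 0.05 * rng.normal(), 0.05 * rng.normal()
    x = f(x, -k * (h(x) + v)) + w                # attack-free twin, same noise
    a_t = h(xa - s) - h(xa)                      # attack signal, (20)
    yca = h(xa) + v + a_t                        # measurement seen by ID, (15)
    ua = -k * yca
    s = f(xa, ua) - f(xa - s, ua)                # s_{t+1}, (20)
    xa = f(xa, ua) + w
print(f"state deviation |x^a - x| ~ a_pl^T * s0 = {abs(xa - x):.3g}")

# Evaluating the bound (30) with assumed IES constants; (1 + k) plays the
# role of (1 + L_fc * L_h) for this static controller.
kappa, lam = 1.5, a_pl - k                       # closed-loop contraction 0.2
b_eps = kappa**2 * (1 + k)**2 * s0**2 / (1 - lam**2) * (400.0 + 400.0)
print(f"epsilon <= sqrt(1 - exp(-b_eps)) = {np.sqrt(1 - np.exp(-b_eps)):.3g}")
```

With these numbers, the deviation grows to several units while the stealthiness bound stays below one percent; halving \(s_{0}\) again halves \(\epsilon\), at the price of waiting longer for the same \(\alpha\).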
Therefore, as opposed to attack methods targeting the prediction covariance in [16], where the attack impact changes linearly with \(\epsilon\), here an arbitrarily large \(\alpha\) (high-impact attacks) can be achieved even with an arbitrarily small \(\epsilon\); it may only take more time to reach \(\|x_{t^{\prime}}^{a}-x_{t^{\prime}}\|\geq\alpha\). _Discussion on Incremental Stability:_ The notion of incremental stability from Definition 1 determines how fast the system 'forgets' the impact of the initial condition [24]. For example, consider an LTI state-space model \(x_{t+1}=Ax_{t}+Bd_{t}\). If the matrix \(A\) has all eigenvalues inside the unit circle, then the system is IES according to Definition 1. Alternatively, we can look at the solution of the states at time \(t\) as \[x_{t}=A^{t}x_{0}+\sum\nolimits_{k=0}^{t-1}A^{t-k-1}Bd_{k},\] where the term \(A^{t}x_{0}\) captures the impact of the initial condition and the second term captures the impact of the input sequence \(d_{0}:d_{t-1}\) on the system. If all eigenvalues of the matrix \(A\) are inside the unit circle, then the term \(A^{t}x_{0}\) approaches zero exponentially for any given initial condition. This means the system 'forgets' the impact of the initial condition if the matrix \(A\) is stable. Hence, IES means the impact of the initial condition converges to zero exponentially fast.

Fig. 2: The trajectories of the system states \(x_{t}\) without attack (black line), of the 'fake' states \(x_{t}^{f}\) (blue dashed line), and of the system states \(x_{t}^{a}\) after the attack starts (red line).

A similar notion also exists with asymptotic behaviour, where the system is called incrementally asymptotically stable if the impact of the initial condition approaches zero asymptotically. If we replace the IES condition in Theorem 2 with incremental asymptotic stability, one can still show that the system will be \((\epsilon,\alpha)\)-attackable; yet, the upper bound obtained in (30) will be looser. Additional discussion is provided in Appendix VIII-A. **Remark 3**: _In case either \(w=0\) or \(v=0\) (i.e., when there is no process or measurement noise), one can still get a similar bound on the KL-divergence, only as a function of the nonzero noise covariance, by applying the monotonicity and data-processing inequalities. However, ensuring the stealthiness requirement is not possible if both \(w=0\) and \(v=0\) (i.e., for a noiseless system), as the system would be completely deterministic, and thus theoretically any small perturbation to the sensor measurements could be detected._ ### _Vulnerability Analysis of Nonlinear Systems \(\Sigma_{\text{II}}\)_ As discussed in Section III, for the system \(\Sigma_{\text{II}}\) the attacker does not have perfect access to the system states. Here, we consider two possible methods for estimating the states. **Case 1:** The attacker designs an estimator \(\mathcal{E}\) that takes the sequence of system measurements \(y_{-\mathcal{T}}:y_{-1},y_{0}^{a}:y_{t-1}^{a}\) to estimate the states, \[\hat{x}_{t}^{a}=\mathcal{E}_{t}(y_{-\mathcal{T}}:y_{-1},y_{0}^{a}:y_{t-1}^{a}), \tag{34}\] where \(\mathcal{E}\) is a nonlinear mapping that can represent any nonlinear filter, such as the extended Kalman filter (EKF). We assume that the attacker uses the measurements from \(\mathcal{T}\) (with \(-T<-\mathcal{T}\)) time steps before the attack initiation to estimate the states. It is also assumed that the estimation error \(\zeta=\hat{x}-x\) is bounded; i.e., \(\|\zeta_{t}\|\leq b_{\zeta}\) for all \(t\in\mathbb{Z}_{>0}\).
**Case 2:** The attacker uses its own set of sensors \(y_{t}^{\prime}\) to imperfectly measure the states as \(y_{t}^{\prime}=h^{\prime}(x_{t})+v_{t}^{\prime}\), where \(v^{\prime}\) is i.i.d. noise independent of \(w\) and \(v\). Such measurements are used in a filter or fusion model to extract the state estimate as \[\hat{x}_{t}^{a}=\mathcal{E}_{t}^{\prime}(y_{-\mathcal{T}}^{\prime}:y_{-1}^{\prime},y_{0}^{\prime a}:y_{t}^{\prime a}), \tag{35}\] with \(\zeta_{t}=\hat{x}_{t}^{a}-x_{t}^{a}\). Moreover, it is assumed that \(\zeta_{t}\) is bounded by \(b_{\zeta}^{\prime}\) for all \(t\in\mathbb{Z}_{>0}\). **Remark 4**: _Since the system and measurement noises are modeled by Gaussian distributions, it might not be theoretically possible to provide a bound on the estimation error that holds with probability one for Case 1. However, we assume \(b_{\zeta}\) is chosen large enough that the estimation error is bounded by \(b_{\zeta}\) during the attack run-time with probability close to one._ Now, the following theorem captures the conditions under which the system \(\Sigma_{\text{II}}\) is vulnerable to stealthy attacks. **Theorem 3**: _Assume that the function \(h_{c}\) is Lipschitz with a constant \(L_{h_{c}}\). The system \(\Sigma_{\text{II}}\) is \((\epsilon,\alpha)\)-attackable for arbitrarily large \(\alpha\), if the closed-loop system (14) is IISS, the controller dynamics (13) is IES, and the system (12) is IISU._ As in Theorem 2, let the trajectory of the system and controller states for \(t\in\mathbb{Z}_{<0}\) be denoted by \(\mathbf{X}_{-T}:\mathbf{X}_{-1}\). Following the attack start at \(t=0\), consider the evolution of the system with and without attacks during \(t\in\mathbb{Z}_{\geq 0}\). For the system under attack, starting at \(t=0\), the trajectory \(\mathbf{X}_{0}^{a}:\mathbf{X}_{t}^{a}\) of the system and controller states satisfies \[\begin{split}x_{t+1}^{a}&=f(x_{t}^{a},u_{t}^{a})+w_{t},\quad y_{t}^{c,a}=h(x_{t}^{a})+v_{t}+a_{t},\\ \mathcal{X}_{t}^{a}&=f_{c}(\mathcal{X}_{t-1}^{a},y_{t}^{c,a}),\quad u_{t}^{a}=h_{c}(\mathcal{X}_{t}^{a},y_{t}^{c,a}).\end{split} \tag{36}\] On the other hand, if the system were not under attack during \(t\in\mathbb{Z}_{\geq 0}\), we denote the plant and controller state evolution by \(\mathbf{X}_{0}:\mathbf{X}_{t}\). Hence, it is a continuation of the system trajectories \(\mathbf{X}_{-T}:\mathbf{X}_{-1}\) if hypothetically no data-injection attack occurs during \(t\in\mathbb{Z}_{\geq 0}\). Since the system and measurement noises are independent of the state, we can assume that \(w_{t}^{a}=w_{t}\) and \(v_{t}^{a}=v_{t}\). In this case, the dynamics of the plant and controller state evolution satisfies \[\begin{split}x_{t+1}&=f(x_{t},u_{t})+w_{t},\quad y_{t}^{c}=h(x_{t})+v_{t},\\ \mathcal{X}_{t}&=f_{c}(\mathcal{X}_{t-1},y_{t}^{c}),\quad u_{t}=h_{c}(\mathcal{X}_{t},y_{t}^{c}),\end{split} \tag{37}\] captured in the compact form (14), with \(\mathbf{X}_{0}=\begin{bmatrix}x_{0}^{T}&\mathcal{X}_{0}^{T}\end{bmatrix}^{T}\).
Now, consider the sequence of attack vectors injected into the system (36), constructed by the attacker using the following dynamical model \[s_{t+1}=f(\hat{x}_{t}^{a},u_{t}^{s})-f(\hat{x}_{t}^{a}-s_{t},u_{t}^{s}), \tag{38a}\] \[a_{t}=h(\hat{x}_{t}^{a}-s_{t})-h(\hat{x}_{t}^{a}), \tag{38b}\] \[\mathcal{X}_{t}^{s}=f_{c}(\mathcal{X}_{t-1}^{s},y_{t}^{c,a}), \tag{38c}\] \[u_{t}^{s}=h_{c}(\mathcal{X}_{t}^{s},y_{t}^{c,a}), \tag{38d}\] for \(t\in\mathbb{Z}_{\geq 0}\), and with some arbitrarily chosen nonzero initial values of \(s_{0}\) and \(\mathcal{X}_{0}^{s}\). By injecting the above attack sequence into the sensor measurements, we can verify that \[y_{t}^{c,a}=h(x_{t}^{a})+v_{t}+a_{t}=h(x_{t}^{a})-h(\hat{x}_{t}^{a})+h(\hat{x}_{t}^{a}-s_{t})+v_{t}. \tag{39}\] After defining \[x_{t}^{f}\stackrel{{\Delta}}{{=}}x_{t}^{a}-s_{t}, \tag{40}\] and combining (38) with (36), the dynamics of \(x_{t}^{f}\) and the controller, and the corresponding input and output, satisfy \[\begin{split}x_{t+1}^{f}&=f(x_{t}^{a},u_{t}^{a})-f(\hat{x}_{t}^{a},u_{t}^{s})+f(\hat{x}_{t}^{a}-s_{t},u_{t}^{s})+w_{t}\\ &=f(x_{t}^{f},u_{t}^{a})+\big{(}f(x_{t}^{f}+\zeta_{t},u_{t}^{s})-f(x_{t}^{f},u_{t}^{a})+f(x_{t}^{a},u_{t}^{a})-f(\hat{x}_{t}^{a},u_{t}^{s})\big{)}+w_{t}\\ &=f(x_{t}^{f},u_{t}^{a})+w_{t}+\sigma_{t},\end{split} \tag{41a}\] \[\begin{split}y_{t}^{c,a}&=h(x_{t}^{a})+v_{t}-h(\hat{x}_{t}^{a})+h(\hat{x}_{t}^{a}-s_{t})\\ &=h(x_{t}^{f})+\big{(}h(x_{t}^{f}+\zeta_{t})-h(x_{t}^{f})+h(x_{t}^{a})-h(\hat{x}_{t}^{a})\big{)}+v_{t}\\ &=h(x_{t}^{f})+v_{t}+\sigma_{t}^{\prime},\end{split} \tag{41b}\] with the initial condition \(x_{0}^{f}=x_{0}^{a}-s_{0}\), where we added and subtracted \(f(x_{t}^{f},u_{t}^{a})\) and \(h(x_{t}^{f})\) in (41a) and (41b), respectively, and used \(\hat{x}_{t}^{a}-s_{t}=x_{t}^{f}+\zeta_{t}\). We also define \(\sigma_{t}\stackrel{{\Delta}}{{=}}f(x_{t}^{f}+\zeta_{t},u_{t}^{s})-f(x_{t}^{f},u_{t}^{a})+f(x_{t}^{a},u_{t}^{a})-f(\hat{x}_{t}^{a},u_{t}^{s})\) and \(\sigma_{t}^{\prime}\stackrel{{\Delta}}{{=}}h(x_{t}^{f}+\zeta_{t})-h(x_{t}^{f})+h(x_{t}^{a})-h(\hat{x}_{t}^{a})\). Since the controller dynamics (13) is IES and the exogenous input to (38c) and (36) is the same (i.e., \(y_{t}^{c,a}\)), we have \(\|\mathcal{X}_{t}^{a}-\mathcal{X}_{t}^{s}\|\leq\kappa_{s}\lambda_{s}^{t}\|\mathcal{X}_{0}^{a}-\mathcal{X}_{0}^{s}\|\) for \(t\in\mathbb{Z}_{\geq 0}\). Since the function \(h_{c}\) is Lipschitz, we get \(\|u_{t}^{a}-u_{t}^{s}\|\leq L_{h_{c}}\kappa_{s}\lambda_{s}^{t}\|\mathcal{X}_{0}^{a}-\mathcal{X}_{0}^{s}\|\) for \(t\in\mathbb{Z}_{\geq 0}\). Now, using the assumption that the functions \(f\) and \(h\) are Lipschitz, we can bound the norms of \(\sigma\) and \(\sigma^{\prime}\) as \[\|\sigma_{t}\|\leq\|f(x_{t}^{f}+\zeta_{t},u_{t}^{s})-f(x_{t}^{f},u_{t}^{a})\|+\|f(x_{t}^{a},u_{t}^{a})-f(\hat{x}_{t}^{a},u_{t}^{s})\|\leq 2L_{f}(\|\zeta_{t}\|+L_{h_{c}}\kappa_{s}\lambda_{s}^{t}\|\mathcal{X}_{0}^{a}-\mathcal{X}_{0}^{s}\|), \tag{42a}\] \[\|\sigma_{t}^{\prime}\|\leq\|h(x_{t}^{f}+\zeta_{t})-h(x_{t}^{f})\|+\|h(x_{t}^{a})-h(\hat{x}_{t}^{a})\|\leq 2L_{h}\|\zeta_{t}\|. \tag{42b}\] Now, if we define \(\mathbf{X}_{t}^{f}=\begin{bmatrix}x_{t}^{f}\\ \mathcal{X}_{t}^{a}\end{bmatrix}\), it holds that \[\mathbf{X}_{t+1}^{f}=F(\mathbf{X}_{t}^{f},\mathbf{W}_{t}^{\prime}), \tag{43}\] with \(\mathbf{X}_{0}^{f}=\begin{bmatrix}x_{0}^{f}\\ \mathcal{X}_{0}^{a}\end{bmatrix}\) and \(\mathbf{W}_{t}^{\prime}=\begin{bmatrix}w_{t}+\sigma_{t}\\ v_{t}+\sigma_{t}^{\prime}\end{bmatrix}\).
Since \(x_{0}^{a}=x_{0}\), it holds that \(\mathbf{X}_{0}-\mathbf{X}_{0}^{f}=\begin{bmatrix}s_{0}\\ \mathcal{X}_{0}-\mathcal{X}_{0}^{a}\end{bmatrix}\), and similar to Theorem 2, we have \(\|\mathbf{X}_{0}-\mathbf{X}_{0}^{f}\|\leq(1+L_{f_{c}}L_{h})\|s_{0}\|\). On the other hand, since the closed-loop system (14) is IISS, we have \[\begin{split}\|\mathbf{X}(t,\mathbf{X}_{0},\mathbf{W})-\mathbf{X}^{f}(t,\mathbf{X}_{0}^{f},\mathbf{W}^{\prime})\|&\leq\kappa\|\mathbf{X}_{0}-\mathbf{X}_{0}^{f}\|\lambda^{t}+\gamma(\|\mathbf{W}-\mathbf{W}^{\prime}\|_{\infty})\\ &\leq\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|\lambda^{t}+\gamma(\|\sigma\|_{\infty}+\|\sigma^{\prime}\|_{\infty})\\ &\leq\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|\lambda^{t}+\gamma\big{(}2(L_{f}+L_{h})\|\zeta_{t}\|+2L_{f}L_{h_{c}}\kappa_{s}\lambda_{s}^{t}\|\mathcal{X}_{0}^{a}-\mathcal{X}_{0}^{s}\|\big{)},\end{split} \tag{44}\] where the term \(\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|\lambda^{t}\) converges exponentially fast to zero. For simplicity of notation, we denote the \(\gamma\) term above simply by \(\gamma(t)\), as it is a function of time. The term \(\gamma(t)\) is positive and approaches zero as both \(\|\zeta_{t}\|\) and \(\|\mathcal{X}_{0}^{a}-\mathcal{X}_{0}^{s}\|\) approach zero. Therefore, the trajectories of \(\mathbf{X}\) (i.e., the system without attack) and \(\mathbf{X}^{f}\) are within a bounded distance from each other. We now use these results to show that the generated attack sequence satisfies the \(\epsilon\)-stealthiness condition. Due to the space limit, we only show here the stealthiness for the state estimation method of Case 1; the proof for Case 2 can be found in the Appendix. Before finishing the proof, we introduce the following lemma (proof in the Appendix), and define \(\mathbf{Z}_{t}=\begin{bmatrix}x_{t}\\ y_{t}^{c}\end{bmatrix}\) and \(\mathbf{Z}_{t}^{f}=\begin{bmatrix}x_{t}^{f}\\ y_{t}^{c,a}\end{bmatrix}\). **Lemma 5**: _Assume the sequence \(\mathbf{Z}_{-T}^{-1},\mathbf{Z}_{0}^{f}:\mathbf{Z}_{t}^{f}\) is given for any \(t\geq 0\), and \(s_{0}\), \(\mathcal{X}_{-T}\) and \(\mathcal{X}_{0}^{s}\) are chosen deterministically. Then, the signals \(u_{t}^{a}\), \(u_{t}^{s}\), \(y_{t}^{a}\), \(\hat{x}_{t+1}^{a}\) and \(s_{t+1}\) are also uniquely (deterministically) obtained._ Now, using the monotonicity property of the KL-divergence from Lemma 1, it holds that \[\begin{split}KL\big{(}&\mathbf{Q}(Y_{-T}^{-1},Y_{0}^{a}:Y_{t}^{a})||\mathbf{P}(Y_{-T}:Y_{t})\big{)}\\ &\leq KL\big{(}\mathbf{Q}(\mathbf{Z}_{-T}^{-1},\mathbf{Z}_{0}^{f}:\mathbf{Z}_{t}^{f})||\mathbf{P}(\mathbf{Z}_{-T}:\mathbf{Z}_{t})\big{)}.\end{split} \tag{45}\] Then, we apply the chain-rule property of the KL-divergence on the right-hand side of (45) to obtain \[\begin{split}KL\big{(}&\mathbf{Q}(\mathbf{Z}_{-T}^{-1},\mathbf{Z}_{0}^{f}:\mathbf{Z}_{t}^{f})||\mathbf{P}(\mathbf{Z}_{-T}^{-1},\mathbf{Z}_{0}:\mathbf{Z}_{t})\big{)}\\ &=KL\big{(}\mathbf{Q}(\mathbf{Z}_{-T}^{-1})||\mathbf{P}(\mathbf{Z}_{-T}^{-1})\big{)}+KL\big{(}\mathbf{Q}(\mathbf{Z}_{0}^{f}:\mathbf{Z}_{t}^{f}|\mathbf{Z}_{-T}^{-1})||\mathbf{P}(\mathbf{Z}_{0}:\mathbf{Z}_{t}|\mathbf{Z}_{-T}^{-1})\big{)}\\ &=KL\big{(}\mathbf{Q}(\mathbf{Z}_{0}^{f}:\mathbf{Z}_{t}^{f}|\mathbf{Z}_{-T}^{-1})||\mathbf{P}(\mathbf{Z}_{0}:\mathbf{Z}_{t}|\mathbf{Z}_{-T}^{-1})\big{)},\end{split} \tag{46}\] where we used the fact that the KL-divergence of two identical distributions (i.e., \(\mathbf{Q}(\mathbf{Z}_{-T}^{-1})\) and \(\mathbf{P}(\mathbf{Z}_{-T}^{-1})\)) is zero.
Applying the chain-rule property of the KL-divergence to (46) results in \[\begin{split}KL\big{(}&\mathbf{Q}(\mathbf{Z}_{0}^{f}:\mathbf{Z}_{t}^{f}|\mathbf{Z}_{-T}^{-1})||\mathbf{P}(\mathbf{Z}_{0}:\mathbf{Z}_{t}|\mathbf{Z}_{-T}^{-1})\big{)}\\ &=\sum_{k=0}^{t}\Big{\{}KL\big{(}\mathbf{Q}(x_{k}^{f}|\mathbf{Z}_{-T}^{-1},\mathbf{Z}_{0}^{f}:\mathbf{Z}_{k-1}^{f})||\mathbf{P}(x_{k}|\mathbf{Z}_{-T}:\mathbf{Z}_{k-1})\big{)}\\ &\quad+KL\big{(}\mathbf{Q}(y_{k}^{c,a}|x_{k}^{f},\mathbf{Z}_{-T}^{-1},\mathbf{Z}_{0}^{f}:\mathbf{Z}_{k-1}^{f})||\mathbf{P}(y_{k}|x_{k},\mathbf{Z}_{-T}:\mathbf{Z}_{k-1})\big{)}\Big{\}}.\end{split} \tag{47}\] Given \(\mathbf{Z}_{-T}:\mathbf{Z}_{k-1}\), the distribution of \(x_{k}\) is a Gaussian with mean \(f(x_{k-1},u_{k-1})\) and covariance \(\mathbf{R}_{w}\). On the other hand, using (41a) and (40), we have \[\begin{split}x_{k}^{f}=&f(x_{k-1}^{f}+s_{k-1},u_{k-1}^{a})-f(\hat{x}_{k-1}^{a},u_{k-1}^{s})\\ &+f(\hat{x}_{k-1}^{a}-s_{k-1},u_{k-1}^{s})+w_{k-1}.\end{split} \tag{48}\] Using Lemma 5, given \(\mathbf{Z}_{-T}^{-1},\mathbf{Z}_{0}^{f}:\mathbf{Z}_{t-1}^{f}\) and the fact that \(s_{0}\), \(\mathcal{X}_{-T}\) and \(\mathcal{X}_{0}^{s}\) are deterministic, the values \(\hat{x}_{k-1}^{a}\), \(s_{k-1}\), \(u_{k-1}^{s}\) and \(u_{k-1}^{a}\) can be deterministically obtained. Therefore, the distribution of \(x_{k}^{f}\) given \(\mathbf{Z}_{-T}^{-1},\mathbf{Z}_{0}^{f}:\mathbf{Z}_{k-1}^{f}\) is also a Gaussian with covariance \(\mathbf{R}_{w}\) and a deterministically known mean; bounding the two conditional KL terms in (47) as in the proof of Theorem 2 - using Lemmas 2, 3 and 4, the Lipschitz property of \(h\), the bound (44), and \(\|\zeta_{t}\|\leq b_{\zeta}\) - yields the bounds (49)-(51), the counterparts of (28)-(29) that now include the estimation-error terms. Combining (45)-(51) results in \[\begin{split}KL\big{(}&\mathbf{Q}(Y_{-T}^{-1},Y_{0}^{a}:Y_{t}^{a})||\mathbf{P}(Y_{-T}:Y_{t})\big{)}\\ \leq&\sum_{k=0}^{t}\Big{\{}\lambda_{max}(\mathbf{R}_{w}^{-1})\|x_{k}-x_{k}^{f}\|^{2}+\lambda_{max}(\mathbf{R}_{v}^{-1})\\ &\times\Big{(}L_{h}^{2}\|x_{k}-x_{k}^{f}\|^{2}+2L_{h}^{2}b_{\zeta}\|x_{k}-x_{k}^{f}\|+L_{h}^{2}b_{\zeta}^{2}\Big{)}\Big{\}}.\end{split}\] It is straightforward to verify that \[\begin{split}\sum_{k=0}^{t}\|x_{k}-x_{k}^{f}\|^{2}&\leq\sum_{k=0}^{t}\big{(}\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|\lambda^{k}+\gamma(k)\big{)}^{2}\\ &=\sum_{k=0}^{t}\Big{(}\kappa^{2}(1+L_{f_{c}}L_{h})^{2}\|s_{0}\|^{2}\lambda^{2k}+2\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|\lambda^{k}\gamma(k)+\gamma^{2}(k)\Big{)}\\ &\leq\frac{\kappa^{2}(1+L_{f_{c}}L_{h})^{2}\|s_{0}\|^{2}}{1-\lambda^{2}}+2\frac{\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|\gamma(0)}{1-\lambda}+\sum_{k=0}^{t}\gamma^{2}(k);\end{split}\] \[\sum_{k=0}^{t}\|x_{k}-x_{k}^{f}\|\leq\sum_{k=0}^{t}\big{(}\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|\lambda^{k}+\gamma(k)\big{)}\leq\frac{\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|}{1-\lambda}+\sum_{k=0}^{t}\gamma(k).\] Therefore, we get \[\begin{split}&KL\big{(}\mathbf{Q}(Y_{-T}^{-1},Y_{0}^{a}:Y_{t}^{a})||\mathbf{P}(Y_{-T}:Y_{t})\big{)}\leq\\ &\frac{\kappa^{2}(1+L_{f_{c}}L_{h})^{2}\|s_{0}\|^{2}}{1-\lambda^{2}}\big{(}\lambda_{max}(\mathbf{R}_{w}^{-1})+L_{h}^{2}\lambda_{max}(\mathbf{R}_{v}^{-1})\big{)}+\\ &\frac{\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|}{1-\lambda}\big{(}2\gamma(0)\lambda_{max}(\mathbf{R}_{w}^{-1})+L_{h}^{2}\lambda_{max}(\mathbf{R}_{v}^{-1})\big{)}\\ &+\sum_{k=0}^{t}\gamma^{2}(k)\big{(}\lambda_{max}(\mathbf{R}_{w}^{-1})+L_{h}^{2}\lambda_{max}(\mathbf{R}_{v}^{-1})\big{)}\\ &+\sum_{k=0}^{t}\gamma(k)\big{(}2L_{h}^{2}b_{\zeta}\lambda_{max}(\mathbf{R}_{v}^{-1})\big{)}+\sum_{k=0}^{t}b_{\zeta}^{2}L_{h}^{2}\lambda_{max}(\mathbf{R}_{v}^{-1}). \tag{53}\] Denoting the right-hand side of the above inequality by \(b_{\epsilon}\) and using Theorem 1, the attack on the system \(\Sigma_{\text{II}}\) is \(\epsilon\)-stealthy with \(\epsilon=\sqrt{1-e^{-b_{\epsilon}}}\). Now, we show that the attack (38a) is also \(\alpha\)-impactful.
Using (36) and (41a), we have \[\begin{split}x_{t+1}^{a}&=f(x_{t}^{a},u_{t}^{a})+w_{t}=f_{u}(x_{t}^{a},U_{t}^{a}),\\ x_{t+1}^{f}&=f(x_{t}^{f},u_{t}^{a})+w_{t}+\sigma_{t}=f_{u}(x_{t}^{f},{U^{\prime}}_{t}^{a}),\end{split} \tag{54}\] with \(U_{t}^{a}=\begin{bmatrix}{u_{t}^{a}}^{T}&{w_{t}}^{T}\end{bmatrix}^{T}\) and \({U^{\prime}}_{t}^{a}=\begin{bmatrix}{u_{t}^{a}}^{T}&(w_{t}+\sigma_{t})^{T}\end{bmatrix}^{T}\), for \(t\in\mathbb{Z}_{\geq 0}\). Using the assumption that the system is IISU, and given that \(\|U_{t}^{a}-{U^{\prime}}_{t}^{a}\|\leq\|\sigma_{t}\|\leq 2L_{f}(b_{\zeta}+L_{h_{c}}\kappa_{s}\lambda_{s}^{t}\|\mathcal{X}_{0}^{a}-\mathcal{X}_{0}^{s}\|)\) for all \(t\in\mathbb{Z}_{\geq 0}\), there exists a nonzero \(s_{0}\) such that \(\|x^{a}(t,x_{0}^{a},U^{a})-x^{f}(t,x_{0}^{a}-s_{0},U^{\prime a})\|\geq M\) holds for any \(M>0\) and all \(t\geq t^{\prime}\), for some \(t^{\prime}\). Following the same procedure as in Theorem 2, we can show that \[\|x^{a}(t,x_{0}^{a},U^{a})-x(t,x_{0},U)\|\geq M-\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|-\gamma(0),\] so by choosing \(M\geq\alpha+\kappa(1+L_{f_{c}}L_{h})\|s_{0}\|+\gamma(0)\), the attack is \(\alpha\)-impactful. Therefore, the system is \((\epsilon,\alpha)\)-attackable. Unlike in Theorem 2, where the 'fake' state converges exponentially fast to the attack-free state, under the conditions of Theorem 3 the 'fake' state only remains within a bounded distance from the attack-free state. Intuitively, depending on the magnitude of the deviation \(\gamma\), different levels of stealthiness guarantees are obtained: for smaller values of \(b_{\zeta}\) in Case 1 (or \(b_{\zeta}^{\prime}\) in Case 2), \(\lambda_{s}\), and \(\|\mathcal{X}_{0}^{s}-\mathcal{X}_{0}^{a}\|\), the deviation is smaller and thus the attack has stronger stealthiness guarantees. The IES condition in Theorem 3 ensures that the estimated control input \(u_{t}^{s}\) converges to the actual control input \(u_{t}^{a}\) exponentially fast. The stealthiness level can also be read directly from the right-hand sides of the inequalities (53) and (79) in Theorem 3, for Cases 1 and 2, respectively; these terms give \(b_{\epsilon}\), with \(\epsilon=\sqrt{1-e^{-b_{\epsilon}}}\), for each case. The more accurate the attacker's estimate of the states (i.e., the smaller \(b_{\zeta}\) or \(b_{\zeta}^{\prime}\)), the smaller \(\epsilon\) is. However, there is no guarantee that \(\epsilon\) approaches zero when the estimation error is large. On the other hand, having the initial condition \(\mathcal{X}_{0}^{s}\) close to \(\mathcal{X}_{0}^{a}\) helps the attack be stealthier, as the function \(\gamma(k)\) becomes smaller. Ideally, when \(s_{0}\) can be chosen arbitrarily close to zero and both \(b_{\zeta}\) (or \(b_{\zeta}^{\prime}\)) and \(\|\mathcal{X}_{0}^{s}-\mathcal{X}_{0}^{a}\|\) approach zero, \(\epsilon\) will be close to zero. We can also observe that larger noise covariances \(\mathbf{R}_{w}\) and \(\mathbf{R}_{v}\) help the attacker launch a stealthier attack, as \(\lambda_{max}(\mathbf{R}_{w}^{-1})\) and \(\lambda_{max}(\mathbf{R}_{v}^{-1})\) become smaller. However, if the attacker only relies on the system's sensor measurements for the state estimate \(\hat{x}_{t}^{a}\) (i.e., Case 1), larger noise levels also cause a larger \(b_{\zeta}\), which negatively impacts attack stealthiness. To avoid this problem, the attacker might use side information (Case 2) to obtain a smaller estimation error independent of the system's noise profile.
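The following minimal Python sketch (ours; the mildly nonlinear plant, static output feedback and all constants are illustrative assumptions) illustrates the attack model II generator (38) under a Case 2-style bounded-error estimate; with the assumed static controller the attacker's input estimate is exact, so only the state-estimation error \(\zeta_{t}\) perturbs the generator:

```python
import numpy as np

# Sketch of the generator (38) when the attacker only has a noisy estimate
# x_hat = x^a + zeta with |zeta| <= b_zeta.  The plant f is incrementally
# unstable (derivative >= 1.15); the closed loop contracts.  All values are
# assumed for illustration.
rng = np.random.default_rng(1)
a_pl, k, T, b_zeta = 1.2, 1.0, 60, 0.01
f = lambda x, u: a_pl * x + 0.05 * np.sin(x) + u
h = lambda x: x
h_c = lambda yc: -k * yc                            # known to the attacker

x = xa = 0.0
s = 1e-3
for t in range(T):
    w, v = 0.05 * rng.normal(), 0.05 * rng.normal()
    x = f(x, h_c(h(x) + v)) + w                     # attack-free twin
    zeta = np.clip(b_zeta * rng.normal(), -b_zeta, b_zeta)
    x_hat = xa + zeta                               # bounded-error estimate
    a_t = h(x_hat - s) - h(x_hat)                   # attack signal, (38b)
    yca = h(xa) + v + a_t                           # received by controller/ID
    u_s = h_c(yca)                                  # attacker's input estimate
    s = f(x_hat, u_s) - f(x_hat - s, u_s)           # s update, (38a)
    xa = f(xa, h_c(yca)) + w
print(f"deviation |x^a - x| = {abs(xa - x):.3g}")   # grows roughly like a_pl^t
```

For a nonlinear \(f\) as here, the estimation error enters the fake-state dynamics through the \(\sigma_{t}\) terms of Theorem 3, which is why the stealthiness level now degrades gracefully with \(b_{\zeta}\) instead of vanishing with \(s_{0}\).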
### _Vulnerability of Nonlinear, Input Affine Systems_ The model (11) is very general and as a result it requires the attacker to have knowledge about the states and input control (system \(\Sigma_{\text{I}}\)) or their estimates (system \(\Sigma_{\text{II}}\)) to design the stealthy attacks. However, as we show in this section, 'simpler' dynamical models result in some relaxations on the knowledge required by the attacker to design stealthy attacks. For example, consider the plants that can be modeled as \[x_{t+1}=f(x_{t})+Bu_{t}+w_{t}, \tag{55}\] where, with some abuse of the notation, we again use \(f\) as the state transition function, \(x\), \(u\) and \(w\) are defined as before, and \(B\in\mathbb{R}^{m\times n}\). Such a formulation can include systems where the impact of the control input on the states is weighted by a constant matrix, or systems of the form \(x_{t+1}=f(x_{t})+g(x_{t})u_{t}+w_{t}\) where the function \(g(x)\) has a very small Lipschitz constant. Let us consider the system \(\Sigma_{\text{I}}\) with the plant model (55). When the conditions of Theorem 2 hold, such a system will be \((\epsilon,\alpha)\)-attackable. However, the attack sequence generated using (20) for such a system takes the form \[\begin{split} s_{t+1}&=f(x_{t}^{a})+Bu_{t}^{a}-f(x _{t}^{a}-s_{t})-Bu_{t}^{a}\\ &=f(x_{t}^{a})-f(x_{t}^{a}-s_{t}),\\ a_{t}&=h(x_{t}^{a}-s_{t})-h(x_{t}^{a}),\end{split} \tag{56}\] which means the attacker does not need access to the control input \(u_{t}^{a}\) during the attack, and such a requirement in attack model I can be relaxed. Moreover, the attacker does not need to have knowledge about the matrix \(B\). Similarly, for the system \(\Sigma_{\text{II}}\) with the input affine plant model (55), the attack sequence generation (38a) in Theorem 3 will also be relaxed as \[\begin{split} s_{t+1}&=f(\hat{x}_{t}^{a})+Bu_{t}^{ s}-f(\hat{x}_{t}^{a}-s_{t})-Bu_{t}^{s}\\ &=f(\hat{x}_{t}^{a})-f(\hat{x}_{t}^{a}-s_{t}),\\ a_{t}&=h(\hat{x}_{t}^{a}-s_{t})-h(\hat{x}_{t}^{a}).\end{split} \tag{57}\] Therefore, the attacker does not need access to the estimate of the input (\(u_{t}^{s}\)), which as a result relaxes the assumption of knowledge about the controller dynamics in attack model II and the matrix \(B\). Moreover, since the estimated input control is not needed, the IES assumption on the controller dynamics (13) can be removed from Theorem 3. On the other hand, for such a system the upper bound on \(\|\sigma_{t}\|\) ((42a) in Theorem 3) will be smaller, as we get \(\|\sigma_{t}\|\leq 2L_{f}\|\zeta_{t}\|\). This helps the attacker obtain a smaller bound on \(\epsilon\) and, as a result, a stealthier attack. ### _Vulnerability Analysis of LTI Systems_ We now derive sufficient conditions for \((\epsilon,\alpha)\)-successful attacks on LTI systems, with (11) and (13) simplified as \[\begin{split} x_{t+1}&=Ax_{t}+Bu_{t}+w_{t},\quad y _{t}=Cx_{t}+v_{t},\\ \mathcal{X}_{t}&=A_{c}\mathcal{X}_{t-1}+B_{c}y_{t}^{ c},\quad u_{t}=C_{c}\mathcal{X}_{t}.\end{split} \tag{58}\] LTI systems with any linear controller (e.g., LQG controllers) can be captured in the above form. The following lemma provides the conditions for IES and IU for the above LTI system. **Lemma 6**: _Consider an LTI dynamical system in the form of \(x_{t+1}=Ax_{t}+Bd_{t}\). The system is IES if and only if all eigenvalues of the matrix \(A\) are inside the unit circle.
The system is IU if and only if \(A\) has an unstable eigenvalue._ Let us consider two trajectories \(x(t,\xi_{1},d)\) and \(x(t,\xi_{2},d)\) with initial conditions \(\xi_{1}\) and \(\xi_{2}\), respectively, and the input sequence \(d=d_{0}:d_{t-1}\). Now, let us define \(\Delta x_{t}=x(t,\xi_{1},d)-x(t,\xi_{2},d)\), which has the dynamics \(\Delta x_{t+1}=A\Delta x_{t}\) with initial condition \(\Delta x_{0}=\xi_{1}-\xi_{2}\). Hence, \(\Delta x_{t}\) converges exponentially to zero if and only if the matrix \(A\) has all eigenvalues inside the unit circle. Now, let us assume the matrix \(A\) is unstable, and for any trajectory \(x(t,\xi_{1},d)\) consider another trajectory \(x(t,\xi_{2},d)\) with \(\xi_{2}=\xi_{1}+cq_{i}\), where \(c\) is a constant and \(q_{i}\) is the unstable eigenvector associated with an unstable eigenvalue \(\lambda_{i}\) of the matrix \(A\). Now, we get \(\Delta x_{t}=A^{t}\Delta x_{0}=c\lambda_{i}^{t}q_{i}\). Since \(|\lambda_{i}|>1\), for any \(M>0\), there exists a time step \(t^{\prime}\) such that \(\|\Delta x_{t^{\prime}}\|>M\). It is also straightforward to show the converse direction. This allows us to directly capture conditions for stealthy yet effective attacks on LTI systems. **Corollary 1**: _The LTI system (58) is \((\epsilon,\alpha)\)-attackable for arbitrarily large \(\alpha\) if the matrix \(A\) is unstable and the closed-loop control system is asymptotically stable._ The proof is directly obtained by combining Theorem 2 and Lemma 6. Asymptotic stability of the closed-loop system is not a restrictive assumption, as stability is commonly the weakest required performance guarantee for a control system. Matrix \(A\) being unstable is a sufficient condition for satisfying \((\epsilon,\alpha)\)-attackability when any set of sensors can be compromised. Note that the \((\epsilon,\alpha)\)-attackability condition for LTI systems with an optimal detector complies with the results from [8, 9] where LQG controllers with residue-based detectors (e.g., \(\chi^{2}\) detectors) are considered. In this case, the false-data injection attack sequence design method from (20) reduces to a simple dynamical model \[\begin{split} s_{t+1}&=Ax_{t}^{a}+Bu_{t}^{a}-(A(x _{t}^{a}-s_{t})+Bu_{t}^{a})=As_{t},\\ a_{t}&=C(x_{t}^{a}-s_{t})-C(x_{t}^{a})=-Cs_{t}, \end{split} \tag{59}\] which only requires knowledge of the matrices \(A\) and \(C\), unlike the works [8, 9, 10, 13] that assume the attacker has access to the full plant model information as well as the controller and Kalman filter gain. Similarly, the attack sequence generated using the attack model II in (38a) for LTI systems would have the same form as above. As a result, the attacker does not need to have access to the states or input control (or their estimates), and therefore both attack models fall into the same attack model where the attacker only needs to know the matrices \(A\) and \(C\). As discussed, \(s_{t}\) is a measure of the distance between \(x_{t}^{a}\) and \(x_{t}\); thus, for impactful attacks, \(s_{t}\) in (59) needs to dynamically grow over time, and \(s_{0}\) needs to be in the subspace of unstable eigenvectors of the matrix \(A\). For example, if \(q_{i}\) is the eigenvector associated with an unstable eigenvalue \(\lambda_{i}\), for \(s_{0}=cq_{i}\) we get \(s_{t}=c\lambda_{i}^{t}q_{i}\), where \(c\) can be chosen by the attacker. Using Theorem 2 we have \(\|x_{t}^{a}-x_{t}\|\geq c|\lambda_{i}|^{t}\|q_{i}\|-(1+\|A_{c}\|\|C\|)c\|q_{i}\|\), resulting in a large deviation in the states for arbitrarily large time steps.
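As an illustration of how little the attacker needs here, a minimal Python sketch of the attack generator (59) follows; the matrices \(A\) and \(C\) and all numbers are illustrative placeholders, not taken from the paper.

```python
import numpy as np

# Minimal sketch of the LTI attack sequence (59): s_{t+1} = A s_t,
# a_t = -C s_t, with s_0 chosen along an unstable eigenvector of A.
A = np.array([[1.1, 0.2],
              [0.0, 0.7]])
C = np.array([[1.0, 0.0]])

eigvals, eigvecs = np.linalg.eig(A)
i = int(np.argmax(np.abs(eigvals)))           # pick the most unstable mode
assert np.abs(eigvals[i]) > 1, "A must have an unstable eigenvalue (Lemma 6)"

c = 1e-6                                      # small c => small epsilon
s = c * np.real(eigvecs[:, i])                # s_0 = c * q_i

attack = []
for t in range(50):
    attack.append(-C @ s)                     # a_t = -C s_t, injected into y_t
    s = A @ s                                 # s_{t+1} = A s_t grows like |lambda_i|^t
print("||s_50|| =", np.linalg.norm(s))        # exponential growth of the deviation
```

Only \(A\) and \(C\) enter the loop, matching the discussion above; scaling \(c\) trades the speed of the attack's impact against its stealthiness.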
On the other hand, if the condition of Corollary 1 holds and the attack is generated using (59), then the attack is \(\epsilon\)-stealthy. From Theorem 2, we have \(\epsilon=\sqrt{1-e^{-b_{\epsilon}}}\) with \[b_{\epsilon}=\frac{c^{2}\|q_{i}\|^{2}}{1-\lambda_{c}^{2}}\big{(}\lambda_{max}( \mathbf{R}_{w}^{-1})+\|C\|^{2}\lambda_{max}(\mathbf{R}_{v}^{-1})\big{)},\] where we used \(s_{0}=cq_{i}\), \(\lambda_{c}\) is the decay rate of the closed-loop system, and all parameters are defined as before. Since all parameters above are characterized by the system except \(c\), the attacker can choose the scalar \(c\) arbitrarily close to zero to make the nonzero \(\epsilon\) arbitrarily small. **Remark 5**: _We initially assumed that \(\mathcal{K}=\mathcal{S}\); i.e., the attacker can compromise all sensors. However, when the system is LTI, the minimum subset of compromised sensors can be obtained as \(\min_{q_{i}\in\{q_{1},\ldots,q_{f}\}}\|\text{supp}(Cq_{i})\|_{0}\), where \(\{q_{1},...,q_{f}\}\) denotes the set of unstable eigenvectors of the matrix \(A\), and supp denotes the set of nonzero elements of the vector._ ## VI The Impact of Stealthy Attack In this section, we evaluate the impact of the stealthy attack from (20) on state estimation. Here, as an example, we consider the EKF; however, the results can be extended to any other nonlinear Luenberger-type observer. When the sensors are attack free, the state estimates of (11) are updated as \[\hat{x}_{t|t-1} =f(\hat{x}_{t-1},u_{t-1}), \tag{60}\] \[\hat{x}_{t} =\hat{x}_{t|t-1}+L(y_{t}^{c}-h(\hat{x}_{t|t-1})),\] where \(L\) is the observer gain that is assumed to be in steady state. We assume that the control input \(u_{t}\) is obtained from the state estimate \(\hat{x}_{t}\) as \(u_{t}=K(\hat{x}_{t})\). Thus, the estimation dynamics is \[\hat{x}_{t}=f(\hat{x}_{t-1},K(\hat{x}_{t-1}))+L(y_{t}^{c}-h(f(\hat{x}_{t-1},K( \hat{x}_{t-1})))),\] which is a special form of (13). Now, the dynamics of the closed-loop system can be captured as \[\mathbf{X}_{t+1}^{\prime}=F^{\prime}(\mathbf{X}_{t}^{\prime},\mathbf{W}_{t}), \tag{61}\] where we define the full state of the closed-loop system as \(\mathbf{X}_{t}^{\prime}\triangleq\begin{bmatrix}x_{t}^{T}&\hat{x}_{t}^{T}\end{bmatrix}^ {T}\), and the exogenous disturbances as \(\mathbf{W}_{t}\triangleq\begin{bmatrix}w_{t}^{T}&v_{t+1}^{T}&v_{t}^{T}\end{bmatrix}^ {T}\), similarly defined as in (14). In what follows, we provide the condition such that there exists a sequence of \((\epsilon,\alpha)\)-attacks for which the estimate of the states during the attack converges exponentially fast to the attack-free case. **Theorem 4**: _If the closed-loop system (61) with attack model I is IES and the open loop system (12) is IU, then there exists a sequence of \((\epsilon,\alpha)\)-attacks such that_ \[\|\hat{x}_{t}^{f}-\hat{x}_{t}\|\leq\eta\lambda^{t}\] _for some positive \(\lambda<1\) and some \(\eta>0\)._ In the proof of Theorem 2, we showed that if the closed-loop system (61) is IES and the open-loop system (12) is IU, then the system is \((\epsilon,\alpha)\)-attackable; i.e., a sequence of attacks generated by (20) will cause an \(\alpha\) deviation in the states while remaining \(\epsilon\)-stealthy. Now, we will show the impact of such an attack on the state estimation.
Similar to Theorem 2, by defining \(x_{t}^{f}\triangleq x_{t}^{a}-s_{t}\) we get \[x_{t+1}^{f}= f(x_{t}^{f},K(\hat{x}_{t}^{f}))+w_{t}^{a},\quad y_{t}^{c,a}=h(x_ {t}^{f})+v_{t}^{a},\] \[\hat{x}_{t}^{f}= f(\hat{x}_{t-1}^{f},K(\hat{x}_{t-1}^{f}))+L(y_{t}^{c,a}-h(f( \hat{x}_{t-1}^{f},K(\hat{x}_{t-1}^{f})))). \tag{62}\] On the other hand, if the system were not under attack during \(t\in\mathbb{Z}_{\geq 0}\), the plant and the state estimation evolution would satisfy \[x_{t+1}= f(x_{t},K(\hat{x}_{t}))+w_{t},\quad y_{t}^{c}=h(x_{t})+v_{t},\] \[\hat{x}_{t}= f(\hat{x}_{t-1},K(\hat{x}_{t-1}))+L(y_{t}^{c}-h(f(\hat{x}_{t-1 },K(\hat{x}_{t-1})))). \tag{63}\] Since the system and measurement noises are independent of the state, we can assume that \(w_{t}^{a}=w_{t}\) and \(v_{t}^{a}=v_{t}\). By defining \(\mathbf{X}_{t}^{\prime f}\triangleq\begin{bmatrix}(x_{t}^{f})^{T}&(\hat{x}_{t}^{f})^ {T}\end{bmatrix}^{T}\), the dynamics (62) and (63) can be written as \(\mathbf{X}_{t+1}^{\prime f}=F^{\prime}(\mathbf{X}_{t}^{\prime f},\mathbf{W}_{t})\) and \(\mathbf{X}_{t+1}^{\prime}=F^{\prime}(\mathbf{X}_{t}^{\prime},\mathbf{W}_{t})\). Since both of these dynamics are subject to the same input and the system dynamics is IES, there exist \(\kappa>1\) and \(\lambda<1\) such that \[\|\mathbf{X}_{t}^{\prime f}-\mathbf{X}_{t}^{\prime}\|\leq\kappa\|\mathbf{X}_{0 }^{\prime f}-\mathbf{X}_{0}^{\prime}\|\lambda^{t} \tag{64}\] with \(\mathbf{X}_{0}^{\prime f}-\mathbf{X}_{0}^{\prime}=\begin{bmatrix}x_{0}^{f}-x_ {0}\\ \hat{x}_{0}^{f}-\hat{x}_{0}\end{bmatrix}=\begin{bmatrix}s_{0}\\ \hat{x}_{0}^{f}-\hat{x}_{0}\end{bmatrix}\); thus, we have \[\hat{x}_{0}^{f}-\hat{x}_{0}=L(y_{0}^{c,a}-y_{0}^{c})=L(h(x_{0}-s_{0})-h(x_{0})), \tag{65}\] where we used the fact that the attack starts at \(t=0\); i.e., \(\hat{x}_{-1}^{f}=\hat{x}_{-1}\) and \(x_{0}^{a}=x_{0}\). Therefore, we get \(\|\mathbf{X}_{0}^{\prime f}-\mathbf{X}_{0}^{\prime}\|\leq(1+\|L\|L_{h})\|s_{0}\|\). As a result, we get \[\|\hat{x}_{t}^{f}-\hat{x}_{t}\|\leq\|\mathbf{X}_{t}^{\prime f}-\mathbf{X}_{t}^{ \prime}\|\leq\kappa(1+\|L\|L_{h})\|s_{0}\|\lambda^{t}, \tag{66}\] and finally, for \(\eta=\kappa(1+\|L\|L_{h})\|s_{0}\|\), it holds that \(\|\hat{x}_{t}^{f}-\hat{x}_{t}\|\leq\eta\lambda^{t}\). For LTI plants where a linear Kalman filter is used for state estimation and the LQR controller is used to control the plant, the closed-loop system satisfies \[x_{t+1} =Ax_{t}+Bu_{t}+w_{t},\quad y_{t}=Cx_{t}+v_{t},\] \[\hat{x}_{t} =A\hat{x}_{t-1}+Bu_{t-1}+L(y_{t}^{c}-C(A\hat{x}_{t-1}+Bu_{t-1})),\] \[u_{t} =K\hat{x}_{t}; \tag{67}\] here, \(K\) and \(L\) are obtained by solving algebraic Riccati equations. When the system is observable and controllable, the obtained \(K\) and \(L\) stabilize the closed-loop system [32]. **Corollary 2**: _Consider the closed-loop system (67) with pairs \((A,B)\) and \((A,C)\) controllable and observable, respectively. If the matrix \(A\) is unstable, then there exists a sequence of \((\epsilon,\alpha)\)-attacks such that \(\|\hat{x}_{t}^{f}-\hat{x}_{t}\|\leq\eta\lambda^{t}\) for some positive \(\lambda<1\) and some \(\eta>0\)._ Since the system is controllable and observable, \(K\) and \(L\) in the LQG controller stabilize the closed-loop system (67); thus, from Lemma 6 the closed-loop system is IES. The rest of the proof follows the proof of Theorem 4. Hence, for LTI systems with LQG controllers, the attack generated by (59) causes the state estimation during the attack to converge exponentially fast to the attack-free state estimates.
This result is important as it shows that the changes in the state estimates due to the attack are small, and the defender cannot detect the presence of the attack by observing the state estimates. ## VII Simulation Results We illustrate and evaluate our methodology for vulnerability analysis of nonlinear control systems on two case studies, cart-pole and unmanned aerial vehicles (UAVs). ### _Cart-pole_ We use the dynamics of the cart-pole system [33] \[\ddot{\theta} =\frac{g\sin\theta+\cos\theta\big{(}\frac{-F-M_{p}l\dot{\theta}^{2}\sin \theta}{M_{c}+M_{p}}\big{)}}{l\big{(}\frac{4}{3}-\frac{M_{p}\cos^{2}\theta}{M_{c}+M _{p}}\big{)}}, \tag{68}\] \[\ddot{x} =\frac{F+M_{p}l(\dot{\theta}^{2}\sin\theta-\ddot{\theta}\cos \theta)}{M_{c}+M_{p}};\] here, \(\theta\) is the pendulum angle from vertical, \(x\) is the cart position along the track, and \(F\) is the control torque applied to the pivot, \(g=9.81\,m/s^{2}\) is the acceleration due to gravity, \(M_{c}=1\)_Kg_ is the mass of the cart and \(M_{p}=0.1\)_Kg_ is the mass of the pole. The length of the pole is \(2l=1\)_m_. We assume that only \(x\) and \(\theta\) are directly measured and the system is equipped with an EKF to estimate the system states. The system and measurement noise covariances are \(\mathbf{R}_{w}=\sigma^{2}I\) and \(\mathbf{R}_{v}=\sigma^{2}I\), where \(I\) is the identity matrix of suitable dimension. Moreover, a full-state feedback controller that uses the states estimated by the EKF was used to keep the pendulum inverted around the \(\theta=0\) and \(x=0\) equilibrium point. To detect the presence of attacks, we consider two standard widely-used IDs - i.e., \(\chi^{2}\) and cumulative sum (CUSUM) detectors [34]. For each ID, the residue is obtained by comparing the current received sensor measurement and the expected measurement. We set the thresholds for both of these detectors to \(p^{FA}=0.002\). Fig. 3 shows the ineffectiveness of LTI attacks on the cart-pole system with the actual nonlinear dynamics (68) and \(\sigma^{2}=0.001\). The LTI attack was designed according to (59), where \(A\) was obtained by linearizing the nonlinear dynamics (68) around the equilibrium point at zero. The attack starts at \(t=0\) with initial condition \(\|s_{0}\|=10^{-7}\). Fig. 3(a) shows the evolution of the rod's angle over time, while Fig. 3(b) shows the average alarm rate for the CUSUM detector under the LTI-based attack. The zoomed area highlights that when the rod's angle starts deviating from zero, the alarm rate of the ID increases over time. Specifically, for \(0.4\,rad=22.9^{\circ}\), \(p^{TD}\) is almost ten times larger than \(p^{FA}\). This demonstrates the limitation of the LTI-based attacks, as the LTI approximation of a nonlinear model is only valid in the region around the equilibrium point, and as the system moves away from the equilibrium point, the approximation error significantly increases. For the attack model I, we used the attack sequence (20). Fig. (a) shows \(\epsilon\) versus different norm values of the initial condition \(s_{0}\) for fixed \(\sigma^{2}=0.001\). By increasing the norm of the initial condition, \(\epsilon\) also increases, making the attack likelier to be detected. However, choosing the initial condition close to zero (i.e., with a very small norm) has a side effect that it takes more time until the attack becomes effective, as illustrated in Fig. (b). Hence, there is a trade-off between attack stealthiness and the time before it becomes effective. The impact of noise variance on attack stealthiness is also considered in Fig.
(c), where for a fixed \(\|s_{0}\|=10^{-8}\), we show that increasing the noise power \(\sigma^{2}\) decreases the value of \(\epsilon\). In Fig. (d) and (e), we considered the impact of the attacker's state estimation error \(b_{\zeta}\) in attack model II on the detection rate of the CUSUM and \(\chi^{2}\) detectors when \(\|s_{0}\|=10^{-8}\). Having an inaccurate estimate with \(b_{\zeta}=0.5\) can result in attack detection, while a more accurate estimate (\(b_{\zeta}=0.05\)) improves attack stealthiness. Fig. (f) shows the evolution of the rod's angle over time for different values of \(b_{\zeta}\) - after initiating the attack at \(t=0\), the rod angle increases over time in both cases. ### _Unmanned Aerial Vehicles_ We also considered a quadrotor with a complex, highly nonlinear model from [35] that has 12 states \(\big{[}x,\,y,\,z,\,\dot{x},\,\dot{y},\,\dot{z},\,\phi,\,\theta,\,\psi,\,\dot{ \phi},\,\dot{\theta},\,\dot{\psi}\big{]}^{T}\); \(x\), \(y\) and \(z\) represent the quadrotor position along the \(X\), \(Y\) and \(Z\) axes, respectively, while \(\dot{x}\), \(\dot{y}\) and \(\dot{z}\) are their velocities. \(\phi\), \(\theta\) and \(\psi\) are the pitch, roll and yaw angles, respectively, and \(\dot{\phi}\), \(\dot{\theta}\) and \(\dot{\psi}\) represent their corresponding angular velocities. The system was discretized using the Euler method with \(T_{s}=0.01\,s\). The states \(\big{[}x,\,y,\,z,\,\phi,\,\theta,\,\psi,\,\dot{\phi},\,\dot{\theta},\,\dot{\psi} \big{]}^{T}\) were measured with zero-mean Gaussian noise with the covariance matrix \(\mathbf{R}_{v}=0.001I\). We assumed a standard disturbance on the input, modeled as zero-mean Gaussian system noise with the covariance matrix \(\mathbf{R}_{w}=0.001I\). The system employs an EKF to estimate the states and a PID controller to control the drone. To detect the presence of any abnormal behaviour, CUSUM and \(\chi^{2}\) IDs were deployed with the threshold fixed to \(p^{FA}=0.002\). We considered the position control task [35], where the drone takes off from \((0,0,0)\) in global coordinates, reaches a predefined point \((0,0,10\,m)\) in space, and stays there (see the black line trajectory in Fig. 6). Once the drone reaches that point, the attack starts (it should be noted that there is no limitation for the attacker on the starting time of the attack, and our assumption here is just to illustrate the results better). For attack model I with the attack sequence generated as in (20), Fig. (a) shows the impact of \(\|s_{0}\|\) on attack stealthiness. The results again show that a larger norm of the initial condition \(s_{0}\) results in an increased attack detection rate. For attack model II, where the attacker generates the attack sequence by (38) using Case 1 and \(\|s_{0}\|=10^{-5}\) from \(t=0\), Fig. (a) shows the alarm rate of the CUSUM and \(\chi^{2}\) IDs for 50 seconds.

Fig. 3: (a) The angle of the pendulum rod over time while the LTI-based attack starts at time zero. (b) The alarm rate for the LTI-based attack on the cart-pole system over 5000 experiments.

We also showed that by fixing \(\|s_{0}\|=10^{-5}\) and changing the nonzero elements of \(s_{0}\), the attacker can control the direction of the drone's deviation (see Fig. 6). Assuming that the attack starts at the red square, Fig. (a) shows that placing a positive nonzero value on the element of \(s_{0}\) associated with the pitch angle (\(\phi\)) causes the drone to deviate \(20\,m\) along the negative \(Y\)-axis within \(50\,s\).
Similarly, a negative \(s_{0}\) results in the drone's deviation in the positive direction of the \(Y\)-axis. It also shows that placing the nonzero element on the roll angle causes a deviation along the \(X\)-axis. However, if the attacker uses a combination of both, the drone deviates along both the \(X\)- and \(Y\)-axes (Fig. (b)). ## VIII Conclusion In this paper, we focused on vulnerability analysis for nonlinear control systems with Gaussian noise, when the attacker can compromise sensor measurements from any subset of sensors. Notions of strict stealthiness and \(\epsilon\)-stealthiness were defined, and we showed that these notions are independent of the deployed intrusion detector. Using the KL-divergence, we presented conditions for the existence of stealthy yet effective attacks. Specifically, we defined the \((\epsilon,\alpha)\)-successful attacks, where the goal of the attacker is to be \(\epsilon\)-stealthy while causing a deviation in the trajectory of the states with respect to the system's own desired unattacked trajectory, determined by the parameter \(\alpha\). Depending on the level of the attacker's knowledge about the plant model, controller, and system states, two different attack models were considered, and for each attack model we derived a condition under which there exists a sequence of such \((\epsilon,\alpha)\)-successful false-data injection attacks. We also provided the results for LTI systems, showing that they are compatible with the existing results for LTI systems and \(\chi^{2}\)-based detectors. Finally, we considered the impact of stealthy attacks on state estimation, and showed that if the closed-loop control system including the estimator is incrementally stable, then the state estimation in the presence of the attack converges to the attack-free estimates.
2310.12374
Free weakly Novikov metabelian algebra of infinite rank
An explicit base with multiplication table is obtained for the free weakly Novikov metabelian algebra of infinite rank over an arbitrary field of characteristic $\neq 2$.
Iritan Ferreira dos Santos, Alexey M. Kuz'min
2023-10-18T23:00:46Z
http://arxiv.org/abs/2310.12374v1
# Free weakly Novikov metabelian algebra of infinite rank ###### Abstract. An explicit base with multiplication table is obtained for the free weakly Novikov metabelian algebra of infinite rank over an arbitrary field of characteristic \(\neq 2\). Keywords: Right symmetric algebra, metabelian algebra, Novikov algebra, free algebra of variety. _MSC 2020:_ 17A30, 17A50, 17D25, 17D99. ## 1. Introduction Weakly Novikov rings appeared in papers by E. Kleinfeld and H. F. Smith [2, 3] as the rings satisfying \[(x,y,z)=(x,z,y)\quad\text{(the right symmetry identity)}, \tag{1}\] \[x(y,z,t)=(y,z,xt)\quad\text{(the weakly Novikov identity)}, \tag{2}\] where \((x,y,z)=(xy)z-x(yz)\) stands, as usual, for the _associator_ in variables \(x,y,z\). Note that (2) generalizes \[x(yz)=y(xz)\quad\text{(the left commutativity identity)}. \tag{3}\] For more results on nonmultilinear generalizations of identity (2), see recent papers by Samanta and Hentzel [12, 13]. Let \(F\) be a field of characteristic distinct from \(2\), \(\mathsf{WN}\) be the variety of algebras over \(F\) defined by (1) and (2), and \(\mathsf{N}\) be the Novikov subvariety of \(\mathsf{WN}\), i. e. the subvariety distinguished by (3). Following [5], we set \(\mathcal{V}^{(2)}\) to be the metabelian subvariety of a given variety \(\mathcal{V}\), i. e. the subvariety distinguished by \[(xy)(zt)=0\quad\text{(the metabelian identity)}. \tag{4}\] It's known from the paper by Shestakov and Zhang [14] that \(\mathsf{N}^{(2)}\) is left nilpotent of index not more than \(9\). In the present paper, we obtain an explicit base with multiplication table for the free algebra of \(\mathsf{WN}^{(2)}\) on the countable set of generators. As a corollary, we prove that \(\mathsf{WN}^{(2)}\) and \(\mathsf{N}^{(2)}\) are both left nilpotent of exact index \(5\). We stress that questions on finding certain effective bases for free algebras in given varieties are strongly motivated by a number of open problems dealing with varieties of all alternative, Jordan, and Maltsev algebras (see, for ex. [15, 16, 17, 18]). ## 2. Preliminary results We set, as usual, \([a,b]=ab-ba\), \(a\circ b=ab+ba\), and \(ab=aR_{b}=bL_{a}\).

To complete the proof it remains to check that identity (2) holds in \((\mathfrak{A},\cdot)\). Indeed, on generators of \(\mathfrak{A}\), we have \[x_{q}\cdot\left(x_{j},x_{i},x_{k}\right)=x_{q}\cdot\left(x_{i}L_{x_{j}}R_{x_{k}} -x_{k}L_{x_{i}}L_{x_{j}}\right)=-x_{k}L_{x_{q}}L_{x_{i}}L_{x_{j}}=\left(x_{j},x _{i},x_{q}\cdot x_{k}\right);\] otherwise, (2) is an immediate consequence of relations (6)-(8). **Corollary 1.1**.: _Every Lie-nilpotent or Jordan-nilpotent subvariety of index \(n\) in \(\mathfrak{M}^{(2)}\) is nilpotent of index not more than \(2n+1\)._ Proof.: Let us prove that, for \(\mathfrak{B}\in\mathfrak{M}^{(2)}\), \(H_{x}=R_{x}-L_{x}\), and \(\Theta_{x}=R_{x}+L_{x}\), each of the relations \[\mathfrak{B}^{2}H_{x_{1}}\ldots H_{x_{n-1}}=0,\quad\mathfrak{B}^{2}\Theta_{x _{1}}\ldots\Theta_{x_{n-1}}=0\] yields \(\mathfrak{B}^{2n+1}=0\). In the case \(n=1\), the affirmation is trivial in view of (6).
Further, we set \[R^{n}=R_{x_{1}}\ldots R_{x_{n}},\quad L^{n}=L_{x_{1}}\ldots L_{x_{n}},\quad H^ {n}=H_{x_{1}}\ldots H_{x_{n}},\quad n\geqslant 2.\] By virtue of (6) and (7), the relations \[\mathfrak{B}^{2}H^{n}L_{y}=0,\quad\mathfrak{B}^{3}R_{y}H^{n}=0\] yield, respectively, \[\mathfrak{B}^{2}L^{n+1}=0,\quad\mathfrak{B}^{3}R^{n+1}=0.\] Therefore, by Theorem 1, \[\mathfrak{B}^{3}T_{x_{1}}\ldots T_{x_{2n}}=0,\] for all \(T_{x_{i}}\in\{R_{x_{i}},L_{x_{i}}\}\). **Corollary 1.2**.: _The flexible and the antiflexible subvarieties of \(\mathfrak{M}^{(2)}\) are both nilpotent of index not more than \(5\). Moreover, for \(\mathfrak{B}\in\mathfrak{M}^{(2)}\) and \(n\geqslant 2\), the identity_ \[(\mathfrak{B}^{n},x,y)=\pm\left(y,x,\mathfrak{B}^{n}\right) \tag{10}\] _implies \(\mathfrak{B}^{n+3}=0\)._ Proof.: By virtue of (4), identity (10) yields \[\mathfrak{B}^{n}R_{x}R_{y}=\mp\mathfrak{B}^{n}L_{x}L_{y}.\] Thus, in view of (6), we have \[\mathfrak{B}^{n}L_{x}L_{y}R_{z}=\mp\mathfrak{B}^{n}R_{x}R_{y}R_{z}=\mathfrak{ B}^{n}R_{x}L_{y}L_{z}=0.\] Similarly, by (7), we obtain \[\mathfrak{B}^{n}L_{x}R_{y}R_{z}=\mp\mathfrak{B}^{n}L_{x}L_{y}L_{z}=\mathfrak{ B}^{n}R_{x}R_{y}L_{z}=0.\] ## 3. Main Theorem ### Auxiliary identities Let \(\mathfrak{A}=F_{\mathsf{WN}^{(2)}}\left\langle X\right\rangle\) be the free algebra of the variety \(\mathsf{WN}^{(2)}\) on the countable set \(X=\{x_{1},x_{2},\dots\}\) of generators; the symbol \(T_{x}\in\{R_{x},L_{x}\}\) is used as a common denotation for the operators \(R_{x}\) and \(L_{x}\) of right and left multiplication by \(x\in X\), respectively. **Lemma 3.1**.: _The free algebra \(\mathfrak{A}\) satisfies the identities_ \[x\cdot(y,z,t)=(y,xz,t), \tag{11}\] \[(x,y,zt)=(x,zy,t), \tag{12}\] \[x\cdot y(zt)=-(x,zy,t), \tag{13}\] \[x(yz)\cdot t=\big{(}x,[y,z],t\big{)}+(y,xz,t), \tag{14}\] \[(x,y,z)\cdot t=h(x,y,z,t)+\big{(}x,y\circ z,t\big{)}, \tag{15}\] _where_ \[h(x,y,z,t)\stackrel{{\rm def}}{{=}}(xy,z,t)-(y,xz,t)-2(x,yz,t).\] Proof.: By virtue of (2) and (1), we have \[x\cdot(y,z,t)=x\cdot(y,t,z)=(y,t,xz)=(y,xz,t).\] Thus, (11) is proved. Further, combining (2) and (11), we get (12): \[(x,y,zt)=z\cdot(x,y,t)=(x,zy,t).\] Furthermore, by (4) and (12), we obtain (13): \[x\cdot y(zt)=-(x,y,zt)=-(x,zy,t).\] Now, recall that every nonassociative ring satisfies the Teichmüller identity \[x\cdot(y,z,t)+(x,y,z)\cdot t=(xy,z,t)-(x,yz,t)+(x,y,zt).\] Hence, using (11) and (12), we have \[(x,y,z)\cdot t=(xy,z,t)-(y,xz,t)-\big{(}x,[y,z]\,,t\big{)}.\] Consequently, to prove (14) it remains to observe that \[(x,y,z)\cdot t-(xy,z,t)=-x(yz)\cdot t,\] in view of (4). Moreover, taking into account the denotations introduced above, \[(x,y,z)\cdot t=(xy,z,t)-(y,xz,t)+\big{(}x,[z,y]\,,t\big{)}=\\ =(xy,z,t)-(y,xz,t)-2(x,yz,t)+\big{(}x,y\circ z,t\big{)}=\\ =h(x,y,z,t)+\big{(}x,y\circ z,t\big{)},\] we complete the proof of (15). **Lemma 3.2**.: _A monomial \(\mu\in\mathfrak{A}^{5}\) can be nonzero only if it has the form \(\mu=wR_{x}R_{y}R_{z}\)._ Proof.: Let us consider consecutively all possible cases for \(\mu\neq wR_{x}R_{y}R_{z}\). First, by (2) and (4), we have \[wR_{x}R_{y}L_{z}=z(w,x,y)-(w,x,zy)=0, \tag{16}\] \[wR_{x}L_{y}L_{z}=w(z,y,x)-(z,y,wx)=0. \tag{17}\] Similarly, by (4) and (11), we get \[wL_{x}L_{y}L_{z}=(y,zx,w)-z(y,x,w)=0. \tag{18}\] Further, combining (2), (4), (11) with (16) and (17), we proceed \[wR_{x}L_{y}R_{z}=wR_{x}R_{z}L_{y}+(y,wx,z)-w(y,x,z)=0, \tag{19}\] \[wL_{x}R_{y}L_{z}=wR_{y}L_{x}L_{z}+z(x,w,y)-(x,w,zy)=0.
\tag{20}\] Now, by (11), (17), and (20), we obtain \[wL_{x}L_{y}R_{z}=wL_{x}R_{z}L_{y}+w[L_{y},R_{z}]L_{x}+(y,xw,z)-x(y,w,z)=0. \tag{21}\] Finally, using (19), (21), and taking into account (1), we prove \[wL_{x}R_{y}R_{z}=wR_{y}L_{x}R_{z}-wL_{y}L_{x}R_{z}+(x,w,y)z-(x,y,w)z=0. \tag{22}\] **Lemma 3.3**.: _The T-ideal \(\left\langle(a,bc,d)\right\rangle^{\mathrm{T}}\) is spanned by the associators \((x_{i},x_{j}x_{k},x_{\ell})\) taken only on generators \(x_{i},x_{j},x_{k},x_{\ell}\in X\) of \(\mathfrak{A}\)._ Proof.: It's enough to notice that, by Lemma 3.2, the associators \((x_{i},x_{j}x_{k},x_{\ell})\) lie in the annihilator of \(\mathfrak{A}\) and all associators of the form \[(w,x_{j}x_{k},x_{\ell}),\quad(x_{i},wx_{k},x_{\ell}),\quad(x_{i},x_{j}w,x_{ \ell}),\quad(x_{i},x_{j}x_{k},w),\] for \(w\in\mathfrak{A}^{2}\), are null ones. ### Pre-base Recall that a function \(f(x,y)\) is said to be symmetric w.r.t. \(x,y\) if \[f(x,y)=f(y,x).\] **Definition**.: The _base elements of \(\mathfrak{A}^{3}\)_ are all elements _of types_ (i)-(v) defined by the list below, for \(x,y,z,t_{1},\ldots,t_{k}\in X\): * (i) \(x(yz)\), * (ii) \((x,t_{1},t_{2})\), * (iii) \((x,yt_{1},t_{2})\), * (iv) \(h(x,t_{1},t_{2},t_{3})\), * (v) \((xt_{1})\,R_{t_{2}}\ldots R_{t_{k}}\), where the indices \(t_{i}\) are used each time when the corresponding element possesses, by definition, the symmetry property w.r.t. all the subset \(\{t_{i}\}\) of its variables. **Lemma 3.4**.: _The ideal \(\mathfrak{A}^{3}\) is spanned by its base elements._ Proof.: First, taking into account the defining identity (1) for \(\mathsf{WN}^{(2)}\), it's not hard to see that the subspace of polynomials of degree \(3\) in \(\mathfrak{A}\) is spanned by the elements of types (i) and (ii), in view of the trivial equality \[xy\cdot z=(x,y,z)+x(yz).\] Further, identities (11), (13)-(15) yield that the subspace of polynomials of degree \(4\) in \(\mathfrak{A}\) is spanned by the elements of types (iii) and (iv). Note that the symmetry property established for the elements of type (iii) is a direct consequence of identity (11). At the same time, in the case of elements of type (iv), the required symmetry property can be obtained as follows. On one hand, the polynomial \(h(x,y,z,t)\) is defined as a symmetric element in \(z,t\). On the other hand, identity (15) implies the symmetry of \(h(x,y,z,t)\) with respect to \(y,z\). Furthermore, by Lemma 3.2, the subspace of polynomials of degree \(\geqslant 5\) in \(\mathfrak{A}\) is spanned by \(R\)-words. Moreover, combining the defining identities (1) and (4) of \(\mathsf{WN}^{(2)}\), we have \[wR_{y}R_{z}=wR_{z}R_{y},\quad w\in\mathfrak{A}^{2}.\] Finally, Lemma 3.3 implies \[h(x,t_{1},t_{2},t_{3})\cdot y=(xt_{1})\,R_{t_{2}}R_{t_{3}}R_{y}.\] This proves the required symmetry property of the elements of type (v).
### Base **Theorem 2**.: _The free algebra \(\mathfrak{A}\) possesses a base formed by all elements from \(X\), \(X^{2}\), and all base elements of \(\mathfrak{A}^{3}\) equipped with the multiplication \(\cdot\) introduced by the following rules:_ * _Every nonzero left action of a generator_ \(x\in X\) _on a base element is defined by one of the following formulas:_ \[x\cdot y =xy,\] \[x\cdot yz =x(yz),\] \[x\cdot y(zt) =-(x,zy,t),\] \[x\cdot(y,t_{1},t_{2}) =(y,xt_{1},t_{2})\,,\] _where_ \(y,z,t,t_{1},t_{2}\in X\)_._ * _Every nonzero right action of a generator_ \(y\in X\) _on a base element of degree_ \(\geqslant 2\) _is defined by one of the following formulas:_ \[xz\cdot y =(x,z,y)+x(zy),\] \[x(zt)\cdot y =\big{(}x,[z,t],y\big{)}+(z,xt,y),\] \[h(x,t_{1},t_{2},t_{3})\cdot y =(xt_{1})\,R_{t_{2}}R_{t_{3}}R_{y},\] \[(xt_{1})\,R_{t_{2}}\ldots R_{t_{k}}\cdot y =(xt_{1})\,R_{t_{2}}\ldots R_{t_{k}}R_{y},\quad k\geqslant 4,\] _where_ \(x,z,t,t_{1},\ldots,t_{k}\in X\)_._ * _All the other products of base elements not listed in the formulas above are null ones by definition._ Proof.: First we stress that the multiplication \(\cdot\) is well-defined. Indeed, by direct verification, one can confirm that all symmetry properties required for the elements of types (ii)-(v) are inherited by the corresponding right parts in the formulas of multiplication. Let \(\mathcal{B}\) be the span of all introduced base elements over the field \(F\). Consider \(\mathcal{B}\) as an algebra with respect to the multiplication \(\cdot\) defined by the rules of Theorem 2. Then, Lemma 3.4 yields that \(\mathfrak{A}\) is isomorphic to some quotient of \(\mathcal{B}\). Let us prove that actually \(\mathfrak{A}\) is isomorphic to \(\mathcal{B}\) under the isomorphism induced by the identical mapping \(X\mapsto X\). Our proof is by formal checking of the defining identities (1), (2) and (4) of the variety \(\mathsf{WN}^{(2)}\) for the algebra \(\mathcal{B}\). First, the metabelian identity (4) holds in \(\mathcal{B}\) immediately, by definition of the multiplication \(\cdot\). Further, let us set \[\big{(}a,b,c\big{)}_{\mathcal{B}}\stackrel{{\mathrm{def}}}{{=}}(a \cdot b)\cdot c-a\cdot(b\cdot c).\] In what follows, we assume that \(x,y,z,t,u\) are variables from \(X\) and \(w\) is a common denotation for base elements of \(\mathcal{B}\) such that neither \(w\) nor \(X\cdot w\), \(w\cdot X\) lie in the annihilator \[\mathrm{Annh}(\mathcal{B})=F\cdot(X,X^{2},X)\] of \(\mathcal{B}\), i. e. \[w\in\big{\{}x,\,xy,\,(x,y,z),\,h(x,y,z,t),\,(xy)R_{z}R_{t}\ldots R_{u}\big{\}}.\] Let us check the right symmetry of \(\mathcal{B}\): \[\big{(}a,b,c\big{)}_{\mathcal{B}}=\big{(}a,c,b\big{)}_{\mathcal{B}}.\] First, we compute all nonzero associators of type \(\big{(}w,b,c\big{)}_{\mathcal{B}}\), for \(b,c\in X\), and verify that the obtained elements are symmetric in \(b,c\).
Indeed, \[\big{(}x,b,c\big{)}_{\mathcal{B}} =xb\cdot c-x\cdot bc=(x,b,c)+x(bc)-x(bc)=(x,b,c),\] \[\big{(}xy,b,c\big{)}_{\mathcal{B}} =\big{(}xy\cdot b\big{)}\cdot c=\big{(}(x,y,b)+x(yb)\big{)}\cdot c=\] \[=h(x,y,b,c)+\big{(}x,y\circ b,c\big{)}+\big{(}x,[y,b],c\big{)}+(y,xb,c)=\] \[=h(x,y,b,c)+2\big{(}x,yb,c\big{)}+(y,xb,c),\] \[\big{(}(x,y,z),b,c\big{)}_{\mathcal{B}} =\big{(}(x,y,z)\cdot b\big{)}\cdot c=\] \[=\Big{(}h(x,y,z,b)+\big{(}x,y\circ z,b\big{)}\Big{)}\cdot c=(xy) R_{z}R_{b}R_{c},\] \[\big{(}h(x,y,z,t),b,c\big{)}_{\mathcal{B}} =\big{(}h(x,y,z,t)\cdot b\big{)}\cdot c=(xy)R_{z}R_{t}R_{b}R_{c},\] \[\big{(}(xy)R_{z}R_{t}\ldots R_{u},b,c\big{)}_{\mathcal{B}} =\big{(}(xy)R_{z}R_{t}\ldots R_{u}\cdot b\big{)}\cdot c=(xy)R_{z} R_{t}\ldots R_{u}R_{b}R_{c}.\] Further, we compare the values of associators in pairs of types \[\big{(}a,b,w\big{)}_{\mathcal{B}},\quad\big{(}a,w,b\big{)}_{\mathcal{B}}, \quad a,b\in X,\quad w\notin X.\] In the case \(w\in X^{2}\), we have \[\big{(}a,b,xy\big{)}_{\mathcal{B}} =-a\cdot(b\cdot xy)=-a\cdot b(xy)=(a,xb,y)=(a,xy,b),\] \[\big{(}a,xy,b\big{)}_{\mathcal{B}} =(a\cdot xy)\cdot b-a\cdot(xy\cdot b)=a(xy)\cdot b-a\cdot\big{(}( x,y,b)+x(yb)\big{)}=\] \[=\big{(}a,[x,y],b\big{)}+(x,ay,b)-(x,ay,b)+(a,yx,b)=(a,xy,b).\] Furthermore, let \(\mathcal{B}^{n}\) be the ideal of \(\mathcal{B}\) spanned by all base elements of degree \(\geqslant n\). We observe that all terms of the associators of the remaining types are null, by virtue of the relations \[X\cdot\mathcal{B}^{4}=0,\quad X\cdot(X^{2}X)\subseteq\operatorname{Annh}( \mathcal{B}),\quad X\cdot(X,X,X)\subseteq\operatorname{Annh}(\mathcal{B}).\] Therefore, \(\mathcal{B}\) is right symmetric. Finally, to complete the proof, we verify that \(\mathcal{B}\) is weakly Novikov. Indeed, in view of the relations above, all terms of the identity \[a\cdot\big{(}b,c,d\big{)}_{\mathcal{B}}=\big{(}b,c,a\cdot d\big{)}_{\mathcal{B}}\] are null whenever at least one of its variables \(a,b,c,d\) takes a value that doesn't lie in \(X\). Otherwise, we have \[x\cdot\big{(}y,z,t\big{)}_{\mathcal{B}} =x\cdot(y,z,t)=(y,xz,t),\] \[\big{(}y,z,xt\big{)}_{\mathcal{B}} =-y\cdot(z\cdot xt)=-y\cdot z(xt)=(y,xz,t).\] ## 4. Corollaries We stress that combining Lemma 3.2 and the symmetry property established above for \(R\)-words, \[wR_{y}R_{z}=wR_{z}R_{y},\quad w\in\mathfrak{A}^{2},\] with Medvedev's two-term identity theorem [7], one can prove that the variety \(\mathsf{WN}^{(2)}\) is Spechtian (see, also, [6, 9, 10, 19]). In this section, we describe defining identities for nilpotent and non-nilpotent subvarieties of \(\mathsf{WN}^{(2)}\). ### Nilpotent subvarieties **Lemma 4.1**.: _Every proper subvariety of \(\mathsf{WN}^{(2)}\) distinguished by some multilinear identity of degree \(n\geqslant 5\) is nilpotent of index not more than \(n+1\)._ Proof.: Consider a multilinear polynomial \(f=f(x_{1},\ldots,x_{n})\in\mathfrak{A}\) on \(n\geqslant 5\) variables. By Theorem 2, \(f\) can be written down as \[f=\sum_{i=1}^{n}\lambda_{i}\left(x_{i}x_{\sigma_{i}(2)}\right)R_{x_{\sigma_{i} (3)}}\ldots R_{x_{\sigma_{i}(n)}},\] where all \(\lambda_{i}\in F\) and every \(\sigma_{i}\) is a permutation on the set \(\{1,2,\ldots,n\}\) defined by the rule \[\sigma_{i}(1)=i,\quad\sigma_{i}(2)<\sigma_{i}(3)<\cdots<\sigma_{i}(n).
\tag{23}\] In other words, rule (23) states that \(\sigma_{1}=\operatorname{id}\) and \[\sigma_{i}=(1\,2\ldots i)^{-1},\text{ for }i=2,3,\ldots,n.\] Suppose that \(\mathcal{V}_{f}\) is a proper subvariety of \(\mathsf{WN}^{(2)}\) distinguished by the identity \(f\). Then, with no loss of generality, one may assume \(\lambda_{1}=1\) and set \(x_{1}=w\in X^{2}\). Hence, by Lemma 3.2, \[f(w,x_{2},\ldots,x_{n})=wR_{x_{2}}\ldots R_{x_{n}},\] so the identity \(f\) forces \(wR_{x_{2}}\ldots R_{x_{n}}=0\) in \(\mathcal{V}_{f}\), which, in view of Theorem 2, yields the required nilpotency. **Remark**.: In particular, Lemma 4.1 states that the variety \(\mathsf{WN}^{(2)}\) has the topological rank \(r_{t}\big{(}\mathsf{WN}^{(2)}\big{)}=2\). **Lemma 4.2**.: _Every proper subvariety of \(\mathsf{WN}^{(2)}\) distinguished by some multilinear identity of degree \(2\) is nilpotent of index not more than \(5\)._ Proof.: Indeed, for \[f(x_{1},x_{2})=x_{1}x_{2}+\lambda\,x_{2}x_{1},\quad\lambda\in F,\] in view of (17), we have \[f\big{(}(x_{1}x_{3})R_{x_{4}}R_{x_{5}},x_{2}\big{)}=(x_{1}x_{3})R_{x_{4}}R_{x_ {5}}R_{x_{2}}.\] Hence, Theorem 2 yields that a subvariety of \(\mathsf{WN}^{(2)}\) distinguished by \(f\) should be nilpotent of index not more than \(5\). ### Non-nilpotent subvarieties **Lemma 4.3**.: _If \(\mathcal{V}_{f}\) is a proper non-nilpotent subvariety of \(\mathsf{WN}^{(2)}\) distinguished by some multilinear identity \(f=f(x_{1},x_{2},x_{3},x_{4})\) of degree \(4\), then \(f\) should have the form_ \[f=\sum_{\delta\in\mathrm{A}_{4}}\lambda_{\delta}\left(x_{\delta(1)},x_{\delta (2)}x_{\delta(3)},x_{\delta(4)}\right),\] _where all \(\lambda_{\delta}\in F\) and \(\mathrm{A}_{4}\) is the alternating group on the set \(\{1,2,3,4\}\)._ Proof.: By Theorem 2, \(f\) can be written down as \[f=\sum_{\delta\in\mathrm{A}_{4}}\lambda_{\delta}\left(x_{\delta(1)},x_{\delta (2)}x_{\delta(3)},x_{\delta(4)}\right)+\sum_{i=1}^{4}\mu_{i}\,h(x_{i},x_{ \sigma_{i}(2)},x_{\sigma_{i}(3)},x_{\sigma_{i}(4)}),\quad\lambda_{\delta},\mu_ {i}\in F,\] where \(\sigma_{i}\) is the permutation defined by rule (23), for \(n=4\). Suppose that at least one of the scalars \(\mu_{i}\) is nonzero. Then, with no loss of generality, it's enough to fix \(\mu_{1}=1\) and set \(x_{1}=w\in X^{2}\). Hence, by Lemma 3.2, we get \[f(w,x_{2},x_{3},x_{4})=wR_{x_{2}}R_{x_{3}}R_{x_{4}}.\] However, in view of Theorem 2, this contradicts the assumption of non-nilpotency for \(\mathcal{V}_{f}\). The obtained contradiction completes the proof. **Lemma 4.4**.: _If \(\mathcal{V}_{f}\) is a proper non-nilpotent subvariety of \(\mathsf{WN}^{(2)}\) distinguished by some multilinear identity \(f=f(x_{1},x_{2},x_{3})\) of degree \(3\), then \(f\) should have the form_ \[f=\sum_{\sigma\in\mathrm{S}_{3}}\lambda_{\sigma}\,x_{\sigma(1)}\left(x_{\sigma (2)}x_{\sigma(3)}\right),\quad\lambda_{\sigma}\in F,\] _where \(\mathrm{S}_{3}\) is the symmetric group on the set \(\{1,2,3\}\)._ Proof.: Applying, as above, Theorem 2, we write down \(f\) as \[f=\sum_{\sigma\in\mathrm{S}_{3}}\lambda_{\sigma}\,x_{\sigma(1)}\left(x_{\sigma (2)}x_{\sigma(3)}\right)+\sum_{\delta\in\mathrm{C}_{3}}\mu_{\delta}\left(x_{ \delta(1)},x_{\delta(2)},x_{\delta(3)}\right),\quad\lambda_{\sigma},\mu_{ \delta}\in F,\] where \(\mathrm{C}_{3}\) is the cyclic group on the set \(\{1,2,3\}\). If at least one of the scalars \(\mu_{\delta}\) is nonzero, then, similarly to the above cases, we restrict ourselves to the assumption \(\mu_{\mathrm{id}}=1\).
Then, applying Lemma 3.2, we obtain \[f\big{(}(x_{1}x_{4})x_{5},x_{2},x_{3}\big{)}=(x_{1}x_{4})R_{x_{5}}R_{x_{2}}R_{x_ {3}}.\] Again, by virtue of Theorem 2, we obtain a contradiction with the hypothesis of non-nilpotency of \(\mathcal{V}_{f}\). ### Acknowledgments The results of this paper were first presented at the scientific seminar of the research group "Algebra and the Universal Algebraic Geometry" of the Federal University of the Rio Grande do Norte, Brazil. The authors are very thankful to all participants of the seminar, especially to Nir Cohen, Arkady Tsurkov, Elena V. Aladova, Alexander S. Sivatski, Alan de Araujo Guimaraes, Ana Beatriz Gomez da Silva, and Jose Victor Gomes Teixeira for the creative working atmosphere and the constantly useful discussions.
2301.04257
ODIM: Outlier Detection via Likelihood of Under-Fitted Generative Models
The unsupervised outlier detection (UOD) problem refers to a task to identify inliers given training data which contain outliers as well as inliers, without any labeled information about inliers and outliers. It has been widely recognized that using fully-trained likelihood-based deep generative models (DGMs) often results in poor performance in distinguishing inliers from outliers. In this study, we claim that the likelihood itself could serve as powerful evidence for identifying inliers in UOD tasks, provided that DGMs are carefully under-fitted. Our approach begins with a novel observation called the inlier-memorization (IM) effect-when training a deep generative model with data including outliers, the model initially memorizes inliers before outliers. Based on this finding, we develop a new method called the outlier detection via the IM effect (ODIM). Remarkably, the ODIM requires only a few updates, making it computationally efficient-at least tens of times faster than other deep-learning-based algorithms. Also, the ODIM filters out outliers excellently, regardless of the data type, including tabular, image, and text data. To validate the superiority and efficiency of our method, we provide extensive empirical analyses on close to 60 datasets.
Dongha Kim, Jaesung Hwang, Jongjin Lee, Kunwoong Kim, Yongdai Kim
2023-01-11T01:02:27Z
http://arxiv.org/abs/2301.04257v2
# ODIM: an efficient method to detect outliers via inlier-memorization effect of deep generative models ###### Abstract Identifying whether a given sample is an outlier or not is an important issue in various real-world domains. This study aims to solve the unsupervised outlier detection problem where training data contain outliers, but any label information about inliers and outliers is not given. We propose a powerful and efficient learning framework to identify outliers in a training data set using deep neural networks. We start with a new observation called the inlier-memorization (IM) effect. When we train a deep generative model with data contaminated with outliers, the model first memorizes inliers before outliers. Exploiting this finding, we develop a new method called the outlier detection via the IM effect (ODIM). The ODIM only requires a few updates; thus, it is computationally efficient, tens of times faster than other deep-learning-based algorithms. Also, the ODIM filters out outliers successfully, regardless of the types of data, such as tabular, image, and sequential. We empirically demonstrate the superiority and efficiency of the ODIM by analyzing 20 data sets. ## 1 Introduction An outlier (also called an anomaly) is an observation that differs significantly from other observations, and outlier detection (OD) is the task of identifying outliers in a given data set. OD has wide applications such as fraud detection, fault detection, and defect detection in images. OD is also used as a pre-processing step in supervised learning to filter out anomalous training samples, which may degrade the performance of a predictive model. OD problems can be categorized into three areas in general: 1) Supervised outlier detection (SOD) requires label information about whether each training sample is an inlier (also called normal) or an outlier and solves the two-class classification task. A limitation of SOD is that it is hard to access a fully labeled data set in practice. 2) Semi-supervised outlier detection (SSOD) refers to methods that assume all training data are inliers and construct patterns or models for the inliers. SSOD can be interpreted as the one-class classification task since information about outliers is not used during the training procedure. Similarly to SOD, it is not common to have a data set composed of only inliers (Chandola et al., 2009; Chalapathy and Chawla, 2019). 3) Unsupervised outlier detection (UOD) deals with the most realistic situations where training data include some outliers but no label information about anomalousness is available. Most anomaly detection tasks in practice are related to UOD since the information about outliers in massive data is hardly known in advance. In this study, we propose a novel algorithm for UOD problems. Our algorithm is motivated by the so-called _memorization effect_ observed in noisy label problems (Arpit et al., 2017; Jiang et al., 2018). The goal of noisy label problems is to learn an accurate classifier when some of the class labels in the training data are contaminated. When standard supervised learning algorithms are applied to such mislabeled data, an interesting phenomenon, the so-called _memorization effect_, is observed, where correctly labeled data are learned earlier and mislabeled data later in the training phase of deep neural networks. The memorization effect makes it possible to detect mislabeled data by comparing per-sample losses in an early stage of the training phase.
The aim of this paper is to apply the memorization effect to UOD problems to develop a novel algorithm for detecting outliers with high accuracy as well as high efficiency in computation. There already exists a study utilizing the memorization effect for UOD problems. (Wang et al., 2019) noticed that while a deep discriminative model is trained via the self-supervised learning framework, the model memorizes _inliers first_ and outliers next in the training phase, and named this phenomenon the _inlier-priority effect_. Generating more than a hundred artificial classes with a pre-specified annotation strategy, they suggested a method, called \(E^{3}\)-Outlier, which identifies outliers with high accuracy. Even though it works effectively, \(E^{3}\)-Outlier is specialized to image data and is not easily extended to other data domains such as tabular data and sequential data, as well as special image data including wafer images generated from semiconductor fabrication. This is because \(E^{3}\)-Outlier annotates the training data using a method that is specialized for image data. As a domain-agnostic UOD solver which can be used as an off-the-shelf method, we develop a new method that inherits the idea of the memorization effect but does not require any prior expertise about the data. We start with a new and interesting observation: the memorization effect is also observed when learning a deep generative model. That is, when we train a deep generative model with training data that include outliers, the inliers' loss values reduce prior to those of outliers at early updates. We call this observation the _inlier-memorization_ (IM) effect. The IM effect occurs because, in the early training phase, decreasing the loss values of inliers rather than outliers is a more beneficial direction to reduce the overall loss function. Detailed discussions about the IM effect are given in Sections 3.2 and 3.3. Note that deep generative models do not require class labels; thus, domain-specific techniques such as the annotation method used in \(E^{3}\)-Outlier are not necessary, and so our method is domain-agnostic. Utilizing the IM effect, we propose a simple but powerful OD solver called the _outlier detection via the IM effect_ (ODIM) to identify outliers from a given training data set. We train a deep generative model with a log-likelihood-based approach such as the VAE (Kingma and Welling, 2013) or IWAE (Burda et al., 2016) for a few updates, and we regard data with large loss values compared to the per-sample loss distribution as outliers. As the IM effect is sensitive to how many updates of a deep generative model proceed, the key to the success of our method is to choose the optimal number of updates to utilize the IM effect maximally. For this purpose, we develop the following strategy. At each update, we fit a Gaussian mixture model with two components to the per-sample loss distribution and evaluate the Wasserstein distance between the two components. We track the distance as the updates proceed and select the optimal update point at which the distance becomes the largest. Our method has several advantages over existing OD methods. First, as mentioned above, the ODIM is agnostic to data domains such as tabular, sequential or image since it is built on unsupervised learning. By analyzing numerous data sets, 20 in total, rooted in various domains, we demonstrate that the ODIM consistently yields competitive or superior results in identifying outliers. See Section 4.
Second, the ODIM is efficient in computational time and resources because it requires only a few training updates, usually a few epochs, in the training phase to detect outliers. Thus, even when we train multiple generative models to utilize an ensemble technique, the ODIM is still much faster than other recent UOD solvers such as (Ruff et al., 2018; Lai et al., 2020) which require at least 200 training epochs. Third, the ODIM is relatively insensitive to the choice of the hyper-parameters; thus, it is easy to apply to real problems without much effort. In contrast, most existing UOD methods have objective functions with regularization terms that should be controlled carefully (Ruff et al., 2018; Ruff et al., 2020; Lai et al., 2020). For example, they are even sensitive to the choice of the learning scheduler. Sensitivity to the hyper-parameters makes it difficult to use the corresponding algorithms in practice. This paper is organized as follows. Section 2 provides a brief review of related research on OD problems. The detailed descriptions of the ODIM algorithm with discussions of the IM effect are given in Section 3. Results of various experiments including performance tests and ablation studies are presented in Section 4. Further discussions are provided in Section 5 and concluding remarks follow in Section 6. The key contributions of this work are: * We find a new phenomenon called the IM effect: deep generative models memorize inliers prior to outliers in early training phases. * We develop a simple and domain-agnostic UOD learning method called the ODIM to identify outliers in a given unlabeled training data set contaminated with anomalous samples. * We empirically demonstrate the superiority and efficiency of our method by analyzing various benchmark data sets. ## 2 Related works In this section, we only consider SSOD and UOD problems. **Semi-supervised outlier detection.** A popular technique for SSOD is the one class classification approach which transforms data into a feature space and distinguishes outliers from inliers by their radii from the center on the feature space. The OCSVM (Scholkopf et al., 2001) and SVDD (Tax and Duin, 2004) are two representative algorithms, which use kernel techniques to construct the feature space. Following their ideas, plenty of SSOD algorithms using deep neural networks have been developed. The DeepSVDD (Ruff et al., 2018) extends the SVDD by utilizing a deep autoencoder (AE) for learning a feature map, and the DeepSAD (Ruff et al., 2020) modifies the DeepSVDD to incorporate labeled outliers into the training data. Modifications of the DeepSVDD have been developed by (Zong et al., 2018; Mahmood et al., 2021; Xia et al., 2015). In addition to AE, deep generative models are also popularly used for SSOD (Ryu et al., 2018; Nalisnick et al., 2019; Jiang et al., 2022). There are methods for SSOD other than the one class classification approach. The SimCLR (Chen et al., 2020) and BERT (Devlin et al., 2019) utilize self-supervised learning by generating artificial labels automatically to obtain a desirable feature map, and various algorithms based on this idea have been developed (Golan and El-Yaniv, 2018; Bergman and Hoshen, 2020; Tack et al., 2020; Sehwag et al., 2021). When some labels (not related to inliers or outliers) are available, feature maps for classification of those labels can be used for distinguishing outliers from inliers (Hendrycks and Gimpel, 2017; Liang et al., 2018; Gomes et al., 2022).
**Unsupervised outlier detection.** As for traditional approaches, the LOF (Breunig et al., 2000a) compares the density of a given datum to the densities of its neighbors, and the IF (Liu et al., 2008a) utilizes the fact that outliers can be separated out by random trees with relatively small sizes. The UOCL (Liu et al., 2014) solves UOD problems by employing pseudo soft labels and training them jointly with the one-class classification model. There are various methods to solve UOD problems with deep learning models. The RDA (Zhou and Paffenroth, 2017) combines the robust PCA and AE to detect outliers. The DSEBM (Zhai et al., 2016) utilizes the energy-based model for density estimation and uses the energy score or reconstruction error to identify outliers. The RSRAE (Lai et al., 2020b) devises a new hidden layer called RSR, inserting it between the encoder and decoder of a deep AE to separate inliers and outliers effectively. The \(E^{3}\)-Outlier (Wang et al., 2019a) trains a deep neural network by self-supervised learning and identifies outliers based on how fast the loss decreases as the training proceeds. ## 3 Proposed method ### Notations and definitions For a given input vector \(\mathbf{x}\in\mathbb{R}^{D}\), we denote its anomalousness by \(y^{o}\in\{0,1\}\), that is, \(y^{o}=0\) if \(\mathbf{x}\) is an inlier and \(y^{o}=1\) otherwise. Note that only \(\mathbf{x}\) is observable but \(y^{o}\) is not under the UOD task. Let \(\mathcal{U}^{tr}=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\}\) be unlabeled training data. Our goal is to detect outlier samples, i.e. \(\mathbf{x}\) with \(y^{o}=1\), from \(\mathcal{U}^{tr}\) as accurately as possible. Let \(p(\mathbf{x}|\mathbf{z};\theta)\) and \(q(\mathbf{z}|\mathbf{x};\phi)\) be a given decoder and encoder parameterized by \(\theta\) and \(\phi\), respectively, where \(\mathbf{z}\in\mathbb{R}^{d}\) (generally assuming \(d<D\)) is a latent vector. For a given \(p\in\mathbb{N}\), we denote the \(l_{p}\)-norm of a vector \(\mathbf{a}\) by \(\|\mathbf{a}\|_{p}\). For two real-valued functions defined on \(\mathbb{R}^{+}\), \(f(t)\) and \(g(t)\), \(f(t)\) is said to be \(\Theta(g(t))\) if there exist positive constants \(C_{1},C_{2}\) and \(T\) such that \(C_{1}\cdot g(t)\leq f(t)\leq C_{2}\cdot g(t)\) holds for all \(t\geq T\). ### Main motivation: inlier-memorization effect Before proposing our method, we first explain the main motivation - the _inlier-memorization_ effect. Suppose that we are training a deep generative model with a certain learning framework where the training data contain both inliers and outliers. For an illustration of the IM effect, we analyze the Cardio data set and train a deep generative model using the VAE method (Kingma and Welling, 2013). The encoder and decoder architectures are 2-layered deep neural networks (DNNs) with \(d=5\) and 50 hidden nodes for each hidden layer. We closely look at the per-sample loss distribution in an early training phase. For this purpose, we train the encoder and decoder by minimizing the VAE loss function only for one epoch. The middle panel in Figure 1 shows the empirical distribution of the per-sample loss values of the training data. We can observe that the loss values of inliers tend to be smaller than those of outliers, which means that the generative model is trained in the direction of memorizing inliers first at the beginning of the training phase, and we call this phenomenon the _inlier-memorization_ (IM) effect. Conceptually, the IM effect is not a surprising phenomenon.
Assume that the per-sample loss function is continuous on the input space. Then reducing the loss on dense regions of the input space contributes more to reducing the overall loss (e.g. the negative log-likelihood). Since inliers are usually located in dense regions while outliers lie in sparse regions, reasonable learning algorithms focus more on dense regions in an early stage of the training phase, which results in the IM effect. Note that the IM effect is observed only in an early training phase, since the learned model memorizes both inliers and outliers later in training.

### 3.3 Theoretical analysis

We provide a theoretical explanation of the inlier-memorization effect with a toy example where we train a linear factor model using the VAE. That is, \(p(\mathbf{x}|\mathbf{z};\theta)\) is the density function of \(W\mathbf{z}+b+\epsilon\), where \(W\in\mathbb{R}^{D\times d}\) and \(b\in\mathbb{R}^{D}\) are the loading matrix and bias vector and \(\epsilon\sim N(0_{D},\sigma^{2}I_{D})\) is a noise vector. For \(q\), we set \(q(\mathbf{z}|\mathbf{x};\phi)\) as the density function of \(U\mathbf{x}+v+\tau\), where \(U\in\mathbb{R}^{d\times D}\), \(v\in\mathbb{R}^{d}\), and \(\tau\sim\mathcal{N}(0_{d},\eta^{2}I_{d})\). Here, we set \(\sigma\) and \(\eta\) as fixed values. Thus, \(\theta\) and \(\phi\) are \((W,b)\) and \((U,v)\), respectively. Note that the objective function of the VAE for a given input vector \(\mathbf{x}\) is given as

\[L^{\text{VAE}}(\theta,\phi;\mathbf{x}):=\mathbb{E}_{\mathbf{z}\sim q(\mathbf{z}|\mathbf{x};\phi)}\left[\log\left(\frac{p(\mathbf{x}|\mathbf{z};\theta)p(\mathbf{z})}{q(\mathbf{z}|\mathbf{x};\phi)}\right)\right], \tag{1}\]

where \(p(\mathbf{z})\) is the density function of the standard multivariate Gaussian distribution. We assume that each element of \(W,b,U,\) and \(v\) is randomly initialized i.i.d. from the uniform distribution on \([-1,1]\). Then we have the following proposition, whose proof is given in the Appendix.

**Proposition 3.1**.:1

\[\mathbb{E}_{\theta,\phi}\left\|\frac{\partial}{\partial\theta}L^{\text{VAE}}(\theta,\phi;\mathbf{x})\right\|_{2}^{2}=\Theta\left(\|\mathbf{x}\|_{2}^{4}\right).\]

Footnote 1: Since the generative model \(p(\mathbf{x};\theta)\) is related only to the parameter \(\theta\), we only consider the gradient with respect to \(\theta\).

Proposition 3.1 implies that at the beginning of learning, the Euclidean norm of the gradient of the VAE loss scales with the Euclidean norm of the input vector. Hence, if the norms of inliers and outliers are similar, the initial update direction of \(\theta\) is influenced more by inliers than outliers due to their imbalanced proportions. Therefore, in the early training phase, the generative model is trained in the direction of memorizing inliers before outliers, and thus the IM effect emerges.

Even though Proposition 3.1 considers a linear generative model, its implication remains valid for deep generative models. To confirm this statement, we conduct a simple experiment to support our theoretical explanation of the IM effect by analyzing the Cardio data set introduced in Section 3.2. We normalize the input features of the Cardio data so that each feature ranges between 0 and 1. Using these normalized data, we train a deep generative model by minimizing the VAE objective function for a single epoch, and take a look at the per-sample loss distributions of inliers and outliers.
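The scaling in Proposition 3.1 is also easy to probe numerically. Below is a minimal sketch, not the authors' code, that initializes the linear factor model exactly as in the proposition and measures how the gradient norm of a Monte-Carlo estimate of \(-L^{\text{VAE}}\) grows with the input norm; the dimensions and sample counts are our own choices.

```python
import torch

# Linear factor model of Proposition 3.1 with uniform [-1, 1] initialization.
D, d, sigma, eta = 20, 5, 1.0, 1.0
W = torch.empty(D, d).uniform_(-1, 1).requires_grad_(True)  # loading matrix
b = torch.empty(D).uniform_(-1, 1).requires_grad_(True)     # bias vector
U = torch.empty(d, D).uniform_(-1, 1)                       # encoder weights
v = torch.empty(d).uniform_(-1, 1)

def neg_elbo(x, n_mc=256):
    """Monte-Carlo estimate of -L^VAE(theta, phi; x) for a single input x."""
    mu = U @ x + v                                   # mean of q(z|x)
    z = mu + eta * torch.randn(n_mc, d)              # samples from q(z|x)
    recon = z @ W.T + b                              # decoder mean W z + b
    log_p_x_z = -0.5 * ((x - recon) ** 2).sum(-1) / sigma ** 2
    log_p_z = -0.5 * (z ** 2).sum(-1)                # standard Gaussian prior
    log_q = -0.5 * (((z - mu) / eta) ** 2).sum(-1)
    return -(log_p_x_z + log_p_z - log_q).mean()     # additive constants dropped

for scale in [1.0, 2.0, 4.0]:
    x = scale * torch.randn(D) / D ** 0.5            # ||x|| is roughly `scale`
    grads = torch.autograd.grad(neg_elbo(x), (W, b))
    gnorm = torch.cat([g.flatten() for g in grads]).norm()
    print(f"||x|| = {x.norm():.2f}, ||grad wrt theta|| = {gnorm:.2f}")
```

Doubling \(\|\mathbf{x}\|_{2}\) roughly quadruples the gradient norm, consistent with \(\mathbb{E}_{\theta,\phi}\|\partial L^{\text{VAE}}/\partial\theta\|_{2}^{2}=\Theta(\|\mathbf{x}\|_{2}^{4})\).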
The result is given in the middle panel of Figure 1, where the per-sample losses of the inliers are much smaller than those of the outliers. This IM effect can be explained as follows. The similarity of the distributions of the norms of inliers and outliers is shown in the left panel of Figure 1, and Proposition 3.1 then implies that the norm of the per-sample gradient is similar for inliers and outliers; the initial update direction of \(\theta\) is thus determined mainly by the inliers (because the number of inliers is much larger), yielding the IM effect.

### 3.4 Outlier detection via the inlier-memorization effect: Algorithm

The IM effect provides a way of detecting outliers by utilizing the per-sample loss values of a deep generative model at an early training phase. In this subsection, we propose a new learning algorithm for solving UOD problems, called outlier detection via the inlier-memorization effect (ODIM). The ODIM algorithm consists of two steps: 1) train a deep generative model for a pre-specified number of updates, and 2) based on the per-sample loss values, regard a given sample as an outlier when the corresponding loss value is large. To implement this idea, a couple of additional techniques are needed, which are explained below.

Figure 1: (**Left**) The Euclidean norm distribution of Cardio data. There is no significant distributional discrepancy between inliers and outliers. (**Middle**) The distribution of the per-sample loss values of Cardio data after training a single epoch. A significant gap between the distributions of inliers and outliers is seen. (**Right**) The positive relationship between the Wasserstein distance and identifying performance (AUC) on Cardio data.

**Choice of the learning algorithm for a deep generative model.** We should choose the learning algorithm for the deep generative model carefully to make the IM effect appear more clearly. There exist numerous algorithms to train a deep generative model, which can be roughly divided into two approaches: 1) maximizing the log-likelihood (Kingma & Welling, 2013; Burda et al., 2016; Rezende & Mohamed, 2015; Tomczak & Welling, 2018; Kim et al., 2020) and 2) using adversarial networks (Goodfellow et al., 2014; Nowozin et al., 2016; Arjovsky et al., 2017; Gulrajani et al., 2017). It is well known that adversarial-network-based methods generate more realistic data. However, because they search for saddle points (i.e. solve a min-max problem), a per-sample loss is difficult to define, making them unsuitable for the ODIM. In contrast, in the likelihood approach the per-sample loss is naturally defined as the per-sample negative log-likelihood, so it is easy to develop the ODIM algorithm on top of a log-likelihood-based approach.

Exact calculation of the likelihood, however, is computationally difficult. To resolve this problem, we use a computable lower bound of the log-likelihood, such as the evidence lower bound (ELBO) used in the variational autoencoder (VAE; Kingma & Welling, 2013). There exist several lower bounds on the log-likelihood tighter than the ELBO (Burda et al., 2016; Tomczak & Welling, 2018; Kim et al., 2020). Among them, we employ the importance weighted autoencoder (IWAE; Burda et al., 2016), because the IWAE can control the tightness of its lower bound to the log-likelihood and is relatively easy to compute. Usually, the IM effect becomes more obvious when the lower bound is tighter, but a tighter bound requires more computation.
Figure 3: Distributions of per-sample (**Left**) data norms and (**Right**) gradient norms on Cardio. We consider two pre-processing schemes to normalize each feature: 1) (**Upper**) min-max pre-processing, and 2) (**Lower**) standardization pre-processing.

Figure 2: Comparison results of (**Left**) AUC and (**Right**) AP values varying the pollution rate \(r\) from 0.1 to 0.3. We analyze two data sets: (**Upper**) MNIST and (**Lower**) FMNIST. Vertical bars present the standard deviations.

The objective function of the IWAE is given as:

\[L^{\text{IWAE}}(\theta,\phi;\mathbf{x}):=\mathbb{E}_{\mathbf{z}_{1},\dots,\mathbf{z}_{K}\sim q(\mathbf{z}|\mathbf{x};\phi)}\left[\log\left(\frac{1}{K}\sum_{k=1}^{K}\frac{p(\mathbf{x}|\mathbf{z}_{k};\theta)p(\mathbf{z}_{k})}{q(\mathbf{z}_{k}|\mathbf{x};\phi)}\right)\right], \tag{2}\]

where \(p(\mathbf{z})\) is the density of the standard multivariate Gaussian distribution and \(K\) is the number of samples. Note that the IWAE reduces to the VAE when \(K=1\). In practice, the IWAE utilizes the Monte Carlo method to approximate (2) by

\[\widehat{L}^{\text{IWAE}}(\theta,\phi;\mathbf{x}):=\log\left(\frac{1}{K}\sum_{k=1}^{K}\frac{p(\mathbf{x}|\mathbf{z}_{k};\theta)p(\mathbf{z}_{k})}{q(\mathbf{z}_{k}|\mathbf{x};\phi)}\right), \tag{3}\]

where \(\mathbf{z}_{k},k=1,\dots,K\) are independently drawn from \(q(\mathbf{z}|\mathbf{x};\phi)\). We train the encoder and decoder by maximizing the empirical expectation of (3),

\[\mathbb{E}_{\mathbf{x}\sim\mathcal{U}^{tr}}\left[\widehat{L}^{\text{IWAE}}(\theta,\phi;\mathbf{x})\right]=\frac{1}{n}\sum_{i=1}^{n}\widehat{L}^{\text{IWAE}}(\theta,\phi;\mathbf{x}_{i}), \tag{4}\]

with respect to \(\theta\) and \(\phi\) simultaneously over \(\mathcal{U}^{tr}\). It is known that \(L^{\text{IWAE}}(\theta,\phi;\mathbf{x})\) converges to the log-likelihood as \(K\) goes to infinity (Burda et al., 2016). However, a larger \(K\) requires more computation, and hence we need to choose \(K\) carefully to balance performance against computation. In this study, we set the value of \(K\) to 50; an ablation study on the role of \(K\) is given in Section 4.4.
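To make Eq. (3) concrete, a minimal per-sample IWAE loss might be implemented as follows; the diagonal-Gaussian parameterization of the encoder and decoder and all names here are our assumptions, not the authors' released code.

```python
import torch

def iwae_per_sample_loss(x, encoder, decoder, K=50):
    """Returns -hat{L}^IWAE of Eq. (3) for a batch x of shape (B, D), assuming
    encoder(x) -> (mu_z, logvar_z) and decoder(z) -> (mu_x, logvar_x), i.e.,
    diagonal Gaussians for both q(z|x; phi) and p(x|z; theta)."""
    mu_z, logvar_z = encoder(x)                              # (B, d) each
    std_z = (0.5 * logvar_z).exp()
    eps = torch.randn(x.size(0), K, mu_z.size(1))
    z = mu_z.unsqueeze(1) + std_z.unsqueeze(1) * eps         # (B, K, d)

    def log_normal(v, mu, logvar):                           # diagonal Gaussian
        c = torch.log(torch.tensor(2.0 * torch.pi))
        return -0.5 * (logvar + (v - mu) ** 2 / logvar.exp() + c).sum(-1)

    mu_x, logvar_x = decoder(z)                              # (B, K, D) each
    log_p_x_z = log_normal(x.unsqueeze(1), mu_x, logvar_x)   # (B, K)
    log_p_z = log_normal(z, torch.zeros_like(z), torch.zeros_like(z))
    log_q = log_normal(z, mu_z.unsqueeze(1), logvar_z.unsqueeze(1))
    log_w = log_p_x_z + log_p_z - log_q                      # log importance weights
    log_mean_w = torch.logsumexp(log_w, dim=1) - torch.log(torch.tensor(float(K)))
    return -log_mean_w                                       # per-sample loss, (B,)
```

Minimizing the mean of this loss over mini-batches is equivalent to maximizing (4), and the same per-sample values later serve as outlier scores.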
**Choice of the optimal number of updates.** We empirically find that the IM effect usually appears very early in the training phase, sometimes within less than a single epoch, and that the degree of the IM effect (i.e., the difference between the loss distributions of inliers and outliers) depends sensitively on the number of updates of the model. Moreover, the optimal number of updates varies from data set to data set. Thus, choosing the optimal number of updates in a data-adaptive way is key to the success of the ODIM algorithm.

We devise a heuristic strategy to find the number of updates at which the IM effect is maximized. At each update of the model, we assess the _degree of bimodality_ of the per-sample loss distribution and select the number of updates at which this degree is maximized. To be more specific, let \(l_{1},\dots,l_{n}\) be the normalized per-sample loss values of the training data, computed with the current generative model and scaled to lie between 0 and 1. We fit the two-component Gaussian mixture model (GMM-2) \(\pi_{1}\mathcal{N}(\mu_{1},\sigma_{1}^{2})+\pi_{2}\mathcal{N}(\mu_{2},\sigma_{2}^{2})\) to these loss values and measure the degree of bimodality by how different the two fitted normal distributions are. For the discrepancy measure of two normal distributions, we use the 2-Wasserstein distance, whose square is given as

\[W_{2}^{2}(\mathcal{N}(\mu_{1},\sigma_{1}^{2}),\mathcal{N}(\mu_{2},\sigma_{2}^{2}))=(\mu_{1}-\mu_{2})^{2}+(\sigma_{1}-\sigma_{2})^{2}. \tag{5}\]

The right panel in Figure 1 illustrates the AUC values on the training data of Cardio at the first \(10\times m\) updates for \(m=1,\ldots,50\) and their corresponding Wasserstein distances. We can clearly see that the Wasserstein distance is a useful measure for selecting the optimal number of updates. In practice, we calculate the Wasserstein distance every \(N_{\text{u}}\) updates, terminate training when the largest Wasserstein distance has not improved for \(N_{\text{pat}}\) consecutive calculations, and select the optimal number of updates as the one at which the Wasserstein distance is maximized. We set \(N_{\text{u}}\) and \(N_{\text{pat}}\) to 10 in all the following numerical experiments unless specifically stated. We summarize the ODIM's pseudo-algorithm in Algorithm 1.

\begin{table} \begin{tabular}{l|c c c c|c c c} \hline **Data** & **Size** & **\# features** & **\# outliers (\%)** & **Data** & **Size** & **\# features** & **\# outliers (\%)** \\ \hline BreastW & 683 & 9 & 239 (35\%) & Vowels & 1456 & 12 & 50 (3.4\%) \\ Cover & 286048 & 10 & 2747 (0.9\%) & Wbc & 278 & 30 & 21 (5.6\%) \\ Glass & 214 & 9 & 9 (4.2\%) & Arrhythmia & 452 & 274 & 66 (15\%) \\ Ionosphere & 351 & 33 & 126 (36\%) & Cardio & 1831 & 21 & 176 (9.6\%) \\ Mammography & 11183 & 6 & 260 (2.32\%) & Satellite & 6435 & 36 & 2036 (32\%) \\ Musk & 3062 & 166 & 97 (3.2\%) & Satimage-2 & 5803 & 36 & 71 (1.2\%) \\ Pendigits & 6870 & 16 & 156 (2.27\%) & Shuttle & 49097 & 9 & 3511 (7\%) \\ Pima & 768 & 8 & 268 (35\%) & Thyroid & 3772 & 6 & 93 (2.5\%) \\ \hline \end{tabular} \end{table} Table 1: Description of tabular data sets

Figure 4: (**Left**) AUC results on tabular data sets with various numbers of samples drawn from \(q(\mathbf{z}|\mathbf{x};\phi)\). (**Middle**) AUC results on tabular data sets with various numbers of models (from one to twenty) in the ensemble. (**Right**) AUC results on FMNIST for each class with various learning rates, from 1e-4 to 1e-1.

```
Algorithm 1: ODIM
Input: training data set U^tr = {x_1, ..., x_n}
Initialize: decoder p(x|z; θ), encoder q(z|x; φ),
            GMM-2 π_1·N(μ_1, σ_1²) + π_2·N(μ_2, σ_2²),
            mini-batch size n_mb, optimizer O, number of IWAE samples K,
            update unit number N_u, maximum patience N_pat
D_WD^max ← 0, n_pat ← 0
while n_pat < N_pat do
    for k in 1 : N_u do
        1. Draw a mini-batch U of size n_mb from U^tr.
        2. Using the mini-batch U and optimizer O, update θ and φ with
           the mini-batch version of (4).
    end for
    l_i ← -L̂^IWAE(θ, φ; x_i) for i = 1, ..., n    // per-sample losses
    {l̃_i} ← normalize({l_i})                      // scale losses to [0, 1]
    Fit GMM-2 to {l̃_i} and compute d_WD by (5).
    if d_WD > D_WD^max then
        D_WD^max ← d_WD            // update the maximum Wasserstein distance
        l*_i ← l_i, i = 1, ..., n  // update the best outlier scores
        θ*, φ* ← θ, φ              // save the best parameters
        n_pat ← 0
    else
        n_pat ← n_pat + 1
    end if
end while
Output: l*_i, i = 1, ..., n   // final outlier scores
Output: θ*, φ*                // best parameters
```
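The bimodality measure driving the stopping rule in Algorithm 1 is straightforward to implement; below is a hypothetical sketch using scikit-learn, with the helper name and details ours.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bimodality_w2(losses):
    """Fit GMM-2 to min-max-normalized per-sample losses and return the squared
    2-Wasserstein distance between the two fitted Gaussians, as in Eq. (5)."""
    l = np.asarray(losses, dtype=float)
    l = (l - l.min()) / (l.max() - l.min() + 1e-12)   # normalize to [0, 1]
    gmm = GaussianMixture(n_components=2).fit(l.reshape(-1, 1))
    m1, m2 = gmm.means_.ravel()
    s1, s2 = np.sqrt(gmm.covariances_.ravel())
    return (m1 - m2) ** 2 + (s1 - s2) ** 2
```

Evaluating this quantity every \(N_{\text{u}}\) updates and stopping after \(N_{\text{pat}}\) consecutive non-improvements recovers the selection rule described above.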
**Ensemble of ODIM scores.** To improve and stabilize our method, we adopt an ensemble strategy. We train multiple models with different random initializations, obtain the best per-sample loss values from each, and take their average as the final score. The number of models is fixed to 10 in our experiments. We empirically show the effectiveness of using multiple models in Section 4.4. Since the IM effect is maximized early in the training phase and multiple models can be trained in parallel, the ensemble ODIM is still faster than other deep learning algorithms that require hundreds of training epochs.

## 4 Numerical experiments

In this section, we demonstrate the superiority of our proposed method empirically through extensive experiments. We analyze numerous data sets, 20 in total, covering tabular, image, and sequential types. We show that the outlier-detection performance of the ODIM compares favorably with other competitors regardless of data type. We also carry out ablation studies, including a running-time analysis and the effect of hyper-parameters. In the experiments, we report averaged results based on five trainings with randomly initialized parameters. We use the PyTorch framework to implement our algorithm on a single NVIDIA TITAN XP GPU, and we release our implementation code on our GitHub web page.

### 4.1 Data set description

**Tabular data sets.** We consider 16 tabular data sets which are frequently analyzed in the OD literature. They are obtained from various domains, such as clinical pathology and astronomy, and are publicly accessible at (Rayana, 2016). We refer to Table 1 for detailed descriptions of the data sets. All the data sets come with anomaly labels; we exclude these labels in the training phase to conduct unsupervised learning tasks, but use them in the test phase to assess the accuracy of outlier detection.

**MNIST & Fashion MNIST.** We analyze two synthetic image data sets made from MNIST (LeCun et al., 1998) and FMNIST (Xiao et al., 2017). MNIST and FMNIST contain grey-scale \(28\times 28\) images of handwritten digits and clothing, respectively. Both data sets consist of 60K training data and 10K test data. To conduct UOD tasks, we pre-process the two data sets in advance, as is done in previous works (Ruff et al., 2018; Chalapathy et al., 2018; Golan and El-Yaniv, 2018; Wang et al., 2019). We choose a class \(c\) to be the normal class and regard the others as abnormal classes. We select all the training images whose class is \(c\) and randomly draw samples from the remaining data so that the ratio between the numbers of inlier and outlier data is \(1:r\), where \(r\in[0,1]\) is a pre-specified pollution rate. Then, we combine the normal and abnormal data and discard their class labels to make unlabeled data. Note that both MNIST and FMNIST have ten classes; thus, we can generate ten types of training data for each pollution rate \(r\).
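This construction of a polluted training set can be sketched as follows; the function and variable names are ours, but the protocol is the one just described.

```python
import numpy as np

def make_uod_split(X, y, normal_class, r, seed=0):
    """All images of the normal class plus outliers drawn from the other
    classes so that inlier : outlier = 1 : r; class labels are discarded and
    the anomaly labels y_anom are kept for evaluation only."""
    rng = np.random.default_rng(seed)
    inliers = X[y == normal_class]
    pool = X[y != normal_class]
    n_out = int(r * len(inliers))
    outliers = pool[rng.choice(len(pool), size=n_out, replace=False)]
    X_train = np.concatenate([inliers, outliers])
    y_anom = np.concatenate([np.zeros(len(inliers)), np.ones(n_out)])
    return X_train, y_anom
```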
We report averaged performance over the ten training data sets unless otherwise stated.

**WM-811K.** We also investigate a real-world image data set, WM-811K (Wu et al., 2014), containing wafer images obtained from the semiconductor fabrication process. Among the 811K images, we use the subset with label information about whether a given image is a failure (outlier). As a result, we utilize 172,950 wafers, including 25,519 failure images. Since the data have diverse sizes, we resize them to the unified shape of \(28\times 28\) before analyzing them.

**Reuters-21578.** Besides tabular and image data, we additionally analyze text data. The Reuters-21578 data set is a collection of 21,578 documents from the Reuters newswire in 1987. We follow the procedure in (Lai et al., 2020) to construct analyzable data: we use the pre-processed data with a fixed dimension of 26,147, obtained by applying the TF-IDF transformer. We consider the five largest of the 90 classes and randomly draw 360 samples from each class. Similar to MNIST and FMNIST, we build training data using a pre-specified pollution rate \(r\) for each class and report averaged results over the five classes.

### 4.2 Implementation details

**Data pre-processing.** In addition to identifying outliers in a given training data set, we also assess identification performance on unseen (test) data. We use the given test data for MNIST and FMNIST. For the other data sets, we randomly split each data set into two partitions at a ratio of 6:4 and use the first and second partitions as training and test data, respectively. After the training and test data are constructed, we apply min-max normalization so that each feature has minimum zero and maximum one. We mainly focus on detection performance on the training data, and present performance on test data in the ablation study of Section 4.4.

\begin{table} \begin{tabular}{l|c c c c c c} \hline **Data** & **IF** & **OCSVM** & **LOF** & **DeepSVDD** & **RSRAE** & **Ours** \\ \hline BreastW & 0.983 & 0.817 & 0.447 & 0.863 & 0.507 & **0.992** \\ Cover & 0.894 & **0.914** & 0.553 & 0.522 & 0.820 & 0.837 \\ Glass & 0.714 & 0.265 & 0.802 & **0.815** & 0.480 & 0.725 \\ Ionosphere & 0.859 & 0.758 & 0.894 & 0.827 & **0.969** & 0.914 \\ Mammography & **0.862** & 0.846 & 0.754 & 0.696 & 0.535 & 0.848 \\ Musk & 0.999 & 0.816 & 0.370 & 0.759 & 0.726 & **1.000** \\ Pendigits & **0.957** & 0.936 & 0.486 & 0.613 & 0.907 & 0.953 \\ Pima & 0.672 & 0.626 & 0.648 & 0.027 & 0.587 & **0.706** \\ Vowels & 0.807 & 0.607 & 0.946 & 0.787 & 0.876 & **0.904** \\ Wbc & 0.938 & 0.934 & 0.937 & 0.763 & 0.598 & **0.941** \\ Arrhythmia & 0.810 & 0.807 & 0.786 & 0.677 & **0.814** & 0.800 \\ Cardio & **0.931** & 0.913 & 0.716 & 0.589 & 0.245 & 0.916 \\ Satellite & 0.690 & 0.598 & 0.558 & 0.637 & **0.731** & 0.690 \\ Satimage-2 & 0.992 & 0.979 & 0.491 & 0.739 & 0.988 & **0.997** \\ Shuttle & 0.997 & **0.983** & 0.508 & 0.646 & 0.972 & 0.981 \\ Thyroid & **0.977** & 0.847 & 0.942 & 0.781 & 0.815 & 0.928 \\ \hline \end{tabular} \end{table} Table 2: Training AUC value comparisons on tabular data sets

**Architecture & learning schedule.** We use two-hidden-layer DNN architectures for the encoder and decoder and set \(K\), the number of samples drawn from the encoder for constructing the IWAE objective function, to 50. We optimize the IWAE loss function with the Adam optimizer (Kingma & Ba, 2014), with a mini-batch size of 128 and a learning rate of 5e-4.
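A plausible instantiation of this setup in PyTorch is sketched below; the hidden width and the output conventions are our assumptions (the paper specifies only two hidden layers and the stated optimizer settings).

```python
import torch.nn as nn
import torch.optim as optim

D, d, H = 21, 5, 50  # input dim (e.g., Cardio), latent dim, hidden width (assumed)

# Encoder producing the parameters of a diagonal Gaussian q(z|x; phi).
encoder = nn.Sequential(nn.Linear(D, H), nn.ReLU(),
                        nn.Linear(H, H), nn.ReLU(),
                        nn.Linear(H, 2 * d))           # [mean, log-variance]

# Decoder for p(x|z; theta); a sigmoid output head is one possible choice
# that keeps reconstructions in [0, 1] for min-max scaled data.
decoder = nn.Sequential(nn.Linear(d, H), nn.ReLU(),
                        nn.Linear(H, H), nn.ReLU(),
                        nn.Linear(H, D), nn.Sigmoid())

opt = optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=5e-4)
```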
To run the ODIM, we fix the two hyper-parameters \(N_{\text{u}}\) and \(N_{\text{pat}}\) to 10. For the ensemble learning, we train 10 pairs of encoder and decoder, each trained from different initial parameter values. See our GitHub web page for the detailed implementation.

**Baselines.** As baselines to compare with the ODIM, we consider three machine-learning-based methods: 1) isolation forest (IF; Liu et al., 2008), 2) one-class SVM (OCSVM; Scholkopf et al., 2001), and 3) local outlier factor (LOF; Breunig et al., 2000), and two deep-learning-based methods: 4) deep support vector data description (DeepSVDD; Ruff et al., 2018) and 5) robust subspace recovery autoencoder (RSRAE; Lai et al., 2020). We implement the LOF, OCSVM, and IF methods using the scikit-learn package, and we refer to the official GitHub repositories of the DeepSVDD3 and RSRAE4 for their re-implementations. Like the ODIM, we report their average results over five runs with different initializations.

Footnote 3: [https://github.com/lukasruff/Deep-SVDD-PyTorch](https://github.com/lukasruff/Deep-SVDD-PyTorch)

Footnote 4: [https://github.com/dmzou/RSRAE](https://github.com/dmzou/RSRAE)

### 4.3 Performance for outlier identification

We first compare the ODIM with the baselines in terms of identifying outliers in a given training data set. We evaluate two performance scores: the area under the receiver operating characteristic curve (AUC or AUROC) and the average precision (AP).
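Both scores can be computed directly from the per-sample losses with scikit-learn; a brief sketch (variable names ours):

```python
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate(y_true, outlier_scores):
    """y_true holds anomaly labels (1 = outlier), used for evaluation only;
    higher per-sample loss means more anomalous."""
    return (roc_auc_score(y_true, outlier_scores),
            average_precision_score(y_true, outlier_scores))
```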
**Results for tabular data.** Tables 2 and 3 list the AUC and AP results on the tabular data sets (the best score for each data set is marked in bold). The results amply show the superiority of the ODIM over the other baselines in identifying outliers, since the ODIM achieves the best scores most frequently on both AUC and AP (6 for each). For example, on BreastW and Musk, the ODIM separates inliers and outliers (almost) perfectly. In addition, it is notable that when the ODIM is not the best, its AP values are not much different from the best scores, while the AP values of the other methods vary widely. In particular, the two deep-learning-based methods, DeepSVDD and RSRAE, show relatively unstable results. For example, the AP values of RSRAE on BreastW and Cardio are much smaller than those of the ODIM.

To further check the stability of outlier-detection performance across data sets, we rank the AUC and AP scores of the algorithms for each data set and take the average of the ranks for each algorithm; the averaged ranks are summarized in Table 4. The ODIM achieves the best averaged ranks on both AUC and AP. These stability results imply that the ODIM can be used as an off-the-shelf method for outlier identification on tabular data. In the subsequent analyses, we demonstrate that the ODIM remains stable when dealing with other types of data sets.

\begin{table} \begin{tabular}{l|c c c c c c} \hline \hline & **IF** & **OCSVM** & **LOF** & **DeepSVDD** & **RSRAE** & **Ours** \\ \hline **AUC** & **2.094** & 3.813 & 4.250 & 4.750 & 4.000 & **2.094** \\ **AP** & 2.313 & 3.813 & 4.250 & 4.813 & 3.688 & **2.125** \\ \hline \hline \end{tabular} \end{table} Table 4: Averaged ranks of AUC and AP over tabular data sets

\begin{table} \begin{tabular}{l|c c c c c c} \hline \hline **Data** & **IF** & **OCSVM** & **LOF** & **DeepSVDD** & **RSRAE** & **Ours** \\ \hline BreastW & 0.961 & 0.843 & 0.308 & 0.678 & 0.392 & **0.988** \\ Cover & 0.049 & 0.085 & 0.012 & 0.012 & **0.149** & 0.032 \\ Glass & 0.080 & 0.032 & **0.170** & 0.149 & 0.104 & 0.119 \\ Ionosphere & 0.821 & 0.730 & 0.850 & 0.757 & **0.951** & 0.867 \\ Mammography & **0.211** & 0.108 & 0.103 & 0.053 & 0.024 & 0.098 \\ Musk & 0.999 & 0.124 & 0.025 & 0.132 & 0.057 & **1.000** \\ Pendigits & 0.291 & 0.211 & 0.038 & 0.039 & 0.215 & **0.302** \\ Pima & **0.507** & 0.456 & 0.446 & 0.431 & 0.427 & 0.491 \\ Vowels & 0.149 & 0.085 & **0.442** & 0.124 & 0.227 & 0.375 \\ Wbc & 0.560 & 0.553 & 0.557 & 0.206 & 0.113 & **0.710** \\ Arrhythmia & 0.448 & 0.415 & 0.394 & 0.362 & **0.518** & 0.443 \\ Cardio & 0.536 & 0.543 & 0.195 & 0.140 & 0.059 & **0.564** \\ Satellite & 0.673 & 0.583 & 0.389 & 0.461 & **0.688** & 0.652 \\ Satimage-2 & 0.902 & 0.847 & 0.027 & 0.031 & 0.887 & **0.949** \\ Shuttle & **0.980** & 0.951 & 0.084 & 0.167 & 0.713 & 0.947 \\ Thyroid & **0.541** & 0.148 & 0.239 & 0.124 & 0.186 & 0.327 \\ \hline \hline \end{tabular} \end{table} Table 3: Training AP value comparisons on tabular data sets

**MNIST & FMNIST results.** Figure 2 compares the results for the MNIST and FMNIST data sets. Similar to the tabular data, the ODIM works quite well for image data. Our method achieves the second-best performance for all considered pollution rates on MNIST and the best performance on FMNIST.

**WM-811K results.** We analyze the image data set WM-811K, consisting of various wafer images. Table 5 shows that our method marks the best and second-best results on AUC and AP, respectively. In contrast, the two deep-learning-based methods do not work well.

**Reuters-21578 results.** We analyze Reuters to check the efficiency of our method on super-high-dimensional data; the results are summarized in Table 6. The ODIM records the second-best AUC and AP values, behind the RSRAE. Meanwhile, the IF, which compares favorably with the ODIM on tabular data, shows sub-optimal performance.

**Conclusion.** Throughout the empirical experiments, we have observed that, among the various methods for outlier identification, the ODIM is the only one that performs consistently well regardless of data type.
### 4.4 Ablation study

**Number of samples used in the IWAE.** Recall that the IWAE objective function uses multiple samples, \(\mathbf{z}_{1},\ldots,\mathbf{z}_{K}\), independently drawn from \(q(\mathbf{z}|\mathbf{x};\phi)\), to achieve a tighter lower bound on the log-likelihood. We empirically check that using a tighter bound indeed yields a clearer IM effect. The left panel in Figure 4 summarizes the AUC values on several tabular data sets with various values of \(K\) from 1 to 100; results for the other tabular data sets are given in the Appendix. Note that the IWAE with \(K=1\) equals the original VAE. As expected, a lower bound closer to the log-likelihood tends to produce a more obvious IM effect, leading to better identification performance. We also observe that the improvement saturates once \(K\) exceeds 50. For this reason, we set \(K=50\) in our experiments.

**Implementation time.** We investigate how fast our method runs compared to its competitors. Table 7 summarizes the running-time comparisons on several data sets, including tabular and image data; results for the other data sets are listed in the Appendix. For our method, we only display the running time of training a single model, since the multiple models of the ensemble can be trained in parallel. As expected, the two deep learning methods are much slower than the non-deep-learning-based anomaly detection methods. Our method is generally faster than the other deep learning methods: in particular, it is 45 and 82 times faster than the RSRAE on FMNIST and Wafer, respectively. This is notable because our method also identifies outliers better than the RSRAE on these data sets. When analyzing data with large sample sizes, such as Cover, FMNIST, and Wafer, our method is comparable to or even faster than the non-deep-learning methods.

Our method has relatively inferior running time on Reuters. Unlike the other data sets, Reuters is a super-high-dimensional data set with more than 26,000 features. We conjecture that the high dimensionality might delay the onset of the IM effect. Accelerating the IM effect to overcome this running-time issue on super-high-dimensional data is left for future work.

\begin{table} \begin{tabular}{l|c c c c c c} \hline & **IF** & **OCSVM** & **LOF** & **DeepSVDD** & **RSRAE** & **Ours** \\ \hline **AUC** & 0.574 (0.063) & 0.878 & 0.812 & 0.501 (0.033) & **0.915 (0.022)** & 0.888 (0.043) \\ **AP** & 0.202 (0.051) & 0.536 & 0.388 & 0.130 (0.022) & **0.610 (0.063)** & 0.553 (0.096) \\ \hline \end{tabular} \end{table} Table 6: Results of AUC and AP over the Reuters-21578 data set. Standard deviations are given in parentheses.

\begin{table} \begin{tabular}{l|c c c c c c} \hline & **IF** & **OCSVM** & **LOF** & **DeepSVDD** & **RSRAE** & **Ours** \\ \hline **AUC** & 0.680 (0.004) & 0.731 & 0.327 & 0.505 (0.114) & 0.672 (0.011) & **0.747 (0.005)** \\ **AP** & 0.354 (0.008) & **0.418** & 0.072 & 0.151 (0.063) & 0.203 (0.019) & 0.379 (0.003) \\ \hline \end{tabular} \end{table} Table 5: Results of AUC and AP over the WM-811K data set. Standard deviations are given in parentheses.

**Number of models in the ensemble.** We investigate the effect of the ensemble in the ODIM. We vary the number of models used in the ensemble from one to twenty and compare the performance on several tabular data sets; the results are presented in the middle panel of Figure 4.
The results for the other data sets are summarized in the Appendix. There is a general tendency that using more models helps improve identification performance. The optimal number of models varies across data sets, but the performance is not sensitive to the number of models in the ensemble unless it is too small.

**Learning schedule.** We evaluate the robustness of the ODIM to the learning schedule. We consider the Adam optimizer with various learning rates from 1e-4 to 1e-1; the results on FMNIST are depicted in the right panel of Figure 4. We present the results of the ten classes separately, where each class in turn is regarded as the inlier class. Note that the identification performance rarely changes until the learning rate exceeds 1e-2. As learning rates much larger than 1e-3 are rarely used with the Adam optimizer, we conclude that our method is stable with respect to the learning schedule, which implies that it can be used in practice without delicate tuning.

**Identifying unseen data.** As mentioned in Section 4.2, we held out a portion of each data set to examine outlier identification on unseen data. Tables 8 and 9 summarize the test AUC and AP results. On the tabular data sets, the results for unseen data are similar to those for training data, indicating that our method identifies normal samples well in unseen data.

\begin{table} \begin{tabular}{l|r r r r r r} \hline **Data** & **IF** & **OCSVM** & **LOF** & **DeepSVDD** & **RSRAE** & **Ours** \\ \hline Cover & 6.192 & 2164.270 & 24.959 & 3463.820 & 2451.950 & 358.378 \\ Mammography & 0.408 & 2.482 & 0.221 & 135.958 & 98.356 & 21.002 \\ Pendigits & 0.350 & 1.246 & 1.899 & 83.944 & 62.739 & 15.998 \\ Satellite & 0.369 & 1.152 & 1.735 & 78.876 & 71.493 & 13.846 \\ Satimage-2 & 0.361 & 0.916 & 1.340 & 72.048 & 69.147 & 8.580 \\ Shuttle & 0.987 & 63.146 & 4.426 & 594.588 & 427.291 & 61.471 \\ FMNIST & 4.298 & 52.114 & 15.446 & 744.652 & 1495.926 & 32.859 \\ Wafer & 89.499 & 11498.385 & 910.600 & 4561.028 & 12363.401 & 149.986 \\ Reuters & 4.984 & 106.567 & 1.394 & 16.567 & 23.931 & 92.486 \\ \hline \end{tabular} \end{table} Table 7: Running time comparison for the ODIM and other competitors. All records are measured in seconds.

\begin{table} \begin{tabular}{l|r r r r} \hline **Data** & **IF** & **OCSVM** & **LOF** & **Ours** \\ \hline BreastW & 0.968 & 0.890 & 0.311 & 0.988 \\ Cover & 0.056 & 0.095 & 0.013 & 0.034 \\ Glass & 0.140 & 0.122 & 0.213 & 0.152 \\ Ionosphere & 0.803 & 0.729 & 0.857 & 0.815 \\ Mammography & 0.261 & 0.109 & 0.066 & 0.106 \\ Musk & 0.997 & 0.126 & 0.024 & 1.000 \\ Pendigits & 0.300 & 0.241 & 0.028 & 0.295 \\ Pima & 0.537 & 0.517 & 0.478 & 0.539 \\ Vowels & 0.120 & 0.058 & 0.517 & 0.266 \\ Wbc & 0.605 & 0.332 & 0.353 & 0.549 \\ Arrhythmia & 0.490 & 0.324 & 0.281 & 0.403 \\ Cardio & 0.583 & 0.525 & 0.207 & 0.612 \\ Satellite & 0.679 & 0.593 & 0.376 & 0.659 \\ Satimage-2 & 0.925 & 0.872 & 0.029 & 0.959 \\ Shuttle & 0.951 & 0.950 & 0.061 & 0.837 \\ Thyroid & 0.587 & 0.065 & 0.068 & 0.392 \\ MNIST & 0.974 & 0.978 & 0.954 & 0.976 \\ FMNIST & 0.984 & 0.982 & 0.905 & 0.987 \\ Wafer & 0.423 & 0.496 & 0.109 & 0.451 \\ Reuters & 0.782 & 0.971 & 0.933 & 0.923 \\ \hline \end{tabular} \end{table} Table 8: Test AUC results of the ODIM. For MNIST, FMNIST, and Reuters, we set \(r=0.1\).
\begin{table} \begin{tabular}{l|r r r r} \hline **Data** & **IF** & **OCSVM** & **LOF** & **Ours** \\ \hline BreastW & 0.968 & 0.890 & 0.311 & 0.988 \\ Cover & 0.056 & 0.095 & 0.013 & 0.034 \\ Glass & 0.140 & 0.122 & 0.213 & 0.152 \\ Ionosphere & 0.803 & 0.729 & 0.857 & 0.815 \\ Mammography & 0.261 & 0.109 & 0.066 & 0.106 \\ Musk & 0.997 & 0.126 & 0.024 & 1.000 \\ Pendigits & 0.300 & 0.241 & 0.028 & 0.295 \\ Pima & 0.537 & 0.517 & 0.478 & 0.539 \\ Vowels & 0.120 & 0.058 & 0.517 & 0.266 \\ Wbc & 0.605 & 0.332 & 0.353 & 0.549 \\ Arrhythmia & 0.490 & 0.324 & 0.281 & 0.403 \\ Cardio & 0.583 & 0.525 & 0.207 & 0.612 \\ Satellite & 0.679 & 0.593 & 0.376 & 0.659 \\ Satimage-2 & 0.925 & 0.872 & 0.029 & 0.959 \\ Shuttle & 0.951 & 0.950 & 0.061 & 0.837 \\ Thyroid & 0.587 & 0.065 & 0.068 & 0.392 \\ MNIST & 0.974 & 0.978 & 0.954 & 0.976 \\ FMNIST & 0.984 & 0.982 & 0.905 & 0.987 \\ Wafer & 0.423 & 0.496 & 0.109 & 0.451 \\ Reuters & 0.782 & 0.971 & 0.933 & 0.923 \\ \hline \end{tabular} \end{table} Table 9: Test AP results of the ODIM. For MNIST, FMNIST, and Reuters, we set \(r=0.1\).

Interestingly, the test AP results for the more complex data sets (image and sequential) tend to be higher than those for the training data, regardless of the learning method. We think that for such data the distributions of outliers in the training and test sets differ considerably compared to those of inliers, due to the high dimensionality; this results in larger loss values for test outliers, making them easier to detect.

## 5 Further discussion

### 5.1 Effect of data pre-processing on the ODIM

Proposition 3.1 indicates that the gradient norm grows with the input norm. An interesting fact is that the input norm is not invariant to the normalization process, and neither is the IM effect. This seemingly counter-intuitive phenomenon can be explained as follows. Since the initial parameters in the model considered in Proposition 3.1 are independent mean-zero random variables, data far from the origin are not explained well by an initial linear factor model, and thus the norms of the corresponding gradients become larger.

The above observation carries an important implication: the IM effect depends on the choice of the normalization process. For illustration, we apply the ODIM to standardization scaling, a normalization process that forces every feature to have mean zero and variance one. Figure 3 compares the input norms and the gradient norms of the initial model for the min-max scaled data and the standardization scaled data. The input norms of inliers in the min-max scaled data are similar to those of outliers, while the input norms of inliers in the standardization scaled data are smaller. In turn, as implied by Proposition 3.1, the gradient norms of inliers are similar to those of outliers for the min-max scaled data, while the gradient norms of outliers are much larger for the standardization scaled data. Note that inliers in the standardization scaled data mostly lie around the origin, and thus their input norms become smaller.

Smaller gradients for inliers generally result in performance degradation. In Table 10, we empirically observe that among the 16 tabular data sets, the ODIM with standardization scaling gives better results on only five data sets. Even though it is better than standardization scaling, min-max scaling is by no means optimal for the IM effect.

\begin{table} \begin{tabular}{l|c c} \hline **Data** & **Min-max** & **Standardization** \\ \hline BreastW & **0.992** & 0.664 \\ Cover & 0.837 & **0.961** \\ Glass & **0.725** & 0.630 \\ Ionosphere & **0.914** & 0.863 \\ Mammography & **0.848** & 0.767 \\ Musk & **1.000** & 0.799 \\ Pendigits & **0.953** & 0.941 \\ Pima & **0.706** & 0.437 \\ Vowels & **0.904** & 0.639 \\ Wbc & **0.941** & 0.856 \\ Arrhythmia & **0.800** & 0.769 \\ Cardio & 0.916 & **0.943** \\ Satellite & 0.690 & **0.740** \\ Satimage-2 & **0.997** & 0.969 \\ Shuttle & 0.981 & **0.995** \\ Thyroid & 0.928 & **0.965** \\ \hline \end{tabular} \end{table} Table 10: Comparison of two data pre-processing methods: 1) min-max and 2) standardization. We report the AUC results.
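For reference, the two pre-processing schemes compared in Table 10 amount to the following transformations (a trivial sketch; the small constants that guard against division by zero are added by us):

```python
import numpy as np

def min_max(X):
    """Scale each feature to [0, 1]."""
    lo, hi = X.min(axis=0), X.max(axis=0)
    return (X - lo) / (hi - lo + 1e-12)

def standardize(X):
    """Force each feature to have mean zero and variance one."""
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
```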
We leave the optimal choice of the normalization process and/or the choice of initial parameters as a future research topic.

### 5.2 ODIM with labeled data

When outlier labels are available, some studies have exploited this additional information to solve outlier detection tasks more efficiently (Ruff et al., 2020; Daniel et al., 2019). As far as we know, however, these existing works require that all outliers be labeled, which is equivalent to the SOD setting. The only difference of (Ruff et al., 2020; Daniel et al., 2019) compared to conventional SOD solvers is that they cast the problem as a one-class classification problem rather than a two-class one. While it is very costly to obtain perfectly labeled data, partially labeled data are frequently encountered in practice. In this subsection, we modify the ODIM for such a situation. That is, apart from the unlabeled training data set \(\mathcal{U}^{tr}=\{\mathbf{x}_{1},\ldots,\mathbf{x}_{n}\}\), we also have a labeled outlier data set \(\mathcal{L}^{tr}=\{(\mathbf{x}_{1}^{l},1),\ldots,(\mathbf{x}_{m}^{l},1)\}\). Note that outliers still exist in \(\mathcal{U}^{tr}\). We simply adopt the idea of (Daniel et al., 2019), which encourages the log-likelihood of known outliers to decrease via a variational _upper_ bound.
For \(u>1\), the upper bound, called the \(\chi\) upper bound (CUBO), is given as:

\[L^{\text{CUBO}}(\theta,\phi;\mathbf{x}):=\frac{1}{u}\log\mathbb{E}_{\mathbf{z}\sim q(\mathbf{z}|\mathbf{x};\phi)}\left[\left(\frac{p(\mathbf{x}|\mathbf{z};\theta)p(\mathbf{z})}{q(\mathbf{z}|\mathbf{x};\phi)}\right)^{u}\right]. \tag{6}\]

With the above CUBO term (6), we modify the loss function of the ODIM by subtracting the expected CUBO over \(\mathcal{L}^{tr}\) from the original IWAE loss function, to obtain

\[\mathbb{E}_{\mathbf{x}\sim\mathcal{U}^{tr}}\widehat{L}^{\text{IWAE}}(\theta,\phi;\mathbf{x})-\gamma\cdot\mathbb{E}_{\mathbf{x}\sim\mathcal{L}^{tr}}\widehat{L}^{\text{CUBO}}(\theta,\phi;\mathbf{x}),\]

where

\[\widehat{L}^{\text{CUBO}}(\theta,\phi;\mathbf{x}):=\left(\frac{p(\mathbf{x}|\mathbf{z};\theta)p(\mathbf{z})}{q(\mathbf{z}|\mathbf{x};\phi)}\right)^{u}\]

is a single-sample estimate with \(\mathbf{z}\) drawn from \(q(\mathbf{z}|\mathbf{x};\phi)\), and \(\gamma>0\) is a tuning parameter controlling the weight of the CUBO loss. The CUBO loss generally increases the IWAE per-sample loss values of outliers, separating the two components of the GMM-2 more clearly. In this paper, we set the value of \(\gamma\) to one.

To investigate the improvements of our modified method, we assess the training AUC and AP values over several tabular data sets with various proportions of labeled outliers; the results are summarized in Tables 11 and 12. Here, \(l\in[0,1]\) denotes the ratio of labeled outliers to all outliers. Note that the case \(l=0.0\) is equivalent to using only \(\mathcal{U}^{tr}\). It is clearly seen that using label information enhances identification performance by a large margin, especially for the AP, even when the proportion of labeled data is small.

\begin{table} \begin{tabular}{l|c c c} \hline Data & \(l=0.0\) & \(l=0.3\) & \(l=0.5\) \\ \hline Arrhythmia & 0.800 & 0.837 & 0.888 \\ Cardio & 0.916 & 0.991 & 0.993 \\ Satellite & 0.690 & 0.868 & 0.881 \\ Satimage-2 & 0.997 & 0.998 & 0.999 \\ Shuttle & 0.981 & 0.990 & 0.990 \\ Thyroid & 0.928 & 0.995 & 0.995 \\ \hline \end{tabular} \end{table} Table 11: Training AUC values with various values of \(l\) (\(l=0.0,0.3,0.5\)).

\begin{table} \begin{tabular}{l|c c c} \hline Data & \(l=0.0\) & \(l=0.3\) & \(l=0.5\) \\ \hline Arrhythmia & 0.443 & 0.767 & 0.772 \\ Cardio & 0.564 & 0.934 & 0.943 \\ Satellite & 0.652 & 0.849 & 0.849 \\ Satimage-2 & 0.949 & 0.954 & 0.958 \\ Shuttle & 0.947 & 0.977 & 0.979 \\ Thyroid & 0.327 & 0.844 & 0.845 \\ \hline \end{tabular} \end{table} Table 12: Training AP values with various values of \(l\) (\(l=0.0,0.3,0.5\)).

Note that the proposed modification of the ODIM for partially labeled data can be improved further. For example, we could use the label information to select the optimal number of updates. There is further room to improve the ODIM for partially labeled data, which we leave for future work.

## 6 Concluding remarks

This paper proposed a fast, powerful, and easy-to-use UOD method called the ODIM. The ODIM is inspired by a new observation called the IM effect: deep generative models tend to memorize inliers first during training. Combined with a technique to select the optimal number of training updates and an ensemble method, the ODIM identifies outliers effectively and efficiently, providing consistently superior results regardless of data type with much faster running times. As far as the authors know, there are few theoretical studies about the behavior of deep neural networks in the early training phase. It would be valuable to understand theoretically when and why the IM effect (or the memorization effect) emerges. Based on such theoretical understanding, the suboptimal behavior of the ODIM on super-high-dimensional data could be improved.
2302.12165
Prosodic features improve sentence segmentation and parsing
Parsing spoken dialogue presents challenges that parsing text does not, including a lack of clear sentence boundaries. We know from previous work that prosody helps in parsing single sentences (Tran et al. 2018), but we want to show the effect of prosody on parsing speech that isn't segmented into sentences. In experiments on the English Switchboard corpus, we find prosody helps our model both with parsing and with accurately identifying sentence boundaries. However, we find that the best-performing parser is not necessarily the parser that produces the best sentence segmentation performance. We suggest that the best parses instead come from modelling sentence boundaries jointly with other constituent boundaries.
Elizabeth Nielsen, Sharon Goldwater, Mark Steedman
2023-02-23T17:03:36Z
http://arxiv.org/abs/2302.12165v1
# Prosodic features improve sentence segmentation and parsing

###### Abstract

Parsing spoken dialogue presents challenges that parsing text does not, including a lack of clear sentence boundaries. We know from previous work that prosody helps in parsing single sentences (Tran et al., 2018), but we want to show the effect of prosody on parsing speech that isn't segmented into sentences. In experiments on the English Switchboard corpus, we find prosody helps our model both with parsing and with accurately identifying sentence boundaries. However, we find that the best-performing parser is not necessarily the parser that produces the best sentence segmentation performance. We suggest that the best parses instead come from modelling sentence boundaries jointly with other constituent boundaries.

## 1 Introduction

Parsing spoken dialogue poses unique difficulties, including speech disfluencies and a lack of defined sentence boundaries. Because of these difficulties, current parsers struggle to accurately parse English speech transcripts, even when they handle other English text well. Research has shown that prosody can improve parsing performance for speech that is already divided into sentence-like units (SUs)1 (Tran et al., 2018, 2019).

Footnote 1: We follow Kahn et al. (2004) in using the term 'sentence-like units' rather than 'sentences' throughout, since conversational speech doesn't always consist of syntactically complete sentences.

In this work, we hypothesize that prosodic features from the speech signal will help with parsing speech that _isn't_ segmented into SUs, by improving the parser's ability to find SU boundaries. We test this hypothesis by inputting entire dialog turns to a neural parser without SU boundaries. These turns resemble the input a dialog agent would receive from a user. We try two approaches: an end-to-end model that jointly segments and parses the input, and a pipeline model that first segments and then parses the input. To our knowledge, there has been no previous research on combining SU segmentation and parsing into a single task.

Following Tran et al. (2018, 2019), we consider two experimental conditions for each model: inputting text features only, and inputting both text and prosodic features extracted directly from the audio signal. We also follow them in using the Switchboard corpus of English conversational dialogue.

Although overall parse scores are lower for parsers that don't have access to gold-standard SU boundaries, our main hypothesis holds: parsers using both text and prosodic features are more accurate than those using text alone. Unsurprisingly, the end-to-end model performs parsing better than the pipeline model because it doesn't suffer from error propagation. We expected to find that gains in parsing quality would come primarily because models with access to prosody would perform SU segmentation better. We do find that prosody helps all models improve their SU segmentation. However, the pipeline model produces much better segmentation scores than the end-to-end model, and yet it still does worse at parsing. In Section 5, we discuss why segmentation and parsing quality do not always correlate in this task. However, even though the best parses and segmentations are not always produced by the same model, all models perform better at both tasks with prosodic information.

Our primary contributions are:

* We build an end-to-end model that jointly performs SU segmentation and parsing.
* We show that prosodic features are helpful for both SU segmentation and parsing, whether using an end-to-end or pipeline model.
* We show that an end-to-end model performs parsing better than a pipeline model, specifically because the end-to-end model is able to model SU boundaries jointly with other constituent boundaries.

## 2 Background: prosody and syntax

Prosodic signals divide speech into units (Pierrehumbert, 1980). The location and type of these prosodic units are determined by information structure (Steedman, 2000), disfluencies (Shriberg, 2001), and, to some extent, syntax (Cutler et al., 1997). Some psycholinguistic research shows that in experimental conditions, speakers can use prosody to predict syntax (e.g., Kjelgaard and Speer, 1999). However, Cutler et al. (1997) argue that English speakers often "fail to exploit" this prosodic information even when it is present, so it isn't actually a signal for syntax in practice. Many computational linguists have experimented with this possible link between syntax and prosody by incorporating prosody into syntactic parsers, which improves performance in some cases, but not all (e.g., Noeth et al., 2000; Gregory et al., 2004; Kahn et al., 2005; Tran et al., 2018).

Prosody's mixed record may be caused by the fact that prosodic units below the SU don't always coincide with traditional syntactic constituents (Selkirk, 1984, 1995). In fact, the only prosodic boundaries that consistently coincide with syntactic boundaries are the ends of SUs (Wagner and Watson, 2010). Prosodic boundaries at the ends of SUs are more distinctive, with longer pauses and more distinctive pitch and intensity variations, making prosody a reliable signal for SU boundaries, even though it is less helpful for lower-level syntactic structure. Some researchers have used prosody to help in SU boundary detection. Examples of SU segmentation models that benefit from prosody include Gotoh and Renals (2000); Kolar et al. (2006); Kahn et al. (2004); Kahn and Ostendorf (2012), who all used traditional statistical models (e.g., HMMs, finite-state machines, and decision trees), and Xu et al. (2014), who used a neural model.

## 3 Task and data

We use the American English corpus Switchboard NXT (henceforth SWBD-NXT) (Calhoun et al., 2010) to allow us to compare performance with Tran et al. (2018) and Tran et al. (2019). SWBD-NXT comprises 642 telephone dialogues between strangers.2 These dialogues are transcribed and hand-annotated with Penn Treebank-style constituency parses, and have no punctuation. For acoustic features, we extract features for pitch, intensity, pause duration, and word duration from the audio signal, largely following the feature extraction procedure of Tran et al. (2018), summarized in Appendix A.1.

Footnote 2: This corpus is relatively small compared to many speech datasets used today, but it is the largest speech corpus we know of with hand-annotated constituency parses. While other corpora with hand-annotated dependency parses exist (e.g., UD corpora such as Wong et al. (2017) and Dobrovoljc and Nivre (2016)), these are all significantly smaller than Switchboard NXT.

The transcript divides the corpus into _SUs_ and _turns_. Since not all utterances are full sentences, we use the generic term 'sentence-like unit' (SU). A turn is a contiguous span of speech by a single speaker. Not all turns in SWBD-NXT contain multiple SUs: of a total of 60.1k turns, 35.8k consist of a single SU. The average number of SUs per turn is 1.82.
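As a rough illustration of the acoustic side of this pipeline, the sketch below extracts frame-level pitch and intensity tracks with librosa; this is not the authors' recipe (their exact features are specified in their Appendix A.1), and the frame parameters and file name are our assumptions.

```python
import librosa
import numpy as np

# Load one turn's audio; Switchboard audio is sampled at 8 kHz.
y, sr = librosa.load("turn.wav", sr=None)

# Pitch track via probabilistic YIN; unvoiced frames come back as NaN.
f0, voiced, _ = librosa.pyin(y, fmin=60, fmax=350, sr=sr)
log_f0 = np.log(f0)                      # log-Hz pitch, NaN where unvoiced

# Intensity proxy: frame-level RMS energy.
intensity = librosa.feature.rms(y=y)[0]
```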
We follow the general approach of Tran et al. (2018), but where they parse a single SU at a time, we parse a whole turn. We approach this task in two ways: an end-to-end model (SU segmentation and parsing done jointly) and a pipeline model (SU segmentation done before parsing). Both models return constituency parses for each turn in the form of Penn Treebank (PTB)-style trees. In order to keep the output in the form of valid PTB trees for the end-to-end model, we add a top-level constituent, labelled turn, to all turns, however many SUs they consist of. As we discuss in Section 6, this innovation allows the end-to-end model to treat SUs in the same way that it treats other syntactic units. To avoid memory problems from too-long inputs, we filter out two problematically long turns from the training set (out of 49,294 turns). We do not have to remove any turns from the development or test sets. This leaves the maximum turn length at 270 tokens. We also remove any turns for which some or all of the audio is missing.

## 4 Model

For both the end-to-end model and the pipeline parser, we use Tran et al. (2019)'s parser, extending the code base described in their paper.3 The model is a neural constituency parser based on Kitaev and Klein (2018)'s text-only parser, with a transformer-based encoder and a chart-style decoder based on Stern et al. (2017) and Gaddy et al. (2018). This encoder-decoder is augmented with a CNN on the input side that handles prosodic features (Tran et al., 2019). For further description of the model and hyperparameters, see Appendices A.2 and A.3. The text is encoded using 300-dimensional GloVe embeddings (Pennington et al., 2014).

For the pipeline, we first segment into SUs and then parse the resulting SUs. For segmentation, we use a modified version of the parser: with the same encoder, we change the decoder to do only sequence labeling, marking tokens as either SU-internal or SU-final. We then parse the predicted SUs. Rather than using a parser trained on gold SUs, we train the parser on the SUs that the segmenter predicted on the training set. This allows the model to learn to produce parses on imperfectly segmented SUs and leads to better parsing scores.

We report two metrics for both the pipeline and end-to-end models: parse and SU segmentation F1 scores. Parse F1 is calculated on the whole turn using a Python implementation of EVALB.4 We don't count turn constituents, so that turn-based and SU-based parse scores are comparable. The SU segmentation F1 score is calculated on all turn-medial SU boundaries; turn-final SU boundaries are not counted. In order to calculate the SU segmentation F1 score for the end-to-end model, we consider every node that is a direct child of the tree's top turn node to be an SU; a sketch of this procedure is given below. That is, SUs are just one kind of syntactic constituent, differentiated only by their location in the tree.

Footnote 4: [https://github.com/ekayen/PYEVALB](https://github.com/ekayen/PYEVALB)

Unless stated otherwise, we train each model on five random seeds and report the mean of each metric over all five seeds. To determine the statistical significance of differences between models, we use bootstrap resampling (Efron and Tibshirani, 1994), resampling \(10^{5}\) times.
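The SU-boundary scoring can be read off a turn tree directly; here is an illustrative sketch using nltk trees (helper names and the toy trees are ours, and the turn-final boundary is simply never added):

```python
from nltk import Tree

def su_boundaries(turn_tree):
    """Turn-medial SU boundary positions: each direct child of the top TURN
    node is an SU; the turn-final boundary is not counted."""
    boundaries, pos = set(), 0
    children = list(turn_tree)
    for child in children[:-1]:
        pos += len(child.leaves())
        boundaries.add(pos)
    return boundaries

def seg_f1(gold, pred):
    g, p = su_boundaries(gold), su_boundaries(pred)
    tp = len(g & p)
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

gold = Tree.fromstring("(TURN (S (NP i) (VP know)) (S (NP that) (VP helps)))")
pred = Tree.fromstring("(TURN (S (NP i) (VP know) (NP that)) (S (VP helps)))")
print(seg_f1(gold, pred))  # 0.0: the predicted boundary is off by one token
```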
## 5 Results and discussion

Experiments with the end-to-end model show that prosody has a statistically significant, though small, effect on parsing performance (see Table 1). In the pipeline model, the effect of prosody is larger: the gain in parse F1 from adding prosody is 0.94 for the pipeline model, versus only 0.52 for the end-to-end model. However, the pipeline model's parse F1 score is lower than the end-to-end model's. It is not surprising that the end-to-end model does better at parsing, since we expect errors to propagate from the segmentation step to the parsing step in the pipeline model. What is surprising is that the pipeline model has a much higher segmentation score than the end-to-end model, which complicates the error-propagation account.

We can explain this discrepancy by the kinds of errors each model makes. First, the end-to-end model tends not to predict top nodes for each gold SU, instead connecting lower nodes in the tree directly to the turn node, leading to oversegmentation, as shown in Figure 1(a). In Appendix A.4 we describe how we discouraged oversegmentation, but the end-to-end model still tends to oversegment severely. The pipeline model instead tends to undersegment. This tendency is reflected in the end-to-end and pipeline models' similar segmentation recall, compared to the end-to-end model's very low precision, shown in Table 2.

\begin{table} \begin{tabular}{l c c c} \hline \hline & **Gold SUs** & **E2E** & **Pipeline** \\ \hline **Dev. set:** & & & \\ Text only & \(90.31\) & \(85.70\) & \(84.34\) \\ Text+prosody & \(90.90\) & \(86.21\) & \(85.28\) \\ **Test set:** & & & \\ Text only & \(90.29\) & \(86.03\) & \(84.68\) \\ Text+prosody & \(90.65\) & \(86.55\) & \(85.62\) \\ \hline \hline \end{tabular} \end{table} Table 1: Development and test set parsing F1 score of the end-to-end and pipeline models (and for comparison, a model that receives gold standard SUs as input). Results averaged over 5 random seeds.

\begin{table} \begin{tabular}{l l c c} \hline \hline & & **E2E** & **Pipeline** \\ \hline **Dev. set:** & & & \\ Text only & F1 & \(66.32\) & \(63.74\) \\ Text+prosody & F1 & \(72.95\) & \(77.38\) \\ & Prec & \(69.46\) & \(79.44\) \\ & Rec & \(76.92\) & \(75.69\) \\ **Test set:** & & & \\ Text only & F1 & \(71.01\) & \(66.98\) \\ Text+prosody & F1 & \(72.94\) & \(77.38\) \\ \hline \hline \end{tabular} \end{table} Table 2: Segmentation F1 score of the turn-based models compared to the SU-based model, with precision and recall given for some cases, averaged over 5 random seeds.

The examples in Figure 1 show the effect this has on segmentation: the end-to-end example shown here has a segmentation F1 score of 57.1, while the pipeline has a score of 66.7. However, when scoring the parses, the end-to-end model is penalized much less. Both the end-to-end and pipeline models omit three nodes from the tree in Figure 1. However, the pipeline model also predicts several nodes high up in the tree that the end-to-end model does not. This leads to a much lower parse F1 score for the pipeline model: 69.2, compared to the end-to-end model's 88.0.

In fact, the pipeline's better segmentation seems to actively _worsen_ its parse score. The pipeline's undersegmentation comes from predicting nodes that dominate entire predicted SUs, like the s node in Figure 1(b). Because the s node erroneously spans the entire turn, it is far more likely that its vp daughter will also span too many nodes, as it does in this example. In Appendix A.5, we discuss measures we took to reduce this kind of error propagation, none of which eliminated this problem.
In order to demonstrate how the phenomena shown in this example affect the performance on the whole development set, we look at the overall interaction of parse precision and segmentation accuracy. On examples where the SU boundaries are incorrectly predicted, as in the example in Figure 1, the pipeline model predicts many more incorrect nodes, and its parse precision declines by 5.22 points on the development set (from 85.72 for all parses to 80.50 for parses with incorrect SU boundaries). By comparison, the end-to-end model's parse precision is less affected by segmentation quality -- it only drops by 3.96 points (from 86.20 to 82.24).

These results suggest that the end-to-end model's superior parsing ability comes from the fact that it is able to model all syntactic units, including SUs, in the same way. The pipeline model has to treat SUs as being logically prior to and distinct from all sub-SU units, which leads to the error propagation described above. By modeling SUs and other syntactic constituents similarly, the end-to-end model is able to propose the sub-SU nodes that lead to the best parses overall, without being bound to a certain SU segmentation.

Figure 1: Comparison of the gold parse to the pipeline's predicted tree. The end-to-end model's tree isn't shown because the only difference between it and the gold parse is the omission of the three nodes that are highlighted in green in the gold parse. This example has been edited for length and clarity, and part-of-speech tags have been omitted.

## 6 Conclusion

Previous work has shown that prosody improves parse quality. In this work, we show that prosody improves parse and SU segmentation quality simultaneously. We show that parse and SU segmentation score are not necessarily correlated: A pipeline model does SU segmentation better, but an end-to-end model produces better parses. We propose that this is because the end-to-end model models SU boundaries in the same way as other syntactic boundaries. By treating SUs as just another kind of syntactic unit, our model is able to take advantage of prosody to produce better parses overall.

## 7 Limitations

A primary limitation of this work is the data used. We chose to use Switchboard-NXT because it is one of a very few speech corpora with gold constituency parses available, and it is used in the work that is most directly comparable to ours. Switchboard-NXT includes only North American English, and was specifically recorded in the early 1990s. We therefore don't know the extent to which our results hold for other varieties of English or for other languages.

Another limitation of this corpus is its size and quality. Switchboard-NXT is relatively small (the section we used was approximately 50k turns), due to the difficulty of producing the thorough annotations this corpus includes. The audio quality is also limited due to the age of the data: The original Switchboard recordings have a sample rate of just 8 kHz (compared to standard compact disk sample rates of 44.1 kHz). This reduces the resolution of acoustic features, and particularly affects high frequencies. This could affect the quality of the pitch feature in particular. We would strongly prefer more and better quality data; unfortunately, collecting and annotating a comparable corpus would be prohibitively expensive.

Some of our conclusions about model behavior may be somewhat contingent on architecture. In particular, the end-to-end model's tendency to oversegment might not be as severe in other parsers.
We chose to use Kitaev and Klein (2018)'s parser in order to facilitate comparisons with the work of Tran et al. (2019), but this parser is unusual in that there is no direct modeling of the relationship between parent and child nodes, which is part of what makes it an especially efficient parser. This means that the model doesn't directly penalize things like connecting leaf nodes to a top-level turn node, which is seen in many cases of oversegmentation by the end-to-end model. If there were a mechanism for incorporating parent-child relationships between nodes into the loss, then it's quite possible that oversegmentation would be less pronounced in the end-to-end model. Experimenting with different parser architectures with this goal is a possible direction for future work.

## 8 Ethical statement

In Sections 3 and 7, we describe the demographic composition of our dataset in as much detail as we have access to (noting that it contains North American English of the 1990s). It is likely that the system as we develop it here would not generalize consistently to speakers of other varieties of English. We anticipate that a parser like ours, which handles data that doesn't have gold SU boundaries, would be most likely deployed in spoken dialog systems. Our innovations are very unlikely to increase the harms that a spoken dialog system already poses (which include issues such as not being accessible to speakers of marginalized language varieties and potentially compromising user privacy).
2305.19097
A generalized framework to predict continuous scores from medical ordinal labels
Many variables of interest in clinical medicine, like disease severity, are recorded using discrete ordinal categories such as normal/mild/moderate/severe. These labels are used to train and evaluate disease severity prediction models. However, ordinal categories represent a simplification of an underlying continuous severity spectrum. Using continuous scores instead of ordinal categories is more sensitive to detecting small changes in disease severity over time. Here, we present a generalized framework that accurately predicts continuously valued variables using only discrete ordinal labels during model development. We found that for three clinical prediction tasks, models that take the ordinal relationship of the training labels into account outperformed conventional multi-class classification models. Particularly the continuous scores generated by ordinal classification and regression models showed a significantly higher correlation with expert rankings of disease severity and lower mean squared errors compared to the multi-class classification models. Furthermore, the use of MC dropout significantly improved the ability of all evaluated deep learning approaches to predict continuously valued scores that truthfully reflect the underlying continuous target variable. We showed that accurate continuously valued predictions can be generated even if the model development only involves discrete ordinal labels. The novel framework has been validated on three different clinical prediction tasks and has proven to bridge the gap between discrete ordinal labels and the underlying continuously valued variables.
Katharina V. Hoebel, Andreanne Lemay, John Peter Campbell, Susan Ostmo, Michael F. Chiang, Christopher P. Bridge, Matthew D. Li, Praveer Singh, Aaron S. Coyner, Jayashree Kalpathy-Cramer
2023-05-30T15:00:49Z
http://arxiv.org/abs/2305.19097v1
# A generalized framework to predict continuous scores from medical ordinal labels

###### Abstract

Background: Many variables of interest in clinical medicine, like disease severity, are recorded using discrete ordinal categories such as normal/mild/moderate/severe. These labels are used to train and evaluate disease severity prediction models. However, ordinal categories represent a simplification of an underlying continuous severity spectrum. Using continuous scores instead of ordinal categories is more sensitive to detecting small changes in disease severity over time. Here, we present a generalized framework that accurately predicts continuously valued variables using only discrete ordinal labels during model development. Methods: We study the framework using three datasets: disease severity prediction for retinopathy of prematurity and knee osteoarthritis and breast density prediction from mammograms. For each dataset, deep learning models were trained using discrete labels, and the model outputs were converted into continuous scores. The quality of the continuously valued predictions was compared to expert severity scores that were more detailed than the discrete training labels. We study the performance of conventional and Monte Carlo dropout multi-class classification, ordinal classification, regression, and Siamese models. Findings: We found that for all three clinical prediction tasks, models that take the ordinal relationship of the training labels into account outperformed conventional multi-class classification models. Particularly, the continuous scores generated by ordinal classification and regression models showed a significantly higher correlation with expert rankings of disease severity and lower mean squared errors compared to the multi-class classification models. Furthermore, the use of MC dropout significantly improved the ability of all evaluated deep learning approaches to predict continuously valued scores that truthfully reflect the underlying continuous target variable. Interpretation: We showed that accurate continuously valued predictions can be generated even if the model development only involves discrete ordinal labels. The novel framework has been validated on three different clinical prediction tasks and has proven to bridge the gap between discrete ordinal labels and the underlying continuously valued variables.
Keywords: Retinopathy of prematurity, knee osteoarthritis, breast density, deep learning, continuous score, weakly supervised learning

+ Footnote †: Corresponding author.
## 1 Introduction

Deep learning (DL) models have achieved strong performance on disease severity prediction for numerous diseases such as diabetic retinopathy, retinopathy of prematurity, and osteoarthritis.[3, 4, 5] Yet, these successes have been built on the simplifying assumption that disease severity prediction can be formulated as a simple classification task.
Researchers mostly use DL architectures that are intended for the classification of nominal categories and ignore the inherent ordinal nature of the available training labels. More so, disease severity prediction tasks are often simplified even further by treating them as binary problems, e.g., the identification of severe disease [6], cases for referral [7, 8], or disease detection [3].

#### Advantages of continuous scores

In addition to the class, the position of a case on the continuous spectrum contains valuable clinical information that is not captured by current approaches.[9] Therefore, the use of continuous scores to describe clinical variables that are distributed on a continuous spectrum provides several advantages over discrete ordinal variables. First, continuous metrics allow for the detection and quantification of changes within a class, i.e., an increase in disease severity that does not constitute a transition between class \(n\) and \(n+1\).[10, 11] In Figure 1, the purple and magenta arrows represent similarly large increases in disease severity. While the magenta transition would get detected by the traditional classification approach, the purple transition would not. The only difference between the two transitions is that the magenta one crosses the class boundary from moderate to severe, while the purple arrow represents a within-class change. The detection of within-class changes makes it possible to detect disease deterioration earlier and act upon it if required. Second, the higher degree of information presented in continuous vs. ordinal scores can be useful for efficient patient stratification, particularly the identification of cases close to a decision boundary. Third, expert perception of class boundaries can be subject to changes over time [12]. Therefore, models ignoring the continuous nature of disease severity could become less valuable over time as, e.g., the perception of what constitutes mild versus moderate disease severity shifts. Lastly, an algorithm that predicts a continuous score is more likely to fulfill notions of individual fairness, as similar individuals that are close to the label decision boundaries will be more likely to receive similar scores compared to using a simple classification algorithm [13].

Figure 1: **Relationship between the underlying continuous variable of interest and the training and evaluation labels.** The conversion of a latent continuously distributed variable into discrete ordinal variables for model development represents a loss of information. The purple and magenta arrows represent temporal changes in disease severity. The available annotation types for evaluating the continuous predictions are presented in the green box: rankings, ordinal ratings on a more detailed scale than the training labels, and continuously valued measurements.

#### Related work

Previous attempts to predict continuous disease severity scores have made use of either conventional classification networks or Siamese networks. Redd et al. proposed to aggregate the softmax outputs from a conventional 3-class convolutional network into one continuously valued vascular severity score for disease severity classification in retinopathy of prematurity (ROP) [14]. This score strongly correlates with experts' ranking of overall disease severity. Furthermore, changes in this score over time accurately reflect disease progression [11].
However, the training objective of multi-class classification models is to separate the latent space representations of classes as much as possible. This could therefore lead to unstable predictions and confusion at the class boundaries. Using Siamese networks, Li et al. showed that the continuously valued difference relative to a reference pool of images correlates with experts' rankings of disease severity and reflects temporal changes in severity in knee osteoarthritis and ROP [10]. Similarly, in a study on an x-ray based severity score for COVID-19, a score generated by a Siamese network highly correlated with radiologist-determined severity scores. However, the performance of Siamese networks for the prediction of continuous scores has not been compared to other methods, and their calibration has not been studied yet.

#### Study aim

Here, we aim to identify model development strategies that lead to the prediction of accurate continuous scores. Most importantly, while we utilize widely available discrete ordinal labels for training, the models' ability to predict accurate continuous scores is evaluated using labels on a finer scale than the training ground truth (illustrated in the green box in Figure 1) on three datasets: disease severity prediction for ROP and knee osteoarthritis and breast density estimation from mammograms. Following this process, we aim to show that it is possible to develop models that are capable of recovering the information lost through the discretization of the continuous target variable.

## 2 Methods

### Datasets

All images were de-identified prior to data access; ethical approval for this study was therefore not required. Dataset splits were performed on a patient level. The sizes of all datasets are listed in Table 1, and class distributions for each dataset are listed in Appendix A.

#### ROP

ROP is an eye disorder mainly developed by prematurely born babies and is among the leading causes of preventable childhood blindness [15]. It is characterized by a continuous spectrum of abnormal growth of retinal blood vessels, which is typically categorized into three discrete severity classes: normal, pre-plus, or plus [1, 16]. We use the same dataset, labels, and preprocessing as described by Brown et al. [4]. In addition to the standard diagnostic labels, the test set was labeled by five raters on a scale from 1 to 9, and five experts ranked an additional 100 ROP photographs based on severity [4, 17].

#### Knee osteoarthritis

The global prevalence of knee osteoarthritis is 22.9% for individuals over 40, causing chronic pain and functional disability [18]. Knee osteoarthritis can be diagnosed with radiographic images, and disease severity is typically evaluated using the Kellgren-Lawrence (KL) scale consisting of the following severity categories: none, doubtful, mild, moderate, and severe [19]. We use the Multicenter Osteoarthritis Study (MOST) dataset. 100 images from the test set were ranked by their severity by three experts [10]. All images were center cropped to 224x224 pixels and intensity scaled between 0 and 1 as preprocessing.

#### Breast density

Breast density is typically categorized as fatty, scattered, heterogeneous, or dense, depending on the amount of fibroglandular tissue present [20]. Women with high breast density are at a higher risk of developing breast cancer and require additional MRI screening [21, 22]. We use a subset of the Digital Mammographic Imaging Screening Trial (DMIST) dataset [23].
Furthermore, for 1892 mammograms from the test dataset, an automatic assessment of the volumetric breast density was obtained using the commercially available Volpara Density software, which has demonstrated a good agreement with expert ratings of breast density (see Figure 5).[24, 25] Preprocessed mammograms were of size 224x224 pixels.

### Model training

#### Model types

Four model types were trained: multi-class and ordinal classification, regression, and Siamese. The model output was converted to a continuous score value for each model to represent the underlying severity spectrum of the medical tasks studied.

#### Classification

All classification models for this study were trained with cross-entropy loss. The continuous severity score is computed as the sum of the softmax outputs weighted by their class index (Equation 1), leading to scores from 0 to \(k-1\):

\[Cl_{score}=\sum_{i=1}^{k}p_{i}\times(i-1) \tag{1}\]

with \(k\) being the number of classes and \(p_{i}\) the softmax probability of class \(i\).

#### Ordinal classification

In ordinal classification, the task is broken up into \(k-1\) binary classification tasks, leading to one output unit less than the number of classes.[26] During training, the ordinal loss function penalizes larger misclassification errors more than smaller errors (e.g., predicting class 2 when the ground truth label is 0 is penalized more than if the model predicts class 1). We use the CORAL loss as described by Cao et al. for model optimization.[27] A continuous score is generated by summing over the output probabilities (Equation 2), resulting in values ranging from 0 to \(k-1\):

\[O_{score}=\sum_{i=1}^{k-1}p_{i} \tag{2}\]

\begin{table}
\begin{tabular}{l c c c c}
\hline
**Dataset** & **Size** & **Training** & **Validation** & **Test** \\
\hline
ROP & 5611 & 4322 & 722 & 467 (9-point scale), 100 (ranked) \\
\hline
Knee OA & 14273 & 12268 & 1905 & 100 (ranked) \\
\hline
Breast Density & 83034 & 70293 & 10849 & 1892 (Volpara Density) \\
\hline
\end{tabular}
\end{table}
Table 1: Summary of dataset size and training/validation/test splits of the three datasets used for this study: disease severity prediction in retinopathy of prematurity (ROP) and knee osteoarthritis (OA) and breast density prediction.

_Regression_ Similar to ordinal models, regression models require the ordinality of the target output. However, unlike ordinal models, the output of regression models is a continuous value rather than a discrete class. The regression models were trained using the mean squared error loss function with the class number as the target value. The raw model output yields a continuous value; hence, no conversion is required to obtain a continuous score.

_Siamese_ Siamese models compare pairs of images to evaluate their similarity. They are composed of two branches consisting of identical sub-networks with shared weights, where each of the two images is processed in one of the branches. The lower the Euclidean distance between the outputs of each branch, the higher the similarity between the inputs. Following a procedure described by Li et al. [10], at test time, the target images are compared to ten anchor images associated with class 0. Here, the continuous score is the median of the Euclidean distances between the target and the ten anchor images.
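To make the score conversions concrete, here is a minimal NumPy sketch of Equations 1 and 2 and of the Siamese anchor-distance score; the function names and the toy inputs are ours, for illustration only:

```python
import numpy as np

def multiclass_score(softmax_probs):
    """Equation 1: expected class index under the softmax distribution,
    giving a continuous score between 0 and k-1."""
    k = softmax_probs.shape[-1]
    return float(np.sum(softmax_probs * np.arange(k)))

def ordinal_score(binary_probs):
    """Equation 2: sum of the k-1 CORAL-style binary probabilities
    P(y > i), also giving a score between 0 and k-1."""
    return float(np.sum(binary_probs))

def siamese_score(target_embedding, anchor_embeddings):
    """Median Euclidean distance between a target embedding and the
    embeddings of ten class-0 anchor images."""
    dists = np.linalg.norm(anchor_embeddings - target_embedding, axis=1)
    return float(np.median(dists))

# Toy 4-class example (e.g., the four breast density categories):
p = np.array([0.05, 0.15, 0.60, 0.20])   # softmax over k = 4 classes
print(multiclass_score(p))                # 1.95, between classes 1 and 2
q = np.array([0.98, 0.85, 0.20])          # k-1 = 3 cumulative probabilities
print(ordinal_score(q))                   # 2.03
```

Note how both conversions place a prediction between two class indices, which is exactly the within-class information the discrete labels discard.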
### Monte Carlo dropout

Utilizing dropout not just during training but also at test time yields \(N\) slightly different Monte Carlo (MC) predictions.[28] The MC predictions can subsequently be averaged to obtain the final output prediction. All models referred to as _MC models_ were trained with spatial dropout after each residual block of the ResNet models, and \(N=50\) MC iterations at test time. The dropout rates are 0\(\cdot\)2, 0\(\cdot\)2, and 0\(\cdot\)1 for ROP, knee osteoarthritis, and breast density, respectively, and were selected empirically and based on current literature.

### Model training

Model parameters were selected based on initial data exploration and empirical results. All ROP models used a ResNet18, and all knee osteoarthritis and breast density models used a ResNet50. A detailed description of the training parameters can be found in Appendix B.

### Evaluation

#### 2.6.1 Metrics

_Ranked datasets_ The model performance was evaluated based on the ranked test data using the following three metrics. First, we computed Spearman's rank coefficient between the rank and the predicted score. A monotonic increase between both metrics is expected; hence, a Spearman coefficient of 1 corresponds to a perfect correlation. Second, we computed the agreement between the ground truth rank and the rank based on the continuous score using the mean squared error (MSE) to quantify the correspondence between the predictions and ground truth. Here, the ranks were normalized to the maximum rank. Finally, the classification performance was assessed using clinically relevant AUCs. We defined the clinically relevant classification tasks as normal/pre-plus vs. plus for ROP, none/doubtful vs. mild/moderate/severe for knee osteoarthritis, and fatty/scattered vs. heterogeneous/dense for breast density.

_ROP_ A subset of the ROP test set had expert ratings from 1 to 9 based on the quantitative scale previously published by Taylor et al. [17]. The correspondence between the expert ratings and the continuous predicted scores was measured using the MSE.

#### Statistical analysis

Metrics were bootstrapped (500 iterations) and 95% confidence intervals were evaluated for statistical analysis. Bootstrapped metrics yielding a two-sided _t_-test p-value below 5% were considered statistically different.
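As a reference for the MC dropout procedure described above, the following PyTorch-style sketch (ours, not the authors' released code) keeps only the dropout layers stochastic at test time and averages \(N\) forward passes:

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_iter: int = 50):
    """Monte Carlo dropout inference: run N stochastic forward passes
    with dropout active and average them (the paper uses N = 50)."""
    model.eval()
    # Re-enable only the dropout layers; batch norm etc. stay in eval mode.
    for m in model.modules():
        if isinstance(m, (nn.Dropout, nn.Dropout2d)):
            m.train()
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_iter)])
    return preds.mean(dim=0)  # final prediction = mean over MC samples
```

Averaging the stochastic outputs smooths the hard class-boundary behavior of a single deterministic pass, which is one plausible reading of why MC models produce better-calibrated continuous scores below.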
## 3 Results

### Predicted score compared with severity rankings

_Agreement between predicted score and severity rankings_ We first assessed how well the predicted continuous scores reflect a ranking of the images in each dataset. Retinal photographs and knee radiographs were ranked by domain experts with increasing disease severity. The mammograms were ranked with increasing density based on the quantitative continuously valued Volpara breast density score. The relationship between the ground truth rankings and the predicted continuous scores is presented in Figure 2. For all datasets, the multi-class models without MC dropout display horizontal plateaus around the class boundaries where the predicted score is more or less constant with increasing rank. Similar patterns can be observed for the Siamese and MC Siamese models, especially for normal ROP and knee osteoarthritis cases.

_Agreement between predicted and ground truth rankings_ A linear correlation between the predicted continuous score and the consensus rank cannot be assumed, as the predicted score will increment variably depending on the severity increase between a patient of rank \(n\) and one of rank \(n+1\). Therefore, we used the Spearman correlation coefficient and MSE to quantify the agreement between the ground truth ranking and the ranking based on the predictions (see Table 2 and Appendix C). All MC dropout models were associated with a statistically significantly higher Spearman correlation coefficient and lower MSE compared to their non-MC counterparts (p-value \(<2\cdot 2e-4\); see Appendix D for pair-wise statistical comparisons between the models). The higher Spearman correlation coefficients and lower MSE indicate that the addition of MC dropout during training and inference improves the ability of DL models to correctly rank the images based on the continuous predictions. The models with the best correspondence between actual and predicted rank were the MC multi-class and MC ordinal models for ROP, MC multi-class and MC regression for knee osteoarthritis, and MC regression and MC Siamese networks for breast density.

\begin{table}
\begin{tabular}{l c c c}
\hline \hline
**Model** & **MSE \(\downarrow\)** & **Spearman \(\uparrow\)** & **AUC \(\uparrow\)** \\
\hline
**ROP** & & & \\
\hline
Multi-class & \(0.027\pm 0.009\) & \(0.84\pm 0.07\) & \(0.98\pm 0.02\) \\
MC multi-class & \(\mathbf{0.010\pm 0.003}\) & \(\mathbf{0.94\pm 0.02}\) & \(\mathbf{0.99\pm 0.01}\) \\
Ordinal & \(0.015\pm 0.005\) & \(0.91\pm 0.04\) & \(0.99\pm 0.01\) \\
MC ordinal & \(\mathbf{0.009\pm 0.003}\) & \(\mathbf{0.94\pm 0.02}\) & \(0.99\pm 0.02\) \\
Regression & \(0.026\pm 0.010\) & \(0.85\pm 0.07\) & \(0.98\pm 0.03\) \\
MC regression & \(\mathbf{0.017\pm 0.005}\) & \(\mathbf{0.89\pm 0.05}\) & \(\mathbf{0.98\pm 0.02}\) \\
Siamese & \(0.020\pm 0.007\) & \(0.88\pm 0.05\) & \(\mathbf{0.99\pm 0.01}\) \\
MC Siamese & \(\mathbf{0.013\pm 0.004}\) & \(\mathbf{0.92\pm 0.03}\) & \(0.98\pm 0.02\) \\
\hline
**Knee osteoarthritis** & & & \\
\hline
Multi-class & \(0.023\pm 0.008\) & \(0.86\pm 0.06\) & \(0.97\pm 0.02\) \\
MC multi-class & \(\mathbf{0.019\pm 0.006}\) & \(\mathbf{0.89\pm 0.05}\) & \(\mathbf{0.99\pm 0.01}\) \\
Ordinal & \(0.024\pm 0.007\) & \(0.85\pm 0.07\) & \(0.98\pm 0.02\) \\
MC ordinal & \(\mathbf{0.022\pm 0.007}\) & \(\mathbf{0.86\pm 0.06}\) & \(\mathbf{0.99\pm 0.02}\) \\
Regression & \(0.023\pm 0.009\) & \(0.86\pm 0.06\) & \(\mathbf{0.99\pm 0.01}\) \\
MC regression & \(\mathbf{0.019\pm 0.006}\) & \(\mathbf{0.88\pm 0.05}\) & \(0.98\pm 0.02\) \\
Siamese & \(0.022\pm 0.007\) & \(0.87\pm 0.05\) & \(\mathbf{0.97\pm 0.03}\) \\
MC Siamese & \(\mathbf{0.020\pm 0.005}\) & \(\mathbf{0.88\pm 0.04}\) & \(0.97\pm 0.03\) \\
\hline
**Breast density** & & & \\
\hline
Multi-class & \(0.018\pm 0.001\) & \(0.89\pm 0.01\) & \(0.93\pm 0.01\) \\
MC multi-class & \(\mathbf{0.016\pm 0.001}\) & \(\mathbf{0.90\pm 0.01}\) & \(\mathbf{0.94\pm 0.01}\) \\
Ordinal & \(0.016\pm 0.001\) & \(0.90\pm 0.01\) & \(0.93\pm 0.01\) \\
MC ordinal & \(\mathbf{0.015\pm 0.001}\) & \(\mathbf{0.91\pm 0.01}\) & \(\mathbf{0.94\pm 0.01}\) \\
Regression & \(0.015\pm 0.001\) & \(0.91\pm 0.01\) & \(0.94\pm 0.01\) \\
MC regression & \(\mathbf{0.011\pm 0.001}\) & \(\mathbf{0.93\pm 0.01}\) & \(0.94\pm 0.01\) \\
Siamese & \(0.013\pm 0.001\) & \(0.92\pm 0.01\) & \(0.91\pm 0.01\) \\
MC Siamese & \(\mathbf{0.012\pm 0.001}\) & \(\mathbf{0.93\pm 0.01}\) & \(\mathbf{0.92\pm 0.01}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: **Model performance overview (MEAN \(\pm\) 95% CI)**. Bold values indicate a statistical difference (p-value \(<0.05\)) was observed. Spearman's rank correlation coefficient and the AUC are measured on the predicted continuous score, while the MSE is measured between the normalized ground truth rank and the predicted rank generated from continuous scores. AUC was measured between normal and pre-plus vs. plus for ROP, none and doubtful vs. mild, moderate, and severe for knee osteoarthritis, and fatty and scattered vs. heterogeneous and dense for breast density.
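The rank-agreement metrics reported in Table 2 can be sketched as follows; this is our illustration of the description in Section 2.6.1, not the authors' evaluation code:

```python
import numpy as np
from scipy.stats import spearmanr

def rank_metrics(expert_rank, predicted_score):
    """Spearman's rho between expert rank and predicted score, and the
    MSE between normalized ground truth and prediction-derived ranks."""
    expert_rank = np.asarray(expert_rank, dtype=float)
    predicted_score = np.asarray(predicted_score, dtype=float)
    rho, _ = spearmanr(expert_rank, predicted_score)
    # Rank images by predicted score, then normalize by the maximum rank.
    pred_rank = np.argsort(np.argsort(predicted_score)) + 1.0
    mse = np.mean((expert_rank / expert_rank.max()
                   - pred_rank / pred_rank.max()) ** 2)
    return rho, mse
```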
#### Classification performance

All MC dropout models showed a slightly higher or comparable classification performance, as assessed by the AUC, relative to their non-MC equivalents (see Table 2). The only exceptions were the Siamese models for ROP and knee osteoarthritis and the regression knee osteoarthritis model. In these three cases, though statistically significant, the AUC of the MC models was only 0\(\cdot\)01 (or less) lower than that of their non-MC equivalents. The model associated with the best classification performance did not necessarily correspond to the best continuous severity scores. The following models have the overall best performance for each dataset: MC multi-class (knee osteoarthritis), MC ordinal (ROP), and MC regression (breast density).

### Comparison of predicted ROP scores with disease severity ratings

Next, we evaluated the correspondence between the predicted scores and more detailed severity ratings generated by domain experts. A subset of the test dataset was rated by five experts on a scale from 1 to 9 instead of the standard scale from 1 to 3 [17]. This dataset allowed us to evaluate the quality of the continuous model outputs on a more granular scale than the 3-class labels the models were trained on. Perfect continuous predictions would result in increasing disease severity scores with increasing ground truth severity ratings. All MC models showed a higher correspondence between the true severity ratings and predicted scores, as reflected by a lower MSE in comparison with their conventional counterparts (see Figure 3). The models predicting the experts' ratings the most accurately are the MC multi-class and MC ordinal models. While Siamese networks showed decent correspondence between the predicted score and the ranked severity, a direct comparison with the severity ratings reveals that the predictions from these models are not well calibrated. The multi-class model without MC showed the second worst performance in this analysis. Images rated from 1 to 3 by experts mainly obtained scores near 0, which does not highlight the severity differences as perceived by human experts. Furthermore, for retinal photographs associated with a score of 4, the model predicted values on the entire spectrum, i.e., from 1 to 9, which is undesirable.

#### Detection of temporal changes in disease severity

Another important characteristic of a reliable severity score is its ability to reflect slight changes in disease severity over time. The disease evolution was quantified as the difference in the ground truth severity ratings or predicted severity scores between photographs of the same patient taken at different time points. We then compared the difference in experts' ratings to the difference in the predicted scores using the MSE (see Figure 4). Ideally, the difference in the experts' scores should be equal to the difference in the models' predictions.

Figure 2: **Correspondence between model predicted score and severity ranking.** For each model, the Spearman correlation coefficient (\(\rho\)) is displayed in the upper left corner. It indicates the monotonicity of the correlation, where 1 is a perfectly increasing correlation and -1 is a perfectly decreasing correlation.
MC dropout improved the correspondence between the disease evolution as perceived by experts and as predicted by the DL models for the multi-class, ordinal, and Siamese models. Prediction differences from the MC multi-class and MC ordinal models matched the severity shifts in the experts' ratings most closely. The conventional multi-class model presents multiple outliers and is associated with the highest MSE.

Figure 3: **Correspondence between predicted and consensus ROP severity on an in-distribution test set.** The consensus ROP score was obtained by calculating the median of five ratings from different experts. The predicted scores from multi-class, ordinal, and regression models that were trained to predict values from 0 to 2 were scaled and shifted to match the 1 to 9 range (\(score_{rescaled}=score_{model}\times 2+1\)). Siamese networks predict values from 0 to infinity and are not fully bounded. The Siamese scores were hence only shifted by 1 (\(score_{rescaled}=score_{Siamese}+1\)). In accordance with the severity scale used, Siamese rescaled scores were also clipped to values between 1 and 9. All MSE measurements reported in this figure are statistically different (p-value \(<1.2e-41\)).

Figure 4: **Correspondence between the perceived rater score difference on longitudinal images from the same patient and the predicted score difference.** The red dashed line is the identity line and indicates the expected region where the data points should fall. All MSE measurements reported in this figure are statistically different (\(p\)-value \(<4\cdot 3e-28\)).

### Comparing predicted breast density scores with continuously valued ground truth breast density measurements

Lastly, we evaluated the ability of the breast density prediction models to accurately reflect the continuously valued Volpara Density measurements. The subset of mammograms with the Volpara Density measurements provided us with the unique opportunity to evaluate algorithms trained using ordinal labels on a continuously valued ground truth. Therefore, unlike with the ranked score analysis presented in Section 3.1, here we directly compared the Volpara Density scores with the continuously valued model predictions. Ideal continuously valued predictions would correlate linearly with the Volpara Density scores.

We first assessed the relationship between the Volpara Density scores and the discrete ground truth labels generated by domain experts used for training. As illustrated in the boxplot in Figure 5A, and by the Spearman correlation coefficient of 0.73, there is a high agreement between the ground truth labels and the Volpara Density scores. The MC multi-class model's predictions, both class and continuous score, are a close proxy to the volumetric breast density measurements, as seen in Figures 5B and 5C, with a Spearman correlation coefficient of 0.803 (classification) and a Pearson correlation coefficient of 0.91 (continuous scores). The high correlation between the continuous breast density predictions and the Volpara Density measurements indicates that our model is able to generate an accurate continuous prediction while being trained on only a finite number of classes.

## 4 Discussion

The underlying continuous nature of many prediction targets for DL image analysis tasks, such as breast density and disease severity, has to be taken into account in the process of model design. Here, we studied the capability of DL models to intrinsically learn a continuous score while being trained using discrete ordinal labels.
Our results show that training a conventional multi-class classification model without MC dropout does not lead to predictions that reflect the underlying continuous nature of the target variable. Approaches that model the relationship between the ordinal labels, such as ordinal classification, regression, and Siamese networks, provide continuous predictions that closely capture the continuity of the target variable even without the use of MC dropout. Finally, using MC dropout during training and inference increased the ability of the DL models to predict meaningful continuous scores. MC dropout multi-class classification ranked among the best performing models in this study.

#### Multi-class classification models

Ignoring the ordinal relationship between the training label classes causes conventional multi-class prediction models to return predictions that are clustered around the values of the training labels. This behavior is reflected in the plateaus visible in Figure 2, the medians in Figures 3 and E3, a lower Spearman correlation coefficient, and a higher MSE. Due to the definition of the training objective, multi-class classification models are optimized to precisely predict a specific class and are discouraged from predicting scores at the class boundaries. This behavior is desirable for nominal classification, where the classes should be separated as clearly as possible with minimal overlap in the feature latent space to avoid ambiguous predictions. However, the approach is not appropriate for problems with a target variable with an underlying continuous nature, which explains the limited performance of the multi-class classification models to predict meaningful continuous scores.

Figure 5: **Correspondence between volumetric breast density measurement and predicted or true breast density.** The predicted values are from the breast density MC multi-class model. (A) demonstrates the relationship between the expert ratings and the quantitative Volpara measurement. (B) illustrates the correspondence between the predicted labels obtained by taking the class with the highest softmax score and volumetric breast density. (C) plots the continuous predicted score against the volumetric breast density. \(\rho\) is the Spearman correlation coefficient for each metric pair.

#### Siamese networks

Siamese networks showed decent correspondence between the ranked severity and the predicted score (Figure 2). However, a direct comparison between the predicted score and the severity determined by domain experts (Figure 3) reveals that the predictions are not well calibrated. The predictions do not accurately reflect disease severity on a more granular scale than the labels used for model training. Siamese networks are not trained to predict a specific value, unlike the other models, but rather to detect whether two images stem from the same or different classes.[29] Therefore, they can pick up subtle differences in disease severity.[10] Here, we obtained predictions by comparing the input image of interest to a pool of anchor images that are typical representations of the class corresponding to the lowest label score. While the predicted difference between the anchor images and the target images resulted in accurate ordinal predictions (Figure 2), it was not well calibrated to the underlying continuous variable, particularly at the extremes.
#### MC dropout improves prediction of continuous variables

Through the use of MC dropout, all four model types evaluated showed an improvement in the quality of the continuous scores, as reflected in significantly higher Spearman correlation coefficients and lower MSE (see Table 2). MC multi-class classification networks were consistently among the highest performing models for all tasks and datasets evaluated, making them the top-performing models in our study. MC dropout presents a simple way to obtain meaningful continuous predictions from models trained using ordinal labels without sacrificing, and in some cases even significantly improving, predictive performance (see Table 2). However, MC dropout comes at a higher computational cost, as inference requires multiple passes of the same input image to obtain the final prediction. If the additional computational burden is a concern, ordinal classification or regression are alternatives to conventional multi-class classification models that are easy to train and provide decent continuous predictions without the use of MC dropout.

#### Limitations

There are some limitations to this study. First, we treated the available ordinal labels as ground truth. For all three image analysis tasks analyzed here, high inter-rater variability, particularly around the decision boundaries between severity classes, has been reported.[1, 30, 31] It would be desirable for future work to explore the influence of noisy and biased ordinal ratings on the task of learning and predicting a continuous variable. Second, due to the latent nature of the variable of interest, for most of our analysis, we had to rely on proxy variables such as rankings and more granular expert disease severity ratings. Lastly, MC dropout predictions were based on 50 samples, an empirically chosen value based on common practices and our own experience.

## 5 Conclusion

In this work, we present a generalizable framework to predict meaningful continuous scores while only using discrete ordinal labels for model development. Our findings are particularly relevant to disease severity prediction tasks, as the available labels are usually coarse and ordinal, but continuous disease severity predictions could provide crucial information that allows for earlier detection of deterioration and more personalized treatment planning.

Acknowledgments. The authors would like to thank Laura Coombs from the American College of Radiology for providing the DMIST dataset and the Volpara Density scores.

## Declarations

### Funding

J.P.C and S.O. are funded by the National Institutes of Health (Bethesda, MD) [R01 HD107493], an investigator-initiated grant from Genentech (San Francisco, CA) [R21 EY031883], and by unrestricted departmental funding and a Career Development Award (J.P.C.) from Research to Prevent Blindness (New York, NY) [P30 EY10572]. M.F.C. previously received grant funding from the National Institutes of Health (Bethesda, MD) and the National Science Foundation (Arlington, VA). J.P.C, S.O., M.F.C., J.K-C., and K.H. are supported by research funding from Genentech (San Francisco, CA) [R21 EY031883]. J.K-C. and K.V.H. are supported by funding from the National Institutes of Health (Bethesda, MD) [R01 HD107493] and the National Cancer Institute (Bethesda, MD) [U01CA242879]. A.L. has a scholarship from Mitacs [IT24359], NSERC, and Fondation et Alumni de Polytechnique Montreal.

### Conflict of interest

M.F.C.
is an unpaid member of the scientific advisory board for Clarity Medical Systems (Pleasanton, CA), was previously a consultant for Novartis (Basel, Switzerland), and was previously an equity owner at InTeleretina, LLC (Honolulu, HI). Dr. Campbell was a consultant to Boston AI Lab (Boston, MA) and is an equity owner of Siloam Vision. J.K-C. is a consultant/advisory board member for Infotech, Soft. The other authors declare no competing financial or non-financial interests.

### Code availability

The code used to train the models can be found at [https://github.com/andreanne-lemay/gray_zone_assessment](https://github.com/andreanne-lemay/gray_zone_assessment).

### Data availability

Access to the MOST dataset for knee osteoarthritis can be requested through the NIA Aging Research Biobank [https://agingresearchbiobank.nia.nih.gov/](https://agingresearchbiobank.nia.nih.gov/). The breast density and ROP datasets are not publicly accessible due to patient privacy restrictions.

### Authors' contributions

Study concept and design: K.H., A.L., J.K.-C., J.P.C. Data collection: J.P.C., S.O., and J.K.-C. Data analysis and interpretation: all authors. Drafting of the manuscript: A.L., K.H. Critical revision of the manuscript for important intellectual content and final approval: all authors. Supervision: J.K.-C., J.P.C.

## Appendix A Dataset label distributions

List of label distributions for each dataset.

_Retinopathy of prematurity_ Dataset size: 5511 images

* Normal: 4535 images (82\(\cdot\)3%)
* Pre-plus disease: 804 images (14\(\cdot\)6%)
* Plus disease: 172 images (3\(\cdot\)1%)

_Knee osteoarthritis (OA)_ Dataset size: 14173 images

* No OA (KL 0): 5793 images (40\(\cdot\)9%)
* Doubtful OA (KL 1): 2156 images (15\(\cdot\)2%)
* Mild OA (KL 2): 2355 images (16\(\cdot\)6%)
* Moderate OA (KL 3): 2604 images (18\(\cdot\)4%)
* Severe OA (KL 4): 1265 images (8\(\cdot\)9%)

_Breast density_ Dataset size: 108230 images

* Fatty: 12428 images (11\(\cdot\)5%)
* Scattered: 47909 images (44\(\cdot\)2%)
* Heterogeneously dense: 41325 images (38\(\cdot\)2%)
* Dense: 6568 images (6\(\cdot\)1%)

## Appendix B Model training parameters

_ROP_ ROP models had a ResNet18 architecture and were trained with a batch size of 24 and a learning rate of 1e-4 for 25 epochs; the best model was selected using the highest accuracy on the validation set. Balanced class sampling mitigated the class imbalance during training. Data augmentation consisted of random rotation of \(\pm\) 15 degrees with a probability of 0.5, random flips with a probability of 0.5, and random zooms of 0.9 to 1.1 with a probability of 0.5.

_Knee osteoarthritis_ A ResNet50 architecture was selected for the knee osteoarthritis model and was trained with the following parameters: batch size of 16, learning rate of 5e-6, 75 epochs. The final model was chosen based on the best loss value on the validation set. The data sampler used balanced weights during training to help with data imbalance. Images were randomly rotated by \(\pm\) 15 degrees with a probability of 0.5 and randomly flipped with a probability of 0.5 as data augmentation.

_Breast density_ Breast density models were trained with a ResNet50 architecture for 75 epochs in batches of 8 with a learning rate of 5e-5. The best model was selected using the best loss score on the validation set. The same data augmentation as for the knee osteoarthritis model was applied for breast density.

## Appendix C Predicted rank vs. ground truth rank

Figure C1 contains the same data as Figure 2 presented in Section 3.1.
The predicted scores were ordered to determine a rank and were plotted against the experts' ranks. The MSE displayed in Table 2 was computed on these two variables. Since a linear correlation is expected in the rank-to-rank analysis, the Pearson coefficient was used.

**Fig. C1**: **Correspondence between model predicted rank and true severity rank.** For each model, the Pearson correlation coefficient (\(r\)) is displayed and indicates the strength of the linear correlation, where 1 is a perfectly positive linear correlation and -1 a perfectly negative linear correlation.

## Appendix D Pair-wise statistical comparisons

Only metric pairs showing no statistically significant difference between two models are included (i.e., only some of the metrics from Table 2). If no figure for a specific metric and dataset is present, it means all the pair-wise comparisons showed a statistical difference. All metrics presented in Table 2, Figure 3, Figure 4, and Figure E3 were analysed.

## Appendix E ROP - Score on out-of-distribution dataset

The ROP models were further tested on a dataset from a different population, acquired at different centers from the training dataset. Figure E3 illustrates the correspondence between the predicted and rater scores for this out-of-distribution dataset. Similar to the in-distribution test set (see Figure 3), all the MC models had a better MSE compared with the corresponding non-MC models, and MC multi-class and MC ordinal are the best performing models. The multi-class and regression models for most severity scores predicted a wide range of values, often from 1 to 9, which could lead to medical errors. The miscalibration of the Siamese models is especially noticeable in Figure E3, as visually, the predicted and rater scores do not match for high severity values. This out-of-distribution dataset contains only a few plus and pre-plus images, i.e., only 328 plus and pre-plus cases compared to 7565 normal cases. Driven by a large number of outliers, particularly within images with the lower disease severity ratings (normal cases), the MSE is particularly high for the conventional multi-class, ordinal classification, and regression models. The low number of images with higher disease severity scores also explains why the MSE is not extremely high even though the Siamese networks are visually miscalibrated.
2305.09284
Impact of competing energy scales on the shell-filling sequence in elliptic bilayer graphene quantum dots
We report on a detailed investigation of the shell-filling sequence in electrostatically defined elliptic bilayer graphene quantum dots (QDs) in the regime of low charge carrier occupation, $N \leq 12$, by means of magnetotransport spectroscopy and numerical calculations. We show the necessity of including both short-range electron-electron interaction and wavefunction-dependent valley g-factors for understanding the overall fourfold shell-filling sequence. These factors lead to an additional energy splitting at half-filling of each orbital state and different energy shifts in out-of-plane magnetic fields. Analysis of 31 different BLG QDs reveals that both valley g-factor and electron-electron interaction induced energy splitting increase with decreasing QD size, validating theory. However, we find that the electrostatic charging energy of such gate-defined QDs does not correlate consistently with their size, indicating complex electrostatics. These findings offer significant insights for future BLG QD devices and circuit designs.
Samuel Möller, Luca Banszerus, Angelika Knothe, Lucca Valerius, Katrin Hecker, Eike Icking, Kenji Watanabe, Takashi Taniguchi, Christian Volk, Christoph Stampfer
2023-05-16T08:42:22Z
http://arxiv.org/abs/2305.09284v2
# Understanding the fourfold shell-filling sequence in bilayer graphene quantum dots

###### Abstract

We report on a detailed investigation of the shell-filling sequence in electrostatically defined bilayer graphene quantum dots (QDs) in the regime of low charge carrier occupation, \(N<12\), by means of magnetotransport spectroscopy. Conductance resonances, so-called Coulomb peaks, appear in groups of four in gate space, in good agreement with spin and valley degenerate orbital states in bilayer graphene. Interestingly, an additional bunching into pairs of two is superimposed onto the orbital fourfold degeneracy. We conclude that the additional splitting is caused by electron-electron interaction leading to a renormalization of the QD ground state at half filling of each orbital state. Furthermore, we also report in detail on the influences of the QD geometry on the energy scales of the electron-electron interaction and the impact of the magnetic field on the QD states, mainly determined by the QD size-dependent valley \(g\)-factor.

Bilayer graphene (BLG) is a fascinating material as it exhibits a gate-tunable band gap [1; 2; 3; 4], low spin-orbit interaction [5; 6; 7; 8], strong correlation effects [9; 10; 11], and the possibility to tailor its band structure by proximitizing it with other 2D materials [12; 13; 14; 15; 16; 17; 18; 19; 20]. This makes BLG a promising material for future quantum technologies based on 2D materials [21; 22; 23; 24; 25]. One way of exploiting the potential of BLG is to confine its charges into quantum dots (QDs), which can serve as building blocks for quantum sensing devices, quantum metrology, spintronics, and quantum information units [21; 26; 27; 28]. This is especially true since recent advances in fabrication techniques make it possible to reliably create ultra clean and highly tunable QDs in BLG by electrostatic soft-confinement, even allowing the creation of QDs with opposite polarity in close proximity [29; 30; 31; 32; 33; 34; 35; 36]. Confining charges in BLG QDs gives rise to fourfold degenerate orbital states, in agreement with the spin and valley degrees of freedom present in BLG.
In contrast to QDs in conventional semiconductors, QD states in BLG carry a tunable topological magnetic moment oriented out-of-plane, which is caused by the finite Berry curvature close to the K- and K'-points in gapped BLG [37; 38; 39; 40; 41; 42]. In this work, we show that the filling of the first three orbital states in bilayer graphene quantum dots exhibits an additional splitting into pairs of two superimposed onto the fourfold orbital degeneracy. This is explained in terms of short-range electron-electron interaction, leading to a spin triplet - valley singlet ground state, which is not expected in the single particle picture. Additionally, we show that the strength of both the electron-electron interaction and the valley magnetic moment increases for smaller BLG QDs.

To create electrostatically confined QDs, we fabricate heterostructures consisting of a BLG sheet encapsulated within two layers of hexagonal boron nitride [44; 45]. The heterostructure is placed on a graphite flake which acts as a back gate (BG) [33]. Two layers of Cr/Au gates are fabricated on top of the stack: split gates (SGs) with a separation of \(\sim 50\) nm, and \(100\) nm wide finger gates (FGs) oriented perpendicular to the channel.
The two gate layers are separated by a \(30\) nm thick layer of atomic layer deposited Al\({}_{2}\)O\({}_{3}\). Fig. 1(a) shows an atomic force micrograph of the resulting gate structure. By applying voltages of opposite sign to the SGs and the BG, we create a perpendicular electric displacement field, which opens up a band gap in BLG and allows tuning of the Fermi energy into the band gap [2]. This leaves a narrow conducting channel connecting source and drain, which is defined by the SGs. Then, an FG is used to locally invert the polarity of the channel, creating a QD and giving rise to tunnel barriers where the Fermi energy lies within the band gap, as illustrated in Fig. 1(b) [33; 34]. From the dimensions of the SGs and the FG, we expect an elliptical QD with an aspect ratio of \(\approx 2\). The filling sequence of the QD can be studied by transport spectroscopy in a \({}^{3}\)He/\({}^{4}\)He dilution refrigerator at a base temperature of around \(10\) mK using a combination of DC measurements and standard low-frequency lock-in techniques. We measure conductance through the channel at low source-drain voltage (\(V_{\mathrm{SD}}\)) as a function of applied FG voltage, \(V_{\mathrm{FG}}\), which is shown in Fig. 1(c). For increasing \(V_{\mathrm{FG}}\), the channel first pinches off (\(V_{\mathrm{FG}}\approx 8.5\) V) before Coulomb resonances appear, indicating the sequential filling of the QD with more than twelve electrons. The addition energy of every fourth electron is increased due to the spin and valley degeneracy in BLG, which is highlighted by the blue arrows in Fig. 1(c). The inset shows the differential conductance \(dG/dV_{\mathrm{SD}}\) as a function of applied bias voltage, where Coulomb diamonds are visible. We can extract the gate lever-arm, \(\alpha\approx 0.04\), from the size of the diamonds, allowing us to convert \(V_{\mathrm{FG}}\) to chemical potential, \(\mu\), according to \(\Delta\mu=\left|e\right|\alpha\,\Delta V_{\mathrm{FG}}\), with the elementary charge \(e\). To investigate the nature of the multi-particle states at each filling, \(N\), we perform magnetotransport spectroscopy measurements to probe the spin and valley magnetic moment of each QD state. Fig. 2(a) depicts conductance through the QD with respect to an out-of-plane magnetic field, \(B_{\perp}\), and the FG voltage, \(V_{\mathrm{FG}}\). Each Coulomb peak shifts its position on the gate axis as a function of \(B_{\perp}\), featuring kinks that appear simultaneously in adjacent Coulomb peaks, as highlighted by the white arrows. Assuming the confinement to be independent of magnetic field, the charging energy corresponds to the minimal distance between two Coulomb peaks [29; 34; 43]. More details about the charging energy and the addition energy are shown in _Appendix_ A, Fig. 6. We subtract the charging energy between neighboring Coulomb peaks and show the result in Fig. 2(b), where we also transformed \(V_{\mathrm{FG}}\) to chemical potential using the lever-arm. The y-axis is labeled \(\Delta\mu\) to indicate that we subtracted the charging energy and focus only on the change in chemical potential of each Coulomb peak due to the spin and valley Zeeman effect. For magnetic fields \(\left|B_{\perp}\right|\gtrsim 0.4\,\mathrm{T}=:B_{\mathrm{TS}}\) (see black arrow and \(B_{\mathrm{TS}}\) label in Fig.
2(b)), which we define for later discussions, our observations are consistent with previous works [29; 34; 43] and can be explained by successively filling non-interacting single particle states into the energetically lowest orbital state. As the valley g-factor is about an order of magnitude larger than the spin g-factor, this gives rise to two positive (adding \(|K\uparrow\rangle\) or \(|K\downarrow\rangle\)) and two negative slopes (adding \(|K^{\prime}\uparrow\rangle\) or \(|K^{\prime}\downarrow\rangle\)) per orbital state, according to the magnetic moment of the single particle states added to the QD [46].

Figure 2: **(a)** Conductance through the QD at low bias (\(V_{\mathrm{SD}}=0.2\) mV) as a function of \(V_{\mathrm{FG}}\) and perpendicular magnetic field \(B_{\perp}\). Arrows highlight kinks in the slopes of the peaks, indicating changes of multi-particle ground states. **(b)** Change in chemical potential, \(\Delta\mu\), required to add the next electron to the QD as a function of \(B_{\perp}\), extracted from (a) by subtracting the charging energy \(E_{\mathrm{C}}\). We chose \(\Delta\mu=0\) for the position of the first Coulomb peak at zero magnetic field. White numbers show the electron occupation of the QD in between the Coulomb peaks. The dashed grey lines indicate where linear fits were performed to determine the valley \(g\)-factors. The orbital splitting can be read off from the separations at \(B_{\perp}=0\), as indicated by the blue arrows, yielding \(\Delta_{\mathrm{orb}}=2.1\pm 0.1\) meV for the differences between orbitals 1-2 and 2-3. **(c)** Valley \(g\)-factors, \(g_{\nu}\), as a function of the electron occupation. The grey dashed lines are a guide to the eye, highlighting the slight modulation of the g-factor which repeats within each shell.

For large magnetic fields, \(|B_{\perp}|\gtrsim 0.85\) T (see vertical dashed black line in Fig. 2(b)), the valley Zeeman splitting becomes larger than the orbital splitting, changing the order in which the shells are filled. The orbital splitting, \(\Delta_{\rm orb}\), can be directly read off at \(B_{\perp}=0\), as indicated by the blue arrows. The observed sequential filling of \(N=4-8-12\) (see white numbers) is consistent with numerical calculations of orbital states in BLG QDs (see _Appendix A_) predicting non-degenerate orbital states for elliptical QDs, which we expect to have due to our gate structure (see Fig. 1(a)). In contrast, perfectly circular QDs show an \(N=4-12\) filling sequence, as they have a degenerate second and third orbital state due to rotational symmetry [43]. We evaluate the strength of the valley magnetic moment for each occupation number of the QD by fitting the slopes as indicated by the grey dashed lines in Fig. 2(b) (see _Appendix C_ for more details). The result is shown in Fig. 2(c), where we plot the valley g-factor, \(g_{v}\), as a function of the QD occupation \(N\). The g-factors within each shell are nearly constant but exhibit a modulation that repeats after four electrons, see grey dashed lines in Fig. 2(c). As the valley magnetic moment is determined by the distribution of the electron wavefunction in \(k\)-space (see examples in Fig. 7), we expect nearly constant valley g-factors within the same orbital state. We speculate that the modulation within each shell arises from slight changes in the electrostatics of the QD due to the higher FG voltage and the additional electron occupying the same shell.
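As a hedged sketch of the slope-fit procedure just described (anticipating Eq. (12) of the appendix), the snippet below fits the Coulomb-peak dispersion \(\Delta\mu(B_{\perp})\) linearly over a chosen field range and converts the slope into a valley g-factor. The input arrays are hypothetical placeholders for digitized peak positions, and the sign of \(\alpha_{1}\) follows the paper's fitting convention.

```python
# Minimal sketch, assuming digitized Coulomb-peak positions (not the authors' script).
import numpy as np

MU_B = 5.788e-5   # Bohr magneton in eV/T
G_S = 2.0         # spin g-factor

def fitted_slope(b_tesla, delta_mu_ev):
    """Linear fit of Delta_mu(B_perp); returns the slope alpha_N in eV/T."""
    slope, _intercept = np.polyfit(b_tesla, delta_mu_ev, 1)
    return slope

def valley_g_factor_first_transition(alpha_1):
    """Eq. (12) of Appendix C: g_v^(1) = 2*alpha_1/mu_B - g_s."""
    return 2.0 * alpha_1 / MU_B - G_S
```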
This modulation is most pronounced between the first and second electron entering the QD, which is expected, as there the QD has just formed after pinching off the channel and the confinement potential initially changes significantly with increasing \(V_{\rm FG}\) [30; 34; 47]. Nevertheless, as the modulation of g-factors also shows a fourfold pattern, the observation is consistent with the assumption that only one orbital state is filled at a time. This assumption is further supported by finite bias spectroscopy, which allows probing both ground and excited states of each charge transition. We measure across the first eight Coulomb peaks at \(V_{\rm SD}=1.8\) mV, as illustrated by the white dotted arrow in the inset in Fig. 1(c). The results are displayed in Fig. 3, where we plot the differential transconductance \(dI/dV_{\rm FG}\) as a function of \(B_{\perp}\) and \(\Delta\mu\), where the latter was obtained from \(V_{\rm FG}\) using the lever-arm. White numbers indicate the charge occupation of the QD in the Coulomb blockaded region. The outline of the conducting region (see black dashed arrows in the uppermost left panel in Fig. 3) is determined by the size of the bias window, \(V_{\rm SD}\), and shifts with \(B_{\perp}\) exactly as the charge resonances in Fig. 2(a,b) [48]. Within the conducting region, we can observe additional features which originate from excited states of each charge transition increasing or decreasing the tunnel current [48; 49]. Black arrows and black dashed lines highlight features which are visible at corresponding charge transitions, i.e. 1-5, 2-6, 3-7, 4-8. A fourfold repeating pattern is visible in both the outline and the excited state features (compare upper and lower panels), further supporting the assumption that only one orbital state is filled at a time for \(B_{\perp}<0.85\) T. For magnetic fields \(|B_{\perp}|<B_{\rm TS}\approx 0.4\) T, we observe an additional energy scale, which causes a two-fold bunching of Coulomb resonances within each orbital (see \(B_{\rm TS}\) in Fig. 2(b) and Fig. 3). This is a robust feature present in all works where few charge carrier gate-defined QD states in BLG are investigated [5; 29; 30; 35; 36; 43; 48], strongly indicating that electron-electron interaction needs to be taken into account for understanding the filling of individual shells.

Figure 3: Differential transconductance \(dI/dV_{\rm FG}\) measured across the first eight Coulomb peaks (see white numbers and dashed white arrow in Fig. 1(c)) as a function of \(V_{\rm FG}\) and \(B_{\perp}\) at \(V_{\rm SD}=1.8\) mV. The extent of the conductive region is limited by the bias voltage, as shown in the first panel. The colored lines highlight how the ground-state to ground-state transition between the \(N\)- and \(N-1\)-particle states shifts with magnetic field, with the colors corresponding to those in Fig. 4(a). Grey arrows and dashed lines indicate similarities in the excited state spectra of corresponding charge transitions. The fourfold pattern in the data is clearly visible, showing that orbital states are filled one after the other.

To understand the energy scales involved when filling one orbital state, we go beyond the single particle model and take a closer look at the involved multi-particle states. Fig. 4(a) shows the energy of the first four multi-particle states, \(E_{N}\) with \(N=1,...,4\), as a function of \(B_{\perp}\).
Note that constant energy differences between states with different occupation number, \(N\), are not displayed. For one electron in the QD, there are four single particle states available, which shift due to the spin and valley Zeeman effect and which experience a small spin-orbit splitting, \(\Delta_{\rm SO}\approx 60-80\ \mu\)eV [5; 6; 8]. The single particle ground state (GS) therefore shifts according to \[E_{1}^{\rm GS}=-\frac{1}{2}(g_{s}+g_{v}^{(1)})\mu_{B}B_{\perp}, \tag{1}\] with the Bohr magneton, \(\mu_{B}\), the spin g-factor, \(g_{s}\), and the valley g-factor, \(g_{v}^{(1)}\), where the upper index (\(N\)) refers to the occupation number \(N\) of the QD, allowing us to account for the slight occupation number dependency of the valley g-factor [48]. As we assume only one orbital state to be filled at a time, the orbital part of the multi-particle wavefunction is symmetric. The Pauli principle then requires the spin and valley part of the multi-particle wavefunction to be anti-symmetric, strongly reducing the available QD states at each filling. For the case of \(N=2\), there are thus only six two particle states available [48]. These are grouped into three valley triplet - spin singlet states, \(\left|S^{s}\ T_{0,\pm}^{v}\right\rangle\), and three spin triplet - valley singlet states, \(\left|T_{0,\pm}^{s}\ S^{v}\right\rangle\), which are separated by \(\delta_{1}\) and \(\delta_{2}\) due to short-range electron-electron interaction [40; 48; 49; 50; 51]. This leads to the interesting situation where the two particle GS is a spin triplet at low magnetic fields but becomes a spin singlet at \(B_{\perp}:=B_{\rm TS}\), indicated by the index 'TS'. The change in GS occurs as soon as the valley Zeeman effect of \(\left|S^{s}\ T_{-}^{v}\right\rangle\) compensates the short-range splitting \(\delta_{2}\). Consequently, the magnetic field dependency of the two particle ground state has two different slopes, \[E_{2}^{\rm GS}=\begin{cases}-g_{s}\mu_{B}B_{\perp}&\text{for}\ \ \ B<B_{\rm TS},\\ -g_{v}^{(2)}\mu_{B}B_{\perp}&\text{for}\ \ \ B>B_{\rm TS}.\end{cases} \tag{2}\] Still assuming sequential filling of orbitals, for \(N=3\), there are only four available states. This is due to the fact that all three electrons occupy the same orbital state, and thus the Pauli principle requires them to have different spin and valley quantum numbers, prohibiting fully valley or spin polarized three particle states. Therefore, the three particle GS, \(\left|K^{-}\uparrow\ K^{+}\downarrow\ K^{-}\downarrow\right\rangle_{a}\), with the \(a\) indicating this state to be an anti-symmetric superposition of \(\left|K^{-}\uparrow\right\rangle,\left|K^{+}\downarrow\right\rangle,\left|K^{-}\downarrow\right\rangle\), has a similar magnetic field dispersion to the single particle GS, resulting in \[E_{3}^{\rm GS}=-\frac{1}{2}(g_{v}^{(3)}-g_{s})\mu_{B}B_{\perp}. \tag{3}\] For a full shell, \(N=4\), there is only one state available, which does not shift in \(B_{\perp}\), as that state contains electrons with all four possible spin and valley combinations, leading to zero total magnetic moment and \[E_{4}^{\rm GS}=const. \tag{4}\] The chemical potential of each charge transition is given by \(\mu_{N}=E_{N}-E_{N-1}\); their magnetic field dispersion consequently depends on both involved multi-particle states. This is illustrated by the colored arrows in Fig.
4(a), whose lengths correspond to the position of each Coulomb peak on the \(V_{\rm FG}\) axis, i.e. the chemical potential axis.

Figure 4: **(a)** Energy dispersion of the first four multi-particle states in a perpendicular magnetic field. The ground state is highlighted in red, while colored arrows correspond to the chemical potential required to add the next electron onto the QD. At \(B_{\rm TS}\) the GS of the two particle states changes from a spin triplet – valley singlet to a spin singlet – valley triplet. **(b)** Change in chemical potential, \(\Delta\mu\), required to add the next electron to the QD as a function of \(B_{\perp}\), as in Fig. 2(b) but extracted from Fig. 3. White numbers indicate the occupation of the QD when in Coulomb blockade. The colored lines correspond to the length of the arrows in (a). Orbital splitting, \(\Delta_{\rm orb}\), and electron-electron interaction strength, \(\delta_{2}\), can directly be read off.

Fig. 4(b) shows the change of chemical potential of the first eight Coulomb peaks as a function of \(B_{\perp}\), similar to Fig. 2(b) but evaluated from the data of Fig. 3 (see colored lines) for a better signal-to-noise ratio. Comparing the color code in Fig. 4(a) and 4(b), we can understand where the two-fold bunching of Coulomb peaks within one shell comes from. For low magnetic fields, the two particle GS is a valley singlet, shifting only due to the spin Zeeman effect, while the single particle GS shifts due to the valley and spin Zeeman effect, resulting in a positive slope for the second Coulomb peak, \(\mu_{2}(B_{\perp}<B_{\rm TS})=E_{2}-E_{1}=\frac{1}{2}(g_{v}^{(1)}-g_{s})\mu_{B}B_{\perp}>0\). Only for large magnetic fields, \(B_{\perp}>B_{\rm TS}\), where the two particle GS becomes a valley triplet, does the slope become negative. In this regime the second Coulomb peak shifts with \(\mu_{2}(B_{\perp}>B_{\rm TS})=-\frac{1}{2}(2g_{v}^{(2)}-g_{v}^{(1)}-g_{s})\mu_{B}B_{\perp}<0\). Neglecting the slight modulation of valley g-factors and assuming \(g_{v}^{(2)}\approx g_{v}^{(1)}\), we recover the result one would expect when assuming non-interacting single particle states. As the three particle states behave like single particle states in \(B_{\perp}\), the third Coulomb peak mirrors the behavior of the second one, and similarly, the fourth Coulomb peak mirrors the first one. This behavior is illustrated by the blue and yellow arrows in Fig. 4(a). Note that the data is not perfectly symmetric within each shell as the valley g-factor sensitively depends on the shape of the confinement potential, which changes slightly for increasing charge carrier occupation [43], as also shown in Fig. 2(c). Summing up, it is solely the change of the two particle GS at \(B_{\rm TS}\) due to the short-range electron-electron interaction which causes the two-fold bunching of Coulomb peaks, and one can directly read off its strength \(\delta_{2}\) from the data in Fig. 4(b) (and Fig. 2(b)). Utilizing this understanding, we investigate how different spatial dimensions influence the energy scales determining the shell-filling in BLG QDs. In total, we gathered 31 data points from 10 QDs in different electrostatic environments, i.e. different confinement potentials, from our samples and from the work of the Ensslin group [29; 35; 36; 43; 48].
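To see the pairwise bunching arithmetically, the sketch below tabulates the slopes of \(\mu_{N}=E_{N}-E_{N-1}\) implied by Eqs. (1)-(4) in both field regimes. Using a single valley g-factor \(g_{v}=20\) for all fillings is an illustrative assumption; in the experiment the g-factors vary slightly within a shell.

```python
# Slopes of mu_N = E_N - E_{N-1} from Eqs. (1)-(4), in units of mu_B * B_perp.
G_S, G_V = 2.0, 20.0   # g_v = 20 is an assumed, representative valley g-factor

def gs_slope(n, low_field):
    """Coefficient of mu_B*B_perp in the N-particle ground-state energy."""
    if n == 1:
        return -0.5 * (G_S + G_V)           # Eq. (1)
    if n == 2:
        return -G_S if low_field else -G_V  # Eq. (2): spin-triplet vs valley-triplet GS
    if n == 3:
        return -0.5 * (G_V - G_S)           # Eq. (3)
    return 0.0                              # Eq. (4), and E_0 = 0

for n in (1, 2, 3, 4):
    low = gs_slope(n, True) - gs_slope(n - 1, True)
    high = gs_slope(n, False) - gs_slope(n - 1, False)
    print(f"mu_{n}: {low:+.1f} mu_B (B<B_TS), {high:+.1f} mu_B (B>B_TS)")
```

Only the second and third peaks flip sign at \(B_{\rm TS}\), which is exactly the two-fold bunching seen in the data.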
Together, the strength of the valley g-factor and electron-electron interaction govern the shell-filling at low magnetic field, because they determine the magnetic field \(B_{\rm TS}\), at which the valley Zeeman splitting compensates the short-range interaction and the two particle GS changes from \(\left|T_{-}^{s}\,S^{v}\right\rangle\) to \(\left|S^{s}\,T_{-}^{v}\right\rangle\), \[B_{\rm TS}=\frac{\delta_{2}}{(g_{v}^{(2)}-g_{s})\mu_{\rm B}}. \tag{5}\] Meanwhile, for high out-of-plane magnetic fields, it is the orbital splitting and the valley Zeeman effect that dominate the shell-filling. In Fig. 5(a,b), we show the strength of the valley g-factor of the first orbital as a function of \(\Delta_{\rm orb}\) and \(1/\Delta_{\rm orb}\), with the latter being proportional to \(L^{\alpha}\), with the length (size) parameter of the QD, \(L\), and \(\alpha\approx 2\), depending on the shape of the confining potential (see Appendix _B_). The color scale of the markers corresponds to the magnitude of the band gap, \(\Delta_{\rm gap}\), opened in BLG to confine each respective QD, which was estimated from the applied electric displacement field following Ref. [2]. While the exact value of the valley magnetic moment depends sensitively on the shape of the confinement potential of the QD, its orientation with respect to the BLG lattice, the strain in the system, and the magnitude of the band gap [29; 38; 39; 40], we still observe a clear trend towards larger valley g-factors for larger orbital splittings, i.e. for smaller QDs. This trend is consistent with numerical calculations of elliptical QDs, which are included in Fig. 5(b), with the (blue) pink line corresponding to the QD being oriented along the (y-) x-axis of the BLG lattice.

Figure 5: **(a)** Valley g-factor of the first shell in single QDs in the literature [29; 35; 36; 43] and in our work as a function of orbital splitting. The color scale of the markers indicates the size of the band gap used to confine each QD. **(b)** shows the same data as in (a) but as a function of \(1/\Delta_{\rm orb}\), which is proportional to the length parameter of the QD, \(L^{\alpha}\), with \(\alpha\approx 2\). The colored lines indicate the strength of the valley g-factor obtained by numerical calculations of elliptical QDs, oriented along the x- (pink) and y-axis (blue) of the BLG lattice. The dashed line is a guide to the eye, illustrating how the experimental data is a factor of \(\approx 3\) larger compared to the theoretical expectation. **(c)** Strength of the electron-electron interaction, \(\delta_{2}\), as a function of orbital splitting. Again, the colored lines are obtained by numerical calculations of elliptical QDs.

The model adds two spatially varying functions, describing the band gap opening and the confinement, to the four-band Hamiltonian of BLG [22] and diagonalizes it in order to obtain the orbital wavefunctions of the QD states. We integrate the Berry curvature induced orbital magnetic moment over the distribution of the first orbital wavefunction in \(k\)-space in order to obtain the valley g-factor [40]. However, the magnitude of the calculated valley g-factors is too small by a factor of \(\sim 3\) compared to the experimental data in Fig. 5(b). It is likely that the van-der-Waals stacking technique used to fabricate the samples introduces strain to the BLG, which is not included in our calculation. However, it has been shown that even little strain can cause a significant enhancement of the valley magnetic moment [39; 52].
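Returning to Eq. (5), a quick order-of-magnitude check is given below; the values of \(\delta_{2}\) and \(g_{v}^{(2)}\) are assumptions chosen to be representative of this device, not extracted numbers.

```python
# Order-of-magnitude check of Eq. (5); delta_2 and g_v2 are assumed values.
MU_B = 5.788e-5    # Bohr magneton in eV/T
delta_2 = 0.4e-3   # eV, assumed short-range splitting
g_v2, g_s = 20.0, 2.0
b_ts = delta_2 / ((g_v2 - g_s) * MU_B)
print(f"B_TS = {b_ts:.2f} T")  # ~0.38 T, consistent with the kink near 0.4 T
```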
As one would expect, the strength of the short-range electron-electron interaction increases for larger orbital splittings [40], i.e. smaller QDs, which can be seen in Fig. 5(c), where we plot \(\delta_{2}\) as a function of \(\Delta_{\rm orb}\). This behavior is reproduced by numerical calculations using the same model and parameters as in Fig. 5(a,b). More details about the model can be found in _Appendix A_.

In summary, we provide a full understanding of the multi-particle spectrum and the fourfold shell-filling sequence of BLG QDs. In particular, we have shown that the short-range electron-electron interaction, the orbital energy and the (state-dependent) valley \(g\)-factor are of particular relevance and are critically influenced by the size of the QD. For low magnetic fields, the valley magnetic moment and the electron-electron interaction strength determine the shell-filling, leading to a spin-triplet two particle ground state and filling according to Hund's rule. For high magnetic fields, shell-filling is dominated by the strong valley magnetic moment, which will eventually even change the order in which orbital states are filled. Understanding the shell-filling sequence enables future works on few-charge-carrier multi-QDs in BLG. Foremost, this is important for evaluating the possibility of spin, valley and Kramers qubits in BLG and in which regime of single or multi-QDs they can be operated. They can also be used to probe the spin and valley configuration of correlated phases in BLG. In addition, creating QDs in proximitized BLG can allow one to sensitively quantify the influence of the functional layer on the band structure of BLG.

**Acknowledgements** The authors thank S. Trellenkamp, F. Lentz and M. Otto for their support in device fabrication, as well as V. Fal'ko for his enlightening contributions to our discussions. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 881603 (Graphene Flagship) and from the European Research Council (ERC) under grant agreement No. 820254, the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 - 390534769, and by the Helmholtz Nano Facility [53]. K.W. and T.T. acknowledge support from the JSPS KAKENHI (Grant Numbers 19H05790 and 20H00354).

## I Appendix

### Comparison between \(E_{\rm add}\), \(E_{\rm C}\), \(\Delta_{\rm orb}\) and \(\delta_{2}\)

From the magnetotransport data presented in Fig. 2(a), we extract the addition energy, \(E_{\rm add}\), at zero magnetic field, and the charging energy, \(E_{\rm C}\), which corresponds approximately to the minimal distance between neighboring Coulomb peaks [34; 46]. We observe a monotonic decrease of \(E_{\rm C}\) for higher occupation numbers and see the influence of the orbital splitting, \(\Delta_{\rm orb}\), and the short-range splitting, \(\delta_{2}\), on the addition energy.

### Numerical calculations of elliptical QDs

For calculating the blue and pink curves in Fig. 5(b,c), we follow the approach of Refs. [29; 40; 48; 49].
We describe single electrons in BLG QDs using the four-band Hamiltonian [22], including a spatially varying confinement potential, \(U({\bf r})\), and band gap profile, \(\Delta({\bf r})\), \[\text{H}_{\xi}=\begin{pmatrix}U-\xi\frac{1}{2}\Delta&\xi v_{3}\pi&0&\xi v\pi^{\dagger}\\ \xi v_{3}\pi^{\dagger}&U+\xi\frac{1}{2}\Delta&\xi v\pi&0\\ 0&\xi v\pi^{\dagger}&U+\xi\frac{1}{2}\Delta&\gamma_{1}\\ \xi v\pi&0&\gamma_{1}&U-\xi\frac{1}{2}\Delta\end{pmatrix}, \tag{6}\] in the two valleys \(K^{\xi}\) labelled by the valley index \(\xi=\pm 1\). The Hamiltonian in Eq. (6) is written in the Bloch basis \(\psi_{K^{+}}=(\psi_{A},\psi_{B^{\prime}},\psi_{A^{\prime}},\psi_{B})\) in valley \(K^{+}\), and \(\psi_{K^{-}}=(\psi_{B^{\prime}},\psi_{A},\psi_{B},\psi_{A^{\prime}})\) in valley \(K^{-}\) (with the electron's amplitudes on the BLG sublattices \(A\) and \(B\) in the top, and \(A^{\prime}\) and \(B^{\prime}\) in the bottom layer) and in terms of the momenta \(\pi=p_{x}+ip_{y},\,\pi^{\dagger}=p_{x}-ip_{y}\), and parameters \(v=1.02\cdot 10^{6}\) m/s, \(v_{3}\approx 0.12v\), and \(\gamma_{1}\approx 0.38\) eV.

Figure 6: Charging energy, \(E_{\rm C}\), addition energy, \(E_{\rm add}\), and their difference at zero magnetic field as a function of QD occupation, \(N\). In the latter, the influence of the short-range electron-electron interaction, \(\delta_{2}\), and of the orbital splitting, \(\Delta_{\rm orb}\), is visible.

We include the influence of the electrostatic gates by choosing the confinement potential and gap profile to model the experiment, \[U(\mathbf{r})=U_{0}/\cosh\left(\frac{\sqrt{(\frac{x}{a})^{2}+(\frac{y}{b})^{2}}}{L}\right),\,\mathrm{and} \tag{7}\] \[\Delta(\mathbf{r})=\Delta_{0}-0.15\,\Delta_{0}/\cosh\left(\frac{\sqrt{(\frac{x}{a})^{2}+(\frac{y}{b})^{2}}}{L}\right), \tag{8}\] with the band gap opened by SG and BG of \(\Delta_{0}=30\,\mathrm{meV}\), the maximal potential depth \(U_{0}=15\,\mathrm{meV}\), an ellipticity of \(\frac{a}{b}=2\) (\(\frac{1}{2}\)) for the two perpendicular orientations of the elliptical QD along the BLG lattice, and the width parameter \(L\). We numerically diagonalize the four-band Hamiltonian in Eq. (6) [38; 40] to obtain the single particle orbital states for the confining QD potential widths \(L=10-150\) nm, see Fig. 7. For each value of \(L\), we extract the energy difference between the ground and the first excited orbital state, \(\Delta_{\mathrm{orb}}\), and estimate the topological valley g-factor, \(g_{v}^{(1)}\), of the lowest orbital state by calculating how much orbital magnetic moment, \(M_{z}\), is picked up by the orbital ground state's wave function, \(\Psi\), in momentum space [29; 40; 52], \[\mu_{B}g_{v}=\int d\mathbf{k}\,M_{z}(\mathbf{k})|\Psi(\mathbf{k})|^{2}. \tag{9}\] Here, the orbital magnetic moment of BLG's Bloch bands, induced by the nontrivial Berry curvature, is defined as \(\mathbf{M}(\mathbf{k})=M_{z}(\mathbf{k})\mathbf{e}_{z}\) [41; 42; 54; 55], with \[M_{z}=-i\frac{e}{2\hbar}\langle\nabla_{\mathbf{k}}\Phi(\mathbf{k})|\times[\epsilon(\mathbf{k})-H(\mathbf{k})]|\nabla_{\mathbf{k}}\Phi(\mathbf{k})\rangle\cdot\mathbf{e}_{z}, \tag{10}\] where \(\nabla_{\mathbf{k}}=(\partial_{k_{x}},\partial_{k_{y}})\), "\(\times\)" is the cross product, \(\epsilon(\mathbf{k})\) is the band energy, and \(\Phi\) is the corresponding Bloch state.
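For orientation, here is a minimal bulk-limit sketch of Eq. (6): \(U\) and \(\Delta\) are taken constant (no confinement), the momentum is measured from \(K^{\xi}\), and \(\hbar v\) is folded into the momentum terms. This only builds and diagonalizes the \(4\times 4\) matrix under these simplifying assumptions; it is not the full QD calculation with the profiles of Eqs. (7) and (8).

```python
# Bulk-limit sketch of Eq. (6): constant U and Delta, plane-wave momentum (kx, ky).
import numpy as np

HBAR_V = 6.582e-16 * 1.02e6 * 1e9  # hbar*v in eV*nm, with v = 1.02e6 m/s
V3_RATIO = 0.12                    # v_3 / v
GAMMA1 = 0.38                      # eV

def h_blg(kx, ky, xi=1, u=0.0, gap=0.030):
    """4x4 Hamiltonian of Eq. (6) at momentum (kx, ky) [nm^-1] in valley K^xi."""
    pi = HBAR_V * (kx + 1j * ky)   # v*pi, with hbar*v absorbed into the momentum
    pid = np.conj(pi)
    d = 0.5 * gap
    return np.array([
        [u - xi * d,          xi * V3_RATIO * pi, 0.0,        xi * pid],
        [xi * V3_RATIO * pid, u + xi * d,         xi * pi,    0.0],
        [0.0,                 xi * pid,           u + xi * d, GAMMA1],
        [xi * pi,             0.0,                GAMMA1,     u - xi * d],
    ], dtype=complex)

bands = np.linalg.eigvalsh(h_blg(0.1, 0.0))  # four band energies in eV
```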
Further, the strength of the short-range electron-electron interaction depends on the QD's ground state orbital wave function in real space, and the corresponding prefactor can be approximated as [40; 48; 49] \[\mathcal{J}\approx\int d\mathbf{r}\;|\Psi(\mathbf{r})|^{4}\,. \tag{11}\] Assuming a coupling constant \(|g_{\perp}|=0.16\,\mathrm{eVnm}^{2}\) [56], which is of the same order of magnitude as in [48], we achieve a good agreement with the experimental data.

### Determining the valley g-factor as a function of QD occupation

For determining the valley g-factor as a function of \(N\), we first extract \(\Delta\mu_{N}(B_{\perp})\) from the outline of the conducting region of the finite bias data presented in Fig. 3, as highlighted by the colored arrows. The result (for \(B_{\perp}>0\)) can be seen in Fig. 4(b). In principle, we could also utilize \(\Delta\mu_{N}(B_{\perp})\) from the data presented in Fig. 2(a,b), but due to a better signal-to-noise ratio, we take the data from Fig. 3. Next, we perform linear fits on \(\Delta\mu_{N}(B_{\perp})\) to extract the g-factors. For the first and fourth transition we fit in the range \(|B_{\perp}|\lessapprox 0.85\) T, depending on where the crossing with the transition from the next orbital happens. For the second and third transition we fit in the range \(B_{\mathrm{TS}}\lessapprox|B_{\perp}|\lessapprox 0.85\) T. Utilizing equations (1-5), we can extract the g-factors from the obtained slopes according to \[g_{v}^{(1)}=2\frac{\alpha_{1}}{\mu_{B}}-g_{s}\;, \tag{12}\] \[g_{v}^{(2)}=2\frac{\alpha_{2}}{\mu_{B}}+\frac{1}{2}(g_{s}+g_{v}^{(1)})\;, \tag{13}\] \[g_{v}^{(3)}=2\frac{\alpha_{3}}{\mu_{B}}-g_{s}-2g_{v}^{(2)}\;, \tag{14}\] \[\tilde{g}_{v}^{(3)}=2\frac{\alpha_{4}}{\mu_{B}}+g_{s}\;, \tag{15}\] with the fitted slope of each charge transition, \(\alpha_{N}\) (see grey dashed lines in Fig. 2(b)). As already apparent from equations (1-5), the valley g-factor \(g_{v}^{(4)}\) does not contribute to the slopes as the shell is completely full. Therefore, the fourth charge transition also yields the g-factor for \(N=3\), which we label \(\tilde{g}_{v}^{(3)}\). The difference to \(g_{v}^{(3)}\) is due to the increased FG voltage at the fourth charge transition.

### Data availability

The data and evaluation scripts supporting the findings of this work are available in a Zenodo repository under XXX.

Figure 7: Single particle orbital states, \(n\), extracted from diagonalising Eq. (6): We exemplify \(\Delta_{\mathrm{orb}}\) and the ground state wave function in momentum space for the QD widths \(L=40\) nm (oriented along x, left) and \(L=15\) nm (oriented along y, right), which lead to comparable orbital splittings \(\Delta_{\mathrm{orb}}\approx 2\) meV. The second and third orbital states are not fully degenerate due to ellipticity.
2303.15523
Explaining the Hubble tension and dark energy from alpha-attractors
A compelling unified model of dark energy and early dark energy (EDE) is presented, using a scalar field with an exponential runaway potential, in the context of alpha-attractors. The field is originally trapped at an enhanced symmetry point, subsequently thaws to become successful EDE and eventually slow-rolls to become dark energy. EDE ameliorates the observed Hubble tension.
Lucy Brissenden, Konstantinos Dimopoulos, Samuel Sánchez López
2023-03-27T18:05:08Z
http://arxiv.org/abs/2303.15523v1
# Explaining the Hubble tension and dark energy from \(\alpha\)-attractors

###### Abstract:

A compelling unified model of dark energy and early dark energy (EDE) is presented, using a scalar field with an exponential runaway potential, in the context of alpha-attractors. The field is originally trapped at an enhanced symmetry point, subsequently thaws to become successful EDE and eventually slow-rolls to become dark energy. EDE ameliorates the observed Hubble tension.

## 1 Introduction

Since the arrival of high precision cosmological observations, a standard model of cosmology, called the concordance model, has been constructed. Called \(\Lambda\)CDM for short, this concordance model has recently started to suffer from a number of challenges, the most important of which is arguably the Hubble tension. In a nutshell, the Hubble tension amounts to a disagreement in the estimate of the Hubble constant \(H_{0}\) as inferred by early (mainly CMB) and late (mainly SNe) observations. Indeed, CMB observations from the Planck satellite [1] suggest \[H_{0}=67.44\pm 0.58\ \mathrm{km\ s^{-1}Mpc^{-1}}, \tag{1}\] while a distance scale measurement using Cepheid-SN-Ia data from the SH0ES collaboration [2] results in \[H_{0}=73.04\pm 1.04\ \mathrm{km\ s^{-1}Mpc^{-1}}. \tag{2}\] This is a 5\(\sigma\) tension (4.56-6.36\(\sigma\)). The most compelling proposal to overcome this problem is early dark energy (EDE).

## 2 Early Dark Energy

EDE amounts to dark energy being momentarily important near matter-radiation equality. Investigated first by Refs. [3, 4, 5, 6], this proposal does not necessarily consider that EDE is the same dark energy substance which is responsible for the accelerated expansion at present (for a recent review see Ref. [7]). How does EDE manage to increase the value of \(H_{0}\) as inferred from CMB observations? Even though CMB and BAO observations tightly constrain the cosmological parameters, they constrain the combination \(H(z)r_{s}\), where \(H(z)\) is the Hubble parameter as a function of redshift \(z\), and \(r_{s}\) is the comoving sound horizon at decoupling, given by \[r_{s}=\int_{z_{\mathrm{dec}}}^{\infty}\frac{c_{s}(z)}{H(z)}dz, \tag{3}\] where \(c_{s}(z)\) is the sound speed. An additional amount of dark energy in the Universe increases the total density, which in turn increases the Hubble parameter because of the Friedmann equation \(H^{2}=(\rho_{B}+\rho_{\rm EDE})/3m_{\rm P}^{2}\), where \(\rho_{B}\) is the density of the radiation and matter background. EDE amounts to a brief increase of \(H(z)\) before decoupling, which lowers the value of the sound horizon in Eq. (3). Thus, EDE manages to simultaneously lower the value of \(r_{s}\) and increase \(H_{0}\) without violating CMB observations. The fractional energy density required for EDE to work is about 10%, \(f_{\rm EDE}=0.10\pm 0.02\) at redshift \(z_{c}=4070^{+400}_{-840}\). Therefore, the EDE proposal amounts to an injection of energy at around the time of matter-radiation equality (\(z_{c}\simeq z_{\mathrm{eq}}\simeq 3600\)), which then decays away faster than the background radiation, such that it becomes negligible at the time of last scattering, before it can be detected in the CMB [4]. The original proposal in Ref. [3] suggested that the EDE was an axion scalar field \(\phi=\theta f\) with potential \(V(\theta)=m^{2}f^{2}(1-\cos\theta)^{n}\) with \(n>2\). The authors of Ref.
[3] found that the fractional energy density must be \(f_{\rm EDE}=0.08\pm 0.04\), which results in \(H_{0}=70.0\pm 1.5\,{\rm km\,s^{-1}Mpc^{-1}}\). After thawing, the EDE field oscillates around its vacuum expectation value (VEV) with average barotropic parameter \(w=\frac{n-1}{n+1}\). To redshift faster than radiation, it is required that \(w>\frac{1}{3}\), which implies that the minimum is of order higher than quartic. Note that the density of the oscillating EDE redshifts as \(a^{-6n/(n+1)}\), which reduces to \(a^{-6}\) (free-fall) in the limit \(n\gg 1\). The situation is similar in many other EDE models, where typically the EDE scalar field oscillates around its VEV in a high order potential (see, however, Ref. [8]). In contrast, in our model presented below, the EDE scalar field experiences a period of kinetic domination, where the field is in non-oscillatory free-fall and its density decreases as \(\propto a^{-6}\).

## 3 \(\alpha\)-attractors

Our model unifies EDE with late dark energy in the context of \(\alpha\)-attractors. Ref. [9] is an earlier attempt at such unification in the same theoretical context. However, that proposal also considers oscillatory EDE. \(\alpha\)-attractors appear naturally in conformal field theory or supergravity theories [10, 11, 12, 13]. The scalar field has a non-canonical kinetic term, featuring two poles, which the field cannot cross. The field can be canonically normalised via a field redefinition. Then, the finite poles for the non-canonical field are transposed to infinity for the canonical one. As a result, the scalar potential is "stretched" near the poles, featuring two plateau regions, which have been used for modelling inflation, see _e.g._ Refs. [14, 15, 16, 17, 18, 19, 20], or quintessence [21], or both, in the context of quintessential inflation [21, 22, 23]. The Lagrangian density features two poles at \(\varphi=\pm\sqrt{6\alpha}\,m_{\rm P}\) and has the form \[\mathcal{L}=\frac{-\frac{1}{2}(\partial\varphi)^{2}}{\left(1-\frac{\varphi^{2}}{6\alpha\,m_{\rm P}^{2}}\right)^{2}}-V(\varphi)\,, \tag{4}\] where \(\varphi\) is the non-canonical scalar field and \((\partial\varphi)^{2}\equiv g^{\mu\nu}\partial_{\mu}\varphi\,\partial_{\nu}\varphi\). Redefining \(\varphi\) in terms of the canonical scalar field \(\phi\), we have \[\mathrm{d}\phi=\frac{\mathrm{d}\varphi}{1-\frac{\varphi^{2}}{6\alpha m_{\rm P}^{2}}}\quad\Rightarrow\quad\varphi=m_{\rm P}\sqrt{6\alpha}\,\tanh\left(\frac{\phi}{\sqrt{6\alpha}\,m_{\rm P}}\right). \tag{5}\] The poles \(\varphi=\pm\sqrt{6\alpha}\,m_{\rm P}\) are transposed to infinity and the Lagrangian density now reads \[\mathcal{L}=-\frac{1}{2}(\partial\phi)^{2}-V(\phi). \tag{6}\]

## 4 The Model

In contrast to most of the EDE literature, we investigate non-oscillating EDE. Thus, we require the scalar potential to be steep enough such that, after matter-radiation equality, the EDE scalar field becomes dominated by its kinetic energy density and engages in "free-fall" roll. Therefore, we study the following toy model. Consider a potential of the form \[V(\varphi)=V_{X}\exp(-\lambda e^{\kappa\varphi/m_{\rm P}}), \tag{7}\] where \(\alpha,\kappa,\lambda\) are dimensionless model parameters, \(V_{X}\) is a constant energy density scale and \(\varphi\) is the non-canonical scalar field with kinetic poles given by the typical \(\alpha\)-attractors form with the Lagrangian density in Eq. (4).
To assist our intuition, we switch to the canonically normalised (canonical) scalar field \(\phi\), using the transformation in Eq. (5). The Lagrangian density is then given by Eq. (6), where the scalar potential is \[V(\phi)=\exp(\lambda e^{\kappa\sqrt{6\alpha}})V_{\Lambda}\exp[-\lambda e^{\kappa\sqrt{6\alpha}\tanh(\phi/\sqrt{6\alpha}\,m_{\rm P})}]\,, \tag{8}\] where \(V_{\Lambda}\) is the vacuum density at present, related to the model parameters as \[V_{\Lambda}\equiv\exp(-\lambda e^{\kappa\sqrt{6\alpha}})V_{X}\,. \tag{9}\] Note that the model parameter is \(V_{X}\) and not \(V_{\Lambda}\), the latter being generated by \(V_{X}\) and the remaining model parameters as shown above.

## 5 Analytic study

We are interested in two limits of the potential: matter-radiation equality and the present time. At matter-radiation equality, we consider \(\phi\to 0\) (\(\varphi\to 0\)). In this limit, we have \[V_{\rm eq}\simeq\exp[\lambda(e^{\kappa\sqrt{6\alpha}}-1)]V_{\Lambda}\exp(-\kappa\lambda\,\phi_{\rm eq}/m_{\rm P})\,, \tag{10}\] where the subscript 'eq' denotes the time of matter-radiation equality. It is assumed that the field was originally frozen there and that at the time of equality it unfreezes (thaws). We discuss and justify this assumption in Sec. 7. After thawing, the field soon rolls towards large values. Today, we consider \(\phi\to+\infty\) (\(\varphi\to+\sqrt{6\alpha}\,m_{\rm P}\)). The potential in this limit is \[V_{0}\simeq V_{\Lambda}\left[1+2\kappa\lambda e^{\kappa\sqrt{6\alpha}}\sqrt{6\alpha}\,\exp\left(-\frac{2\phi_{0}}{\sqrt{6\alpha}\,m_{\rm P}}\right)\right]\,, \tag{11}\] where the subscript '0' denotes the present time. Note that, in this limit, the potential approaches \(V_{\Lambda}\), which corresponds to positive vacuum density with \(w=-1\), as in \(\Lambda\)CDM. The above approximations describe the scalar potential well near equality and the present time. As explained below, between these regions the scalar field free-falls and becomes oblivious to the scalar potential. Let us investigate the evolution of the EDE field. Originally the field is frozen at zero (see Sec. 7). Its energy density is such that it remains frozen there until equality, when it thaws following the appropriate exponential attractor, since \(V_{\rm eq}\) in Eq. (10) is approximately exponential [24]. For convenience, we assume this is the subdominant attractor, which requires that the strength of the exponential satisfies [25, 26] \[\kappa\lambda>\sqrt{3}\,. \tag{12}\] The subdominant exponential attractor is called the scaling attractor. In the scaling attractor, the energy density of the rolling scalar field mimics the dominant background energy density. Thus, the fractional energy density of the field is constant, given by the value [24, 25, 26] \[f_{\rm EDE}\simeq\frac{3}{\left(\kappa\lambda\right)^{2}}<1\,. \tag{13}\] This provides an estimate of the moment when the originally frozen scalar field unfreezes and begins rolling down its potential. Before unfreezing, \(f_{\rm EDE}\) is growing, because the background density decreases with the expansion of the Universe, until \(f_{\rm EDE}\) obtains the above value. However, after unfreezing, the field soon experiences the full exp(exp), steeper-than-exponential potential, so it does not follow the subdominant attractor any more but is dominated by its kinetic energy density alone (it free-falls).
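As a small numerical sketch (with \(m_{\rm P}=1\) and the potential measured in units of \(V_{\Lambda}\); the parameter values below are placeholders), one can check Eq. (8) against its two limits and the scaling-attractor condition of Eq. (13):

```python
# Sketch of the canonical potential, Eq. (8), in units of V_Lambda (m_P = 1).
import numpy as np

def v_canonical(phi, alpha, kappa, lam):
    s = np.sqrt(6.0 * alpha)
    return np.exp(lam * np.exp(kappa * s)) * \
           np.exp(-lam * np.exp(kappa * s * np.tanh(phi / s)))

alpha, kappa, lam = 0.01, 18.0, 0.30   # placeholder values, cf. the estimates below
print(v_canonical(0.0, alpha, kappa, lam))   # enhanced EDE value at phi = 0, Eq. (10)
print(v_canonical(10.0, alpha, kappa, lam))  # -> 1, i.e. V -> V_Lambda, Eq. (11)
print(np.sqrt(3.0 / 0.1))                    # kappa*lambda ~ 5.48 for f_EDE = 0.1, Eq. (13)
```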
Then, its density scales as \(\rho_{\phi}\simeq\frac{1}{2}\dot{\phi}^{2}\propto a^{-6}\), until it refreezes at a larger value \(\phi_{0}\). This value is estimated as follows. In free-fall, the equation of motion is reduced to \(\ddot{\phi}+3H\dot{\phi}\simeq 0\), where \(H=2/(3t)\) after equality. The solution is \[\phi(t)=\phi_{\rm eq}+\frac{C}{t_{\rm eq}}\left(1-\frac{t_{\rm eq}}{t}\right)\;, \tag{14}\] where \(C\) is an integration constant. From the above, it is straightforward to find that \(\dot{\phi}=Ct^{-2}\). Thus, at equality we have \[f_{\rm EDE}=\left.\frac{\rho_{\phi}}{\rho}\right|_{\rm eq}=\frac{\frac{1}{2}C^{2}t_{\rm eq}^{-4}}{\frac{4}{3}(\frac{m_{\rm P}}{t_{\rm eq}})^{2}}=\frac{3}{8}\frac{C^{2}}{(m_{\rm P}t_{\rm eq})^{2}}\quad\Rightarrow\quad C=\sqrt{\frac{8}{3}f_{\rm EDE}}\,m_{\rm P}\,t_{\rm eq}=\frac{\sqrt{8}}{\kappa\lambda}\,m_{\rm P}\,t_{\rm eq}\;, \tag{15}\] where we used Eq. (13), \(\rho_{\phi}=\frac{1}{2}\dot{\phi}^{2}\) and that \(\rho=1/(6\pi Gt^{2})=\frac{4}{3}(m_{\rm P}/t)^{2}\). Therefore, the field freezes at the value \[\phi_{0}=\phi_{\rm eq}+C/t_{\rm eq}=\phi_{\rm eq}+\frac{\sqrt{8}}{\kappa\lambda}\,m_{\rm P}\;, \tag{16}\] where we considered that \(t_{\rm eq}\ll t_{\rm freeze}<t_{0}\). Using that \(t_{\rm eq}\sim 10^{4}\,\)y and \(t_{0}\sim 10^{10}\,\)y, we can estimate \[\frac{V_{\rm eq}}{V_{0}}\simeq\frac{f_{\rm EDE}\,\rho_{\rm eq}}{0.7\,\rho_{0}}\simeq\frac{30}{7(\kappa\lambda)^{2}}\left(\frac{t_{0}}{t_{\rm eq}}\right)^{2}\simeq\frac{3}{7(\kappa\lambda)^{2}}\times 10^{13}\;. \tag{17}\] Now, from Eqs. (10) and (11) we find \[\frac{V_{\rm eq}}{V_{0}}\simeq\frac{e^{\lambda(e^{\kappa\sqrt{6\alpha}}-1)}\exp(-\kappa\lambda\,\phi_{\rm eq}/m_{\rm P})}{1+2\kappa\lambda\,e^{\kappa\sqrt{6\alpha}}\sqrt{6\alpha}\,\exp(-2\phi_{0}/\sqrt{6\alpha}\,m_{\rm P})}\;. \tag{18}\] Considering that \(\phi_{\rm eq}\simeq 0\) and Eq. (16), the above can be written as \[\frac{V_{\rm eq}}{V_{0}}\simeq\frac{e^{\lambda(e^{\kappa\sqrt{6\alpha}}-1)}}{1+2\kappa\lambda\,e^{\kappa\sqrt{6\alpha}}\sqrt{6\alpha}\,e^{-2\sqrt{8}/\kappa\lambda\sqrt{6\alpha}}}\;. \tag{19}\] Taking \(f_{\rm EDE}\simeq 0.1\) as required by EDE, Eq. (13) suggests \[\kappa\lambda\simeq\sqrt{30}. \tag{20}\] Combining this with Eq. (17) we obtain \[e^{\frac{\sqrt{30}}{\kappa}(e^{\kappa\sqrt{6\alpha}}-1)}\sim 10^{12}/7\,, \tag{21}\] where we have ignored the second term in the denominator of the right-hand side of Eq. (19). From the above we see that \(\kappa\) is large when \(\alpha\) is small. Taking, as an example, \(\alpha=0.01\) we obtain \(\kappa\simeq 18\) and \(\lambda\simeq 0.30\) (from Eq. (20)). With these values, the second term in the denominator of the right-hand side of Eq. (19) is of order unity and not expected to significantly influence our results. For the selected values, Eq. (16) suggests that the total excursion of the field is \[\Delta\phi=\phi_{0}-\phi_{\rm eq}=\frac{\sqrt{8}}{\kappa\lambda}\,m_{\rm P}\simeq 0.5\,m_{\rm P}\,, \tag{22}\] i.e. it is sub-Planckian. A sub-Planckian excursion of the field implies that fifth-force considerations are suppressed.

## 6 Numerical investigation

We have thoroughly analysed this model in Ref. [27]. Here, we present our main results. We have aimed to obtain a value of \(H_{0}\) in the window \[72\leq\frac{H_{0}}{\rm km\ sec^{-1}\,Mpc^{-1}}\leq 74.
\tag{23}\] With this requirement, the parameter space arrived at for our model is \[0<\alpha<0.00071\,,\qquad 0<\kappa<700\,,\qquad 0<\lambda<0.027\,, \tag{24}\] with \(V_{\Lambda}=10^{-120.068}\,m_{\rm P}^{4}\). We see that the above numbers are reasonable. In particular, the value of \(\kappa\sim 10^{2}\) implies that the mass scale suppressing the exponent in our model in Eq. (7) is near the scale of grand unification, \(m_{\rm P}/\kappa\sim 10^{16}\,\rm GeV\), which is a rather natural scale. In the above ranges, we find that \(0.015<f_{\rm EDE}<0.107\) at equality, while it becomes lower than \(10^{-3}\) by decoupling, the time when the CMB radiation is emitted. The barotropic parameter of dark energy at present is \(w_{\phi}=-1.000\) with negligible running (less than \(10^{-11}\)), which is indistinguishable from \(\Lambda\)CDM. One important finding is that the condition in Eq. (12), \(\kappa\lambda>\sqrt{3}\), assumed in the previous section, is not valid. However, this was chosen only for convenience, as explained before Eq. (12). If the condition is violated, then the thawing EDE does not follow the scaling exponential attractor but the dominant exponential attractor instead. In both cases, however, once the EDE field rolls away from zero, it starts experiencing the full exp(exp) potential and goes into free-fall, as discussed. Thus, the qualitative behaviour is the same, as also demonstrated by our numerical results shown below. As a concrete example, we choose the following values for the model parameters \[\alpha=0.0005\,,\qquad\kappa=145\,,\qquad\lambda=0.008125\,. \tag{25}\] The above suggest that \(\kappa\lambda=1.178<\sqrt{3}\). The value of the Hubble constant obtained in this case is \[H_{0}=73.27\ \text{km sec}^{-1}\,\text{Mpc}^{-1}, \tag{26}\] which evidently is well in agreement with the SH0ES observations. A comparison of the Hubble parameter in this scenario with the one in \(\Lambda\)CDM is shown in Fig. 1. The behaviour of the fractional energy density \(f_{\rm EDE}\), which is identified with the EDE density parameter \(\Omega_{\phi}(z)\), is shown in Fig. 2. It is evident that, for this example, \(f_{\rm EDE}=\Omega_{\phi}(z_{\rm eq})\simeq 0.08\). In view of Eq. (9), we find \(\log(V_{X}/V_{\Lambda})=9.926\). Thus, our model parameter \(V_{X}=10^{-110.142}\,m_{\rm P}^{4}\) is fine-tuned at the same level as (in fact slightly less than) \(V_{\Lambda}\) in \(\Lambda\)CDM. However, it has to be stressed that, in contrast to \(\Lambda\)CDM, our proposal simultaneously addresses two cosmological problems: not only late dark energy but also the Hubble tension.

Figure 1: The Hubble parameter (in units of km s\({}^{-1}\)Mpc\({}^{-1}\)) of the Universe in our model (green), a classical \(\Lambda\)CDM simulation (black), and one with only matter and radiation (blue), as a function of the redshift (top) and the e-folds (bottom) elapsed since the beginning of the simulation. It is evident that our model corresponds to a larger value of \(H(z)\) than \(\Lambda\)CDM, as desired.

The barotropic parameters of EDE and the background are shown in Fig. 3. It can be seen clearly that, after thawing, the barotropic parameter of EDE is \(w_{\phi}=1\) and the field is in free-fall, as discussed. Its density decreases as \(a^{-6}\), as clearly shown in Fig. 4; this corresponds to the \(n\to\infty\) limit of the oscillating EDE in Ref. [3] and is never attained by any oscillating EDE model.
Thus, our model disturbs the emission of the CMB at decoupling as little as possible. Finally, for our example we obtain that the total excursion of the EDE field from thawing to refreezing is sub-Planckian: \(\Delta\phi/m_{\rm P}=0.4274\), in agreement with Eq. (22). This implies both that our model does not suffer from fifth force problems and that our potential is stable against radiative corrections.

## 7 Trapping at the origin

A compelling explanation of why the EDE scalar field finds itself frozen at the origin in the first place is the following. If the origin is an enhanced symmetry point (ESP), then at very early times, an interaction of \(\varphi\) with some other scalar field \(\sigma\) can trap the rolling \(\varphi\) at zero [28]. The scalar potential includes the interaction \[\Delta V=\frac{1}{2}g^{2}\varphi^{2}\sigma^{2}\,, \tag{27}\] where the coupling \(g<1\) parametrises the strength of the interaction. We assume that initially \(\varphi\) is rolling down its steep potential, which, away from the origin, does not have to be of the form in Eq. (7). In fact, it is conceivable that \(\varphi\) might play the role of the inflaton field too [27]. The original kinetic energy density of \(\varphi\) is depleted due to particle production of \(\sigma\)-particles, because their mass \(\sim g\varphi\) changes non-adiabatically near the origin [28]. Note that, near the origin, the \(\varphi\)-field is approximately canonically normalised.

Figure 2: The density parameter of the scalar field \(\Omega_{\phi}\) as a function of the redshift (top) and e-folds (bottom) elapsed since the beginning of the simulation. As shown, at equality, there is a bump with \(f_{\rm EDE}=\Omega_{\phi}(z_{\rm eq})\simeq 0.08\).

As the field moves past the ESP, the produced \(\sigma\)-particles give rise to an effective linear potential \(\sim gn_{\sigma}|\varphi|\) [28], where \(n_{\sigma}\) is the number density of the produced \(\sigma\)-particles. This linear potential halts the roll of \(\varphi\) and reverses its variation. More \(\sigma\)-particles are created when \(\varphi\) crosses the origin again, resulting in a steeper linear potential, which reverses the variation of \(\varphi\) again, closer to the origin this time. The process continues until the \(\varphi\)-field is trapped at the origin [26, 28]. The trapping of a rolling scalar field at an ESP can take place only if the \(\sigma\)-particles do not decay at maximum displacement. The end result of this process is that all the kinetic energy density of the rolling \(\varphi\) has been given to the \(\sigma\)-particles. Since \(\varphi\) is trapped at zero, the \(\sigma\)-particles are effectively massless and hence relativistic, which means that their density scales as radiation, being a subdominant part of the thermal bath. As far as \(\varphi\) is concerned, it is trapped at the origin and its density is \(\rho_{\varphi}=V(\varphi=0)=e^{-\lambda}V_{X}=\) constant. After some time, the \(\sigma\)-particles may decay into the standard model particles, which comprise the thermal bath of the hot Big Bang. Because the confining potential is proportional to \(n_{\sigma}\), it then disappears as well. However, the EDE \(\varphi\)-field remains frozen at the origin because the scalar potential \(V(\varphi)\) in Eq. (7) is flat enough there. The EDE \(\varphi\)-field unfreezes again at matter-radiation equality. The above scenario is one possible explanation of the initial condition considered. Numerical simulations simply assume that the field begins frozen at the origin.
Other possibilities to explain our initial condition exist, for example considering a thermal correction of the form \(\delta V\propto T^{2}\varphi^{2}\), which would drive the \(\varphi\)-field towards the origin at high temperatures.

Figure 3: Barotropic parameter of the scalar field (dotted green), of the background perfect fluid (full blue) and of the sum of both components (full black). It is evident that, after unfreezing, the EDE scalar field is in free-fall, with \(w_{\phi}=1\), until it refreezes again.

## 8 Conclusions

The concordance model \(\Lambda\)CDM suffers from the Hubble tension at \(5\sigma\). A prominent resolution of this tension is early dark energy (EDE). EDE amounts to a dark energy substance which momentarily becomes about 10% of the total energy density near matter-radiation equality, but decays faster than radiation afterwards. EDE in the context of \(\alpha\)-attractors can unify EDE with late dark energy without more fine-tuning than \(\Lambda\)CDM. We studied such a model of EDE, characterised by the exp(exp) potential in Eq. (7). Our EDE is originally frozen at the origin. Near equality it thaws, then it free-falls down its runaway potential until it refreezes before today, when it becomes late dark energy. We have investigated our model numerically and demonstrated that it works for natural values of the parameters. We also showed that the field excursion between the initial and final frozen values is sub-Planckian, which means that our model does not suffer from a fifth force problem and is stable against radiative corrections.

**Acknowledgements:** LB is supported by STFC. KD is supported (in part) by the Lancaster-Manchester-Sheffield Consortium for Fundamental Physics under STFC grant: ST/T001038/1. SSL is supported by the FST of Lancaster University. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising.
2309.00613
Iterative Multi-granular Image Editing using Diffusion Models
Recent advances in text-guided image synthesis have dramatically changed how creative professionals generate artistic and aesthetically pleasing visual assets. To fully support such creative endeavors, the process should possess the ability to: 1) iteratively edit the generations and 2) control the spatial reach of desired changes (global, local or anything in between). We formalize this pragmatic problem setting as Iterative Multi-granular Editing. While there has been substantial progress with diffusion-based models for image synthesis and editing, they are all one shot (i.e., no iterative editing capabilities) and do not naturally yield multi-granular control (i.e., covering the full spectrum of local-to-global edits). To overcome these drawbacks, we propose EMILIE: Iterative Multi-granular Image Editor. EMILIE introduces a novel latent iteration strategy, which re-purposes a pre-trained diffusion model to facilitate iterative editing. This is complemented by a gradient control operation for multi-granular control. We introduce a new benchmark dataset to evaluate our newly proposed setting. We conduct exhaustive quantitative and qualitative evaluation against recent state-of-the-art approaches adapted to our task, to bring out the mettle of EMILIE. We hope our work will attract attention to this newly identified, pragmatic problem setting.
K J Joseph, Prateksha Udhayanan, Tripti Shukla, Aishwarya Agarwal, Srikrishna Karanam, Koustava Goswami, Balaji Vasan Srinivasan
2023-09-01T17:59:29Z
http://arxiv.org/abs/2309.00613v2
# Iterative Multi-granular Image Editing using Diffusion Models ###### Abstract Recent advances in text-guided image synthesis have dramatically changed how creative professionals generate artistic and aesthetically pleasing visual assets. To fully support such creative endeavors, the process should possess the ability to: 1) iteratively edit the generations and 2) control the spatial reach of desired changes (global, local or anything in between). We formalize this pragmatic problem setting as _Iterative Multi-granular Editing_. While there has been substantial progress with diffusion-based models for image synthesis and editing, they are all one shot (i.e., no iterative editing capabilities) and do not naturally yield multi-granular control (i.e., covering the full spectrum of local-to-global edits). To overcome these drawbacks, we propose EMILIE: _Iterative Multi-granular Image Editor_. EMILIE introduces a novel latent iteration strategy, which re-purposes a pre-trained diffusion model to facilitate iterative editing. This is complemented by a gradient control operation for multi-granular control. We introduce a new benchmark dataset to evaluate our newly proposed setting. We conduct exhaustive quantitative and qualitative evaluation against recent state-of-the-art approaches adapted to our task, to bring out the mettle of EMILIE. We hope our work will attract attention to this newly identified, pragmatic problem setting. ## 1 Introduction An image is an invaluable form of visual communication. Creating such a visual illustration is a creative process and an expression of the ingenuity of the artist. They often start with a blank canvas and _iteratively_ update it with the semantic concepts that they want to convey. These changes might be at different _granularities_, ranging from any small local change to global changes spanning the entire canvas. Generative technologies for image synthesis have made remarkable strides lately. Diffusion models [9, 22, 27] have had tremendous success in creating realistic images from text prompts. These text-to-image models have the capacity to inspire and augment human creativity. Diffusion models for image editing [1, 2, 4, 11, 18, 28, 30] further enhance collaborative content creation by allowing users to edit their content using the versatility of diffusion models. In this work, we identify two major gaps in utilizing diffusion based image editing models as an assistant in a creator's workflow: 1) Current diffusion based image-editing methods are one-shot. They consume an input image, make the suggested edit, and give back the result. This is in stark contrast to the creator's workflow, which is naturally iterative. 2) It is not easy for an artist to specify and constrain the spatial extent of the intended edit. We note that this is a very practical yet under-explored research direction in the literature. Towards this end, we formalize and define this novel problem setting as _Iterative Multi-granular Image Editing_. A naive approach to iteratively edit an image would be to use the edited image from the previous step as input to the image editor for the next step. This, however, adds unwanted artifacts to the outputs, and these accumulate over edit steps as shown in Figs. 2 and 5. To address this, we introduce a new and elegant _latent iteration_ framework.
Our key insight is that by iterating over the latent space instead of the image space, we substantially reduce the amount of noise/artifacts added over edit steps. Next, to incorporate multi-granular control, we modulate the denoising process to follow the user-specified location constraints. We interpret the diffusion model as an energy-based model, which allows us to control the spatial extent of the edits by selectively restricting the gradient updates to the regions of interest. These two contributions, put together as part of our overall framework called _EMILIE: Iterative Multi-granular Image Editor_, tackle our newly introduced problem setting. It is critical to note that \(\mathrm{EMILIE}\) adds these capabilities directly into an already trained diffusion model and does not require any retraining, thereby substantially enhancing its usability. As we propose and address a novel problem setting, we find that there are no existing benchmark datasets that we can use to evaluate the methodologies. Towards this end, we introduce a new benchmark dataset, called IMIE-Bench, particularly suited to our problem. We extensively evaluate \(\mathrm{EMILIE}\) both qualitatively and quantitatively on our proposed benchmark and show how it performs against various baselines adapted to this new problem. Further, we also evaluate the multi-granular editing capabilities of EMILIE on the publicly available EditBench [30] benchmark. We see that our approach outperforms competing state-of-the-art methods like Blended Latent Diffusion [1] and DiffEdit [4] both qualitatively and quantitatively. To summarize, the key highlights of our work are: * We introduce _a novel problem setting_ of iterative, multi-granular image editing motivated by practical, real-world use-cases of multimedia content designers. * To tackle our new problem, we propose \(\mathrm{EMILIE}\), a training-free approach that comprises a _new latent iteration method_ to reduce artifacts over iterative edit steps and a gradient update mechanism that enables controlling the spatial extent of the edits. * We propose a _new benchmark dataset_, IMIE-Bench, particularly suited to our new problem setting, comprising a carefully curated set of images with a sequence of at least four edit instructions each. * We conduct _exhaustive experimental evaluation_ on IMIE-Bench and EditBench (for multi-granular edits) to bring out the efficacy of our proposed approach, where we clearly outperform the existing state-of-the-art methods. ## 2 Related Works **Multimodal Image Generation and Editing Methods**: Generative Adversarial Network (GAN) [8] based text-to-image approaches like StackGAN [34], StackGAN++ [35] and AttnGAN [33] were effective in generating \(64\times 64\times 3\) images conditioned on their textual description. These methods work well in modelling simple objects like flowers and birds, but struggle to generate complex scenes with multiple objects. Recently, diffusion based methods [22, 5] have had phenomenal success in generating realistic images from textual descriptions. Similar to their success in the image synthesis domain, diffusion based models have made significant strides in editing images too. Imagic [11] is able to make complex non-rigid edits to real images by learning to align a text embedding with the input image and the target text.
PnP [28], NTI [18], Imagic [11] and DiffEdit [4] first use DDIM inversion [27, 5] to invert the image to the input tensor required by the diffusion model, and then use the text conditioned denoising process to generate the image with the required edit. CycleDiffusion [32] proposes DPM-Encoder to project an image to the latent space of diffusion models, thereby enabling them to be used as zero-shot image editors. InstructPix2Pix [2] and Imagen Editor [30] pass the image to be edited directly to the diffusion model by bypassing the DDIM inversion step. This improves the fidelity of the generated edits. But, none of these works addresses the challenges involved with _iterative editing_, which is the main focus of our work. A trivial way to make these efforts iterative would be to pass the edited image recursively. We compare with such iterative variants of PnP [28] and InstructPix2Pix [2] in Sec. 5. Recent efforts in mask guided image editing methods [1, 4] can indeed support multi-granular editing. We compare with these methods in Sec. 5. **Iterative Image Synthesis**: This area is relatively less explored. CAISE [12] and Conversational Editing [17] apply operations like 'rotate image by \(90^{\circ}\)', 'change the contrast by \(60\)', 'crop the image' and so on. SSCR [7] and Keep drawing it [6] operate on synthetic data from the CLEVR [10] dataset, and try to add objects to a canvas incrementally. Our approach operates on natural images, and can handle real-world edits that an artist might make to an image. ## 3 Anatomy of Latent Diffusion Models Diffusion models [26, 9] are a class of probabilistic models that generate a sample from a data distribution, \(\mathbf{x_{0}}\sim p(\mathbf{x_{0}})\), by gradually denoising a normally distributed random variable \(\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{\mathrm{I}})\) over \(T\) iterations. The denoising process creates a series of intermediate samples \(\mathbf{x}_{T},\mathbf{x}_{T-1},\cdots,\mathbf{x}_{0}\), with decreasing amounts of noise. The amount of noise to be removed at each step is predicted by a neural network \(\epsilon_{\theta}(\mathbf{x}_{t},t)\), where \(t\) is the denoising step. These models have been found to be extremely successful in synthesizing realistic images [5, 21, 24] when \(\mathbf{x}_{i}\in\mathbb{R}^{W\times H\times 3}\). To reduce the computational requirements for training without degrading quality and flexibility, Rombach _et al_. proposed Latent Diffusion [22], which performs the diffusion process in the latent space of a pretrained VQ-VAE [29] encoder. Hence, the input image \(\mathbf{x}_{0}\) is first projected into the latent space \(\mathbf{z}_{0}=\mathcal{E}(\mathbf{x}_{0})\), where \(\mathcal{E}(\cdot)\) is the VQ-VAE encoder and \(\mathbf{z}_{0}\in\mathbb{R}^{w\times h\times 4}\), where \(w=W/2^{4}\) and \(h=H/2^{4}\)[22]. The forward diffusion process uses a Markovian noise process \(q\), which gradually adds noise to \(\mathbf{z}_{0}\) through \(\mathbf{z}_{T}\) with a Gaussian function following a variance schedule \(\beta_{t}\): \[q(\mathbf{z}_{t}|\mathbf{z}_{t-1})=\mathcal{N}(\mathbf{z}_{t};\sqrt{1-\beta_{t}}\mathbf{z}_{t-1},\beta_{t}\textbf{I}) \tag{1}\] Ho _et al_.
[9] showed that sampling \(\mathbf{z}_{t}\sim q(\mathbf{z}_{t}|\mathbf{z}_{0})\) need not be iterative, but can instead be done in closed form: \[q(\mathbf{z}_{t}|\mathbf{z}_{0})=\mathcal{N}(\mathbf{z}_{t};\sqrt{\bar{\alpha}_{t}}\mathbf{z}_{0},(1-\bar{\alpha}_{t})\textbf{I}) \tag{2}\] \[=\sqrt{\bar{\alpha}_{t}}\mathbf{z}_{0}+\epsilon\sqrt{1-\bar{\alpha}_{t}},\quad\epsilon\sim\mathcal{N}(\textbf{0},\textbf{I}) \tag{3}\] where \(\alpha_{t}=1-\beta_{t}\) and \(\bar{\alpha}_{t}=\prod_{r=0}^{t}\alpha_{r}\). The reverse diffusion process learns \(\epsilon_{\theta}(\mathbf{z}_{t},t)\) to predict \(\epsilon\) from Eq. (3). The learning objective is as follows: \[\mathcal{L}_{LDM}=\mathbb{E}_{t\sim[1,T],\mathbf{z}_{0}=\mathcal{E}(\mathbf{x}_{0}),\epsilon\sim\mathcal{N}(\textbf{0},\textbf{I})}[\|\epsilon-\epsilon_{\theta}(\mathbf{z}_{t},t)\|^{2}] \tag{4}\] To sample an image from a learned noise prediction function (loosely referred to as the diffusion model henceforth) from \(\mathbf{z}_{T}\sim\mathcal{N}(\textbf{0},\textbf{I})\), we iteratively apply the following step: \[\mathbf{z}_{t-1}=\mathbf{z}_{t}-\epsilon_{\theta}(\mathbf{z}_{t},t)+\mathcal{N}(\textbf{0},\sigma_{t}^{2}\textbf{I}) \tag{5}\] where \(\sigma_{t}=\sqrt{\beta_{t}}\). We refer the readers to Ho _et al_. [9] for more details on the sampling process. Finally, \(\mathbf{z}_{0}\) is projected back to the image space by the VQ-VAE decoder \(\mathcal{D}\): \(\mathbf{x}_{0}=\mathcal{D}(\mathbf{z}_{0})\). The diffusion model \(\epsilon_{\theta}(\cdot,\cdot)\) is generally implemented with a time-conditional UNet [23] architecture. It contains an encoder, middle layer and a decoder. Each of these modules contains multiple blocks with a residual convolutional layer, self-attention layer and cross-attention layer. The diffusion model can be optionally augmented to consume an extra condition \(\mathbf{y}\) (like text, caption, canny-maps, segmentation masks, skeletal information _etc_.) as follows: \(\epsilon_{\theta}(\mathbf{z}_{t},t,\mathbf{y})\). The cross-attention layers effectively warp the extra conditioning into the diffusion model. Latent Diffusion Models (LDM) have been effectively applied in image editing applications too. The major challenge is to make modifications specified via a textual condition \(\mathbf{y}\) to an existing image \(\mathbf{x}_{0}\). Methods like PnP [28], NTI [18], Imagic [11] and DiffEdit [4] first invert \(\mathbf{x}_{0}\) to \(\mathbf{z}_{inv}\) using techniques similar to DDIM Inversion [5, 27], so that when they start the diffusion process (in Eq. (5)) from \(\mathbf{z}_{inv}\), they will be able to get back \(\mathbf{x}_{0}\). Alternatively, approaches like InstructPix2Pix [2] and Imagen Editor [30] learn a few extra layers to directly pass \(\mathbf{x}_{0}\) as input to the diffusion model. This approach retains better characteristics of the input image, as it side-steps the inversion process. We build \(\mathrm{EMILIE}\) by extending InstructPix2Pix [2] to support iterative and multi-granular control, as it has minimal changes from LDM [22] (the input tensor \(\mathbf{z}_{t}\) is augmented with four more channels to consume \(\mathbf{x}_{0}\), and the corresponding weights are initialized to zero; no other changes are made to LDM [22]), and that its trained model is publicly available.
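To make the notation concrete, here is a minimal sketch of the closed-form noising of Eq. (3) and the simplified denoising update of Eq. (5), with a stand-in for the trained UNet \(\epsilon_{\theta}\); the linear \(\beta_{t}\) schedule, tensor shapes and step count are illustrative assumptions, not values taken from [2].

```python
import torch

T = 1000
beta = torch.linspace(1e-4, 0.02, T)          # variance schedule (illustrative)
alpha_bar = torch.cumprod(1.0 - beta, dim=0)  # \bar{alpha}_t

def q_sample(z0, t):
    """Eq. (3): z_t = sqrt(abar_t) * z0 + sqrt(1 - abar_t) * eps."""
    eps = torch.randn_like(z0)
    return alpha_bar[t].sqrt() * z0 + (1.0 - alpha_bar[t]).sqrt() * eps

def denoise_step(eps_theta, z_t, t):
    """One reverse step, in the simplified form written in Eq. (5)."""
    sigma_t = beta[t].sqrt()
    noise = torch.randn_like(z_t) if t > 0 else torch.zeros_like(z_t)
    return z_t - eps_theta(z_t, t) + sigma_t * noise

# Usage with a placeholder noise predictor standing in for the trained UNet.
eps_theta = lambda z, t: torch.zeros_like(z)
z = torch.randn(1, 4, 64, 64)                 # latent for a 512x512 image
for t in reversed(range(T)):
    z = denoise_step(eps_theta, z, t)
```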
## 4 Iterative Multi-granular Editing As previously noted in Section 1, existing techniques are all focused on one-shot generation (no iterative capabilities) and do not provide the ability for users to specify and constrain the spatial extent of the intended edits. Since creative professionals would want to iteratively edit images on the canvas while needing more spatial control on where in the image (global or local) the edits go, the first contribution of this work is the formulation of a new problem setting we call _Iterative Multi-granular Image Editing_. Given an input image \(\mathbf{I}_{0}\), a set of edit instructions \(E=\{\mathbf{y}_{1},\cdots,\mathbf{y}_{k}\}\), and an optional set of masks \(M=\{\mathbf{m}_{1},\cdots,\mathbf{m}_{k}\}\) corresponding to each \(\mathbf{y}_{i}\), such an image editor \(\mathcal{M}(\mathbf{I}_{i},\mathbf{y}_{i},\mathbf{m}_{i})\) should be able to make the semantic modification intended by \(\mathbf{y}_{i}\) on \(\mathbf{I}_{i}\) iteratively. If the mask \(\mathbf{m}_{i}\) is provided by the user, then the model \(\mathcal{M}\) should constrain the edits to follow \(\mathbf{m}_{i}\). The set of edited images \(\mathcal{I}_{edits}=\{\mathbf{I}_{1},\cdots,\mathbf{I}_{k}\}\) should be visually appealing and semantically consistent with \(E\) and \(M\). Here, we propose one instantiation for \(\mathcal{M}(\cdot,\cdot,\cdot)\) that builds on top of a pre-trained diffusion model [2] and does not need any retraining, instead relying only on test-time optimization of the intermediate representation of the model. The practicality of our newly introduced problem setting combined with the versatility of diffusion models allows \(\mathrm{EMILIE}\) to be a faithful co-pilot for image editing workflows. We explain how \(\mathrm{EMILIE}\) handles multi-granular control and iterative edits in Sec. 4.1 and Sec. 4.2 respectively. We explain our overall framework in Sec. 4.3. ### Multi-granular Image Editing The first contribution of our work is to equip diffusion models with the flexibility to constrain where an edit should be applied spatially on an image. We propose to interpret a diffusion model as an energy-based model (EBM) [13, 16], leading to a flexible new method to selectively restrict gradient updates to the region of interest to the user. We begin with a brief review of EBMs. An EBM provides a flexible way of modelling data likelihood. The probability density for the latent embedding \(\mathbf{z}=\mathcal{E}(\mathbf{x})\), corresponding to an image \(\mathbf{x}\), can be expressed as follows: \[p_{\psi}(\mathbf{z})=\frac{\text{exp}\,\left(-E_{\psi}(\mathbf{z})\right)}{\int_{\mathbf{z}}\text{exp}\,\left(-E_{\psi}(\mathbf{z})\right)d\mathbf{z}}, \tag{6}\] where \(E_{\psi}(\cdot)\) is an energy function which maps \(\mathbf{z}\) to a single scalar value, called the energy or score. \(E_{\psi}(\cdot)\) is usually instantiated as a neural network with parameters \(\psi\). After learning, sampling from the EBM is challenging, owing to the intractability of the partition function \(\left(\int_{\mathbf{z}}\text{exp}\,(-E_{\psi}(\mathbf{z}))d\mathbf{z}\right)\). A popular MCMC algorithm called Langevin Sampling [19, 31], which makes use of the gradient of \(E_{\psi}(\cdot)\), is used as a surrogate, as follows: \[\mathbf{z}_{t}=\mathbf{z}_{t-1}-\frac{\lambda}{2}\partial_{\mathbf{z}}E_{\psi}(\mathbf{z}_{t-1})+\mathcal{N}(0,\omega_{t}^{2}\mathbf{I}) \tag{7}\] where \(\lambda\) is the step size and \(\omega\) captures the variance of samples.
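The Langevin update of Eq. (7) is short to write down. The sketch below assumes any differentiable energy function; the step size, noise scale and toy quadratic energy are illustrative.

```python
import torch

def langevin_sample(energy_fn, z_init, steps=100, lam=0.01, omega=0.005):
    """Langevin dynamics, Eq. (7): z_t = z_{t-1} - (lam/2) dE/dz + N(0, omega^2 I)."""
    z = z_init.clone().requires_grad_(True)
    for _ in range(steps):
        energy = energy_fn(z).sum()
        grad, = torch.autograd.grad(energy, z)
        z = (z - 0.5 * lam * grad + omega * torch.randn_like(z)) \
            .detach().requires_grad_(True)
    return z.detach()

# Usage: a toy quadratic energy whose low-energy region the chain drifts towards.
z0 = torch.randn(16, 2)
samples = langevin_sample(lambda z: 0.5 * (z ** 2).sum(dim=-1), z0)
```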
Interestingly, the sampling process used by the EBM in Eq. (7) and that used by the latent diffusion model in Eq. (5) are functionally the same. Without loss of generality, this allows us to express the _noise predictions_ from the diffusion model \(\epsilon_{\theta}(\cdot,\cdot)\) as the _learned gradients_ of the energy function \(E_{\psi}(\cdot)\)[16]. Thus we can control the granularity of each edit by selectively zeroing out the gradients (equivalent to masking parts of the noise predictions) that we are not interested in updating. This theoretically grounded approach turns out to be simple to implement. For a user-provided mask \(\mathbf{m}\in\{0,1\}^{w\times h\times 1}\), which specifies the location to constrain the edits, we update the iterative denoising step in Eq. (5) as follows: \[\mathbf{z}_{t-1}=\mathbf{z}_{t}-\mathbf{m}*\epsilon_{\theta}(\mathbf{z}_{t},t)+\mathcal{N}(\mathbf{0},\sigma_{t}^{2}\mathbf{I}) \tag{8}\] Our experimental analysis in Sec. 5.4 shows that such gradient modulation is effective in localizing the user intent (conveyed via the binary mask), without adding any extra training or compute expense. ### Iterative Image Editing The next contribution of our work is to iteratively edit images with diffusion models while maintaining the state of the canvas, i.e., the newer edit instruction from a user should be applied on the latest version of the image. Diffusion models capable of doing image editing [2, 30] can consume the image to be edited. We briefly describe the architectural components in Fig. 4. Consider \(\mathbf{y}^{e}\) to be the edit instruction that needs to be applied to an image \(\mathbf{I}^{e}\), at an edit step \(e\). \(\mathbf{I}^{e}\) is passed through a pretrained VQ-VAE encoder \(\mathcal{E}\) to obtain \(\mathbf{z}_{img}^{e}\). \(\mathbf{z}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) is the initial latent variable. \(\mathbf{z}_{img}^{e}\) is stacked with \(\mathbf{z}_{t}\) and successively denoised by the pretrained diffusion model (shown in the light green outline) over \(T\) iterations. A naive approach to iteratively edit the image would be to pass the edited image \(\mathbf{I}^{e+1}\) through the model along with the next edit instruction \(\mathbf{y}^{e+1}\). Unfortunately, this accumulates and amplifies the noisy artifacts in the image. The first row of Fig. 2 illustrates this phenomenon. In this experiment, we iteratively pass an image through the architecture defined in Fig. 4, and use \(\mathbf{y}^{e}=\phi\) (this is to characterize the behaviour independent of the edit instruction at each step). On a high level, there would potentially be only two sources that might introduce such noisy artifacts: 1) the VQ-VAE based auto-encoder and 2) the denoising steps of the latent diffusion model. In order to isolate each of their contributions, we experiment using only the VQ-VAE based encoder and decoder from the pipeline described in Fig. 4; _i.e._\(\mathbf{I}^{e+1}=\mathcal{D}(\mathcal{E}(\mathbf{I}^{e}))\). We show the corresponding results in the second row of Fig. 2. These results illustrate that the auto-encoding is not perfect and accumulates noise over time. This motivates us to propose _latent iteration_, where the latent representation produced by the diffusion model corresponding to edit instruction \(\mathbf{y}^{e}\) will be passed iteratively along with \(\mathbf{y}^{e+1}\) in the successive edit step. Hence, \(\mathbf{z}_{img}^{e+1}\) will be used instead of \(\mathcal{E}(\mathbf{I}^{e+1})\) as input to the diffusion model.
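The following sketch puts the two ideas together: the masked update of Eq. (8) and the latent iteration just described. Here `eps_theta` is assumed to be an InstructPix2Pix-style noise predictor that also consumes the instruction and the image latent, `encode`/`decode` stand in for the VQ-VAE, and the noise schedule and all names are illustrative; the magnitude normalization of Sec. 5.3 is omitted for brevity.

```python
import torch

def masked_denoise_step(eps_theta, z_t, t, y, z_img, mask, sigma_t):
    """Eq. (8): zero the noise prediction (the learned gradient) outside the
    user mask, so only the masked region of the latent is updated."""
    noise = torch.randn_like(z_t) if t > 0 else torch.zeros_like(z_t)
    return z_t - mask * eps_theta(z_t, t, y, z_img) + sigma_t * noise

def edit_sequence(encode, decode, eps_theta, sigmas, I0, edits, masks, T=50):
    """Latent iteration: reuse the final latent of edit e as the image latent
    for edit e+1, instead of re-encoding the decoded image each time."""
    outputs, z_prev = [], None
    for y, m in zip(edits, masks):
        z_img = encode(I0) if z_prev is None else z_prev  # skip VQ-VAE round-trip
        z = torch.randn_like(z_img)                       # fresh z_T per edit
        for t in reversed(range(T)):
            z = masked_denoise_step(eps_theta, z, t, y, z_img, m, sigmas[t])
        outputs.append(decode(z))
        z_prev = z
    return outputs
```

A mask of all ones recovers a global edit, so this one update rule spans the full local-to-global spectrum.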
Latent iteration alleviates the exacerbation of the noisy artifacts, as seen in the last row of Fig. 2. Figure 4: The figure illustrates the architectural components [2] involved in editing an image \(\mathbf{I}^{e}\), according to an edit instruction \(\mathbf{y}^{e}\) in the \(e^{th}\) edit iteration. \(\mathbf{z}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) is successively denoised by the diffusion model for \(T\) iterations to generate \(\mathbf{z}_{img}^{e+1}\). Figure 2: While iteratively passing an image through the diffusion model [2], we see noisy artifacts being added in successive steps (we illustrate 4, 8, 12 and 16 steps) in Row 1. We see a similar phenomenon while only auto-encoding the same image iteratively (here, the diffusion model is not used) in Row 2. Finally, when we iterate in the latent space (where \(\mathbf{z}_{img}^{e+1}\) is passed iteratively), we see that the successive images are robust to such noisy artifacts. In Fig. 3, we apply latent iteration over multiple edit instructions \(\mathbf{y}^{e}\). We note that our method is able to make semantic edits consistent with the corresponding caption in each edit iteration. Further, in Fig. 5, we compare the fourth edit result between iterating over image space and latent space. (We showcase the results of the other steps in the Supplementary owing to space constraints.) It is interesting to note that latent iteration is not only able to reduce the noise accumulation (box \(1\)), but also improves consistency with previous edits. In box \(3\), the sunglasses from the previous step are retained as is (note the frame), while in boxes \(2\) and \(5\), the leather head cover is maintained with the newer edits. In box \(4\), latent iteration was able to add a related semantic concept to improve the consistency of the image. We see that this behaviour consistently holds across our exhaustive experimental evaluation in Sec. 5. ### Overall Framework Algorithm 1 summarizes the key steps involved in adding multi-granular edits to an image iteratively. For the first edit instruction \(\mathbf{y}_{1}\), we initialize \(\mathbf{z}_{img}^{e}\) to the VQ-VAE encoding of the input image \(\mathbf{I}_{0}\) in Line \(4\). This latent is then passed to the diffusion model via \(\mathbf{z}_{T}\) in Line \(7\). If the user specifies a mask to control the spatial reach of the edit, it is used while denoising the latent in Step 9. After denoising, \(\mathbf{z}_{0}\) is decoded using the VQ-VAE decoder and stored in \(\mathcal{I}_{edits}\). Importantly, \(\mathbf{z}_{0}\) is saved and reused for subsequent edit instructions in Line \(11\) and Line \(6\) respectively. ``` 0: Image to be Edited: \(\mathbf{I}_{0}\); Edit Instructions: \(E=\{\mathbf{y}_{1},\cdots,\mathbf{y}_{k}\}\); Optional Masks: \(M=\{\mathbf{m}_{1},\cdots,\mathbf{m}_{k}\}\); Pre-trained Diffusion Model [2]: \(\epsilon_{\theta}(\cdot,\cdot,\cdot)\); VQ-VAE: \(\mathcal{E}(\cdot),\mathcal{D}(\cdot)\); Variance Schedule: \(\sigma_{t}\); Number of Diffusion Steps: \(T\). 0: Edited Images: \(\mathcal{I}_{edits}\) 1:\(\mathbf{z}_{init}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) 2:for\(\mathbf{y}_{e}\in E\)do\(\triangleright\) For each edit instruction. 3:if e == \(1\)then 4:\(\mathbf{z}_{img}^{e}\leftarrow\mathcal{E}(\mathbf{I}_{0})\) 5:else 6:\(\mathbf{z}_{img}^{e}\leftarrow\text{prev\_latent}\)\(\triangleright\) Latent Iteration.
7:\(\mathbf{z}_{T}\leftarrow\text{concat}(\mathbf{z}_{init},\mathbf{z}_{img}^{e})\) 8:for\(t\in\{T,\cdots,0\}\)do\(\triangleright\) For each denoising step. 9:\(\mathbf{z}_{t-1}=\mathbf{z}_{t}-\mathbf{m}_{e}\ast\epsilon_{\theta}(\mathbf{z}_{t},t,\mathbf{y}_{e})+\mathcal{N}(\mathbf{0},\sigma_{t}^{2}\mathbf{I})\)\(\triangleright\) Eq. (8) 10:\(\mathcal{I}_{edits}.\text{append}(\mathcal{D}(\mathbf{z}_{0}))\) 11: prev_latent = \(\mathbf{z}_{0}\) 12:return\(\mathcal{I}_{edits}\) ``` **Algorithm 1** Iterative Multi-granular Image Editor ## 5 Experiments and Results ### IMIE-Bench Benchmark To complement our newly introduced problem setting, we introduce IMIE-Bench (Iterative Multi-granular Image Editing Benchmark) to evaluate the efficacy of the methodologies. Inspired by TEdBench [11], we manually curate \(40\) images from the LAION [25] dataset. These include images of people, landscapes, paintings, and monuments. Next, we collected four semantically consistent edit instructions that would modify these images. This helps quantify the quality of the iterative edits made by the model. We also add masks to some of these edit instructions to simulate local editing. We hope that IMIE-Bench will serve as a standardized evaluation setting for this task. Figure 5: The figure compares the fourth edit step in Fig. 3 between image space and latent space iteration. Green boxes \(2,3\) and \(5\) show improved preservation of semantic concepts from the previous edit step, while box \(4\) shows how newer related concepts are added, while keeping the image less noisy (box \(1\)). Kindly zoom in for fine details. Figure 3: The figure shows how our proposed _latent iteration_ framework is able to semantically modify the first image, consistent with the text caption provided by the user at each time step. Please see Sec. 4.2 for more details. ### Experimental Protocol We evaluate the 'iterative' and 'multi-granular' aspects of our approach both qualitatively and quantitatively. We use examples from IMIE-Bench and compare against three baseline approaches. Two of these include iteratively passing the edited image through two image editing methods: InstructPix2Pix [2] and Plug and Play [28]. These methods are representative of two kinds of approaches for diffusion-based image editors. Plug and Play uses DDIM inversion to project the image to the latent space of the diffusion model, while InstructPix2Pix learns a few new layers to directly pass the input image into the model. The third baseline is to concatenate all the instructions received until a time step, and pass it through InstructPix2Pix along with the original image. We show these results in the following sections. Figure 6: Here we compare \(\mathrm{EMILIE}\) with three baselines on images from IMIE-Bench. 'Iterative PnP' and 'Iterative iP2P' refer to Plug and Play [28] and Instruct Pix2Pix [2], where the latest edited image is recursively passed on in the next edit step. 'Concatenating Captions' refers to the setting where all the edit instructions that have been received so far are concatenated and sent through Instruct Pix2Pix [2] to edit the original image. We see that \(\mathrm{EMILIE}\) is able to consistently reduce the noisy artifact accumulation and retain the semantics of earlier steps better. Please see more results in the Supplementary material.
Further, to better understand the multi-granular local editing aspect of \(\mathrm{EMILIE}\), we compare with two recent state-of-the-art mask-based image editing methods: DiffEdit [4] and Blended Latent Diffusion [1] on the EditBench [30] dataset. We could not compare with Imagen Editor [30] because their code, models or results are not publicly available. We use two complementary metrics for quantitative evaluation: first, we compute the CLIP [20] similarity score of the edited image with the input edit instruction, and the second is a text-text similarity score, where BLIP [14] is used to first caption the generated image, which is then compared with the edit instruction. ### Implementation Details We build \(\mathrm{EMILIE}\) by extending the InstructPix2Pix diffusion model [2] to support iterative multi-granular editing capability. We keep all the hyper-parameters involved the same as in their implementation. We use a single NVIDIA A-100 GPU to do our inference. Each edit instruction takes \(6.67\) seconds on average to complete. While doing latent iteration, we find that the magnitude of values in \(\mathbf{z}_{img}^{e+1}\) is significantly lower than that of \(\mathcal{E}(\mathbf{I}^{e+1})\). To normalize this, we multiply \(\mathbf{z}_{img}^{e+1}\) with a factor \(f=avg(\mathcal{E}(\mathbf{I}^{e+1}))/avg(\mathbf{z}_{img}^{e+1})\). We use ancestral sampling with the Euler method to sample from the diffusion model. ### Results on IMIE-Bench We showcase our qualitative results in Fig. 6. We compare with iterative versions of InstructPix2Pix [2] and Plug and Play (PnP) [28] and a concatenated set of edit instructions, on images from IMIE-Bench. While analyzing the results, we note that \(\mathrm{EMILIE}\) is able to retain the concepts from the previous edits, without deteriorating the quality in successive iterations. Iterative PnP struggles the most. This can be attributed to the DDIM inversion which projects the image to the latent space of the diffusion model. The concatenated caption (that contains all the edit instructions that we have seen so far) contains multiple concepts. The diffusion model tries its best to generate these multi-concept images, but struggles to maintain consistency with the previously edited version of the image. Iterative InstructPix2Pix performs the closest to \(\mathrm{EMILIE}\), but accumulates noise as edits progress. \(\mathrm{EMILIE}\) supports multi-granular edits too, as is shown in the last column. The baseline methods cannot operate in this setting. We compare \(\mathrm{EMILIE}\) with multi-granular approaches in Sec. 5.5. These results illustrate that iterating in latent space indeed helps attenuate noise, and gives the model the plasticity to add new concepts while retaining the stability not to forget previously edited concepts. ### Results on EditBench Fig. 7 shows the results of using \(\mathrm{EMILIE}\) for doing localized edits in images. The top row includes the input image with mask and the corresponding edit text from the user. The subsequent two rows include results from two recent state-of-the-art approaches: DiffEdit [4] and Blended Latent Diffusion [1], followed by \(\mathrm{EMILIE}\). The results show that the methods are successful in limiting the extent of edits to the area constrained by the user. \(\mathrm{EMILIE}\) is able to make semantically richer modifications to the masked region. Our training-free guidance during the denoising steps is able to constrain the edits to local regions, while being more semantically consistent.
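Before turning to the quantitative results, here is a sketch of how such a CLIP image-text score can be computed. We assume the HuggingFace `transformers` CLIP implementation with the ViT-B/32 checkpoint, since the paper does not specify which CLIP variant or library was used.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Assumption: any CLIP checkpoint works here; ViT-B/32 is a common default.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(image, instruction):
    """Cosine similarity between the edited image and the edit instruction."""
    inputs = processor(text=[instruction], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img * txt).sum(dim=-1).item()
```

The BLIP-based text-text score follows the same pattern: caption the edited image with a BLIP captioning model, then compare the caption against the edit instruction with a text similarity measure.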
Finally, we run a quantitative evaluation of how well the edited image is able to capture the semantics of the edit instruction by measuring the CLIP and BLIP scores. The results are provided in Tab. 1. We comfortably outperform the baselines here too. ### User Study We conduct a user study with images from the IMIE-Bench and EditBench [30] datasets to test the preference of users across the generated images in Tab. 3 and Tab. 2 respectively. The former evaluates iterative edits, while the latter analyzes the preference for multi-granular edits. From the \(30\) users, most of them preferred the generations from \(\mathrm{EMILIE}\) over the other baselines. Figure 7: We illustrate the ability of \(\mathrm{EMILIE}\) to do local editing on images from the EditBench [30] benchmark. We compare against recent state-of-the-art methods, DiffEdit (ICLR '23) [4] and Blended Latent Diffusion (SIGGRAPH '23) [1]. Our simple gradient modification approach is able to consistently improve the quality of generation when compared to these approaches. ## 6 Further Discussions and Analysis ### Object Insertion with \(\mathrm{EMILIE}\) Our proposed approach gives control to the user in specifying the location of edits. At times, users might have an image of an object that they would like to insert into a base image. It would be great if the model could automatically propose a plausible location and then insert the object there. We find a straightforward strategy to combine \(\mathrm{EMILIE}\) with GracoNet [36] (a recent method for proposing object placement in images) to achieve this. Given a base image and an inset image, GracoNet proposes masks for potential locations for the inset image. Next, we pass the inset image through the BLIP-2 [14] model to generate a caption. Finally, the mask, the caption and the base image are passed to \(\mathrm{EMILIE}\) to render the final image. Fig. 9 showcases some examples from the OPA dataset [15]. We can see that the object insertion by \(\mathrm{EMILIE}\) is more composed and realistic than that by GracoNet. This is because, instead of just placing the object at a location, \(\mathrm{EMILIE}\) indeed synthesises a new object at the location specified via the mask proposed by GracoNet. ### Ablating the Gradient Control In order to understand the contribution of our proposed gradient control strategy explained in Sec. 4.1, we turn it off while doing local editing. Results in Fig. 8 show that without modulating the latent representations selectively (following Eq. (8)), the edit instructions get applied globally. ### Limitations While experimenting with \(\mathrm{EMILIE}\), we could understand that it cannot handle negative edit instructions. If we add a pair of sunglasses as the first edit and try removing them in the second edit step, the model fails to give back the original image. Disentangling the feature representations for each edit would potentially help to alleviate this issue. We will explore this in future work. We show more failure cases in the Supplementary materials. ## 7 Conclusion We introduce a novel problem setting of _Iterative Multi-granular Image Editing_ where a creative professional can provide a series of edit instructions to be made to a real image. Optionally, they can specify the spatial locality on which the edit needs to be applied. Our proposed approach \(\mathrm{EMILIE}\) is training free and utilizes a pre-trained diffusion model with two key contributions: latent iteration (Sec.
4.2) for supporting iterative editing and gradient modulation (Sec. 4.1) for supporting multi-granular editing. Finally, we introduce a new benchmark dataset, IMIE-Bench, and bring out the mettle of our approach by comparing \(\mathrm{EMILIE}\) against other state-of-the-art approaches adapted to our novel task. We hope that this newly identified research area will be actively investigated by the community. \begin{table} \begin{tabular}{l|c c c} \hline \hline & DiffEdit [4] & BLD [1] & \(\mathrm{EMILIE}\) \\ \hline Average CLIP Score & 0.272 (-0.039) & 0.280 (-0.031) & **0.311** \\ Average BLIP Score & 0.582 (-0.038) & 0.596 (-0.024) & **0.620** \\ \hline \hline \end{tabular} \end{table} Table 1: We quantitatively evaluate the performance of \(\mathrm{EMILIE}\) for multi-granular editing here. When compared to recent state-of-the-art approaches, \(\mathrm{EMILIE}\) achieves better performance in both the CLIP and BLIP score metrics. \begin{table} \begin{tabular}{l|c} \hline \hline Method & Split \\ \hline DiffEdit [4] & 4.87\% \\ BLD [1] & 13.33\% \\ \(\mathrm{EMILIE}\) & 81.79\% \\ \hline \hline \end{tabular} \end{table} Table 2: User study for multi-granular edits. \begin{table} \begin{tabular}{l|c} \hline \hline Method & Split \\ \hline Concat & 18.18 \% \\ iP2P & 19.32 \% \\ \(\mathrm{EMILIE}\) & 62.50 \% \\ \hline \hline \end{tabular} \end{table} Table 3: User study for iterative edits. Figure 9: We re-purpose the local editing capability of \(\mathrm{EMILIE}\) to insert objects into specific locations of a source image. For this, we combine \(\mathrm{EMILIE}\) with GracoNet [36], which predicts the location of the object to be inserted. We can see from the results that the insertions done by \(\mathrm{EMILIE}\) are more composed with the background. Figure 8: Here we experiment with turning off the gradient control strategy explained in Sec. 4.1 for local edits. Without modulating the gradients, the characteristics of the edit instruction get globally applied on the image, whereas \(\mathrm{EMILIE}\) is able to easily localise the edits effectively.
2304.11084
Training Automated Defense Strategies Using Graph-based Cyber Attack Simulations
We implemented and evaluated an automated cyber defense agent. The agent takes security alerts as input and uses reinforcement learning to learn a policy for executing predefined defensive measures. The defender policies were trained in an environment intended to simulate a cyber attack. In the simulation, an attacking agent attempts to capture targets in the environment, while the defender attempts to protect them by enabling defenses. The environment was modeled using attack graphs based on the Meta Attack Language. We assumed that defensive measures have downtime costs, meaning that the defender agent was penalized for using them. We also assumed that the environment was equipped with an imperfect intrusion detection system that occasionally produces erroneous alerts based on the environment state. To evaluate the setup, we trained the defensive agent with different volumes of intrusion detection system noise. We also trained agents with different attacker strategies and graph sizes. In experiments, the defensive agent using policies trained with reinforcement learning outperformed agents using heuristic policies. Experiments also demonstrated that the policies could generalize across different attacker strategies. However, the performance of the learned policies decreased as the attack graphs increased in size.
Jakob Nyberg, Pontus Johnson
2023-04-17T07:52:00Z
http://arxiv.org/abs/2304.11084v1
# Learning automated defense strategies using graph-based cyber attack simulations ###### Abstract We implemented and evaluated an automated cyber defense agent. The agent takes security alerts as input and uses reinforcement learning to learn a policy for executing predefined defensive measures. The defender policies were trained in an environment intended to simulate a cyber attack. In the simulation, an attacking agent attempts to capture targets in the environment, while the defender attempts to protect them by enabling defenses. The environment was modeled using attack graphs based on the Meta Attack Language (MAL). We assumed that defensive measures have downtime costs, meaning that the defender agent was penalized for using them. We also assumed that the environment was equipped with an imperfect intrusion detection system that occasionally produces erroneous alerts based on the environment state. To evaluate the setup, we trained the defensive agent with different volumes of intrusion detection system noise. We also trained agents with different attacker strategies and graph sizes. In experiments, the defensive agent using policies trained with reinforcement learning outperformed agents using heuristic policies. Experiments also demonstrated that the policies could generalize across different attacker strategies. However, the performance of the learned policies decreased as the attack graphs increased in size. cyber security, machine learning, reinforcement learning, attack graphs, attack modeling, cyber-physical systems, cyber-defense ## I Introduction Cybersecurity is an important concern in our increasingly digital society. Consequences of cyber crime can be severe, both on an individual and societal level. Apart from managing personal finances and information, digital systems are also used in maintaining infrastructure such as power grids, water supply, and public transportation. The intricacies of these systems provide a large attack surface for malicious actors to exploit. To cover this attack surface, we believe that a combination of automation and human expertise is needed. Using automated systems to handle rote tasks of the decision-making process can allow human operators to focus on higher-level tasks that are yet difficult for a computer to perform. An autonomic system is a system that can manage itself, and adapt to changes in its environment [1]. A common reference model for autonomic systems is the MAPE-K loop [1]. The MAPE-K loop consists of four steps: _monitor_, _analyze_, _plan_, and _execute_. An autonomic agent monitors its environment through _sensors_ provided by the system, and executes actions through _actuators_. In our implementation of the MAPE-K loop, the sensors are intrusion detection system (IDS) modules for providing alert signals, and the actuators are a set of predefined defensive operations, such as can be defined in a software-defined network (SDN) controller or a host-based IDS like Wazuh [2]. To analyze the state and plan actions, we use a policy function optimized using reinforcement learning (RL) that uses the IDS signals as input and selects a predefined defensive measure to execute. An issue with RL, especially when preceded by the prefix "deep", is the need for large amounts of online data collection. Policies are learned by continually observing and interacting with the environment. If running the system is expensive or slow, this can be costly since training an agent can require thousands of episodes of experience.
One approach to solve this issue is to train in a simulated environment. The simulator attempts to imitate the inputs, outputs and dynamics of the real system. The learned policy function can then be transferred to a real system with little or no additional training. This is sometimes referred to as _sim-to-real_ transfer [3]. This work investigates the first half of the sim-to-real process, the simulation and training. To simulate a cyberattack, we employ approaches from the field of _threat modeling_, which focuses on identifying and analyzing threats to a system. Specifically, we use attack graphs produced using the Meta Attack Language (MAL) [4] to model the cyberattack process. Although this work focuses on intrusion response, a major challenge in network defense is the _detection_ of misuse or compromise. We eschew this issue by assuming that an IDS capable of detecting the ongoing attack is in place in the system we wish to apply our agent in, the output of which is used as input to the decision policy. In practice, the IDS can be a signature-based system, or one based on machine learning. We relax our assumption by allowing the IDS to be imperfect, failing to register some events and falsely reporting on others. A question of interest to us was how well the defender agent could perform with, or in spite of, the imperfect information from the IDS. We investigated this by training the agent with different volumes of IDS errors, and comparing the performance of the agent against heuristic baselines. We were also interested in the ability of the RL policies to generalize to different attacker strategies. This was investigated by training policies using RL on a number of different attacker strategies, and evaluating the performance of the agent on previously unseen ones. Lastly, we investigated the effect of the size of the attack graph on the performance of the RL policies. ## II Related Work Cyber intrusion detection and response are broad research fields whose scope extends far beyond that of this work. For the purpose of brevity, we focus this section on other works that employ reinforcement learning for intrusion _response_. There have been several works that implement different varieties of automated defender agents based on RL [5, 6, 7, 8, 9, 10, 11]. Huang, Huang, and Zhu [11] list several applications where RL can or may be used to develop a cyber-resilient system. Gabirondo-Lopez, Egana, Miguel-Alonso, _et al._[6] train a variant of AlphaZero using self-play with an attacker and defender agent. Hammar and Stadler [8] present and try to solve the defense problem as an optimal stopping problem. Hu, Zhu, and Liu [10] formulate the cyber environment as a partially observable Markov decision process (POMDP) and use Q-learning to find a solution policy. Several of these works also implement their own cyber attack simulators in order to learn defender policies. Named simulators include CybOrg [12], CyberBattleSim [9] and Yawning Titan [13]. The work of Andrew, Spillard, Collyer, _et al._[13] bears close resemblance to this one. They, too, present a graph-based cyberattack simulator and train a defender agent using the output of the simulator. Unlike this work, however, the authors use causal Bayesian optimization to train a policy function, rather than reinforcement learning. This has the advantage of producing a causal model of the defender's behavior. The causal model can then be used to interpret the defender's decisions, which can be difficult with methods based on neural networks.
However, as noted by the authors, using causal optimization methods requires the manual construction of a causal system model. This is a potentially difficult and time-consuming process that may not be feasible for large or complex systems with many variables that influence decision-making. The same attack simulator is also used by Collyer, Andrew, and Hodges [14], but with an agent based on RL instead of a causal model. The authors use a graph embedding method to represent the graph, and use a graph convolutional network to process the graph. This allows the agent to generalize to graphs other than the one it was trained on. They note that for large graphs, the embedding method leads to improved performance in environments not seen by the agent during training. The work of Wolk, Applebaum, Dennler, _et al._[15] also shares several aspects of this project. They employ a variety of methods based on RL and Proximal Policy Optimization (PPO) to train an automated defender agent on the CAGE Challenge [16], which is built on top of the CybORG [12] cyber environment simulator. They find that an approach using an ensemble of PPO agents performed better than other variants, such as hierarchical PPO. They also evaluate their agents in unseen environments, so as to test how well they may perform in a real environment, and find that performance drops significantly. Related to the topic of automated defense agents is the concept of automated penetration testing, which instead aims to automate the process of finding and exploiting vulnerabilities in a system [17, 18, 19, 20, 21]. Automated attacker agents can also serve as opponents for automated defense agents, using adversarial machine learning or game-theoretic solution methods. For this work, we use heuristic policies for the attacker agent, but we believe that the use of RL for the attacker agent is a promising direction for future work. ## III Reinforcement Learning Reinforcement learning is a field of machine learning focused on the interaction between an _agent_ and an _environment_[22, Ch. 1]. The environment is usually modeled as a Markov decision process (MDP), and the agent should learn a policy for performing actions that maximizes a reward signal. Unlike MDP solver methods such as dynamic programming, reinforcement learning does not necessarily require a model of the environment. Instead, the agent interacts with the environment and learns a policy from observations and rewards produced by said environment. There are a number of different algorithms for reinforcement learning, including Q-learning [22, Ch. 6] and policy gradient methods [22, Ch. 13]. Policy gradient methods are focused on directly finding a policy function through gradient descent. The policy function maps a given state \(s\) to an action \(a\) via the parameters \(\theta\), and is denoted as \(\pi_{\theta}(a|s)\). PPO is a policy gradient method that applies a clip function on the policy loss to limit the amount of change in the policy parameters in a single iteration [23]. This is intended to improve the stability of the learning process. The clipped policy loss is defined as \[L_{t}^{CLIP}(\theta)=\hat{E}_{t}\left[\min\left(r_{t}(\theta)\hat{A}_{t},\text{clip}(r_{t}(\theta),1-\epsilon,1+\epsilon)\hat{A}_{t}\right)\right] \tag{1}\] where \(r_{t}(\theta)=\frac{\pi_{\theta}(a|s)}{\pi_{\theta_{\text{old}}}(a|s)}\) is the ratio of the new policy to the old policy, \(\epsilon\) is a hyperparameter that controls the clip limits, and \(\hat{A}_{t}\) is an advantage function.
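A minimal sketch of this objective, written as a loss to be minimized by gradient descent; the tensor shapes and the value of \(\epsilon\) are illustrative.

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective of Eq. (1), negated so that minimizing
    the loss maximizes E_t[min(r_t * A_t, clip(r_t, 1-eps, 1+eps) * A_t)]."""
    ratio = torch.exp(logp_new - logp_old)               # r_t(theta)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# Usage: log-probabilities of the taken actions under the new and old policies.
loss = ppo_clip_loss(torch.randn(32), torch.randn(32), torch.randn(32))
```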
## IV System Description The basis of the system simulation is an _attack graph_. An attack graph is a model to predict and analyze possible events and outcomes of a cyberattack. The attack graphs used are based on those produced using the Meta Attack Language (MAL) [4], which can be used to create digital twins of cyber-physical systems and networks [24]. The graphs defined here share the logic and features of MAL attack graphs. Their construction, however, differs in that they are not generated from a list of defined object classes and relations, but are instead constructed manually. We define an attack graph as a directed graph \(G=(V,E)\), where \(V\) is the set of nodes, and \(E\) is the set of edges between nodes. The nodes are divided into two sets: _attack steps_, \(A\), and _defense steps_, \(D\). Each attack step features a probability distribution specifying the time it requires to be _compromised_, the time-to-compromise (TTC). Defense steps, on the other hand, can be _enabled_. When a defense step is enabled, it will prevent any attack steps that have the defense step as a parent node from being compromised. A defense step can represent a file not being encrypted, or a firewall rule not existing. Activating the defense, e.g. encrypting the file, makes the file no longer readable to the attacker. When an attack step is compromised, it may give access to other steps in the graph. For example, one attack step may represent the action of performing a password dictionary attack. If the attack is successful, the attack step is compromised, and the attacker may proceed to the next step in the graph. In accordance with MAL, attack steps can be of one of two types: AND and OR. AND-steps require all parent steps to be compromised in order to be compromised themselves, whereas OR-steps require only one parent step to be compromised. AND-steps can be used to represent preconditions such as an account requiring a password to be compromised. OR-steps, on the other hand, can be used to represent alternative paths, such as a user account being compromised by either a password dictionary attack or a phishing campaign. Figures 1(a) and 1(b) show visualizations of handcrafted attack graphs used for experimentation. AND-steps are indicated by dashed incoming edges. We define the _attack surface_ as a set of attack steps, \(A_{as}\subset A\), that fulfill the following conditions: 1. There is an edge from a compromised attack step to the step. 2. One parent step is compromised, if the step is an OR-step, or all parent steps are compromised, if the step is an AND-step. 3. No defense step that is a parent node of the attack step is enabled.
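These three conditions translate directly into code. The sketch below uses an illustrative dictionary representation of the graph (the paper does not prescribe a data structure):

```python
def attack_surface(graph, compromised, enabled_defenses):
    """Return the attack steps satisfying the three conditions above.
    `graph` maps each attack step to an illustrative record:
    step -> {"type": "and"/"or", "attack_parents": [...], "defense_parents": [...]}.
    """
    surface = set()
    for step, info in graph.items():
        if step in compromised:
            continue
        # condition 3: no enabled defense step is a parent of this step
        if any(d in enabled_defenses for d in info["defense_parents"]):
            continue
        parents = info["attack_parents"]
        if info["type"] == "and":
            reachable = bool(parents) and all(p in compromised for p in parents)
        else:  # "or"
            reachable = any(p in compromised for p in parents)
        if reachable:  # conditions 1 and 2
            surface.add(step)
    return surface
```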
### _Episode Structure_ In order to train a defender agent, we model an attack-defend capture-the-flag game as an MDP. The goal of the attacker is to capture as many targets, _flags_, as possible. The defender attempts to stop the attacker, and is penalized for each flag compromised. The game is played with an attack graph acting as the basis of actions that can be taken by the attacker and defender. A subset of attack steps in the graph are assigned as flags, \(F\subset A\). Episodes are initiated with a single attack step in the attack graph being compromised, the entry point for the attacker. The game is played in discrete time-steps, with the attacker and defender both acting within the same time-step. Every time-step, the attacker can select any attack step from the attack surface to work on. For every time-step the attacker works on an attack step, the TTC value is reduced by 1. When the TTC reaches 0, the attack step is compromised. As its set of actions, the defender can select any defense step from the set of defense steps that are not enabled. When a defense step is enabled, all child steps are removed from the attack surface and can no longer be compromised by the attacker; child steps that were already compromised are no longer considered compromised once defended. The defender can also choose to do nothing for a time-step. An episode ends when the attack surface is empty, meaning that there are no attack steps that can be reached by the attacker. ### _State Description_ The state of the environment is expressed by two vectors, \(\vec{A}=(a_{1},\ldots,a_{|A|})\in\{0,1\}^{|A|}\) and \(\vec{D}=(d_{1},\ldots,d_{|D|})\in\{0,1\}^{|D|}\). \(\vec{A}\) describes the state of all attack steps, and \(\vec{D}\) the state of all defense steps. For \(\vec{A}\), 1 represents that the step is compromised, and 0 that it is not. For \(\vec{D}\), 1 represents that the defense is enabled, and 0 that it is disabled. We assume that the network that is being defended has an IDS capable of tracking the state of every attack step in the attack graph. We also assume that this process may be faulty, and that the IDS may sometimes fail to report an attack step as compromised, or report a step as compromised when it is actually not. Thus, we define another vector, \(\vec{O}=(o_{1},\ldots,o_{|A|})\in\{0,1\}^{|A|}\), to describe the observation of the attack steps produced by the IDS. As the state changes over time, the subscript \(t\) is used to denote the state at time-step \(t\), e.g. \(\vec{A}_{t}\) is the state of all attack steps at time-step \(t\). We define the accuracy of the IDS by a false positive rate (FPR) and false negative rate (FNR). Every time-step, depending on the FPR and FNR, the state of the system can be incorrectly reported by the IDS. The FPR and FNR can be expressed as the conditional probabilities \[\text{FPR}=P(o_{ti}=1|a_{ti}=0)\qquad i\in\{1,\ldots,|A|\} \tag{3}\] \[\text{FNR}=P(o_{ti}=0|a_{ti}=1)\qquad i\in\{1,\ldots,|A|\} \tag{4}\]
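Sampling such an observation vector is straightforward; here is a sketch under the assumption of independent per-step errors, as implied by Eqs. (3) and (4):

```python
import numpy as np

def observe(attack_state, fpr, fnr, rng):
    """Sample the IDS observation vector O from the true state A:
    a compromised step is missed with probability FNR (Eq. (4)) and an
    uncompromised step is falsely reported with probability FPR (Eq. (3))."""
    a = np.asarray(attack_state, dtype=bool)
    flip = np.where(a, rng.random(a.shape) < fnr, rng.random(a.shape) < fpr)
    return np.where(flip, ~a, a).astype(int)

rng = np.random.default_rng(0)
print(observe([1, 0, 0, 1], fpr=0.125, fnr=0.25, rng=rng))
```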
### _Attacker_ The attacker agent uses a policy function to select actions defined by the attack graph, deciding which attack step to work on at a given time-step. The attacker agent can only select attack steps that are in the attack surface. Four different search policies were used in experiments for the attacker agent. **Random:** A search policy that selects a random attack step from the attack surface to work on each time-step. **Breadth-first:** A search policy that compromises all steps on the current depth of the attack surface before moving on to the next depth. The policy is randomized by shuffling the order in which attack steps are traversed when multiple are available at the same depth. **Depth-first:** A search policy that compromises each branch of the attack surface until it reaches an attack step with no children, and then backtracks to the start of another branch. The policy is randomized by shuffling the order in which branches are traversed when multiple are available. **Pathfinder:** A policy that incorporates full information of the attack graph. It calculates the shortest path to each flag, and then targets flags in order of increasing TTC cost for the path. If blocked by a defense step en route, the policy will recalculate the shortest path to the targeted flag. If no path is available, it will target the next flag in the list. ### _Defender_ The defender agent takes alert signals from the IDS as input, and makes a decision on which defense to enable. It can also choose to do nothing for a time-step. Decisions are made using a policy function, with \(\vec{O}\) concatenated with \(\vec{D}\) as input. The action space consists of all defense steps in \(D\) that are not enabled, plus a do-nothing action. In experiments, we evaluated three choices of defender policy function: **Random:** A policy that selects an available defense step at random each time-step. **Tripwire:** A conditional policy that emulates the behavior of "if-this-then-that" rules present in IDS frameworks such as Wazuh [2]. An example rule may be to automatically block traffic from a certain host if a port scan is detected. Within the attack graph environment, this is translated to activating a defense step when one of its child steps is reported as compromised. **Reinforcement Learning:** A policy trained using RL. The policy function was parametrized by a fully-connected neural network and optimized using the PPO algorithm. The neural network had an input layer of size \(|A|+|D|\), a set number of hidden layers, and an output layer of size \(|D|+1\). The number of hidden layers and their sizes were treated as hyperparameters. \(\tanh\) was used as the activation function for the hidden layers. ### _Reward_ The defender agent has two goals in the game: to minimize the number of compromised flags, and to minimize the operational cost of the defense. The operational cost is an abstraction that could, for example, represent the cost of downtime resulting from disabling a service or network for security purposes. A defender with the singular goal of defending the flags could easily maximize its reward by shutting off access to the entire system immediately and be done, as the attacker will have nothing to do at that point. However, this would prevent normal users from accessing the system, which we consider undesirable. Defensive measures are thus assumed to have a set cost \(c_{d}\). With \(\vec{F}_{t}=(f_{1},\ldots,f_{|F|})\in\{0,1\}^{|F|}\) denoting flag states, the reward for the defender agent at a given time-step \(t\) is defined as \[r_{d}(t)=-\sum_{i=1}^{|D|}d_{i}c_{d}-\sum_{i=1}^{|F|}f_{i}c_{f}\quad d_{i},f_{i}\in\{0,1\},\ c_{d},c_{f}\in\mathbb{R} \tag{5}\] where \(c_{d}\) is the cost of activating a defense step \(d\in D\) and \(c_{f}\) is the cost of taking a flag \(f\in F\). Note that \(f_{i}=1\) only if the flag was taken at time-step \(t\), meaning that the penalty is only incurred once for each flag taken. The defense penalty, on the other hand, is incurred on every time-step for every defense step enabled, i.e. \(d_{i}=1\). There are no positive terms in the reward function, making the maximum possible reward 0. The minimum reward would be incurred if the defender enables all defenses from the first time-step, and the attacker still takes all flags during the episode. As the defender agent can only enable one defense per time-step, the minimum cumulative reward for an episode of length \(l\) is \[-c_{d}\left(\sum_{i=1}^{|D|-1}i+|D|(l-(|D|-1))\right)-c_{f}|F| \tag{6}\]
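For concreteness, Eq. (5) as code; the cost values here are illustrative, not the ones used in the experiments (see Sec. V).

```python
def defender_reward(defenses_enabled, flags_taken_now, c_d=1.0, c_f=10.0):
    """Eq. (5): a recurring penalty for every enabled defense plus a one-off
    penalty for every flag captured at this time-step (illustrative costs)."""
    return -c_d * sum(defenses_enabled) - c_f * sum(flags_taken_now)

# Usage: two defenses enabled, one flag captured this time-step -> -12.0
r = defender_reward([1, 1, 0], [0, 1, 0])
```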
As the defender agent can only enable one defense per time-step, the minimum cumulative reward for an episode of length \(l\) is \[-c_{d}\left(\sum_{i=1}^{|D|-1}i+|D|(l-(|D|-1))\right)-c_{f}|F| \tag{6}\]

## V Experiments

Three experiments were performed to evaluate the simulator and defender agent: a comparison between the RL policy and heuristic policies at different levels of IDS accuracy, a comparison between RL policies trained with different attackers, and an analysis of how policy training is affected as the graph size increases. RL policy training and evaluation was performed on a Google Cloud virtual machine equipped with an Nvidia V100 GPU, 12 CPU cores, and 30 GB of RAM. The implementation used the Python library Ray RLLib [25] for implementations of the reinforcement learning algorithms. All policies trained with RL were run for 500 PPO policy iterations. All policies were evaluated by running 500 episodes. Each policy took roughly 20 minutes to train and evaluate. Experiments were performed three times with different seeds for the random number generator. The neural networks used two hidden layers with 128 nodes. All TTC values except those explicitly set to 0 were initialized at the start of each episode by sampling from the exponential distribution \(f(x;\beta)=\frac{1}{\beta}e^{-\frac{x}{\beta}}\), where \(\beta\) is the mean TTC value assigned to each attack step. The costs were set as \(c_{f}=1.5\cdot\sum_{a\in A}\text{TTC}(a)\) and \(c_{d}=1\) for all flags and defense steps respectively, where \(\text{TTC}(a)\) denotes the TTC value of attack step \(a\). The choice of \(c_{f}\) was made such that the cost of losing a flag would be greater than that of enabling a defense step on the first time-step.
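To make the episode initialization concrete, here is a minimal sketch of the TTC sampling and cost computation described above. The array contents are arbitrary, and taking \(\text{TTC}(a)\) in the \(c_f\) formula to mean the per-step mean TTC values is our reading of the setup, so treat the details as assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Mean TTC value per attack step; steps explicitly set to 0 stay at 0,
# since an exponential with scale 0 always returns 0.
mean_ttc = np.array([5.0, 10.0, 0.0, 7.0])

# Episode initialization: TTC ~ Exponential(mean beta), per attack step.
ttc = rng.exponential(scale=mean_ttc)

# Costs as in Section V: c_d = 1 and c_f = 1.5 * sum of TTC values.
c_d = 1.0
c_f = 1.5 * mean_ttc.sum()
```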
### _Sensor Fault Resistance_

A desired property of a good defender agent is the ability to operate in spite of imperfect information. Therefore, an experiment was performed to study the effects of false alerts and missed alerts on the defender. The rate of errors in the observations is defined by two values, the FPR and the FNR. Five values for the FPR and FNR were selected: \(0\,\%\), \(12.5\,\%\), \(25\,\%\), \(72.5\,\%\) and \(100\,\%\). A partial grid search was performed over combinations of the error rates, for a total of 15 points. Points where \(FNR>1-TNR\) were excluded since they are equivalent to those where \(FNR<1-TNR\), but with reversed definitions of the true and false labels. Three defender agents were compared: one using policies trained with PPO, one using the tripwire policy, and one using the random policy. Figure 1(a) depicts the attack graph used as the environment, with depth-first search as the attacker policy. For the RL agents, one policy was trained and evaluated for each combination of FPR and FNR values.

### _Attacker Comparison_

A second experiment was performed to study the generalization capabilities of the defender agent trained using RL. As such, only the PPO policy was used for this experiment. The agent was trained with one attacker policy, and then evaluated against all attacker policies listed in Section IV-C. A mixture of the attacker policies was used as an additional variant, where the attacker policy was chosen at random at the beginning of each episode. Figure 1(b) depicts the attack graph used. The FPR and FNR were set to \(10\,\%\).

### _Graph Size_

Real-life attack graphs can be huge due to the complexity of real-life systems [4]. Thus, an experiment was performed to study the effects of graph size on policy learning. Four graphs of sizes 20, 40, 60 and 80 steps were generated using a semi-random attachment procedure. Each graph has \(\frac{|A|}{20}\) flags. Each flag step had a defense step attached to it, ensuring that each flag was defensible. A policy was trained for each graph size using PPO and evaluated together with the tripwire policy. The same hyperparameter values were used for training on all graph sizes. The FPR and FNR were set to \(10\,\%\).

## VI Results

Sections VI-A, VI-B and VI-C present the results of the experiments described in Section V. Results for all experiments are averaged over three runs with different seeds for the random number generator (RNG). The RNG affects the TTC values sampled at episode initialization, attacker behavior, and neural network parameter initialization.

### _Sensor Fault Resistance_

Figure 2 presents performance metrics for the three defender agents as surface plots. There are two figures for each policy: one showing the average percentage of flags captured by the attacking agent, and one showing the average reward over the 500 episodes of evaluation. Episodes were, on average, 13 time-steps long for agents using the PPO and tripwire policies, and 15 time-steps for the agent using the random policy. The agent using the random policy has a consistent performance across the different combinations, as it does not use observations to begin with. The number of flags captured was higher than for the other policies, and the reward was thus lower. For all but two combinations of error rates, the learned policies produce a higher average reward than the tripwire policy. For the two combinations where the FPR \(\geq 0.5\) and the FNR \(=0\), the difference in reward was no longer statistically significant (\(p<0.05\)). The performance of both the learned policies and the tripwire policy decreases as the FPR increases. The PPO policies perform better than the tripwire policy when the FNR increases; the tripwire policy's performance rapidly decreases with an increased FNR. Averaged across all combinations, the PPO policies have an average cumulative reward of \(-50\), compared with \(-73\) for the tripwire policy and \(-72\) for the random policy.

### _Attacker Comparison_

Figure 3 shows bar plots of the rewards and percentage of flags compromised. Episodes were, on average, 9 time-steps long. The longest recorded episode was 19 time-steps, and the shortest 6 time-steps. There is a significant difference between the rewards produced by the agents when faced with attacker policies other than those they were trained on. The policy trained against the depth-first attacker produces the lowest reward compared to the others when faced with the depth-first attacker. However, it performs much better on average when faced with other strategies. Inversely, the policy trained against the breadth-first attacker has the best overall performance compared to the other policies when faced with only one attacker, but is significantly worse when faced with others. When averaged over all attackers, the differences in reward between policies not trained against the breadth-first attacker were not statistically significant (\(p<0.05\)).

### _Graph Size_

The performance of the policies learned by PPO decreases as the graph size increases. For all graph sizes greater than 20, the RL policies produced worse results than the tripwire policy.

Fig. 1: Attack graphs constructed manually for experiments. Defense steps are colored purple, flags green, and attacker entry points red. AND-steps have dashed incoming edges. Average TTC values are drawn as numbers in nodes.
The tripwire policy also decreases in performance as the graph size increases, but the decrease is smaller than for the RL policies.

## VII Discussion

### _Other Baselines_

We compared the RL policy against two heuristic policies, neither of which is optimal. Comparing against the optimal policy would be beneficial to judge the effectiveness of the learned policies. In order to find the optimal policy for a particular configuration, one could apply POMDP solver methods. These may become difficult to apply for larger graphs, however, as the number of possible states is \(2^{n}\), where \(n\) is the number of attack steps; the state space thus grows exponentially. This also makes it difficult to compare the neural network policy to a tabular RL policy, like Q-learning, that stores a value for each state-action pair.

### _Scaling_

A large drawback of the RL method was the time it took to train the neural network policy function. The training time grew as the graph sizes increased and, inversely, the performance of the policies decreased. The decrease in performance with increased graph size comes from a number of factors. One of these is optimization robustness. As the graph size increases, the size of the neural network grows linearly. Despite the size of the neural network and the reward signals changing, the same set of hyperparameters was used for all graph sizes. A procedure to find appropriate hyperparameters for each graph size would likely be necessary to make training more robust.

Fig. 2: Surface plots showing metrics for different defender policies. Results are averages over three runs with different RNG seeds.

Fig. 3: Three plots showing PPO policy performance against different attacker policies during evaluation. The top and middle plots show the average reward and flags captured when faced with attacker policies not encountered during training. The bottom plot shows the reward when evaluated against the attacker policy used during training. The metrics are averaged over three runs with different RNG seeds.

One could also take an orthogonal approach and apply the same neural network to all graph sizes. This would require a different approach to how the graph state is represented, as the current approach is not invariant to changes in the graph structure or size. Graph convolutional networks are a possible solution to this problem, as demonstrated in previous work [14]. Another complicating factor was the rules of the attack-defense game. When the graph size increases, so does the episode length. The episode ends only when the attacker can no longer perform any actions. Even if the defender enables all defenses, the attacker can still traverse the remaining nodes, thus extending the episode length. Adjusting the episode end conditions, such as imposing a time limit for the attacker, could help alleviate this issue. Another issue that comes with long episodes is that the PPO policies are stochastic. This means that the longer the episode, and the more actions are available, the more likely it is that the defender will eventually take a low-probability action. As there is no way to undo actions, the defender can eventually enable every defense even if the attacker does nothing. This could be addressed by adding a probability threshold for enabling a defense, or by always selecting the most likely action.
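Either mitigation amounts to post-processing the stochastic policy output at evaluation time. The sketch below shows both options, assuming the policy returns a probability vector over the \(|D|+1\) actions with the do-nothing action at index 0; the names and conventions here are illustrative, not taken from the evaluated implementation.

```python
import numpy as np

def select_action(probs, threshold=None, greedy=False, rng=None):
    """Pick a defender action from the policy's action probabilities.

    probs: probabilities over |D|+1 actions; index 0 is do-nothing.
    greedy=True always takes the most likely action; otherwise, defense
    actions with probability below `threshold` are masked before sampling.
    """
    if greedy:
        return int(np.argmax(probs))
    p = probs.copy()
    if threshold is not None:
        p[1:][p[1:] < threshold] = 0.0  # suppress unlikely defenses
        if p.sum() == 0.0:
            return 0                    # fall back to do-nothing
        p /= p.sum()
    rng = rng or np.random.default_rng()
    return int(rng.choice(len(p), p=p))

probs = np.array([0.55, 0.30, 0.10, 0.05])
select_action(probs, threshold=0.2)  # low-probability defenses masked out
```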
### _Generalization_

Generalization is a desirable property for the defender agent, as it would allow the agent to be used in different environments without retraining the policy. This was also one of the main motivations for using deep RL: we would like to learn policies in environments that are easily simulated, in order to test them in more realistic environments down the line. Figure 3 demonstrates that the policy can generalize across different attacker policies. However, so can the tripwire policy, as it is invariant to the behavior of the attacker. A more interesting task is to generalize to different graph environments. In the current implementation, a learned policy function is only applicable to a single graph. The graph topology has to be static, and no attack steps can be added or removed. Additionally, the agent was only presented with the node states, and not the topology of the graph. Both of these problems may be addressed by using a model that can incorporate the graph topology, like a graph neural network. Another way to test generalization is to evaluate trained policies against different volumes of IDS noise, to determine which level is best to train the policy on.

### _The Sim-to-Real Gap_

The defender agent is trained in a simulated environment, with the intention of being used in a real environment. Within the field of robotics, it has been shown that transferring learned policy functions from simulation to the real world can degrade the performance of the policy [3], a consequence of differences between the real world and the simulation model. For this work, the target environment is a computer network, and we should expect the performance of the learned policy function to change when applied to real alert patterns. Measuring the magnitude of this change will be a key focus for future work. One issue particular to the field of cybersecurity is the construction of realistic attack scenarios. Even if the network itself is modeled accurately, the attacks and adversaries the defender trains against may be unrealistic.

## VIII Conclusion

An automated cyber defense agent using policies trained with RL was implemented and evaluated. The agent was trained and tested in a simulated environment based on MAL attack graphs. The RL policy was compared against heuristic policies when faced with different volumes of IDS noise. We observed that the agents implemented using RL could learn policies that were better than the heuristic policies, and that they were more resilient to missed alerts. From our second experiment, we also observed that the neural network-based policy had the ability to generalize to different adversarial attackers, and that some adversaries were better training opponents than others. Unfortunately, the performance of the learned policies decreased as the graphs increased in size. Future work will focus on the defender agent's ability to generalize across graphs, and on crossing the gap between simulation and the real world by transferring a defender agent trained in simulation to an actual computer network.
2307.08085
MindOpt Tuner: Boost the Performance of Numerical Software by Automatic Parameter Tuning
Numerical software is usually shipped with built-in hyperparameters. By carefully tuning those hyperparameters, significant performance enhancements can be achieved for specific applications. We developed MindOpt Tuner, a new automatic tuning tool that supports a wide range of numerical software, including optimization and other solvers. MindOpt Tuner uses elastic cloud resources and features a web-based task management panel, IPython notebook integration, command-line tools, and Python APIs. Our experiments with COIN-OR Cbc, an open-source mixed-integer optimization solver, demonstrate remarkable improvements with the tuned parameters compared to the default ones on the MIPLIB2017 test set, resulting in over 100x acceleration on several problem instances. Additionally, the results demonstrate that Tuner has a higher tuning efficiency compared to the state-of-the-art automatic tuning tool SMAC3.
Mengyuan Zhang, Wotao Yin, Mengchang Wang, Yangbin Shen, Peng Xiang, You Wu, Liang Zhao, Junqiu Pan, Hu Jiang, KuoLing Huang
2023-07-16T15:57:25Z
http://arxiv.org/abs/2307.08085v1
# MindOpt Tuner: Boost the Performance of Numerical Software by Automatic Parameter Tuning

###### Abstract

Numerical software is usually shipped with built-in hyperparameters. By carefully tuning those hyperparameters, significant performance enhancements can be achieved for specific applications. We developed MindOpt Tuner, a new automatic tuning tool that supports a wide range of numerical software, including optimization and other solvers. MindOpt Tuner uses elastic cloud resources and features a web-based task management panel, IPython notebook integration, command-line tools, and Python APIs. Our experiments with COIN-OR Cbc, an open-source mixed-integer optimization solver, demonstrate remarkable improvements with the tuned parameters compared to the default ones on the MIPLIB2017 test set, resulting in over 100x acceleration on several problem instances. Additionally, the results demonstrate that Tuner has a higher tuning efficiency compared to the state-of-the-art automatic tuning tool SMAC3.

## 1 Introduction

Numerical software is designed for solving complex problems that involve numerical calculations. These computations typically involve algorithms with many configurable hyperparameters. For instance, Cbc by COIN-OR (Forrest & Lougee-Heimer, 2005), one of the state-of-the-art open-source numerical software packages for solving mixed integer linear programming (MILP) problems, has approximately 160 configurable hyperparameters that control the behavior of the algorithms inside. Depending on the numerical structures of the MILP instances, different hyperparameter combinations (referred to as hyperparameters throughout this paper) are preferred to achieve optimal performance. Finding the best hyperparameters for a class of MILP instances may be one of the key factors in the successful application of an optimization solver. Due to the lack of prior knowledge regarding the relationship between hyperparameters and actual performance on specific instances, we may have to attempt to solve typical instances multiple times with different hyperparameters. This process can be tedious and time-consuming, even when using the built-in tuning tools designed to search for better hyperparameters. We have developed a new tuning tool, MindOpt Tuner, which can automatically evaluate large numbers of hyperparameters utilizing cloud resources and accelerate the tuning progress with massive parallelism. Currently, MindOpt Tuner V0.9 provides full support for optimization solvers, a typical class of numerical software, to enhance their performance on optimization problems, including reducing the solving time and/or the optimality gap of the solution. In addition to optimization solvers, MindOpt Tuner supports tuning any 3rd-party numerical software. The homepage of MindOpt Tuner is located at: [https://opt.alibabacloud.com/#/tuner/quicktry](https://opt.alibabacloud.com/#/tuner/quicktry)

In this paper:

1. We introduce MindOpt Tuner, an automatic hyperparameter tuning tool for numerical software that leverages the elasticity of cloud computing resources.
2. We present three approaches for using MindOpt Tuner, which enable diverse groups of users to execute and manage their tuning tasks in a flexible and convenient manner.
3. We provide empirical evaluations of MindOpt Tuner, which validate its effectiveness and cost-efficiency in enhancing the performance of the optimization solver.

This paper is organized as follows.
We first discuss related works on hyperparameter tuning algorithms and tools in Section 2. In Section 3, we introduce the architecture and algorithm of MindOpt Tuner, and present three approaches for using MindOpt Tuner, i.e., web-based task management, the command-line tool, and the Python APIs. In Section 4, we provide empirical evaluations of MindOpt Tuner by tuning Cbc's parameters on the MIPLIB2017 Benchmark set (Gleixner et al., 2021) and compare its performance to SMAC3 (Lindauer et al., 2022). Finally, we conclude the paper in Section 5. For brevity, we will hereafter use the terms "software" and "algorithm" in place of "numerical software" and "tuning algorithm", respectively.

## 2 Related Work

In this section, we provide a non-comprehensive review of state-of-the-art tuning methods and tools. Please refer to (Hutter et al., 2019; Yang and Shami, 2020) for a comprehensive review of hyperparameter tuning techniques and tools. Basic methods for hyperparameter tuning include random search (Bergstra and Bengio, 2012) and grid search (Montgomery, 2017): the former draws a number of samples independently, usually from a specified probability distribution; the latter partitions the search space into a grid and evaluates the candidate hyperparameters corresponding to each grid point in turn, with a sampling complexity that increases exponentially with the dimensionality of the hyperparameters. More advanced algorithms include evolution-strategy-based ones, which maintain a population of candidate hyperparameters that are randomly mutated and selected for survival based on their fitness values (Hansen et al., 2003), and Bayesian optimization methods, which use Bayesian inference to construct a probabilistic model (e.g., Gaussian process, random forest) of the mapping from hyperparameters to numerical performance that guides the search for optimal hyperparameters (Shahriari et al., 2016). Other algorithms include gradient-estimation-based methods (Liu et al., 2017) and tree-search-based methods (Wang et al., 2020). Based on these various kinds of algorithms, several hyperparameter tuning tools and products have been proposed. ParamILS (Hutter et al., 2009) is a local-search-based method that iteratively revises a candidate set of hyperparameters to arrive at the optimal one. irace (López-Ibáñez et al., 2016) combines random sampling and statistical tests to quickly identify promising hyperparameters and eliminate poor ones. MOSAIC (Rakotoarison et al., 2019) applies the Monte-Carlo Tree Search algorithm over the search space of hyperparameters, with its performance largely influenced by the parameters of MOSAIC itself. Bayesian optimization is the core algorithm in a variety of popular hyperparameter tuning tools, including SMAC3 (Lindauer et al., 2022), BOHB (Falkner et al., 2018), HEBO (Cowen-Rivers et al., 2022), SigOpt (Clark and Hayes, 2019), HyperOpt (Bergstra et al., 2013) and Dragonfly (Kandasamy et al., 2020). Some more advanced hyperparameter tuning tools include multiple algorithms and integrate techniques such as early stopping and parallelization of hyperparameter evaluations (Liaw et al., 2018; Akiba et al., 2019; Golovin et al., 2017; Rapin and Teytaud, 2018). Ray Tune (Liaw et al., 2018), for instance, is an open-source hyperparameter tuning library (mainly targeted at machine learning models) that brings together a number of these algorithms.
It is built on top of Ray, a popular open-source framework for building distributed applications, and can be scaled to large clusters to provide efficient parallelization of the tuning job. For most commercial hyperparameter tuning tools, e.g., SigOpt, the algorithm is run remotely: the evaluations of hyperparameters can be conducted on the user side, and the evaluation results are submitted to the remote server via the relevant APIs. Some AI platforms such as Google's Vertex AI, Amazon's SageMaker, and Microsoft Azure's Machine Learning Platform also provide tuning services through which users can run both the algorithm and the hyperparameter evaluations using cloud resources.

The majority of existing hyperparameter tuning tools are dedicated to machine learning applications, for instance, to improve the accuracy of machine learning models. However, there are some key differences between tuning for machine learning models and tuning for software. For example, processing different data samples during model training normally has equal cost in machine learning applications, and can easily be done in parallel. When tuning an optimization solver, by contrast, each problem instance needs to be solved under different hyperparameter settings, and the time cost may vary greatly across instances and across hyperparameters, which makes it more challenging to improve tuning performance while keeping the tuning process efficient. Moreover, the dimensionality of hyperparameters for machine learning models is relatively small (usually less than 20), far less than the number of tunable hyperparameters for an optimization solver, which ranges from 100+ to 1000+ depending on the solver. Among the tools mentioned above, only SMAC3, ParamILS, and irace are equipped with well-designed features for tuning numerical algorithms or software. They have devised mechanisms to improve efficiency when the tuning objective is to search for hyperparameters that perform well across a large set of instances in general. However, these three tools are not designed to be directly used with elastic cloud resources. Some software may provide its own tuner. For example, commercial optimization solvers such as CPLEX (Cplex, 2009), Gurobi (Gurobi Optimization, Inc., 2023), and Copt (Ge et al., 2022) have their own built-in tuners. Nonetheless, these are limited to tuning their own solvers and can only be executed locally.

To the best of our knowledge, the MindOpt Tuner proposed in this paper is the first cloud-based tuning tool that provides tuning services for general numerical software, including open-source and commercial optimization solvers. It leverages the elasticity of cloud computing resources to reduce the resource cost of tuning tasks and applies a hierarchical surrogate modeling method to enhance the efficiency of the tuning algorithm. A well-developed command-line tool enables users to easily specify the model data and task configurations, such as the time limit of the tuning process and the logging level. Additionally, it features a built-in log parser module that can extract abundant information from different fields of the software's output logs (customized by the users), which is utilized by the algorithm to boost the software performance.

## 3 MindOpt Tuner

### Overview

The workflow of MindOpt Tuner is illustrated in Figure 1.
To create a tuning task, users need to first specify the target software, a set of hyperparameters that control the behavior of the software, and the data that describe the problem(s) for which the software will be tuned.1 During the execution of the tuning task, MindOpt Tuner will generate and evaluate different samples of hyperparameters iteratively: the algorithm solves the problem(s) using the current hyperparameters and observes the performance. Based on the collected performance information of all evaluated hyperparameters, the algorithm then generates and evaluates new sample(s) of hyperparameters, trying to improve the current best value of the tuning objective. The process is repeated until certain stopping conditions are reached.

Footnote 1: Optimization solvers typically support problem instances given in the formats of .mps, .lp, and .nl.

Upon completion of the tuning task, the recommended (nearly optimal) hyperparameters of the software will be provided to the users. A collection of result files will also be available, offering comprehensive information on the progress and outcome of the task. For instance, the evaluation history of hyperparameters can be further utilized for result visualization and analysis.

### Cloud Architecture

MindOpt Tuner is connected to a cloud for parameter evaluations. In addition, MindOpt Tuner has a Web interface, a command-line interface on an online terminal, as well as a Python interface. The cloud brings multiple advantages:

**Scalability**. The computational resources required for hyperparameter tuning can vary significantly depending on the complexity and size of the numerical problems and the dimensionality of the hyperparameters to tune. Cloud platforms allow us to easily scale resources up (or down) to meet these demands. In other words, we can take advantage of more computing power when tuning high-dimensional hyperparameters over difficult problem instances, and save costs when we don't need it. Some tuning projects can last weeks, and Alibaba Cloud comes with automated backups and recovery.

**Parallelization**. We use model-based global optimization algorithms, which allow us to run multiple evaluation tasks with different hyperparameters concurrently. Cloud-based environments provide access to multiple workers simultaneously, making it easier to run these evaluations in parallel and thus helping identify the optimal hyperparameters more quickly.

**Cost-effectiveness**. Some users need tuning services only when they are unhappy with their current solution times. Once they are satisfied, they no longer need to run tuning tasks. On-premise deployment of a tuning system may require substantial upfront investments in hardware and software, ongoing maintenance costs, and possibly excess capacity. Cloud-based solutions generally follow a pay-as-you-go model, which can be more cost-effective.

**Easy access**. Since it is cloud-based, MindOpt Tuner is accessible from anywhere and not tied to a specific machine or local network. It provides several interfaces through Alibaba Cloud, including web, command line, and Python, which gives users more flexibility. In addition, MindOpt Tuner is designed to be able to connect with third-party cloud services, such as AWS and Google Cloud, as well as users' private clouds.

**Architecture**. See Fig. 2. Users can create the tuning task via either the interactive webpage or the command-line tool.
The task manager will upload the necessary data files to a shared file storage and send the task execution instruction to the cluster scheduler. The cluster scheduler then starts a Kubernetes (K8S) pod within which the image of the algorithm and the task configurations are loaded. Once the algorithm starts running, sub-tasks of hyperparameter evaluations will be continuously created and sent to the cluster scheduler, which will start a set of new K8S pods within which the image of the software is loaded and the sub-tasks are executed. The cluster scheduler automatically adjusts the K8S cluster's size, up or down, based on real-time demand, enhancing resource utilization and reducing user costs. During times with increasing task loads, a single user can simultaneously run dozens of tuning tasks, utilizing hundreds of virtual nodes. As the workload of users decreases, fewer pods are maintained. The cluster scheduler's functionality also includes managing quotas, which ensures that users' resource usage stays within specified limits.

Figure 1: MindOpt Tuner Overview

**Data privacy**. Although cloud file storage is associated with individual user accounts and its access is protected by passwords or unique tokens, unauthorized third-party access may still occur due to vulnerable passwords and 3rd-party system hacking. For this reason, we provide an open-source, downloadable tool, called "MindOpt Sanitizer", which removes all the comments and identifiable information from problem-instance files. For optimization problem formats, it anonymizes the names of objectives, variables, and constraints by replacing them with generic markers OBJ, X1, X2,..., and CON1, CON2,..., respectively. The users can run the tool before uploading their problem instances to the cloud. When they do this, a local file that records the mapping between the original names and the generic names is saved. The file will be used to de-anonymize the solution and log files if the users wish to download and inspect them.

### Usage of MindOpt Tuner

In this section, we introduce three approaches for using MindOpt Tuner: the web-based task management panel, the command-line tool, and the Python APIs.2

Footnote 2: We take tuning for optimization solvers as examples throughout this section.

#### 3.3.1 Web-based Task Management Panel

MindOpt Tuner can be used via a web-based task management interface within the MindOpt Studio. The user-friendly GUI provides a convenient way for users to create, execute, monitor, and delete their tuning tasks. To access the management panel, users should log into the MindOpt Studio and navigate to the "Tuning Tasks" tab at the top of MindOpt Tuner's webpage.3 Fig. 3 displays a screenshot of the webpage: the "Tasks" panel on the left-hand side lists the created and deleted tasks, respectively. The upper right area of the page is for uploading problem data files and specifying configurations for a tuning task. Simply follow the steps below to create and run a tuning task:

**Step 1: Name a new task**. To create a new task, click the "Create" button at the top of the "Tasks" panel and enter the task name into the pop-up window.

Figure 2: MindOpt Tuner's architecture with elastic cloud resources.
**Step 2: Add problem file(s).** To provide the problem data file(s) for the task, users can either select and upload local file(s) or enter the URL link(s) of the file(s) stored in Alibaba Cloud's Object Storage Service (OSS).4

Footnote 4: [https://www.alibabacloud.com/product/object-storage-service](https://www.alibabacloud.com/product/object-storage-service)

**Step 3: Edit the task configurations**. To configure the tuning task, users need to select the target software and specify the time limit of the tuning task through the input box on the top right side. The names of the hyperparameters to tune and the additional tuning configurations should be entered in the "Task configuration" window using the JSON format. The basic task configurations are listed in Table 1. Please refer to the product documentation for a full list of task configurations and their valid values.

**Step 4: Run the task**. To run the configured tuning task, click the circular "Run" button in the upper right corner. The moving progress bar will indicate the execution progress of the task. The "Output" window below will display real-time outputs consisting of the intermediate information as well as the final results.

**Step 5: Get the recommended hyperparameters**. Upon completion of the task, the recommended values of the hyperparameters will be saved to a file, which can be downloaded along with the tuning log file using the links at the bottom of the page.

#### 3.3.2 Command-line Tool

The command-line tool of MindOpt Tuner provides a quick and flexible way to create and manage tuning tasks, which is especially beneficial for users who want to perform a batch of tasks in a single command or script. It can be used within MindOpt Studio Notebook5 or locally where the standalone version is installed.

Figure 3: Screenshot of the MindOpt Tuner's task management webpage.

Footnote 5: [https://opt.aliyun.com/#/platform/overview](https://opt.aliyun.com/#/platform/overview)

For example, to create and start running a new task for tuning a solver, simply execute:

```
mindopt-tuner create-task [-h] --solver {cbc,cplex,mindopt} --problem <problem>
```

where the solver and problem arguments specify the target solver and the problem data file(s). Fig. 4 shows an example of creating a task within the MindOpt Studio Notebook. Additional task configurations can be specified by adding arguments and values to the command. For comprehensive instructions on the usage of all commands and arguments, please refer to the product's documentation available on the website.

#### 3.3.3 Python APIs

The MindOpt Tuner's Python APIs correspond to the command-line tool and enable Python developers to seamlessly integrate the tuning process into their workflow. A quick example of using the Python APIs is provided as follows:

```python
import mtuner  # the MindOpt Tuner Python package

args_dict = {
    'solver': 'cbc',                      # target solver
    'problem': 'multimlp.nl',             # problem data file
    'max_tuning_time': 200,               # time limit in seconds
    'parameters': ['cuts', 'preprocess']  # hyperparameters to tune
}

mtuner.create_task(args_dict)
```

Source code 1: Example of using Python APIs of MindOpt Tuner
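For intuition about what such a task does internally, the following is a minimal sketch of the evaluate-and-refine loop described in Section 3.1, using plain random search over a generic command-line solver. Everything here (the `solver` binary, its flags, and the search space) is an illustrative placeholder and does not reflect MindOpt Tuner's actual algorithm or internals.

```python
import random
import subprocess
import time

SEARCH_SPACE = {'cuts': ['on', 'off'], 'preprocess': ['on', 'off']}

def evaluate(params, problem='multimlp.nl', timeout=200):
    """Run the solver once and return its wall-clock solving time."""
    flags = [f'--{k}={v}' for k, v in params.items()]
    start = time.time()
    subprocess.run(['solver', problem, *flags], timeout=timeout, check=True)
    return time.time() - start

t_default = evaluate({})  # baseline with default hyperparameters

best, best_time = None, float('inf')
for _ in range(20):  # stop after a fixed budget of evaluations
    params = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    t = evaluate(params)
    if t < best_time:
        best, best_time = params, t

speed_up = t_default / best_time  # the metric reported in Section 4
```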
| Name of configuration | Description | Type |
| --- | --- | --- |
| tuning-objective | Specify the names of the hyperparameters to tune. | string |
| max-distinct-para-combos | The tuning task will terminate after the number of distinct hyperparameter combos reaches this number. | integer |
| max-tuning-time | The tuning task will terminate after surpassing this amount of time (in seconds). | integer |
| max-eval-time | Maximum time allowed for a single evaluation of one hyperparameter combo (in seconds). | integer |
| log-level | Level of logging messages. | string |
| verbose | Detail level of the standard output. | integer |

Table 1: Basic task configurations of MindOpt Tuner.

Figure 4: Example of creating a tuning task with the command-line tool.

## 4 Numerical Experiments

In this section, we present the experimental results of MindOpt Tuner's performance in tuning Cbc on the MIPLIB2017 Benchmark. The results show that MindOpt Tuner is able to achieve orders-of-magnitude acceleration over Cbc's default hyperparameter settings on some instances. A comparison of the performance between MindOpt Tuner and the popular tool SMAC3 reveals that our proposed tool has a clear advantage in terms of both the performance of the tuned hyperparameters and the tuning efficiency.

### Experiment Settings

Cbc is an open-source package for solving mixed-integer linear programs. It integrates branch-and-bound, primal simplex, dual simplex, interior-point methods, etc., to find the optimal solution to a given problem. Cbc is highly customizable and can handle problems with large numbers of variables and constraints. It has been widely used in academia and industry for a variety of applications, such as transportation planning, supply chain optimization, and financial modeling. The MIPLIB2017 Benchmark set is a collection of challenging optimization problems that can be used to evaluate and compare the performance of various optimization solvers. It includes a total of 240 mixed-integer programming instances (selected from more than 1000 instances), covering a wide range of applications, such as network design, scheduling, and facility location. The benchmarks are chosen to be representative of real-world applications and have been widely used in the optimization community to benchmark solver performance and guide algorithm development. In our tuning experiment, we focused on minimizing the wall-clock time required for Cbc (Version: 2.10.5, Build Date: Nov 24th, 2021) to obtain the optimal solution (the solving time). We used the _speed-up ratio_ as the tuning performance metric, defined as the ratio of the solving time with default hyperparameters to that with tuned hyperparameters, i.e., \(T_{\text{default}}/T_{\text{tuned}}\).

### Benchmarking on problems from MIPLIB2017

In the first set of experiments, the tuning task for each individual problem instance was conducted on a Linux machine with a 2.50 GHz Intel(R) Xeon(R) Platinum 8163 CPU, 16 cores (2 threads per core), and 30 GB of memory. The results demonstrate that the tuned parameters recommended by MindOpt Tuner are effective in enhancing the performance of Cbc. Fig. 5 and Fig. 6 show the speed-up ratios of two groups of problem instances: those with solving times ranging between 2000s and 12000s (high-difficulty problems) and those with solving times ranging between 500s and 2000s (moderate-difficulty problems). The distribution of the achieved speed-up ratios is displayed in Fig. 7. It is clear that, for over three-quarters of the problems considered, a speed-up ratio greater than 2x (equivalent to at least a 50% reduction in solving time) can be achieved.
Additionally, more than half of the problems considered can achieve a speed-up ratio greater than 4x. Moreover, 10% of the considered problems achieved a speed-up ratio exceeding 32x, including 2% of the problems that achieved a speed-up ratio of 100x or more (with the maximum speed-up ratio exceeding 1000). We have also analyzed the number of problems that can be solved within different time budgets before and after using the parameters recommended by MindOpt Tuner. As illustrated in Fig. 8, the number of solvable problems with the tuned parameters is substantially greater than that with the default parameters at any given solving time limit. For instance, only 77 problems can be solved with default parameters in 4000s, while with the tuned parameters, an additional 23 problems can be solved within the same time limit.

#### 4.2.1 Performance Comparison with SMAC3

Based on the tuning results of the MIPLIB2017 instances, we selected a set of problem instances that varied in their solving time under default hyperparameters and their achieved speed-up ratios with MindOpt Tuner. We then tuned these problems using both MindOpt Tuner and SMAC3 (V1.4) with their default settings and under the same limit on hyperparameter evaluations (200 per instance). The tuning performance of the two tools was then compared in terms of the speed-up ratio and the elapsed tuning time.6

Footnote 6: We did not choose ParamILS as the baseline because it is an earlier work from the same group as SMAC3, with some of its designs already included in SMAC3. irace was not chosen here because it is designed particularly for the scenario where running time is not the tuning objective (López-Ibáñez et al., 2016), which conflicts with our experimental settings.

Figure 5: MIPLIB2017 problem instances with 1x–2x speed-up ratios.

Figure 6: MIPLIB2017 problem instances with 10x–100x speed-up ratios.

Figure 7: Distribution of speed-up ratios.

Figure 8: Number of solvable problems vs solving time limit in seconds.

The selected problems have default solution times ranging from less than 30s to more than 400s, and the speed-up ratios by MindOpt Tuner vary from less than 2x to more than 10x. As shown in Table 2, MindOpt Tuner outperforms SMAC3 in terms of the speed-up ratios. Additionally, MindOpt Tuner generally requires less tuning time than SMAC3, and this advantage becomes more significant as the difficulty of the problem increases.

## 5 Conclusions

In this work, we present MindOpt Tuner, a cloud-based, efficient hyperparameter tuning tool designed for numerical software. We explain the architecture design that enables the utilization of elastic cloud resources, along with several of its benefits. We also introduce the three approaches by which users can access MindOpt Tuner: the webpage-based interface, the command-line tool, and the Python APIs. Finally, we provide numerical experiment results of tuning Cbc's hyperparameters on the MIPLIB benchmark, which validate the tuning performance of MindOpt Tuner as well as its efficiency advantages compared to SMAC3. We are continually developing new features that enrich the current functionality of MindOpt Tuner. For example, future versions will enable saving the state of a tuning task as a checkpoint, which can be loaded whenever users want to continue the tuning task. In addition, to enhance the generalization ability of MindOpt Tuner, we are developing a hyperparameter recommendation module that can directly recommend hyperparameters for new, unseen problem data.

Acknowledgements.
This work of M. Zhang and M. Wang is supported in part by the National Key Research and Development Program of China (Grant No. 2022YFB2403500).
2310.02601
MagicDrive: Street View Generation with Diverse 3D Geometry Control
Recent advancements in diffusion models have significantly enhanced the data synthesis with 2D control. Yet, precise 3D control in street view generation, crucial for 3D perception tasks, remains elusive. Specifically, utilizing Bird's-Eye View (BEV) as the primary condition often leads to challenges in geometry control (e.g., height), affecting the representation of object shapes, occlusion patterns, and road surface elevations, all of which are essential to perception data synthesis, especially for 3D object detection tasks. In this paper, we introduce MagicDrive, a novel street view generation framework, offering diverse 3D geometry controls including camera poses, road maps, and 3D bounding boxes, together with textual descriptions, achieved through tailored encoding strategies. Besides, our design incorporates a cross-view attention module, ensuring consistency across multiple camera views. With MagicDrive, we achieve high-fidelity street-view image & video synthesis that captures nuanced 3D geometry and various scene descriptions, enhancing tasks like BEV segmentation and 3D object detection.
Ruiyuan Gao, Kai Chen, Enze Xie, Lanqing Hong, Zhenguo Li, Dit-Yan Yeung, Qiang Xu
2023-10-04T06:14:06Z
http://arxiv.org/abs/2310.02601v7
# MagicDrive: Street View Generation with Diverse 3D Geometry Control

###### Abstract

Recent advancements in diffusion models have significantly enhanced data synthesis with 2D control. Yet, precise 3D control in street view generation, crucial for 3D perception tasks, remains elusive. Specifically, utilizing Bird's-Eye View (BEV) as the primary condition often leads to challenges in geometry control (_e.g._, height), affecting the representation of object shapes, occlusion patterns, and road surface elevations, all of which are essential to perception data synthesis, especially for 3D object detection tasks. In this paper, we introduce MagicDrive, a novel street view generation framework offering diverse 3D geometry controls, including camera poses, road maps, and 3D bounding boxes, together with textual descriptions, achieved through tailored encoding strategies. Besides, our design incorporates a cross-view attention module, ensuring consistency across multiple camera views. With MagicDrive, we achieve high-fidelity street-view synthesis that captures nuanced 3D geometry and various scene descriptions, enhancing tasks like BEV segmentation and 3D object detection.

## 1 Introduction

The high costs associated with data collection and annotation often impede the effective training of deep learning models. Fortunately, cutting-edge generative models have illustrated that synthetic data can notably boost performance across various tasks, such as object detection (Chen et al., 2023b) and semantic segmentation (Wu et al., 2023). Yet, the prevailing methodologies are largely tailored to 2D contexts, primarily relying on 2D bounding boxes (Lin et al., 2014; Han et al., 2021) or segmentation maps (Zhou et al., 2019) as layout conditions (Chen et al., 2023b; Li et al., 2023). In autonomous driving applications, a thorough grasp of the 3D environment is essential. This demands reliable techniques for tasks like Bird's-Eye View (BEV) map segmentation (Zhou and Krahenbuhl, 2022; Ji et al., 2023) and 3D object detection (Chen et al., 2020; Huang et al., 2021; Liu et al., 2023a; Ge et al., 2023).

Figure 1: Multi-camera street view generation from MagicDrive. MagicDrive can generate continuous camera views with controls from road map, object boxes, and text (_e.g._, weather).
Conversely, BEVControl (Yang et al., 2023) starts by projecting 3D coordinates to image views, subsequently using 2D geometric guidance. However, both methods compromise certain geometric dimensions--height is lost in BEVGen and depth in BEVControl. The rise of diffusion models has significantly pushed the boundaries of controllable image generation quality. Specifically, ControlNet (Zhang et al., 2023a) proposes a flexible framework to incorporate 2D spatial controls based on pre-trained Text-to-Image (T2I) diffusion models (Rombach et al., 2022). However, the challenge of seamlessly integrating multiple 3D conditions with multi-camera view consistency in street view synthesis remains unresolved. In this paper, we introduce MagicDrive, a novel framework dedicated to street-view synthesis with diverse 3D geometry controls. For _realism_, we harness the power of pre-trained stable diffusion (Rombach et al., 2022), further fine-tuning it for street view generation. One distinctive component of our framework is the cross-view attention module. This simple yet effective component provides multi-view consistency through interactions between adjacent views. In contrast to previous methods, MagicDrive encodes objects and road maps separately, amplifying the utilization of all available 3D data for _controllability_ enhancement. More specifically, given the sequence-like, variable-length nature of 3D bounding boxes, we employ cross-attention akin to text embeddings for their encoding. Besides, we adapt ControlNet (Zhang et al., 2023a) to view transformation and incorporate road map information. Without resorting to explicit geometric transformations or imposing geometric constraints on multi-camera consistency, our design learns the inherent geometric relationships from data, significantly simplifying our framework. Finally, MagicDrive factors in textual descriptions, offering attribute control such as weather conditions and time of day. Our MagicDrive framework, despite its simplicity, excels in generating strikingly realistic images that align with road maps, 3D bounding boxes, and varied camera perspectives. Besides, the images produced can enhance the training for both 3D object detection and BEV segmentation tasks. Furthermore, MagicDrive offers comprehensive geometric controls at the _scene_, _background_, and _foreground_ levels. This flexibility makes it possible to craft previously unseen street views suitable for simulation purposes. We summarize the main contributions of this work as: * The introduction of MagicDrive, an innovative framework that generates multi-perspective camera views conditioned on BEV and 3D data tailored for autonomous driving. * The development of simple yet potent strategies to manage 3D geometric data, effectively addressing the challenges of multi-camera view consistency. * Through rigorous experiments, we demonstrate that MagicDrive outperforms prior street view generation techniques, notably for the multi-dimensional controllability. Additionally, our results reveal that synthetic data delivers considerable improvements in 3D perception tasks. Figure 2: 3D bounding boxes are crucial for street view synthesis. Two examples show that 2D boxes or BEV maps lost distance, height, and elevation. Images are generated from MagicDrive. 
## 2 Related Work

**Diffusion Models for Conditional Generation.** Diffusion models (Ho et al., 2020; Song et al., 2020) generate images by learning a progressive denoising process from the Gaussian noise distribution to the image distribution. These models have proven exceptional across diverse tasks, such as text-to-image synthesis (Rombach et al., 2022; Nichol et al., 2022), inpainting (Wang et al., 2023), and instructional image editing (Zhang et al., 2023b; Brooks et al., 2023), due to their adaptability and competence in managing various forms of control (Rombach et al., 2022; Zhang et al., 2023a; Li et al., 2023) and multiple conditions (Liu et al., 2022a; Huang et al., 2023; Gao et al., 2023). Besides, data synthesized from geometric annotations can aid downstream tasks such as 2D object detection (Chen et al., 2023b; Wu et al., 2023). Thus, this paper explores the potential of T2I diffusion models in generating street-view images and benefiting downstream 3D perception models.

**Street View Generation.** Numerous street view generation models condition on 2D layouts, such as 2D bounding boxes (Li et al., 2023) and semantic segmentation (Wang et al., 2022). These methods leverage 2D layout information that corresponds directly to the image scale, a property 3D information does not possess, rendering such methods unsuitable for exploiting 3D information in generation. For street view synthesis with 3D geometry, BEVGen (Swerdlow et al., 2023) is the first to explore. It utilizes a BEV map as the condition for both roads and vehicles. However, the omission of height information limits its application to 3D object detection. BEVControl (Yang et al., 2023) amends the loss of object height with a height-lifting process, but the projection from 3D to 2D results in the loss of essential 3D geometric information, like depth and occlusion. Thus, neither of them fully exploits 3D annotations or can utilize textual control of driving scenes. In this paper, we propose to encode bounding boxes and road maps separately for more nuanced control and to integrate scene descriptions, offering enhanced control over the generation of street views.

**Multi-camera Image Generation** of a 3D scene fundamentally requires viewpoint consistency. Several studies have addressed this issue within the context of indoor scenes. For instance, MVDiffusion (Tang et al., 2023) employs panoramic images and a cross-view attention module to maintain global consistency, while Tseng et al. (2023) leverage epipolar geometry as a constraining prior. These approaches, however, primarily rely on the continuity of image views, a condition not always met in street views due to limited camera overlap. Our MagicDrive introduces extra cross-view attention modules to the UNet, which significantly enhances consistency across multi-camera views.

## 3 Preliminary

**Problem Formulation.** In this paper, we consider the coordinate system of the LiDAR as the ego car's coordinate system, and parameterize all geometric information according to it.
Let \(\mathbf{S}=\{\mathbf{M},\mathbf{B},\mathbf{L}\}\) be the description of a driving scene around the ego vehicle, where \(\mathbf{M}\in\{0,1\}^{w\times h\times c}\) is the binary map representing a \(w\times h\) meter road area in BEV with \(c\) semantic classes, \(\mathbf{B}=\{(c_{i},b_{i})\}_{i=1}^{N}\) represents the 3D bounding box position (\(b_{i}=\{(x_{j},y_{j},z_{j})\}_{j=1}^{8}\in\mathbb{R}^{8\times 3}\)) and class (\(c_{i}\in\mathcal{C}\)) for each object in the scene, and \(\mathbf{L}\) is the text describing additional information about the scene (_e.g._, weather and time of day). Given a camera pose \(\mathbf{P}=[\mathbf{K},\mathbf{R},\mathbf{T}]\) (_i.e._, intrinsics, rotation, and translation), the goal of street-view image generation is to learn a generator \(\mathcal{G}(\cdot)\) which synthesizes realistic images \(I\in\mathbb{R}^{H\times W\times 3}\) corresponding to the scene \(\mathbf{S}\) and camera pose \(\mathbf{P}\), as \(I=\mathcal{G}(\mathbf{S},\mathbf{P},z)\), where \(z\sim\mathcal{N}(0,1)\) is random noise from a Gaussian distribution.

**Conditional Diffusion Models.** Diffusion models (Ho et al., 2020; Song et al., 2020) generate data (\(\mathbf{x}_{0}\)) by iteratively denoising a random Gaussian noise (\(\mathbf{x}_{T}\)) for \(T\) steps. Typically, to learn the denoising process, the network is trained to predict the noise by minimizing the mean-square error: \[\ell_{simple}=\mathbb{E}_{\mathbf{x}_{0},\mathbf{\epsilon},\mathbf{c},t}\left[||\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}(\sqrt{\bar{\alpha}_{t}}\mathbf{x}_{0}+\sqrt{1-\bar{\alpha}_{t}}\mathbf{\epsilon},t,\mathbf{c})||^{2}\right], \tag{1}\] where \(\mathbf{\epsilon}_{\theta}\) is the network to train, with parameters \(\theta\), \(\mathbf{c}\) denotes optional conditions used for conditional generation, \(t\in[0,T]\) is the time-step, \(\mathbf{\epsilon}\sim\mathcal{N}(0,I)\) is the additive Gaussian noise, and \(\bar{\alpha}_{t}\) is a scalar parameter. Latent diffusion models (LDMs) (Rombach et al., 2022) are a special kind of diffusion model that utilize a pre-trained Vector Quantized Variational AutoEncoder (VQ-VAE) (Esser et al., 2021) and perform the diffusion process in the latent space. Given the VQ-VAE encoder as \(z=\mathcal{E}(x)\), one can rewrite \(\mathbf{\epsilon}_{\theta}(\cdot)\) in Equation 1 as \(\mathbf{\epsilon}_{\theta}(\sqrt{\bar{\alpha}_{t}}\mathcal{E}(\mathbf{x}_{0})+\sqrt{1-\bar{\alpha}_{t}}\mathbf{\epsilon},t,\mathbf{c})\) for LDM. Besides, LDM considers text describing the image as the condition \(\mathbf{c}\).
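To make the training objective of Equation 1 concrete, here is a minimal PyTorch sketch of one step of the noise-prediction loss. The model signature and the noise schedule are placeholders of our own, not MagicDrive's actual architecture; for the latent variant, \(\mathbf{x}_0\) would simply be replaced by the VQ-VAE encoding \(\mathcal{E}(\mathbf{x}_0)\).

```python
import torch
import torch.nn.functional as F

def diffusion_loss(eps_model, x0, cond, alpha_bar):
    """One step of the simple noise-prediction loss (Equation 1).

    eps_model : network predicting noise from (noisy x, t, condition)
    x0        : clean samples (or latents), shape (B, C, H, W)
    alpha_bar : cumulative schedule over T steps, shape (T,)
    """
    B = x0.shape[0]
    t = torch.randint(0, alpha_bar.shape[0], (B,), device=x0.device)
    a = alpha_bar[t].view(B, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * eps  # forward diffusion step
    return F.mse_loss(eps_model(x_t, t, cond), eps)
```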
Additionally, maintaining consistency across different cameras is crucial for synthesizing street views. Thus, we introduce a simple yet effective cross-view attention module in Section 4.2. Lastly, we elucidate our training strategies in Section 4.3, emphasizing Classifier-Free Guidance (CFG) for integrating the various conditions.

### Geometric Conditions Encoding

As illustrated in Figure 3, two strategies are employed for information injection into the UNet of diffusion models: cross-attention and an additive encoder branch. Given that the attention mechanism (Vaswani et al., 2017) is tailored for sequential data, cross-attention is apt for managing variable-length inputs like text tokens and bounding boxes. Conversely, for grid-like data, such as road maps, the additive encoder branch is effective for information injection (Zhang et al., 2023a). Therefore, MagicDrive employs distinct encoding modules for the various conditions.

**Scene-level Encoding** includes the camera pose \(\mathbf{P}=\{\mathbf{K}\in\mathbb{R}^{3\times 3},\mathbf{R}\in\mathbb{R}^{3\times 3},\mathbf{T}\in\mathbb{R}^{3\times 1}\}\) and the text sequence \(\mathbf{L}\). For text, we construct the prompt with the template "A driving scene image at {location}. {description}", and leverage a pre-trained CLIP text encoder (\(E_{text}\)) as in LDM (Rombach et al., 2022), as shown by Equation 2, where \(L\) is the token length of \(\mathbf{L}\). As for the camera pose, we first concatenate the parameters by column, resulting in \(\bar{\mathbf{P}}=[\mathbf{K},\mathbf{R},\mathbf{T}]^{T}\in\mathbb{R}^{7\times 3}\). Since \(\mathbf{P}\) contains values from sin/cos functions as well as 3D offsets, to have the model effectively interpret these high-frequency variations, we apply Fourier embedding (Mildenhall et al., 2020) to each 3-dim vector before leveraging a Multi-Layer Perceptron (MLP, \(E_{cam}\)) to embed the camera pose parameters, as in Equation 3. To maintain consistency, we set the dimension of \(h^{c}\) to be the same as that of \(h^{t}_{i}\). Through the CLIP text encoder, each text embedding \(h^{t}_{i}\) already contains positional information (Radford et al., 2021). Therefore, we prepend the camera pose embedding \(h^{c}\) to the text embeddings, resulting in the scene-level embedding \(\mathbf{h}^{s}=[h^{c},\mathbf{h}^{t}]\).

\[\mathbf{h}^{t} =[h^{t}_{1}\dots h^{t}_{L}]=E_{text}(\mathbf{L}), \tag{2}\]
\[h^{c} =E_{cam}(\mathrm{Fourier}(\bar{\mathbf{P}}))=E_{cam}(\mathrm{Fourier}([\mathbf{K},\mathbf{R},\mathbf{T}]^{T})). \tag{3}\]

Figure 3: Overview of MagicDrive for street-view image generation. MagicDrive generates highly realistic images, exploiting geometric information from 3D annotations by independently encoding road maps, object boxes, and camera parameters for precise, geometry-guided synthesis. Additionally, MagicDrive accommodates guidance from descriptive conditions (_e.g._, weather).

**3D Bounding Box Encoding.** Since each driving scene has a variable number of bounding boxes, we inject them through the cross-attention mechanism, similar to the scene-level information. Specifically, we encode each box into a hidden vector \(h^{b}\), which has the same dimensions as \(h^{t}\). Each 3D bounding box \((c_{i},b_{i})\) contains two types of information: class label \(c_{i}\) and box position \(b_{i}\). For class labels, we utilize a method similar to Li et al. (2023), where the pooled embeddings of class names (\(L_{c_{i}}\)) are taken as label embeddings.
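Both the pose branch in Equation 3 and the box branch described next rely on the same Fourier feature map. As a point of reference, here is a minimal PyTorch sketch of the camera-pose encoder; the frequency count, hidden width, and the 768-dim output (chosen to match the CLIP text-embedding size of Stable Diffusion v1.5) are illustrative assumptions, not the paper's reported settings.

```python
import math
import torch
import torch.nn as nn

def fourier_embed(x, num_freqs=4):
    """NeRF-style mapping: each scalar -> [x, sin(2^k*pi*x), cos(2^k*pi*x)]."""
    feats = [x]
    for k in range(num_freqs):
        feats += [torch.sin(2**k * math.pi * x), torch.cos(2**k * math.pi * x)]
    return torch.cat(feats, dim=-1)

class CameraPoseEncoder(nn.Module):
    def __init__(self, num_freqs=4, hidden=256, out_dim=768):
        super().__init__()
        # \bar{P} in R^{7x3}, each scalar expanded to (2*num_freqs + 1) features
        in_dim = 7 * 3 * (2 * num_freqs + 1)
        self.num_freqs = num_freqs
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.SiLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, K, R, T):
        # concatenate [K, R, T] by column, then transpose: (..., 7, 3)
        P_bar = torch.cat([K, R, T], dim=-1).transpose(-1, -2)
        emb = fourier_embed(P_bar, self.num_freqs)   # (..., 7, 3*(2F+1))
        return self.mlp(emb.flatten(-2))             # h^c, prepended to h^t
```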
For box positions \(b_{i}\in\mathbb{R}^{8\times 3}\), represented by the coordinates of the 8 corner points, we apply Fourier embedding to each point and pass the result through an MLP for encoding, as in Equation 4. Then, we use an MLP to compress the class and position embeddings into one hidden vector, as in Equation 5. The final hidden states for all bounding boxes of each scene are represented as \(\mathbf{h}^{b}=[h_{1}^{b}\dots h_{N}^{b}]\), where \(N\) is the number of boxes.

\[e_{c}^{b}(i)=\mathrm{AvgPool}(E_{text}(L_{c_{i}})),\ e_{p}^{b}(i)=\mathrm{MLP}_{p}(\mathrm{Fourier}(b_{i})), \tag{4}\]
\[h_{i}^{b}=E_{box}(c_{i},b_{i})=\mathrm{MLP}_{b}(e_{c}^{b}(i),e_{p}^{b}(i)). \tag{5}\]

Ideally, the model learns the geometric relationship between bounding boxes and the camera pose through training. However, the distribution of the number of boxes visible in different views is long-tailed. Thus, we bootstrap learning by filtering the objects visible in each view (\(v_{i}\)), _i.e._, \(f_{viz}\) in Equation 6. Besides, we also add invisible boxes for augmentation (more details in Section 4.3).

\[\mathbf{h}_{v_{i}}^{b}=\{h_{i}^{b}\in\mathbf{h}^{b}|f_{viz}(b_{i},\mathbf{R}_{v_{i}},\mathbf{T}_{v_{i}})>0\}. \tag{6}\]

**Road Map Encoding.** The road map has a 2D-grid format. While Zhang et al. (2023a) show that the additive encoder can incorporate this kind of data for 2D guidance, the inherent perspective difference between the road map's BEV and the camera's First-Person View (FPV) creates discrepancies. BEVControl (Yang et al., 2023) employs a back-projection to transform from BEV to FPV, but this complicates the situation with an ill-posed problem. In MagicDrive, we propose that explicit view transformation is unnecessary, as sufficient 3D cues (_e.g._, heights from object boxes and the camera pose) allow the additive encoder to accomplish the view transformation. Specifically, we integrate the scene-level and 3D bounding box embeddings into the map encoder (see Figure 3). Scene-level embeddings provide camera poses, and box embeddings offer road elevation cues. Additionally, incorporating text descriptions facilitates the generation of roads under varying conditions (_e.g._, weather and time of day). Thus, the map encoder can synergize with the other conditions for generation.

### Cross-view Attention Module

In multi-camera view generation, it is crucial that image synthesis remains consistent across different perspectives. To maintain consistency, we introduce a cross-view attention module (Figure 4). Given the sparse arrangement of cameras in driving contexts, each cross-view attention allows the target view to access information from its immediate left and right views, as in Equation 7; here, \(t\), \(l\), and \(r\) denote the target, left, and right view, respectively. The target view then aggregates this information with a skip connection, as in Equation 8, where \(\mathbf{h}^{v}\) indicates the hidden state of the target view.

\[\mathrm{Attention}_{cv}^{i}(Q_{t},K_{i},V_{i})=\mathrm{softmax}(\frac{Q_{t}K_{i}^{T}}{\sqrt{d}})\cdot V_{i},\ i\in\{l,r\}, \tag{7}\]
\[\mathbf{h}_{out}^{v}=\mathbf{h}_{in}^{v}+\mathrm{Attention}_{cv}^{l}+\mathrm{Attention}_{cv}^{r}. \tag{8}\]

We inject cross-view attention after the cross-attention module in the UNet and apply zero-initialization (Zhang et al., 2023a) to bootstrap the optimization. The efficacy of the cross-view attention module is demonstrated in Figure 4 right, Figure 5, and Figure 6.
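To make Equations 7-8 concrete, the following is a minimal PyTorch sketch of the module. The hidden size, head count, and the zero-initialized output projection (standing in for the paper's zero-initialization) are illustrative assumptions rather than the exact implementation.

```python
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    def __init__(self, dim=320, heads=8):
        super().__init__()
        self.attn_l = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_r = nn.MultiheadAttention(dim, heads, batch_first=True)
        # zero-initialized projection so the module starts as an identity map
        self.out = nn.Linear(dim, dim)
        nn.init.zeros_(self.out.weight)
        nn.init.zeros_(self.out.bias)

    def forward(self, h_t, h_l, h_r):
        """h_t, h_l, h_r: (batch, tokens, dim) hidden states of the target,
        left, and right views; returns the target view updated per Eq. 8."""
        a_l, _ = self.attn_l(h_t, h_l, h_l)  # target queries, left keys/values
        a_r, _ = self.attn_r(h_t, h_r, h_r)  # target queries, right keys/values
        return h_t + self.out(a_l + a_r)     # skip connection
```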
The multilayered structure of the UNet enables aggregating information from long-range views after several stacked blocks. Therefore, using cross-view attention on adjacent views is enough for multi-view consistency, as further evidenced by the ablation study in Appendix C.

Figure 4: Cross-view Attention. _left_: we introduce cross-view attention to the pre-trained UNet after the cross-attention module. _right_: we highlight some areas for comparison between without and with cross-view attention. Cross-view attention guarantees consistency across multiple views.

### Model Training

**Classifier-free Guidance** reinforces the impact of conditional guidance (Ho and Salimans, 2021; Rombach et al., 2022). For effective CFG, models need to occasionally discard conditions during training. Given the unique nature of each condition, applying a drop strategy is complex for multiple conditions. Therefore, our MagicDrive simplifies this for the four conditions by concurrently dropping the scene-level conditions (camera pose and text embeddings) at a rate of \(\gamma^{s}\). For boxes and maps, which have semantic representations of _null_ (i.e., the padding token in boxes and \(0\) in maps) in their encoding, we maintain them throughout training. At inference, we utilize _null_ for all conditions, enabling meaningful amplification to guide generation.

**Training Objective and Augmentation.** With all the conditions injected as inputs, we adapt the training objective described in Section 3 to the multi-condition scenario, as in Equation 9.

\[\ell=\mathbb{E}_{\mathbf{x}_{0},\mathbf{\epsilon},t,\{\mathbf{S},\mathbf{P}\}}\left[||\mathbf{\epsilon}-\mathbf{\epsilon}_{\theta}(\sqrt{\bar{\alpha}_{t}}\mathcal{E}(\mathbf{x}_{0})+\sqrt{1-\bar{\alpha}_{t}}\mathbf{\epsilon},t,\{\mathbf{S},\mathbf{P}\})||^{2}\right]. \tag{9}\]

Besides, we emphasize two essential strategies when training our MagicDrive. First, to counteract our filtering of visible boxes, we randomly add 10% invisible boxes as an augmentation, enhancing the model's geometric transformation capabilities. Second, to leverage cross-view attention, which facilitates information sharing across multiple views, we apply unique noises to different views in each training step, preventing trivial solutions to Equation 9 (_e.g._, outputting the shared component across different views). Identical random noise is reserved exclusively for inference.

## 5 Experiments

### Experimental Setups

**Dataset and Baselines.** We employ the nuScenes dataset (Caesar et al., 2020), a prevalent dataset for BEV segmentation and detection in driving, as the testing ground for MagicDrive. We adhere to the official configuration, utilizing 700 street-view scenes for training and 150 for validation. Our baselines are BEVGen (Swerdlow et al., 2023) and BEVControl (Yang et al., 2023), both recent proposals for street view generation. Our method considers 10 object classes and 8 road classes, surpassing the baseline models in diversity. Appendix B provides additional details.

**Evaluation Metrics.** We evaluate both realism and controllability for street view generation. Realism is mainly measured using the Fréchet Inception Distance (FID), reflecting image synthesis quality. For controllability, MagicDrive is evaluated through two perception tasks: BEV segmentation and 3D object detection, with CVT (Zhou and Krahenbuhl, 2022) and BEVFusion (Liu et al., 2023) as the respective perception models. Both are renowned for their performance on these tasks.
First, we generate images aligned with the validation set annotations and use perception models pre-trained on real data to assess image quality and control accuracy. Then, we generate data based on the training set annotations to examine its utility as data augmentation for training perception models.

**Model Setup.** Our MagicDrive utilizes pre-trained weights from Stable Diffusion v1.5, training only the newly added parameters. Following Zhang et al. (2023a), a trainable UNet encoder is created for \(E_{map}\). New parameters, except for the zero-init module and the class token, are randomly initialized. We adopt two resolutions to reconcile discrepancies between perception tasks and baselines: 224\(\times\)400 (0.25\(\times\) down-sample), following BEVGen and for CVT model support, and a higher 272\(\times\)736 (0.5\(\times\) down-sample) for BEVFusion support. Unless stated otherwise, images are sampled using the UniPC (Zhao et al., 2023) scheduler for 20 steps with CFG at \(2.0\).

### Main Results

**Realism and Controllability Validation.** To confirm the efficacy of MagicDrive in generating realistic images, we employ the nuScenes validation set to synthesize street-view images and report the metrics in Table 1. MagicDrive surpasses all other methods in image quality, delivering significantly lower FID scores. Despite the inherent domain gap between Stable Diffusion pre-training and street views, our fine-tuning effectively mitigates it, yielding highly convincing results. Regarding controllability, assessed via BEV segmentation tasks, MagicDrive consistently equals or exceeds baseline results at 224\(\times\)400 resolution. This is mainly attributed to the distinct encoding design, which significantly boosts generation precision for vehicles. At 272\(\times\)736 resolution, the advancements of our encoding strategy are demonstrated by enhanced vehicle mIoU performance. Cropping large areas of each view affects the road mIoU on CVT negatively. Nonetheless, BEVFusion's results on 3D object detection further corroborate the efficacy of our bounding box encoding.

**Training Support for BEV Segmentation and 3D Object Detection.** MagicDrive can produce augmented data with accurate annotation controls, enhancing the training of perception tasks. For BEV segmentation, we augment the dataset with as many images as it originally contains, ensuring consistent training iterations and batch sizes for fair comparison to the baseline. As shown in Table 3, MagicDrive significantly enhances CVT in both settings, outperforming BEVGen, which only marginally improves vehicle segmentation. For 3D object detection, we train BEVFusion models with MagicDrive's synthetic data as augmentation. To optimize the data augmentation, we randomly exclude 50% of the bounding boxes in each generated scene. Table 2 shows the advantageous impact of MagicDrive's data in both the CAM-only (C) and CAM+LiDAR (C+L) settings. It is crucial to note that in the CAM+LiDAR setting, BEVFusion utilizes both modalities for object detection, requiring more precise image generation due to the LiDAR data incorporation. Nevertheless, MagicDrive's synthetic data integrates seamlessly with LiDAR inputs, highlighting the data's high fidelity.
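As a reproducibility note, realism scores like the FID column in the tables below can be computed with a standard off-the-shelf implementation. The snippet uses torchmetrics as one possible choice (the paper does not specify its exact FID pipeline), with random tensors standing in for real and generated views.

```python
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048, normalize=True)
# float tensors in [0, 1], shape (N, 3, H, W); placeholders for nuScenes views
real_images = torch.rand(16, 3, 224, 400)
fake_images = torch.rand(16, 3, 224, 400)
fid.update(real_images, real=True)
fid.update(fake_images, real=False)
print(f"FID: {fid.compute().item():.2f}")  # real runs need thousands of images
```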
\begin{table}
\begin{tabular}{l|c|c|c c|c c}
\hline
\multirow{2}{*}{Method} & Synthesis & \multirow{2}{*}{FID\(\downarrow\)} & \multicolumn{2}{c|}{BEV segmentation} & \multicolumn{2}{c}{3D object detection} \\
\cline{4-7}
 & resolution & & Road mIoU\(\uparrow\) & Vehicle mIoU\(\uparrow\) & mAP\(\uparrow\) & NDS\(\uparrow\) \\
\hline
Oracle & - & - & 72.21 & 33.66 & 35.54 & 41.21 \\
\hline
BEVGen & 224\(\times\)400 & 25.54 & 50.20 & 5.89 & - & - \\
BEVControl & - & 24.85 & _60.80_ & 26.80 & - & - \\
\hline
MagicDrive & 224\(\times\)400 & **16.20** & **61.05** & _27.01_ & 12.30 & 23.32 \\
MagicDrive & 272\(\times\)736 & _16.59_ & 54.24 & **31.05** & **20.85** & **30.26** \\
\hline
\end{tabular}
\end{table} Table 1: Comparison of generation fidelity with driving-view generation methods. Conditions for data synthesis are from the nuScenes validation set. For each task, we test the corresponding models trained on the nuScenes training set. MagicDrive surpasses all baselines throughout the evaluation. \(\uparrow\)/\(\downarrow\) indicates that a higher/lower value is better. The best results are in **bold**, while the second best results are in _underlined italic_ (when other methods are available).

Figure 5: Qualitative comparison with baselines. Both scenes are from the nuScenes validation set. We highlight some areas with rectangles to ease comparison. _Up_: compared with BEVControl, generations from MagicDrive appear more consistent in both background and foreground. _Down_: compared with BEVControl, the image quality of objects from MagicDrive is much better.

### Qualitative Evaluation

**Comparison with Baselines.** We assessed MagicDrive against two baselines, BEVGen and BEVControl, synthesizing multi-camera views for the same validation scenes. Figure 5 illustrates that MagicDrive generates images markedly superior in quality to both baselines, particularly excelling in accurate object positioning and in maintaining consistency in street views for backgrounds and objects, notably over BEVControl. Such performance primarily stems from MagicDrive's bounding box encoder and its cross-view attention module.

**Multi-level Controls.** The design of MagicDrive introduces multi-level controls to street-view generation through its separation encoding. This section demonstrates the capabilities of MagicDrive by exploring three levels of control signals: _scene level_ (time of day and weather), _background level_ (BEV map alterations and conditional views), and _foreground level_ (object orientation and deletion). As illustrated in Figure 1 and Figure 6, MagicDrive adeptly accommodates alterations at each control level, maintaining multi-camera consistency and high realism in generation.

\begin{table}
\begin{tabular}{c|l|c|c}
\hline \hline
Modality & Data & mAP \(\uparrow\) & NDS \(\uparrow\) \\
\hline
\multirow{2}{*}{C} & w/o synthetic data & 32.48 & 37.61 \\
 & w/ MagicDrive & **35.14** (+2.66) & **39.63** (+2.02) \\
\hline
\multirow{2}{*}{C+L} & w/o synthetic data & 64.92 & 69.42 \\
 & w/ MagicDrive & **67.28** (+2.36) & **70.14** (+0.72) \\
\hline \hline
\end{tabular}
\end{table} Table 2: Training support for 3D object detection with BEVFusion, in the CAM-only (C) and CAM+LiDAR (C+L) settings; gains over training without synthetic data are given in parentheses.

Figure 6: Generation from MagicDrive with multi-level controls, e.g., scene-level changes applied through the text prompt.

## 6 Ablation Study

**Bounding Box Encoding.** MagicDrive utilizes separate encoders for bounding boxes and road maps.
To demonstrate the efficacy, we train a ControlNet (Zhang et al., 2023a) that takes the BEV map with both road and object semantics as a condition (like BEVGen), denoted as "w/o \(E_{box}\)" in Table 4. Given the relatively small size of objects in the BEV map, a separate \(E_{box}\) is necessary to help the model accurately represent vehicle annotations, as evidenced by the performance gap in vehicle mIoU. Besides, applying the visible-object filter \(f_{viz}\) significantly improves both the road and the vehicle mIoU. This is because visibility filtering reduces the optimization burden. Additionally, we explored a variant of MagicDrive incorporating \(E_{box}\) alongside a BEV map containing both road and object semantics. However, this redundancy did not enhance performance, reinforcing the merit of integrating diverse types of information through different strategies.

**Effect of Classifier-free Guidance.** We focus on the two most crucial conditions, _i.e._, object boxes and road maps, and analyze how CFG affects generation performance. We vary the CFG scale from 1.5 to 4.0 and plot the change of validation results from CVT in Figure 7. First, increasing the CFG scale keeps degrading FID, since CFG significantly changes contrast and sharpness, as also observed in previous works (Chen et al., 2023). Second, by keeping the map the same for both conditional and unconditional inference, we can eliminate the effect of CFG on the map condition. As shown by the blue lines of Figure 7, increasing the CFG scale yields the highest vehicle mIoU at CFG=2.5, but the road mIoU keeps decreasing. Third, with \(M=\{0\}\) for unconditional inference in CFG, the road mIoU significantly increases. However, this slightly degrades the guidance on vehicle generation. As stated in Section 4.3, CFG becomes more complicated as more conditions are incorporated. Although we simplify the situation for training, we still face different choices for CFG during inference. We leave an in-depth investigation of this case to future work.

## 7 Conclusion

This paper presents MagicDrive, a novel framework to encode multiple geometric controls for high-quality multi-camera street view generation. With its separation encoding design, MagicDrive fully utilizes the geometric information from 3D annotations and realizes accurate semantic control for street views. Besides, the proposed cross-view attention module is simple yet effective in guaranteeing consistency across multi-camera views. As evidenced by the experiments, the generations from MagicDrive exhibit high realism and fidelity to the 3D annotations. The multiple controls equip MagicDrive with improved generalizability for the generation of novel street views. Meanwhile, MagicDrive can be used for data augmentation, facilitating the training of perception models for both BEV segmentation and 3D object detection.

**Limitation and Future Work.** We show failure cases of MagicDrive in Figure 8. Although MagicDrive can generate night views, they are not as dark as real images (as in Figure 8(a)). This may be because diffusion models have difficulty generating images that are too dark (Guttenberg, 2023). Figure 8(b) shows that MagicDrive cannot generate weather conditions unseen in nuScenes. Future work may focus on how to improve the cross-domain generalization ability of street view generation.
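For concreteness, classifier-free guidance at inference combines the conditional and unconditional predictions in the standard way. The sketch below (PyTorch-style, with assumed names `model`, `cond`, and `null_cond`) reflects the choices discussed in Section 6, where the map entry of `null_cond` is either the true map or \(M=\{0\}\).

```python
import torch

def cfg_epsilon(model, x_t, t, cond, null_cond, scale=2.0):
    """Standard CFG: eps = eps_null + scale * (eps_cond - eps_null).

    `cond` bundles {text, camera pose, boxes, map}; `null_cond` replaces each
    entry with its semantic null (padding tokens for boxes, zeros for maps).
    Keeping the true map inside `null_cond` removes the CFG effect on the map,
    while map = {0} amplifies it -- the trade-off examined in Figure 7.
    """
    eps_cond = model(x_t, t, cond)
    eps_null = model(x_t, t, null_cond)
    return eps_null + scale * (eps_cond - eps_null)
```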
\begin{table}
\begin{tabular}{l|c c c}
\hline \hline
Method & FID \(\downarrow\) & Road mIoU \(\uparrow\) & Vehicle mIoU \(\uparrow\) \\
\hline
w/o \(E_{box}\) & 18.06 & 58.31 & 5.50 \\
w/o \(f_{viz}\) & 14.67 & 56.46 & 24.73 \\
w/ \(E_{box}\) \& map\({}_{obj}\) & 14.70 & 56.04 & 26.20 \\
\hline
Ours & **14.46** & **59.31** & **27.13** \\
\hline \hline
\end{tabular}
\end{table} Table 4: Ablation of the separate box encoder. Evaluation results are from CVT on the synthetic nuScenes validation set, at CFG scale \(=2\) and without \(M=\{0\}\) for unconditional inference. MagicDrive has better controllability while keeping image quality.

Figure 8: Failure cases of MagicDrive.

Figure 7: Effect of CFG with different conditions on each metric.

**Acknowledgement.** This work is supported in part by the General Research Fund (GRF) of the Hong Kong Research Grants Council (RGC) under Grant No. 14203521 and in part by the Research Matching Grant Scheme under Grant No. 8601109. We gratefully acknowledge the support of MindSpore, CANN (Compute Architecture for Neural Networks) and the Ascend AI Processor used for this research.
2303.15245
Comparison between layer-to-layer network training and conventional network training using Deep Convolutional Neural Networks
Title: Comparison between layer-to-layer network training and conventional network training using Deep Convolutional Neural Networks Abstract: Convolutional neural networks (CNNs) are widely used in various applications due to their effectiveness in extracting features from data. However, the performance of a CNN heavily depends on its architecture and training process. In this study, we propose a layer-to-layer training method and compare its performance with the conventional training method. In the layer-to-layer training approach, we treat a portion of the early layers as a student network and the later layers as a teacher network. During each training step, we incrementally train the student network to learn from the output of the teacher network, and vice versa. We evaluate this approach on VGG16, ResNext, and DenseNet networks without pre-trained ImageNet weights, and on a regular CNN model. Our experiments show that the layer-to-layer training method outperforms the conventional training method for both models. Specifically, we achieve higher accuracy on the test set for the VGG16, ResNext, and DenseNet networks and the CNN model using layer-to-layer training compared to the conventional training method. Overall, our study highlights the importance of layer-wise training in CNNs and suggests that layer-to-layer training can be a promising approach for improving the accuracy of CNNs.
Kiran Kumar Ashish Bhyravabhottla, WonSook Lee
2023-03-27T14:29:18Z
http://arxiv.org/abs/2303.15245v2
# Comparison between layer-to-layer network training and conventional network training using Deep Convolutional Neural Networks

###### Abstract

Convolutional neural networks have been widely deployed in almost all applications, reaching nearly every domain and scenario. There has been significant development in neural architectures, such as transfer learning, generative networks, diffusion models, and so forth, but the base of each of these networks is the convolutional neural architecture. In today's scenario, accuracy plays a crucial role. In general, accuracy mainly depends on the features, which are extracted by the convolutional filters inside the hidden layers. The layers of an architecture therefore play a vital role in the training process. In this research, we propose a comparative analysis of layer-to-layer training and conventional training of the network. In layer-to-layer training, the first portion of the layers is treated as a student network and the last layers are treated as a teacher network. During each step of training, the portion assigned to the student network keeps incrementing through the forward layers while the teacher network's portion keeps decrementing from the last layers. This layer-to-layer comparison is tested on the VGG16, ResNext and DenseNet networks without using any pre-trained ImageNet weights, and on a normal CNN model. The results are then compared with the conventional training method on VGG16, ResNext, DenseNet and the normal CNN model, respectively.

Convolutional neural networks, VGG16, ResNext, DenseNet, ImageNet, layer-to-layer training.

+
Footnote †: Code: [https://github.com/ashish-AIML/LIIILab](https://github.com/ashish-AIML/LIIILab)

## 1 Introduction

Convolutional neural networks have gained momentum in image classification, object detection, and image segmentation applications. Despite its success in many practical applications, traditional machine learning still has limitations in certain real-world scenarios. The problem is that obtaining sufficient training data can be costly, time-consuming, or even impossible in many cases. This problem can be partially addressed by semi-supervised learning, which does not require mass-labeled data: for improved learning accuracy, semi-supervised approaches utilize a large amount of unlabeled data alongside a limited amount of labeled data. The resulting models are often unsatisfactory, however, because unlabeled instances can also be challenging to collect. Hence, transfer learning came into existence, with the aim of transferring knowledge across domains when labeled data is limited. In simple words, it is learning by transferring generalized experience, giving a model the ability to handle new situations through prior experience. A commonly used transfer learning methodology is initializing with ImageNet weights [2]. The idea of implementing ImageNet as pre-trained model weights is inspired by human beings' ability to transfer knowledge across domains. It leverages the knowledge from the source, i.e., ImageNet [2] data, to improve the performance of the model and to minimize the amount of labeled data required in the target domain. Now, the main research focus is on improving accuracy. Critical applications require very high accuracy, approaching 99%; hence, tolerance for accuracies below even 97% is shrinking.
This research explores layer-to-layer training, using a simple convolutional network as a teacher-student mechanism, and analyzes its memory consumption, training speed, and performance against the normal conventional training method.

## 2 Background and Motivation

Modern neural networks are composed of dozens or hundreds of layers that perform mathematical operations. These layers take a feature tensor as input and output activations corresponding to those features. The training algorithm iterates over a large dataset many times and minimizes the loss function. The full dataset is partitioned into mini-batches, and one full pass through the dataset is called an epoch. The training of a neural network consists of (a) a forward pass, (b) a backward pass, and (c) parameter synchronization. The forward pass (FP) analyses the model layer-by-layer in each iteration to determine the loss with respect to the target labels and the loss function. GPU computing is needed for both the forward and backward passes. In the backward pass (BP), we determine the parameter gradients from the last layer to the first layer using the chain rule of derivatives for the loss [5]. We update the model parameters after each iteration using an optimization procedure such as stochastic gradient descent (SGD) [4]. Since today's datasets are complicated, several layer-intensive algorithms have been proposed to acquire higher accuracies, and many techniques, such as optimizing the parameters of existing algorithms, have been employed to achieve better accuracies. In this research, we explore a different approach to training the neural network. Recent efforts have shown that the front layers extract general features while the deeper layers are more task-specific feature extractors [5]. Our research aims at exploring layer-wise training within a network, training from scratch without using any pre-trained residual network weights [1] or any other sort of pre-trained weights.

## 3 Technical Approach

**Architecture:** Our architecture is a simple, normal convolutional neural network trained sequentially. The base network is a _12-layered network_. The first layer is a 2D convolutional layer with _32_ filters, each with a kernel size of _3x3_, _'same'_ padding, and _ReLU_ activation [7]. The input shape is the shape of a single image in the training data. The second layer is another 2D convolutional layer with _32_ filters, also with a kernel size of _3x3_, _'same'_ padding, and _ReLU_ activation. The third layer is a 2D max pooling layer with a pool size of _2x2_. The fourth layer is a dropout layer with a rate of _0.25_, which randomly drops 25% of the inputs during training to prevent overfitting. The fifth layer is a 2D convolutional layer with _64_ filters, each with a kernel size of _3x3_, _'same'_ padding, and _ReLU_ activation. The sixth layer is another 2D convolutional layer with _64_ filters, also with a kernel size of _3x3_, _'same'_ padding, and _ReLU_ activation. The seventh layer is another 2D max pooling layer with a pool size of _2x2_. The eighth layer is another dropout layer with a rate of _0.25_. The ninth layer is a flatten layer that flattens the output of the previous layer into a 1D array. The tenth layer is a fully connected layer with _512_ units and _ReLU_ activation. The eleventh layer is another dropout layer with a rate of _0.5_. The final layer is another fully connected layer with _num_classes_ units and a _softmax_ activation function.
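The description above maps directly onto a Keras-style definition; the following is a minimal sketch under that assumption (the paper does not publish its code here, and the optimizer and loss are illustrative choices).

```python
from tensorflow import keras
from tensorflow.keras import layers

num_classes = 100  # CIFAR-100

# The 12-layer base network as described in the text.
model = keras.Sequential([
    layers.Conv2D(32, (3, 3), padding="same", activation="relu",
                  input_shape=(32, 32, 3)),
    layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Dropout(0.25),
    layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```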
**VGG16 Architecture:** The first convolutional layer has _32_ filters, each with a kernel size of _3x3_. So, the number of parameters in this layer is _(3 * 3 * input_channels + 1) * 32_, where _input_channels_ is the number of channels in the input image (usually 3 for RGB images). In this case, the input shape is _(32, 32, 3)_, so the number of parameters in this layer is _(3 * 3 * 3 + 1) * 32 = 896_. The second convolutional layer has the same parameters as the first, so it also has _896_ parameters. The max pooling layers and dropout layers do not have any parameters. The third convolutional layer has _64_ filters, so the number of parameters in this layer is _(3 * 3 * 32 + 1) * 64 = 18496_. The fourth convolutional layer has the same parameters as the third, so it also has _18496_ parameters. The first fully connected layer has _512_ units, so the number of parameters in this layer is _(previous_layer_size + 1) * 512_, where _previous_layer_size_ is the flattened size of the previous layer. In this case, the previous layer has a flattened size of _4096 (64 * 8 * 8)_, so the number of parameters in this layer is _(4096 + 1) * 512 = 2097664_. The second fully connected layer has _num_classes_ units, so the number of parameters in this layer is _(previous_layer_size + 1) * num_classes_. In this case, the previous layer has a size of _512_, so the number of parameters in this layer is _(512 + 1) * num_classes_. (A short script verifying this arithmetic is given below.)

**ResNext Architecture:** The ResNext architecture takes an input tensor of the shape specified by input_shape and produces an output tensor whose dimension equals the number of classes, i.e., 100. The architecture consists of four groups of convolutional layers, each group containing two convolutional layers with the same number of filters. The number of filters is doubled in each group, starting from 64 in the first group. After each convolutional layer, batch normalization is performed, followed by the ReLU activation function. Max pooling is applied after each group to reduce the spatial size of the feature maps. The final layers of the network consist of a global average pooling layer followed by a fully connected layer with num_classes neurons and a softmax activation function to produce a probability distribution over the classes. This architecture is based on the ResNet architecture, which introduces residual connections to address the vanishing gradient problem in deep neural networks. However, the ResNext architecture extends ResNet by introducing a split-transform-merge strategy for the residual connections, which allows more diverse representations to be learned by the network.

**DenseNet Architecture:** The architecture consists of a convolutional layer followed by batch normalization and ReLU activation, and a series of dense blocks, where each dense block consists of a series of bottleneck layers and convolutional layers with concatenation of feature maps. The architecture ends with a global average pooling layer and a fully connected softmax output layer.

## 4 Experiments

We evaluate our approach on the standard CIFAR100 [3] dataset with a normal CNN network and with dense networks, namely the VGG16 [6], ResNext [1], and DenseNet [6] networks. For each training run, the number of epochs is set to **300**. All the experiments are implemented on Google Colab GPU notebooks.
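As a quick check of the parameter arithmetic in Section 3, the following standalone snippet applies the standard formulas (3x3 kernels, one bias per filter or unit); it is a verification aid written for this text, not the authors' code.

```python
# Parameter counts: conv layers and dense layers with bias terms.
def conv_params(in_ch, filters, k=3):
    return (k * k * in_ch + 1) * filters

def dense_params(in_units, out_units):
    return (in_units + 1) * out_units

assert conv_params(3, 32) == 896                  # first conv, RGB input
assert conv_params(32, 64) == 18496               # third conv
assert dense_params(64 * 8 * 8, 512) == 2097664   # first dense layer
print("parameter counts match the text")
```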
The performance metrics used to evaluate the models are:

(i) total training time,
(ii) accuracy, and
(iii) total memory consumption.

### Benchmark Datasets

**CIFAR100:** CIFAR-100 is a popular image classification dataset that contains 60,000 32x32 color images in 100 classes, with 600 images per class. The dataset is split into 50,000 training images and 10,000 testing images. The 100 classes in CIFAR-100 are grouped into 20 superclasses, each containing five fine-grained classes.

### Layer-to-Layer Training

The model is trained with layer-to-layer training. In this mechanism, we declare a single network with _n_ layers. In the first step, the _1st_ layer and the _(n-1)th_ layer are trained while the rest of the layers are frozen; here, the 1st layer acts as a student network and the (n-1)th layer acts as a teacher network. In the second step, the _2nd_ layer (student) and the _(n-2)th_ layer (teacher) are trained and the remaining layers are frozen. In the third step, the _3rd_ layer (student) and the _(n-3)th_ layer (teacher) are trained, freezing the rest of the layers. In general, with \(i\) starting from \(i=0\), the process runs over the pairs \((i+1,\,n-(i+1))\), i.e., \((1, n-1), (2, n-2), (3, n-3), \ldots\), until \(i+1\) reaches \(n/2\), so that the student and teacher layers meet in the middle of the network. After training with all the layer pairs, we perform an ensemble over all the layer pairs to obtain the final accuracy. (A code sketch of this pairing scheme is given at the end of the paper.)

### Standard Training

The performance is compared with the standard training of the networks. Normal training is the standard sequential training of all the layers at once.

### Results and Discussion

In this section, we discuss the performance of layer-to-layer training compared with the standard training of the respective architectures. The tabular comparison is shown in Table 1.

**Standard CNN:** Accuracy plays an important role in critical applications, and layer-to-layer training outperforms the standard training method here. As shown in Table 1, the accuracy of layer-to-layer training is **80%**, while that of standard training is **78%**. Crucial applications for which accuracy is critical can therefore utilize the layer-to-layer training method. On the other two performance metrics, standard training performed better than layer-to-layer training: the total training time for standard training is **72.7 seconds** compared to **309.19 seconds** for layer-to-layer training, and the total memory consumption of standard training is **5.27 GB** compared to **8.86 GB** for layer-to-layer training. Whenever systems are RAM-critical, standard training can be used; with higher-RAM systems, layer-to-layer training is preferable since higher accuracies can be achieved.

**VGG16:** The VGG16 [6] architecture is trained without pre-trained ImageNet weights, i.e., from scratch. The scenario for VGG16 is similar to that of the standard CNN: accuracy is greater with layer-to-layer training than with standard training. However, compared with the standard CNN architecture, the accuracies of VGG16 under both training methods are almost negligible. Even after performing the ensemble step in layer-to-layer training, the accuracy did not increase. The accuracy of layer-to-layer training is **10%** and that of standard training is **9.62%**. The total memory consumption of standard VGG16 training is **5.4 GB** and that of layer-to-layer training is **6 GB**.
The total training time of standard training is 265.5 seconds compared to 2311.3 seconds for layer-to-layer training.

**DenseNet:** Since accuracy plays an important role in critical applications, layer-to-layer training again outperforms the standard training method. As shown in Table 1, the accuracy of layer-to-layer training is **63.98%**, while that of standard training is **60.25%**. Crucial applications for which accuracy is critical can utilize the layer-to-layer training method. On the other two performance metrics, standard training performed better: the total training time for standard training is **36045.16 seconds** compared to **68081.97 seconds** for layer-to-layer training, and the total memory consumption of standard training is **5.26 GB** compared to **7.77 GB** for layer-to-layer training. Whenever systems are RAM-critical, standard training can be used; with higher-RAM systems, layer-to-layer training is preferable since higher accuracies can be achieved.

**ResNext:** The ResNext architecture is trained without pre-trained ImageNet weights, i.e., from scratch. Accuracy is greater with layer-to-layer training than with standard training. Even after performing the ensemble step in layer-to-layer training, the accuracy did not increase further. The accuracy of layer-to-layer training is **56.85%** and that of standard training is **55.28%**. The total memory consumption of standard ResNext training is **4.9 GB** and that of layer-to-layer training is **6.9 GB**. The total training time of standard training is **3930.06 seconds** compared to **8790.19 seconds** for layer-to-layer training.

The second half of the network is chosen as the teacher model since it is processed with larger filter counts, and hence more knowledge can be extracted from these layers. It is generally believed that the last layers, which typically involve global pooling operations and fully connected layers, act as classifiers. These layers are responsible for extracting high-level features from the input images, which can be used to classify the image into different classes. Therefore, in this sense, the last layers of a CNN can be considered global feature extractors, as they take into account the entire image and produce a summary of its features that can be used for classification. Hence, they are more knowledgeable than the initial layers. On the other hand, the earlier layers of a CNN typically perform local feature extraction by detecting low-level visual features such as edges, corners, and textures in different regions of the input image. These features are gradually combined and transformed by subsequent layers to form higher-level features that are increasingly global. Hence, accuracy is greater with layer-to-layer training than with standard training.

## 5 Conclusion and Future Work

We have researched layer-to-layer training within the same network and shown its advantage by comparing it with normal standard training. The second section of the architecture is treated as the teacher network because of its ability to extract dense features. Layer-to-layer training resulted in greater accuracies for both the normal CNN and the transfer learning architectures. As the number of epochs increased, the accuracy of layer-to-layer training increased. Although layer-to-layer training resulted in higher accuracies, its training time and memory consumption are higher compared to normal conventional training.
But as the dataset size increases, the RAM consumption of layer-to-layer training increases drastically, requiring more than 15 GB of RAM for denser architectures such as ResNext V1 and ResNext V2. Therefore, more RAM is recommended for layer-to-layer training. We can conclude that as the number of dataset classes and the number of layers increase, more hyperparameter tuning is required, and pre-trained weights are needed to improve the accuracy. In the future, the dense architectures can be trained with more RAM, such as on multi-GPU systems, and with much larger datasets, to see how accuracy varies between small datasets and very large ones. Further, instead of stacking multiple training methods and increasing the number of epochs at the cost of memory, the layer-to-layer method can be deployed to improve the performance of the network, resulting in higher accuracy. Future experiments can increase the number of layers and epochs, and train with higher-quality and larger datasets, to evaluate the performance of layer-to-layer training. Further, the experiments can be conducted with multi-GPUs and multi-threading to check the training speed of both normal training and layer-to-layer training. Since layer-to-layer training resulted in better accuracy than normal conventional training, future work can also focus on extending this methodology to object detection and image segmentation models to evaluate its performance.

\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
**Network Architecture** & **Training Method** & **Total Training Time** & **Accuracy** & **Total Memory Consumption** \\
\hline
\multirow{2}{*}{Standard CNN} & Layer-to-Layer training & 309.20 seconds & **80\%** & 8862.60 MB \\
 & Standard training & **72.70 seconds** & 78\% & **5277.43 MB** \\
\hline
\multirow{2}{*}{VGG16} & Layer-to-Layer training & 2311.32 seconds & **10\%** & 6035.42 MB \\
 & Standard training & **265.59 seconds** & 9.62\% & **5436.65 MB** \\
\hline
\multirow{2}{*}{DenseNet} & Layer-to-Layer training & 68081.97 seconds & **63.98\%** & 7.77 GB \\
 & Standard training & **36045.16 seconds** & 60.25\% & **5.2 GB** \\
\hline
\multirow{2}{*}{ResNext} & Layer-to-Layer training & 8790.19 seconds & **56.85\%** & 6.9 GB \\
 & Standard training & **3930.06 seconds** & 55.28\% & **5.5 GB** \\
\hline
\end{tabular}
\end{table} Table 1: Performance metrics of training methods on the CIFAR100 dataset.
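As referenced in Section 4.2, a minimal sketch of the layer-pairing scheme is given below, assuming a Keras model such as the base CNN; `epochs_per_pair` and the optimizer are illustrative, and the paper does not publish this exact loop.

```python
def layer_to_layer_train(model, x_train, y_train, epochs_per_pair=10):
    """At step i, only layer i (student) and layer n-1-i (teacher) train.

    Uses 0-based indexing; the paper's 1-based pairs (1, n-1), (2, n-2), ...
    correspond to (0, n-1), (1, n-2), ... here.
    """
    n = len(model.layers)
    for i in range(n // 2):
        for j, layer in enumerate(model.layers):
            layer.trainable = j in (i, n - 1 - i)  # unfreeze the current pair
        # recompile so the new trainable flags take effect
        model.compile(optimizer="adam",
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(x_train, y_train, epochs=epochs_per_pair, batch_size=64)
    return model
```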
2302.12603
Parameterized shadowing for nonautonomous dynamics
For nonautonomous and nonlinear differential and difference equations depending on a parameter, we formulate sufficient conditions under which they exhibit $C^k$, $k\in\mathbb{N}$ shadowing with respect to a parameter. Our results are applicable to situations when the linear part is not hyperbolic. In the case when the linear part is hyperbolic, we obtain results dealing with parameterized Hyers-Ulam stability.
Lucas Backes, Davor Dragičević, Xiao Tang
2023-02-24T12:39:50Z
http://arxiv.org/abs/2302.12603v1
# Parameterized shadowing for nonautonomous dynamics

###### Abstract.

For nonautonomous and nonlinear differential and difference equations depending on a parameter, we formulate sufficient conditions under which they exhibit \(C^{k}\), \(k\in\mathbb{N}\) shadowing with respect to a parameter. Our results are applicable to situations when the linear part is not hyperbolic. In the case when the linear part is hyperbolic, we obtain results dealing with parameterized Hyers-Ulam stability.

Key words and phrases: \(C^{k}\) parametrized shadowing; nonautonomous systems; exponential dichotomy

2020 Mathematics Subject Classification: Primary: 37C50; Secondary: 34A34, 39A05

## 1. Introduction

In the present paper, we consider nonautonomous and nonlinear differential equations of the form \[x^{\prime}=A(t)x+f_{\lambda}(t,x),\quad t\in\mathbb{R}. \tag{1}\] Here, \(A(t)\), \(t\in\mathbb{R}\) are linear operators acting on a Banach space \(X=(X,|\cdot|)\) and \(f_{\lambda}\colon\mathbb{R}\times X\to X\) is a nonlinear map for each \(\lambda\in\Sigma\), where \(\Sigma\) is an open subset of some Banach space. In [8], the authors have formulated very general conditions under which (1) admits a _shadowing property_, which guarantees that in a neighborhood of each approximate solution of (1) we can construct an exact solution. An important feature of the results established in [8] is that they do not require any hyperbolicity conditions for the linear part of (1). In the setting when the linear part of (1) is hyperbolic (i.e. admits an exponential dichotomy or trichotomy), the results of [8] essentially reduce to the previously known results devoted to Hyers-Ulam stability for (1) (see [4]). We stress that the literature devoted to Hyers-Ulam stability for differential and difference equations is vast and contains many interesting contributions. In particular, for some recent results devoted to the relationship between Hyers-Ulam stability and hyperbolicity, we refer to [2, 3, 4, 5, 6, 7, 9, 10, 11, 12, 13, 19] and references therein. In addition, we mention the works of Anderson and Onitsuka [1], Fukutaka and Onitsuka [14, 15, 16], Popa and Rasa [21, 22] and Wang et al. [23, 24] among others. For the description of the shadowing theory in the context of smooth dynamical systems, we refer to [17, 18, 20].

In order to describe the results of the present paper, suppose that for each \(\lambda\in\Sigma\) we have an approximate solution \(y_{\lambda}\) of (1) which is shadowed by a unique exact solution \(x_{\lambda}\). Our main objective is to study the dependence of the exact solution \(x_{\lambda}\) on the parameter \(\lambda\); more precisely, we formulate sufficient conditions under which the map \(\lambda\mapsto x_{\lambda}(t)\) is of class \(C^{k}\) for each \(t\in\mathbb{R}\).
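As a simple illustration of the type of statement we are after (included only for orientation, in the notation introduced in Section 2 below, and not needed in the sequel), consider the scalar case \(X=\mathbb{R}\) with \(A(t)\equiv-a\) for some \(a>0\) and \(f_{\lambda}(t,x)=\lambda\sin x\) for \(\lambda\in(-a,a)\). Taking \(\mathcal{G}(t,s)=e^{-a(t-s)}\) for \(t\geq s\) and \(\mathcal{G}(t,s)=0\) for \(t<s\), a direct computation gives \[\sup_{t\in\mathbb{R}}\int_{-\infty}^{\infty}|\lambda|\,|\mathcal{G}(t,s)|\,ds=\frac{|\lambda|}{a}<1,\] so the results below apply and show that every continuously differentiable \(y_{\lambda}\) with \(\sup_{t\in\mathbb{R}}|y_{\lambda}^{\prime}(t)+ay_{\lambda}(t)-\lambda\sin y_{\lambda}(t)|\leq\varepsilon\) lies within distance \(\varepsilon/(a-|\lambda|)\) of a unique exact solution \(x_{\lambda}\), which moreover depends smoothly on \(\lambda\).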
## 2. Shadowing

Throughout this section, \(X=(X,|\cdot|)\) denotes an arbitrary Banach space and \(\mathcal{B}(X)\) denotes the space of all bounded linear operators on \(X\). Let \(A(t)\in\mathcal{B}(X)\), \(t\in\mathbb{R}\), and suppose that \(\mathcal{G}\colon\mathbb{R}\times\mathbb{R}\to\mathcal{B}(X)\) is a Green function associated with \(A\), in the sense that for each \(s\in\mathbb{R}\) the map \(t\mapsto\mathcal{G}(t,s)\) is differentiable on \(\mathbb{R}\setminus\{s\}\) and satisfies \[\frac{\partial}{\partial t}\mathcal{G}(t,s)=A(t)\mathcal{G}(t,s)\quad\text{for }t\neq s,\qquad\lim_{t\to s^{+}}\mathcal{G}(t,s)-\lim_{t\to s^{-}}\mathcal{G}(t,s)=\mathrm{Id}. \tag{2}\] Moreover, let \(\Sigma\) be an open subset of some Banach space and suppose that for each \(\lambda\in\Sigma\), \(f_{\lambda}\colon\mathbb{R}\times X\to X\) is a continuous map such that \[|f_{\lambda}(t,x)-f_{\lambda}(t,y)|\leq c(t)|x-y|\quad\text{for }t\in\mathbb{R}\text{ and }x,y\in X, \tag{3}\] where \(c\colon\mathbb{R}\to(0,\infty)\) is a continuous function independent of \(\lambda\). We consider the nonlinear equation \[x^{\prime}=A(t)x+f_{\lambda}(t,x),\quad t\in\mathbb{R}. \tag{4}\]

**Theorem 1**.: _Suppose that the above conditions hold and that_ \[q:=\sup_{t\in\mathbb{R}}\int_{-\infty}^{\infty}c(s)\|\mathcal{G}(t,s)\|\,ds<1. \tag{5}\]

_Furthermore, let \(\varepsilon\colon\mathbb{R}\to(0,\infty)\) be a continuous map such that_ \[L:=\sup_{t\in\mathbb{R}}\int_{-\infty}^{\infty}\varepsilon(s)\|\mathcal{G}(t,s)\|\,ds<+\infty, \tag{6}\] _and, given \(\lambda\in\Sigma\), let \(y_{\lambda}\colon\mathbb{R}\to X\) be a continuously differentiable map satisfying_ \[|y_{\lambda}^{\prime}(t)-A(t)y_{\lambda}(t)-f_{\lambda}(t,y_{\lambda}(t))|\leq\varepsilon(t)\quad\text{for }t\in\mathbb{R}. \tag{7}\] _Then, for each \(\lambda\in\Sigma\) there exists a unique solution \(x_{\lambda}\colon\mathbb{R}\to X\) of (4) such that_ \[\sup_{t\in\mathbb{R}}|x_{\lambda}(t)-y_{\lambda}(t)|\leq\frac{L}{1-q}. \tag{8}\]

Proof.: Let \(\mathcal{Y}\) denote the space of all continuous maps \(z\colon\mathbb{R}\to X\) such that \[\|z\|_{\infty}:=\sup_{t\in\mathbb{R}}|z(t)|<+\infty.\] Then, \((\mathcal{Y},\|\cdot\|_{\infty})\) is a Banach space. Take a fixed \(\lambda\in\Sigma\) and set \[(\mathcal{T}_{\lambda}z)(t):=\int_{-\infty}^{\infty}\mathcal{G}(t,s)(A(s)y_{\lambda}(s)+f_{\lambda}(s,y_{\lambda}(s)+z(s))-y_{\lambda}^{\prime}(s))\,ds, \tag{9}\] for \(t\in\mathbb{R}\) and \(z\in\mathcal{Y}\). Note that it follows from (3) and (7) that \[|A(s)y_{\lambda}(s)+f_{\lambda}(s,y_{\lambda}(s)+z(s))-y_{\lambda}^{\prime}(s)|\] \[\leq|A(s)y_{\lambda}(s)+f_{\lambda}(s,y_{\lambda}(s))-y_{\lambda}^{\prime}(s)|+|f_{\lambda}(s,y_{\lambda}(s)+z(s))-f_{\lambda}(s,y_{\lambda}(s))|\] \[\leq\varepsilon(s)+c(s)|z(s)|,\] for \(s\in\mathbb{R}\).
Hence, \[|(\mathcal{T}_{\lambda}z)(t)|\leq\int_{-\infty}^{\infty}\|\mathcal{G}(t,s)\|(\varepsilon(s)+c(s)|z(s)|)\,ds,\] for \(t\in\mathbb{R}\). This together with (5) and (6) implies that \[\|\mathcal{T}_{\lambda}z\|_{\infty}\leq L+q\|z\|_{\infty},\quad z\in\mathcal{Y}. \tag{10}\] In particular, \(\mathcal{T}_{\lambda}\colon\mathcal{Y}\to\mathcal{Y}\) is well-defined. Moreover, setting \(z=0\) in (10) we have that \[\|\mathcal{T}_{\lambda}0\|_{\infty}\leq L. \tag{11}\] Take now \(z_{1},z_{2}\in\mathcal{Y}\). Observe that (3) implies that \[|f_{\lambda}(s,y_{\lambda}(s)+z_{1}(s))-f_{\lambda}(s,y_{\lambda}(s)+z_{2}(s))|\leq c(s)|z_{1}(s)-z_{2}(s)|,\] for \(s\in\mathbb{R}\). Consequently, \[|(\mathcal{T}_{\lambda}z_{1})(t)-(\mathcal{T}_{\lambda}z_{2})(t)|\leq\int_{-\infty}^{\infty}c(s)\|\mathcal{G}(t,s)\|\,|z_{1}(s)-z_{2}(s)|\,ds,\] for \(t\in\mathbb{R}\). This together with (5) gives that \[\|\mathcal{T}_{\lambda}z_{1}-\mathcal{T}_{\lambda}z_{2}\|_{\infty}\leq q\|z_{1}-z_{2}\|_{\infty}. \tag{12}\] Set \[\mathcal{D}:=\bigg{\{}z\in\mathcal{Y}:\|z\|_{\infty}\leq\frac{L}{1-q}\bigg{\}}.\] It is apparent that \(\mathcal{D}\) is a non-empty closed subset of \(\mathcal{Y}\) and it is therefore a complete metric space with the distance \(\|\cdot\|_{\infty}\). For \(z\in\mathcal{D}\), it follows from (11) and (12) that \[\|\mathcal{T}_{\lambda}z\|_{\infty}\leq\|\mathcal{T}_{\lambda}0\|_{\infty}+\|\mathcal{T}_{\lambda}z-\mathcal{T}_{\lambda}0\|_{\infty}\leq L+\frac{qL}{1-q}=\frac{L}{1-q}.\] Thus, \(\mathcal{T}_{\lambda}(\mathcal{D})\subset\mathcal{D}\). By (12), we have that \(\mathcal{T}_{\lambda}\) is a contraction on \(\mathcal{D}\), and therefore it has a unique fixed point \(z_{\lambda}\in\mathcal{D}\). It is straightforward to verify that \(x_{\lambda}:=y_{\lambda}+z_{\lambda}\) is a solution of (4). Moreover, since \(z_{\lambda}\in\mathcal{D}\), we have that (8) holds. The uniqueness of \(x_{\lambda}\) can be established by repeating the arguments in the proof of [8, Theorem 1]. The proof of the theorem is completed.

## 3. Regularity with respect to a parameter

We are now interested in formulating sufficient conditions under which the map \(\lambda\mapsto x_{\lambda}(t)\) is of class \(C^{k}\) with \(k\in\mathbb{N}\), for each \(t\in\mathbb{R}\).

### \(C^{k}\) regularity without exponential dichotomy

First, we recall some notions for clarity. Let \(\phi:Y_{1}\times Y_{2}\to Z\) be a map, where \(Y_{1}\), \(Y_{2}\) and \(Z\) are three Banach spaces. We say that the map \(y_{2}\mapsto\phi(y_{1},y_{2})\) is \(C^{k}\) on an open set \(\mathcal{U}\subset Y_{1}\times Y_{2}\) if \((y_{1},y_{2})\mapsto\phi(y_{1},y_{2})\) is \(k\)-times differentiable with respect to \(y_{2}\) on \(\mathcal{U}\) and \(\frac{\partial^{i}}{\partial y_{2}}\phi\), the \(i\)-th partial derivative of \(\phi\) with respect to \(y_{2}\), is continuous on \(\mathcal{U}\) for \(1\leq i\leq k\). By \(\frac{\partial^{i}}{\partial y_{2}}\phi(y_{1},y_{2})\) we denote the value of \(\frac{\partial^{i}}{\partial y_{2}}\phi\) at the point \((y_{1},y_{2})\in Y_{1}\times Y_{2}\). For a map with more variables, we have analogous notions.

**Theorem 2**.: _For each \(\lambda\in\Sigma\) let \(y_{\lambda}:\mathbb{R}\to X\) be a continuously differentiable map satisfying (7) and suppose that the assumptions of Theorem 1 hold. Let \(x_{\lambda}:\mathbb{R}\to X\) be the map associated to \(y_{\lambda}\) by Theorem 1 and take \(k\in\mathbb{N}\).
In addition, suppose there exists \(C>0\) such that the following conditions hold:_

* _the map_ \((\lambda,x)\mapsto f_{\lambda}(t,x)\) _is_ \(C^{k+1}\) _and for all_ \(2\leq i\leq k+1\) _and_ \(0\leq j\leq i\)_,_ \[\sup_{\lambda\in\Sigma}\sup_{x\in X}\left\|\frac{\partial^{i}}{\partial\lambda^{i-j}\partial x^{j}}f_{\lambda}(t,x)\right\|\leq C\varepsilon(t);\] (13)
* _the maps_ \(\lambda\mapsto y_{\lambda}(t)\) _and_ \(\lambda\mapsto y_{\lambda}^{\prime}(t)\) _both are_ \(C^{k+1}\) _such that_ \[\left|\frac{\partial^{i}}{\partial\lambda^{i}}y_{\lambda}^{\prime}(t)-A(t)\frac{\partial^{i}}{\partial\lambda^{i}}y_{\lambda}(t)-\frac{d^{i}}{d\lambda^{i}}f_{\lambda}(t,y_{\lambda}(t))\right|\leq C\varepsilon(t)\] (14) _for all_ \(1\leq i\leq k+1\)_, all_ \(t\in\mathbb{R}\) _and all_ \(\lambda\in\Sigma\)_. Furthermore, for every_ \(\lambda_{0}\in\Sigma\) _there exist_ \(M_{\lambda_{0}}>1\) _and a neighborhood_ \(\Sigma_{0}\) _of_ \(\lambda_{0}\) _such that_ \[\max_{1\leq i\leq k+1}\sup_{t\in\mathbb{R}}\left\|\frac{\partial^{i}}{\partial\lambda^{i}}y_{\lambda}(t)\right\|\leq M_{\lambda_{0}}\quad\text{for each}\quad\lambda\in\Sigma_{0}.\] (15)

_Then, the map \(\lambda\mapsto x_{\lambda}(t)\) is \(C^{k}\) for each \(t\in\mathbb{R}\)._

**Remark 1**.: _Observe that if the map \(\lambda\mapsto y_{\lambda}\) is constant and condition (13) holds then conditions (14) and (15) are automatically satisfied._

In order to establish the statement of Theorem 2, we need to set up several auxiliary results. We first observe that since \(x_{\lambda}=y_{\lambda}+z_{\lambda}\), it is sufficient to prove that \(\lambda\mapsto z_{\lambda}(t)\) is \(C^{k}\) for each \(t\in\mathbb{R}\). In fact, we will prove that the map \(\lambda\mapsto z_{\lambda}\) is \(C^{k}\) as a map from \(\Sigma\) to \(\mathcal{Y}\), which immediately implies the desired conclusion. Let \(\mathcal{T}_{\lambda}\) be the operator constructed in the proof of Theorem 1 (see (9)).

**Lemma 1**.: _Suppose \(\Sigma\ni\lambda\mapsto\mathcal{T}_{\lambda}z_{\lambda_{0}}\) is continuous at \(\lambda_{0}\) for every \(\lambda_{0}\in\Sigma\). Then, \(\lambda\mapsto z_{\lambda}\) is continuous._

Proof.: Fix an arbitrary \(\lambda\in\Sigma\). Then, for \(h\) such that \(\lambda+h\in\Sigma\) we have (see (12)) that \[\|z_{\lambda+h}-z_{\lambda}\|_{\infty} =\|\mathcal{T}_{\lambda+h}z_{\lambda+h}-\mathcal{T}_{\lambda}z_{\lambda}\|_{\infty}\] \[\leq\|\mathcal{T}_{\lambda+h}z_{\lambda+h}-\mathcal{T}_{\lambda+h}z_{\lambda}\|_{\infty}+\|\mathcal{T}_{\lambda+h}z_{\lambda}-\mathcal{T}_{\lambda}z_{\lambda}\|_{\infty}\] \[\leq q\|z_{\lambda+h}-z_{\lambda}\|_{\infty}+\|\mathcal{T}_{\lambda+h}z_{\lambda}-\mathcal{T}_{\lambda}z_{\lambda}\|_{\infty}.\] Since \(q<1\), we see that \[\|z_{\lambda+h}-z_{\lambda}\|_{\infty}\leq\frac{1}{1-q}\|\mathcal{T}_{\lambda+h}z_{\lambda}-\mathcal{T}_{\lambda}z_{\lambda}\|_{\infty}. \tag{16}\] According to our assumption, the right hand side of (16) goes to \(0\) as \(|h|\to 0\), which yields the claimed continuity. The proof is completed.

**Lemma 2**.: _Let \(k\in\mathbb{N}\). Suppose that the map \(\Sigma\times\mathcal{Y}\ni(\lambda,z)\mapsto\mathcal{T}_{\lambda}z\) is \(C^{k}\) on an open set \(\mathcal{W}\) containing the set \(\mathcal{S}:=\{(\lambda,z_{\bar{\lambda}}):\lambda,\bar{\lambda}\in\Sigma\}\).
Then, \(\Sigma\ni\lambda\mapsto z_{\lambda}\) is also \(C^{k}\)._ Proof.: We utilize induction to prove that the \(i\)-th derivative of the map \(\lambda\mapsto z_{\lambda}\) for \(1\leq i\leq k\) is continuous and has the form \[\frac{\partial^{i}}{\partial\lambda^{i}}z_{\lambda}=\left(\mathrm{Id}-\frac{ \partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)^{-1}\mathcal{F}( \lambda), \tag{17}\] where \[\mathcal{F}(\lambda):=\sum\left(\frac{\partial^{\ell}}{\partial\lambda^{\ell -j}\partial z^{j}}\mathcal{T}_{\lambda}z_{\lambda}\right)\left(\frac{\partial ^{\ell_{1}}}{\partial\lambda^{\ell_{1}}}z_{\lambda},\ldots,\frac{\partial^{ \ell_{j}}}{\partial\lambda^{\ell_{j}}}z_{\lambda}\right)\!,\] where the sum is taken over \(1\leq\ell\leq i\), \(0\leq j\leq\ell\) and some nonnegative integers \(\ell_{1},\ldots,\ell_{j}\) satisfying \(\ell_{1}+\ldots+\ell_{j}=i+j-\ell\). We start by observing that, since \((\lambda,z)\mapsto\mathcal{T}_{\lambda}z\) is \(C^{1}\) on the open set \(\mathcal{W}\) containing \(\mathcal{S}\), we have that \[z_{\lambda+h}-z_{\lambda} =\mathcal{T}_{\lambda+h}z_{\lambda+h}-\mathcal{T}_{\lambda}z_{\lambda}\] \[=\mathcal{T}_{\lambda+h}z_{\lambda+h}-\mathcal{T}_{\lambda+h}z_{ \lambda}+\mathcal{T}_{\lambda+h}z_{\lambda}-\mathcal{T}_{\lambda}z_{\lambda}\] \[=\int_{0}^{1}\left(\frac{\partial}{\partial z}\mathcal{T}_{\lambda +h}(z_{\lambda}+\theta(z_{\lambda+h}-z_{\lambda}))\right)(z_{\lambda+h}-z_{ \lambda})d\theta\] \[\quad+\left(\frac{\partial}{\partial\lambda}\mathcal{T}_{\lambda }z_{\lambda}\right)h+\mathbf{o}(h), \tag{18}\] where \(\mathbf{o}(h)\) means that \(\lim_{h\to 0}\frac{\|\mathbf{o}(h)\|_{\infty}}{|h|}=0\). Since the derivative of \(z\mapsto\mathcal{T}_{\lambda}z\) is continuous at \((\lambda,z_{\lambda})\), it follows from (18) and the conclusion of Lemma 1 that \[z_{\lambda+h}-z_{\lambda} =\left(\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}+ \alpha\right)(z_{\lambda+h}-z_{\lambda})\] \[\quad+\left(\frac{\partial}{\partial\lambda}\mathcal{T}_{\lambda} z_{\lambda}\right)h+\mathbf{o}(h), \tag{19}\] where \(\alpha\) is a linear operator such that \(\|\alpha\|\to 0\) as \(|h|\to 0\). Moreover, by (16), given \(\varepsilon>0\), whenever \(|h|\) is sufficiently small we have that \[\|z_{\lambda+h}-z_{\lambda}\|_{\infty} \leq\frac{1}{1-q}\|\mathcal{T}_{\lambda+h}z_{\lambda}-\mathcal{T}_ {\lambda}z_{\lambda}\|_{\infty}\] \[\leq\frac{1}{1-q}\left(\left\|\frac{\partial}{\partial\lambda} \mathcal{T}_{\lambda}z_{\lambda}\right\|+\varepsilon\right)|h|.\] In particular, when \(|h|\) is sufficiently small, we have that \[\frac{\|\alpha(z_{\lambda+h}-z_{\lambda})\|_{\infty}}{|h|} \leq\frac{\|\alpha\|\|z_{\lambda+h}-z_{\lambda}\|_{\infty}}{|h|}\] \[\leq\frac{\|\alpha\|}{1-q}\left(\left\|\frac{\partial}{\partial \lambda}\mathcal{T}_{\lambda}z_{\lambda}\right\|+\varepsilon\right).\] Therefore, since \(\|\alpha\|\to 0\) when \(|h|\to 0\), it follows that \(\alpha(z_{\lambda+h}-z_{\lambda})=\mathbf{o}(h)\). Plugging this information into (19) we obtain that \[z_{\lambda+h}-z_{\lambda}=\left(\frac{\partial}{\partial\lambda}\mathcal{T}_{ \lambda}z_{\lambda}\right)h+\left(\frac{\partial}{\partial z}\mathcal{T}_{ \lambda}z_{\lambda}\right)(z_{\lambda+h}-z_{\lambda})+\mathbf{o}(h). 
\tag{20}\] Now, observing that (12) implies \[\left\|\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda }\right\| =\sup_{w\neq 0}\frac{\left\|\left(\frac{\partial}{\partial z} \mathcal{T}_{\lambda}z_{\lambda}\right)w\right\|_{\infty}}{\|w\|_{\infty}}\] \[=\sup_{w\neq 0}\lim_{t\to 0^{+}}\frac{\left\|\left(\frac{ \partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)tw\right\|_{ \infty}}{\|tw\|_{\infty}}\] \[\leq\sup_{w\neq 0}\lim_{t\to 0^{+}}\frac{\left\|\mathcal{T}_{ \lambda}(z_{\lambda}+tw)-\mathcal{T}_{\lambda}z_{\lambda}-\left(\frac{ \partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)tw\right\|_{ \infty}}{\|tw\|_{\infty}} \tag{21}\] \[\quad+\sup_{w\neq 0}\lim_{t\to 0^{+}}\frac{\left\|\mathcal{T}_{ \lambda}(z_{\lambda}+tw)-\mathcal{T}_{\lambda}z_{\lambda}\right\|_{\infty}}{ \|tw\|_{\infty}}\] \[\leq q<1,\] it follows that the map \(\mathrm{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\) is invertible. Consequently, using (20) we obtain that \[z_{\lambda+h}-z_{\lambda}=\left(\mathrm{Id}-\frac{\partial}{\partial z} \mathcal{T}_{\lambda}z_{\lambda}\right)^{-1}\left(\frac{\partial}{\partial \lambda}\mathcal{T}_{\lambda}z_{\lambda}\right)h+\mathbf{o}(h).\] This proves that \(\lambda\mapsto z_{\lambda}\) is differentiable and, moreover, that its derivative is given by \[\frac{\partial}{\partial\lambda}z_{\lambda}=\left(\mathrm{Id}-\frac{\partial }{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)^{-1}\left(\frac{\partial }{\partial\lambda}\mathcal{T}_{\lambda}z_{\lambda}\right),\] which is of the form in (17) with \(i=1\). Due to Lemma 1 and the assumptions in Lemma 2, it is easy to see that the derivative map \(\lambda\mapsto\frac{\partial}{\partial\lambda}z_{\lambda}\) is continuous. Hence, it is proved that \(\lambda\mapsto z_{\lambda}\) is \(C^{1}\). Assume that (17) holds and that \(\frac{\partial^{i}}{\partial\lambda^{i}}z_{\lambda}\) is continuous for all \(1\leq i\leq m\), \(1\leq m\leq k-1\). Before proving that (17) holds for \(i=m+1\) and that \(\frac{\partial^{m+1}}{\partial\lambda^{m+1}}z_{\lambda}\) is continuous, let us discuss the differentiability of \(\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{ \lambda}\right)^{-1}\). 
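We note in passing that, since (21) gives \(\big\|\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\big\|\leq q<1\), this inverse can be written as the Neumann series \[\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)^{-1}=\sum_{n=0}^{\infty}\left(\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)^{n},\quad\text{so that}\quad\left\|\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)^{-1}\right\|\leq\frac{1}{1-q},\] a bound which is uniform in \(\lambda\).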
By our assumptions, for sufficiently small \(h\in\Sigma\), we have that \[\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda+h}z_{\lambda+h}\right)^{-1}-\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda+h}\right)^{-1}\] \[= \left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda+h}z_{\lambda+h}\right)^{-1}\left(\frac{\partial}{\partial z}\mathcal{T}_{\lambda+h}z_{\lambda+h}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda+h}\right)\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda+h}\right)^{-1}\] \[= \left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda+h}z_{\lambda+h}\right)^{-1}\bigg{(}\int_{0}^{1}\left(\frac{\partial^{2}}{\partial\lambda\partial z}\mathcal{T}_{\lambda+\theta h}z_{\lambda+h}\right)\!hd\theta\bigg{)}\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda+h}\right)^{-1}\] \[= \left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)^{-1}\bigg{(}\bigg{(}\frac{\partial^{2}}{\partial\lambda\partial z}\mathcal{T}_{\lambda}z_{\lambda}\bigg{)}h\bigg{)}\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)^{-1}+\mathbf{o}(h),\] and similarly \[\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda+h}\right)^{-1}-\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)^{-1}\] \[= \left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)^{-1}\bigg{(}\bigg{(}\frac{\partial^{2}}{\partial z^{2}}\mathcal{T}_{\lambda}z_{\lambda}\frac{\partial}{\partial\lambda}z_{\lambda}\bigg{)}h\bigg{)}\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)^{-1}+\mathbf{o}(h).\] It follows that \(\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)^{-1}\) is differentiable with respect to \(\lambda\) and that its derivative is given by \[\frac{\partial}{\partial\lambda}\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)^{-1}h=\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)^{-1}\left(\bigg{(}\frac{\partial^{2}}{\partial\lambda\partial z}\mathcal{T}_{\lambda}z_{\lambda}+\frac{\partial^{2}}{\partial z^{2}}\mathcal{T}_{\lambda}z_{\lambda}\frac{\partial}{\partial\lambda}z_{\lambda}\bigg{)}h\right)\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)^{-1},\] for \(h\in\Sigma\). It follows that \(\frac{\partial^{m}}{\partial\lambda^{m}}z_{\lambda}\) is differentiable. 
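The derivative formula just obtained for \(\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)^{-1}\) is an instance of the standard identity for differentiating an operator inverse: if \(\lambda\mapsto B(\lambda)\) is a differentiable family of invertible operators, then \[\frac{\partial}{\partial\lambda}B(\lambda)^{-1}=-B(\lambda)^{-1}\left(\frac{\partial}{\partial\lambda}B(\lambda)\right)B(\lambda)^{-1},\] applied here with \(B(\lambda)=\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\), whose derivative in \(\lambda\) is computed by the chain rule, taking into account the dependence of \(z_{\lambda}\) on \(\lambda\).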
By left-multiplying (17) (with \(i=m\)) by \(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\) and differentiating both sides, we obtain that \[\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)\frac{\partial^{m+1}}{\partial\lambda^{m+1}}z_{\lambda}-\frac{\partial^{2}}{\partial\lambda\partial z}\mathcal{T}_{\lambda}z_{\lambda}\frac{\partial^{m}}{\partial\lambda^{m}}z_{\lambda}-\frac{\partial^{2}}{\partial z^{2}}\mathcal{T}_{\lambda}z_{\lambda}\bigg{(}\frac{\partial}{\partial\lambda}z_{\lambda},\frac{\partial^{m}}{\partial\lambda^{m}}z_{\lambda}\bigg{)}=\frac{\partial}{\partial\lambda}\mathcal{F}(\lambda),\] which gives that \[\frac{\partial^{m+1}}{\partial\lambda^{m+1}}z_{\lambda}=\left(\operatorname{Id}-\frac{\partial}{\partial z}\mathcal{T}_{\lambda}z_{\lambda}\right)^{-1}\bigg{(}\frac{\partial}{\partial\lambda}\mathcal{F}(\lambda)+\frac{\partial^{2}}{\partial\lambda\partial z}\mathcal{T}_{\lambda}z_{\lambda}\frac{\partial^{m}}{\partial\lambda^{m}}z_{\lambda}+\frac{\partial^{2}}{\partial z^{2}}\mathcal{T}_{\lambda}z_{\lambda}\bigg{(}\frac{\partial}{\partial\lambda}z_{\lambda},\frac{\partial^{m}}{\partial\lambda^{m}}z_{\lambda}\bigg{)}\bigg{)}.\] Clearly, \(\frac{\partial^{m+1}}{\partial\lambda^{m+1}}z_{\lambda}\) is of the form (17) and is continuous according to the inductive assumption and \(C^{k}\) regularity of the map \((\lambda,z)\mapsto\mathcal{T}_{\lambda}z\). By induction, (17) is proved and \(\frac{\partial^{i}}{\partial\lambda^{i}}z_{\lambda}\) is continuous for all \(1\leq i\leq k\). Therefore, the map \(\lambda\to z_{\lambda}\) is \(C^{k}\). The proof of the lemma is completed. **Remark 2**.: _Observe that Lemmas 1 and 2 are general results about fixed points of contractions on Banach spaces. In fact, in the proof of these results we have only exploited the fact that \(z_{\lambda}\in\mathcal{D}\) is a fixed point of a contraction map \(\mathcal{T}_{\lambda}\) acting on a Banach space and not the particular form of the operator \(\mathcal{T}_{\lambda}\). In particular, these results hold true for any map \(\lambda\to z_{\lambda}\) where \(z_{\lambda}\) is a fixed point of a contraction \(\mathcal{T}_{\lambda}\) on a Banach space._ In order to conclude the proof of Theorem 2 what remains to be done is to show that the hypotheses given in its statement imply that the assumptions of Lemma 2 are satisfied. This is accomplished in Lemmas 3 and 4 below. **Lemma 3**.: _Suppose that the assumptions of Theorem 2 hold. Then, \(\mathcal{Y}\ni z\mapsto\mathcal{T}_{\lambda}z\) is \(C^{k}\) on an open set \(\mathcal{W}\) containing the set \(\mathcal{S}\), where \(\mathcal{S}\) is defined in Lemma 2._ Proof.: It suffices to prove that, for any \((\lambda_{0},z_{\bar{\lambda}_{0}})\in\mathcal{S}\), there exists an open set \(\mathcal{N}\ni(\lambda_{0},z_{\bar{\lambda}_{0}})\) on which the map \(z\mapsto\mathcal{T}_{\lambda}z\) is \(C^{k}\). We use induction to prove that on \(\mathcal{N}\), the \(i\)-th derivative of \(z\mapsto\mathcal{T}_{\lambda}z\) is given by \[\left(\left(\frac{\partial^{i}}{\partial z^{i}}\mathcal{T}_{\lambda}z\right)(\eta_{1},\ldots,\eta_{i})\right)(t)= \int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\left(\frac{\partial^{i}}{\partial x^{i}}f_{\lambda}(\tau,y_{\lambda}(\tau)+z(\tau))\right)(\eta_{1}(\tau),\ldots,\eta_{i}(\tau))d\tau \tag{22}\] for \(1\leq i\leq k\), where \(\eta_{j}\in\mathcal{Y}\) for \(1\leq j\leq i\). Formally, (22) is what one obtains by differentiating (9) \(i\) times under the integral sign; the estimates below justify this procedure. 
By arguing as in (21), it follows from (3) that \[\sup_{\lambda\in\Sigma,x\in X}\left\|\frac{\partial}{\partial x}f_{\lambda}(t,x)\right\|\leq c(t). \tag{23}\] We first show that (22) holds at the point \((\lambda_{0},z_{\bar{\lambda}_{0}})\). Using the definition of \(\mathcal{T}_{\lambda}\) and the first assumption of Theorem 2, we get that for \(\eta\in\mathcal{Y}\) with sufficiently small \(\|\eta\|_{\infty}\), \[\left(\mathcal{T}_{\lambda_{0}}(z_{\bar{\lambda}_{0}}+\eta)\right) (t)-\left(\mathcal{T}_{\lambda_{0}}z_{\bar{\lambda}_{0}}\right)(t)\] \[=\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\bigg{\{}\int_{0}^{1} \left(\frac{\partial}{\partial x}f_{\lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+ z_{\bar{\lambda}_{0}}(\tau)+\theta\eta(\tau))\right)\eta(\tau)d\theta\bigg{\}}d\tau\] \[=\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\left(\frac{\partial }{\partial x}f_{\lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+z_{\bar{\lambda}_{0}} (\tau))\right)\eta(\tau)d\tau\] \[\quad+\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\alpha_{1}(\tau )d\tau, \tag{24}\] where \[\alpha_{1}(\tau)= \int_{0}^{1}\bigg{(}\frac{\partial}{\partial x}f_{\lambda_{0}}( \tau,y_{\lambda_{0}}(\tau)+z_{\bar{\lambda}_{0}}(\tau)+\theta\eta(\tau))\bigg{)} \eta(\tau)d\theta\] \[-\bigg{(}\frac{\partial}{\partial x}f_{\lambda_{0}}(\tau,y_{ \lambda_{0}}(\tau)+z_{\bar{\lambda}_{0}}(\tau))\bigg{)}\eta(\tau).\] By the first assumption of Theorem 2 again, \(|\alpha_{1}(\tau)|\) can be estimated as follows: \[|\alpha_{1}(\tau)|\leq\int_{0}^{1}\left\|\frac{\partial}{\partial x }f_{\lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+z_{\bar{\lambda}_{0}}(\tau)+\theta \eta(\tau))\right.\] \[\quad-\frac{\partial}{\partial x}f_{\lambda_{0}}(\tau,y_{\lambda _{0}}(\tau)+z_{\bar{\lambda}_{0}}(\tau))\bigg{\|}|\eta(\tau)|d\theta\] \[\leq\int_{0}^{1}\bigg{(}\int_{0}^{1}\left\|\frac{\partial^{2}}{ \partial x^{2}}f_{\lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+z_{\bar{\lambda}_{0 }}(\tau)+\nu\theta\eta(\tau))\right\|\!\|\eta(\tau)|d\nu\bigg{)}|\eta(\tau)|d\theta\] \[\leq C\varepsilon(\tau)|\eta(\tau)|^{2}. \tag{25}\] Due to (5), (6), (23) and (25), all the integrals above converge. In addition, by (6) and (25), we have that \[\frac{1}{\|\eta\|_{\infty}}\left|\int_{-\infty}^{\infty}\mathcal{G}(t,\tau) \alpha_{1}(\tau)d\tau\right|\leq C\int_{-\infty}^{\infty}\|\mathcal{G}(t,\tau) \|\varepsilon(\tau)|\eta(\tau)|d\tau\leq CL\|\eta\|_{\infty},\] for every \(t\in\mathbb{R}\), and consequently \[\frac{1}{\|\eta\|_{\infty}}\sup_{t\in\mathbb{R}}\left|\int_{-\infty}^{\infty} \mathcal{G}(t,\tau)\alpha_{1}(\tau)d\tau\right|\xrightarrow{\|\eta\|_{\infty} \to 0}0.\] This fact combined with (24) implies that \(z\mapsto\mathcal{T}_{\lambda_{0}}z\) is differentiable at \(z_{\bar{\lambda}_{0}}\) and that its derivative is given by \[\left(\left(\frac{\partial}{\partial z}\mathcal{T}_{\lambda_{0}}z_{\bar{ \lambda}_{0}}\right)\eta\right)(t)= \int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\left(\frac{\partial}{ \partial x}f_{\lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+z_{\bar{\lambda}_{0}}( \tau))\right)\eta(\tau)d\tau, \tag{26}\] for \(t\in\mathbb{R}\) and \(\eta\in\mathcal{Y}\). Similarly, one can show that \(z\mapsto\mathcal{T}_{\lambda}z\) is also differentiable at every point in the neighborhood \(\mathcal{N}\) of \((\lambda_{0},z_{\bar{\lambda}_{0}})\) and that its derivative has the same form as in (26). Assume that (22) holds for \(i=j\) with \(1\leq j\leq k-1\). 
Then, by the inductive assumption and the first hypothesis of Theorem 2, for \(\eta\in\mathcal{Y}\) with \(\|\eta\|_{\infty}\) sufficiently small, we have that \[\left(\frac{\partial^{j}}{\partial z^{j}}\mathcal{T}_{\lambda_{0} }(z_{\bar{\lambda}_{0}}+\eta)(\eta_{1},\ldots,\eta_{j})\right)(t)-\left(\frac{ \partial^{j}}{\partial z^{j}}\mathcal{T}_{\lambda_{0}}z_{\bar{\lambda}_{0}}( \eta_{1},\ldots,\eta_{j})\right)(t)\] \[=\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\left(\frac{\partial ^{j}}{\partial x^{j}}f_{\lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+z_{\bar{ \lambda}_{0}}(\tau)+\eta(\tau))\right)(\eta_{1}(\tau),\ldots,\eta_{j}(\tau))d\tau\] \[\qquad-\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\left(\frac{ \partial^{j}}{\partial x^{j}}f_{\lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+z_{ \bar{\lambda}_{0}}(\tau))\right)(\eta_{1}(\tau),\ldots,\eta_{j}(\tau))d\tau\] \[=\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\bigg{\{}\int_{0}^{1} \left(\frac{\partial^{j+1}}{\partial x^{j+1}}f_{\lambda_{0}}(\tau,y_{\lambda _{0}}(\tau)+z_{\bar{\lambda}_{0}}(\tau)+\theta\eta(\tau))\right)\eta(\tau)d \theta\bigg{\}}\] \[\qquad(\eta_{1}(\tau),\ldots,\eta_{j}(\tau))d\tau\] \[=\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\left\{\left(\frac{ \partial^{j+1}}{\partial x^{j+1}}f_{\lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+z_{ \bar{\lambda}_{0}}(\tau))\right)\eta(\tau)\right\}(\eta_{1}(\tau),\ldots,\eta_{ j}(\tau))d\tau\] \[\qquad+\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\tilde{\alpha}( \tau)(\eta_{1}(\tau),\ldots,\eta_{j}(\tau))d\tau, \tag{27}\] where \(\eta_{\ell}\in\mathcal{Y}\) for all \(1\leq\ell\leq j\) and \[\tilde{\alpha}(\tau) =\int_{0}^{1}\left(\frac{\partial^{j+1}}{\partial x^{j+1}}f_{ \lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+z_{\bar{\lambda}_{0}}(\tau)+\theta\eta( \tau))\right)\eta(\tau)d\theta\] \[\quad-\bigg{(}\frac{\partial^{j+1}}{\partial x^{j+1}}f_{\lambda_{ 0}}(\tau,y_{\lambda_{0}}(\tau)+z_{\bar{\lambda}_{0}}(\tau))\bigg{)}\eta(\tau).\] According to the first assumption of Theorem 2, \[\|\tilde{\alpha}(\tau)\| \leq\int_{0}^{1}\bigg{\|}\frac{\partial^{j+1}}{\partial x^{j+1}}f _{\lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+z_{\bar{\lambda}_{0}}(\tau)+\theta \eta(\tau))\] \[\quad-\frac{\partial^{j+1}}{\partial x^{j+1}}f_{\lambda_{0}}( \tau,y_{\lambda_{0}}(\tau)+z_{\bar{\lambda}_{0}}(\tau))\bigg{\|}|\eta(\tau)|d\theta\] \[\leq\int_{0}^{1}\bigg{(}\int_{0}^{1}\bigg{\|}\frac{\partial^{j+2} }{\partial x^{j+2}}f_{\lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+z_{\bar{\lambda }_{0}}(\tau)+\nu\theta\eta(\tau))\bigg{\|}|\eta(\tau)|d\nu\bigg{)}|\eta(\tau)|d\theta\] \[\leq C\varepsilon(\tau)|\eta(\tau)|^{2}.\] Then, we have that \[\frac{1}{\|\eta\|_{\infty}\prod_{\ell=1}^{j}\|\eta_{\ell}\|_{\infty}}\sup_{t \in\mathbb{R}}\bigg{|}\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\tilde{\alpha }(\tau)(\eta_{1}(\tau),\ldots,\eta_{j}(\tau))d\tau\bigg{|}\leq CL\|\eta\|_{ \infty}.\] This together with (27) implies that (22) holds at point \((\lambda_{0},z_{\bar{\lambda}_{0}})\) for \(i=j+1\). Similarly, one can show that \(z\mapsto\mathcal{T}_{\lambda}z\) is also differentiable of order \(j+1\) at every point in the neighborhood \(\mathcal{N}\) of \((\lambda_{0},z_{\bar{\lambda}_{0}})\) and that the \((j+1)\)-th derivative has the same form as in (22). Thus, by induction, (22) is proved. Now, we show that \(\frac{\partial^{i}}{\partial z^{i}}\mathcal{T}\) is continuous on \(\mathcal{N}\) for all \(1\leq i\leq k\). Without loss of generality, we only show the continuity at \((\lambda_{0},z_{\bar{\lambda}_{0}})\). 
In fact, fixing any \(1\leq i\leq k\), using the first assumption of Theorem 2, for every \(\eta\in\mathcal{Y},\mu\in\Sigma\) sufficiently small, by (6), (13) and (15) we have \[\bigg{\|}\frac{\partial^{i}}{\partial z^{i}}\mathcal{T}_{\lambda_{0}}(z_{\bar{\lambda}_{0}}+\eta)-\frac{\partial^{i}}{\partial z^{i}}\mathcal{T}_{\lambda_{0}}z_{\bar{\lambda}_{0}}\bigg{\|}=\sup_{\begin{subarray}{c}0\neq\eta_{j}\in\mathcal{Y}\\ 1\leq j\leq i\end{subarray}}\frac{1}{\prod_{j=1}^{i}\|\eta_{j}\|_{\infty}}\cdot\] \[\quad\sup_{t\in\mathbb{R}}\bigg{|}\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\left(\frac{\partial^{i}}{\partial x^{i}}f_{\lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+z_{\bar{\lambda}_{0}}(\tau)+\eta(\tau))\right.\] \[\quad\quad\left.-\frac{\partial^{i}}{\partial x^{i}}f_{\lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+z_{\bar{\lambda}_{0}}(\tau))\right)(\eta_{1}(\tau),\ldots,\eta_{i}(\tau))d\tau\bigg{|}\] \[\leq \sup_{t\in\mathbb{R}}\int_{-\infty}^{\infty}\|\mathcal{G}(t,\tau)\|\bigg{\|}\frac{\partial^{i}}{\partial x^{i}}f_{\lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+z_{\bar{\lambda}_{0}}(\tau)+\eta(\tau))\] \[\quad\quad-\frac{\partial^{i}}{\partial x^{i}}f_{\lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+z_{\bar{\lambda}_{0}}(\tau))\bigg{\|}d\tau\] \[\leq C\sup_{t\in\mathbb{R}}\int_{-\infty}^{\infty}\|\mathcal{G}(t,\tau)\|\varepsilon(\tau)|\eta(\tau)|d\tau\leq CL\|\eta\|_{\infty}\] and \[\left\|\frac{\partial^{i}}{\partial z^{i}}\mathcal{T}_{\lambda_{0}+\mu}z_{\bar{\lambda}_{0}}-\frac{\partial^{i}}{\partial z^{i}}\mathcal{T}_{\lambda_{0}}z_{\bar{\lambda}_{0}}\right\|=\sup_{\begin{subarray}{c}0\neq\eta_{j}\in\mathcal{Y}\\ 1\leq j\leq i\end{subarray}}\frac{1}{\prod_{j=1}^{i}\|\eta_{j}\|_{\infty}}\cdot\] \[\qquad\sup_{t\in\mathbb{R}}\left|\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\left(\frac{\partial^{i}}{\partial x^{i}}f_{\lambda_{0}+\mu}(\tau,y_{\lambda_{0}+\mu}(\tau)+z_{\bar{\lambda}_{0}}(\tau))\right.\right.\] \[\qquad\left.\left.-\frac{\partial^{i}}{\partial x^{i}}f_{\lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+z_{\bar{\lambda}_{0}}(\tau))\right)(\eta_{1}(\tau),\ldots,\eta_{i}(\tau))d\tau\right|\] \[\leq \sup_{t\in\mathbb{R}}\int_{-\infty}^{\infty}\|\mathcal{G}(t,\tau)\|\cdot\] \[\left(\left\|\frac{\partial^{i}}{\partial x^{i}}f_{\lambda_{0}+\mu}(\tau,y_{\lambda_{0}+\mu}(\tau)+z_{\bar{\lambda}_{0}}(\tau))-\frac{\partial^{i}}{\partial x^{i}}f_{\lambda_{0}}(\tau,y_{\lambda_{0}+\mu}(\tau)+z_{\bar{\lambda}_{0}}(\tau))\right\|\right.\] \[\left.+\left\|\frac{\partial^{i}}{\partial x^{i}}f_{\lambda_{0}}(\tau,y_{\lambda_{0}+\mu}(\tau)+z_{\bar{\lambda}_{0}}(\tau))-\frac{\partial^{i}}{\partial x^{i}}f_{\lambda_{0}}(\tau,y_{\lambda_{0}}(\tau)+z_{\bar{\lambda}_{0}}(\tau))\right\|\right)d\tau\] \[\leq C\sup_{t\in\mathbb{R}}\int_{-\infty}^{\infty}\|\mathcal{G}(t,\tau)\|\varepsilon(\tau)(|\mu|+M_{\lambda_{0}}|\mu|)d\tau\] \[\leq CL(M_{\lambda_{0}}+1)|\mu|.\] Hence, it follows that \(\frac{\partial^{i}}{\partial z^{i}}\mathcal{T}\) is continuous at \((\lambda_{0},z_{\bar{\lambda}_{0}})\) and it is proved that the map \(z\mapsto\mathcal{T}_{\lambda}z\) is \(C^{k}\) on \(\mathcal{N}\). This completes the proof of the lemma. **Lemma 4**.: _Suppose we are in the hypotheses of Theorem 2. Then, \(\Sigma\ni\lambda\mapsto\mathcal{T}_{\lambda}z\) is \(C^{k}\) on an open set \(\mathcal{W}\) containing the set \(\mathcal{S}\), where \(\mathcal{S}\) is defined in Lemma 2._ In order to prove Lemma 4, we need the following auxiliary result. 
**Lemma 5**.: _Fix \(\lambda_{*}\in\Sigma\) and \(2\leq i\leq k+1\) and suppose that conditions (13) and (15) are satisfied. Then, there exists \(N>0\) such that for every \(\mu\in\Sigma\) with sufficiently small \(|\mu|\) and for all \(\tau\in\mathbb{R}\),_ \[\bigg{\|}\bigg{\{}\frac{d^{i}}{d\lambda^{i}}f_{\lambda}(\tau,y_{\lambda}(\tau)+z(\tau))-\frac{d^{i}}{d\lambda^{i}}f_{\lambda}(\tau,y_{\lambda}(\tau))\bigg{\}}\bigg{|}_{\lambda_{*}+\mu}\bigg{\|}\leq C\varepsilon(\tau)NM_{*}^{i}\|z\|_{\infty}, \tag{28}\] _where \(z\in\mathcal{Y}\) and \(M_{*}>1\) is a constant such that_ \[\max_{1\leq j\leq k+1}\sup_{t\in\mathbb{R}}\left\|\frac{\partial^{j}}{\partial\lambda^{j}}y_{\lambda_{*}+\mu}(t)\right\|\leq M_{*} \tag{29}\] _for every \(\mu\in\Sigma\) with sufficiently small \(|\mu|\) whose existence is given by (15)._ Proof.: By the chain rule, differentiating \(f_{\lambda}(\tau,y_{\lambda}(\tau)+z(\tau))\) with respect to \(\lambda\) recursively we get that \(\frac{d^{i}}{d\lambda^{i}}f_{\lambda}(\tau,y_{\lambda}(\tau)+z(\tau))\) is a sum of terms of the form \[\frac{\partial^{m}}{\partial\lambda^{m-\ell}\partial x^{\ell}}f_{\lambda}(\tau,y_{\lambda}(\tau)+z(\tau))\bigg{(}\frac{\partial^{j_{1}}}{\partial\lambda^{j_{1}}}y_{\lambda}(\tau),\ldots,\frac{\partial^{j_{\ell}}}{\partial\lambda^{j_{\ell}}}y_{\lambda}(\tau)\bigg{)},\] where \(1\leq m\leq i\) and \(j_{1}+\cdots+j_{\ell}=i-m+\ell\). Similarly, \(\frac{d^{i}}{d\lambda^{i}}f_{\lambda}(\tau,y_{\lambda}(\tau))\) is also a sum of terms of the form \[\frac{\partial^{m}}{\partial\lambda^{m-\ell}\partial x^{\ell}}f_{\lambda}(\tau,y_{\lambda}(\tau))\bigg{(}\frac{\partial^{j_{1}}}{\partial\lambda^{j_{1}}}y_{\lambda}(\tau),\ldots,\frac{\partial^{j_{\ell}}}{\partial\lambda^{j_{\ell}}}y_{\lambda}(\tau)\bigg{)},\] with \(1\leq m\leq i\) and \(j_{1}+\cdots+j_{\ell}=i-m+\ell\). By (13) and (15), we derive that for every \(\mu\in\Sigma\) with sufficiently small \(|\mu|\), \[\bigg{\|}\bigg{\{}\frac{\partial^{m}}{\partial\lambda^{m-\ell}\partial x^{\ell}}f_{\lambda}(\tau,y_{\lambda}(\tau)+z(\tau))\bigg{(}\frac{\partial^{j_{1}}}{\partial\lambda^{j_{1}}}y_{\lambda}(\tau),\ldots,\frac{\partial^{j_{\ell}}}{\partial\lambda^{j_{\ell}}}y_{\lambda}(\tau)\bigg{)}\] \[\quad-\frac{\partial^{m}}{\partial\lambda^{m-\ell}\partial x^{\ell}}f_{\lambda}(\tau,y_{\lambda}(\tau))\bigg{(}\frac{\partial^{j_{1}}}{\partial\lambda^{j_{1}}}y_{\lambda}(\tau),\ldots,\frac{\partial^{j_{\ell}}}{\partial\lambda^{j_{\ell}}}y_{\lambda}(\tau)\bigg{)}\bigg{\}}\bigg{|}_{\lambda_{*}+\mu}\bigg{\|}\] \[\leq\sup_{0\leq\theta\leq 1}\bigg{\|}\frac{\partial^{m+1}}{\partial\lambda^{m-\ell}\partial x^{\ell+1}}f_{\lambda_{*}+\mu}(\tau,y_{\lambda_{*}+\mu}(\tau)+\theta z(\tau))\bigg{\|}\cdot\|z\|_{\infty}\cdot\] \[\qquad\bigg{\|}\frac{\partial^{j_{1}}}{\partial\lambda^{j_{1}}}y_{\lambda_{*}+\mu}(\tau)\bigg{\|}\cdots\bigg{\|}\frac{\partial^{j_{\ell}}}{\partial\lambda^{j_{\ell}}}y_{\lambda_{*}+\mu}(\tau)\bigg{\|}\] \[\leq C\varepsilon(\tau)M_{*}^{i}\|z\|_{\infty},\] where \(M_{*}\) is a constant satisfying (29). Finally, since \(\frac{d^{i}}{d\lambda^{i}}f_{\lambda}(\tau,y_{\lambda}(\tau)+z(\tau))\) and \(\frac{d^{i}}{d\lambda^{i}}f_{\lambda}(\tau,y_{\lambda}(\tau))\) have a finite number of terms (depending only on \(i\)), there exists a sufficiently large integer \(N=N(i)>0\) such that (28) holds. The proof is completed. We now continue with the proof of Lemma 4. Proof of Lemma 4.: The proof is similar to that of Lemma 3. 
Fixing a point \((\lambda_{0},z_{\bar{\lambda}_{0}})\in\mathcal{S}\) arbitrarily, we use induction to prove that there exists an open neighborhood \(\mathcal{V}\) of \((\lambda_{0},z_{\bar{\lambda}_{0}})\) on which the \(i\)-th derivative of \(\lambda\mapsto\mathcal{T}_{\lambda}z\) is given by \[\bigg{(}\bigg{(}\frac{\partial^{i}}{\partial\lambda^{i}}\mathcal{ T}_{\lambda}z\bigg{)}\left(\mu_{1},\ldots,\mu_{i}\right)\bigg{)}\left(t \right)=\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\bigg{(}A(\tau)\frac{\partial ^{i}}{\partial\lambda^{i}}y_{\lambda}(\tau)\] \[\quad+\frac{d^{i}}{d\lambda^{i}}f_{\lambda}(\tau,y_{\lambda}( \tau)+z(\tau))-\frac{\partial^{i}}{\partial\lambda^{i}}y_{\lambda}^{\prime}( \tau)\bigg{)}(\mu_{1},\ldots,\mu_{i})d\tau \tag{30}\] for \(1\leq i\leq k\), where \(\mu_{j}\in\Sigma\) for \(1\leq j\leq i\) and \(\frac{d^{i}}{d\lambda^{i}}\) denotes \(i\)-th derivative with respect to \(\lambda\). We first show that (30) holds at \((\lambda_{0},z_{\bar{\lambda}_{0}})\). In order to simplify notation, we introduce \[\mathcal{H}(t,\lambda) :=A(t)y_{\lambda}(t)+f_{\lambda}(t,y_{\lambda}(t))-y_{\lambda}^{ \prime}(t),\] \[\mathcal{M}(t,\lambda,\bar{\lambda}) :=A(t)y_{\lambda}(t)+f_{\lambda}(t,y_{\lambda}(t)+z_{\bar{\lambda} }(t))-y_{\lambda}^{\prime}(t).\] Using the definition of \(\mathcal{T}_{\lambda}\) and the assumptions of Theorem 2, we get that for \(\mu\in\Sigma\) with sufficiently small \(|\mu|\), \[\left(\mathcal{T}_{\lambda_{0}+\mu}z_{\bar{\lambda}_{0}}\right)(t)- \left(\mathcal{T}_{\lambda_{0}}z_{\bar{\lambda}_{0}}\right)(t) \tag{31}\] \[=\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\Big{\{}\mathcal{M}( \tau,\lambda_{0}+\mu,\bar{\lambda}_{0})-\mathcal{M}(\tau,\lambda_{0},\bar{ \lambda}_{0})\Big{\}}d\tau\] \[=\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\bigg{\{}\int_{0}^{1} \frac{\partial}{\partial\lambda}\mathcal{M}(\tau,\lambda_{0}+\theta\mu,\bar{ \lambda}_{0})\mu d\theta\bigg{\}}d\tau\] \[=\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\frac{\partial}{ \partial\lambda}\mathcal{M}(\tau,\lambda_{0},\bar{\lambda}_{0})\mu d\tau+\int _{-\infty}^{\infty}\mathcal{G}(t,\tau)\beta(\tau)d\tau,\] where \[\beta(\tau)=\int_{0}^{1}\frac{\partial}{\partial\lambda}\mathcal{M}(\tau, \lambda_{0}+\theta\mu,\bar{\lambda}_{0})\mu d\theta-\frac{\partial}{\partial \lambda}\mathcal{M}(\tau,\lambda_{0},\bar{\lambda}_{0})\mu.\] In order to prove that (30) holds at \((\lambda_{0},z_{\bar{\lambda}_{0}})\) for \(i=1\), it suffices to show \[\frac{1}{|\mu|}\sup_{t\in\mathbb{R}}\left|\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\beta(\tau)d\tau\right|\xrightarrow{|\mu|\to 0}0. 
\tag{32}\] By the assumptions of Theorem 2 (in particular, assumptions (14) and (15)) and Lemma 5, we have that \[|\beta(\tau)|\leq\int_{0}^{1}\bigg{\|}\frac{\partial}{\partial\lambda}\mathcal{M}(\tau,\lambda_{0}+\theta\mu,\bar{\lambda}_{0})-\frac{\partial}{\partial\lambda}\mathcal{M}(\tau,\lambda_{0},\bar{\lambda}_{0})\bigg{\|}|\mu|d\theta\] \[\leq\int_{0}^{1}\bigg{(}\int_{0}^{1}\bigg{\{}\bigg{\|}\frac{\partial^{2}}{\partial\lambda^{2}}\mathcal{H}(\tau,\lambda_{0}+\nu\theta\mu)\bigg{\|}\] \[\quad+\bigg{\|}\bigg{(}\frac{d^{2}}{d\lambda^{2}}f_{\lambda}(\tau,y_{\lambda}(\tau)+z_{\bar{\lambda}_{0}}(\tau))-\frac{d^{2}}{d\lambda^{2}}f_{\lambda}(\tau,y_{\lambda}(\tau))\bigg{)}\bigg{|}_{\lambda_{0}+\nu\theta\mu}\bigg{\|}\bigg{\}}|\mu|d\nu\bigg{)}|\mu|d\theta\] \[\leq C\varepsilon(\tau)(1+NM_{*}^{2}\|z_{\bar{\lambda}_{0}}\|_{\infty})|\mu|^{2}.\] Therefore, by (6) we have that \[\frac{1}{|\mu|}\left|\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\beta(\tau)d\tau\right| \leq\int_{-\infty}^{\infty}\|\mathcal{G}(t,\tau)\|C\varepsilon(\tau)(1+NM_{*}^{2}\|z_{\bar{\lambda}_{0}}\|_{\infty})|\mu|d\tau\] \[\leq CL(1+NM_{*}^{2}\|z_{\bar{\lambda}_{0}}\|_{\infty})|\mu|,\] for every \(t\in\mathbb{R}\) and thus (32) holds. Consequently, since all the integrals in (31) converge, it follows that \(\lambda\mapsto\mathcal{T}_{\lambda}z_{\bar{\lambda}_{0}}\) is differentiable at \(\lambda_{0}\) and that its derivative is given by \[\left(\left(\frac{\partial}{\partial\lambda}\mathcal{T}_{\lambda_{0}}z_{\bar{\lambda}_{0}}\right)\mu\right)(t)=\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\frac{\partial}{\partial\lambda}\mathcal{M}(\tau,\lambda_{0},\bar{\lambda}_{0})\mu d\tau \tag{33}\] for \(t\in\mathbb{R}\) and \(\mu\in\Sigma\). Similarly, we can show that \(\lambda\mapsto\mathcal{T}_{\lambda}z\) is also differentiable at every point in the neighborhood \(\mathcal{V}\) of \((\lambda_{0},z_{\bar{\lambda}_{0}})\) and the derivative has the same form as in (33). Assume that (30) holds at the point \((\lambda_{0},z_{\bar{\lambda}_{0}})\) for \(i=j\) with \(1\leq j\leq k-1\). 
Then, by the inductive assumption and the hypotheses of Theorem 2, for \(\mu\in\Sigma\) with sufficiently small \(|\mu|\) we get that \[\left(\frac{\partial^{j}}{\partial\lambda^{j}}\mathcal{T}_{\lambda_{0}+\mu}z_{\bar{\lambda}_{0}}(\mu_{1},\ldots,\mu_{j})\right)(t)-\left(\frac{\partial^{j}}{\partial\lambda^{j}}\mathcal{T}_{\lambda_{0}}z_{\bar{\lambda}_{0}}(\mu_{1},\ldots,\mu_{j})\right)(t)\] \[=\int_{-\infty}^{\infty}\!\!\mathcal{G}(t,\tau)\bigg{\{}\frac{\partial^{j}}{\partial\lambda^{j}}\mathcal{M}(\tau,\lambda_{0}+\mu,\bar{\lambda}_{0})-\frac{\partial^{j}}{\partial\lambda^{j}}\mathcal{M}(\tau,\lambda_{0},\bar{\lambda}_{0})\bigg{\}}(\mu_{1},\ldots,\mu_{j})d\tau\] \[=\int_{-\infty}^{\infty}\!\mathcal{G}(t,\tau)\bigg{\{}\int_{0}^{1}\bigg{(}\frac{\partial^{j+1}}{\partial\lambda^{j+1}}\mathcal{M}(\tau,\lambda_{0}+\theta\mu,\bar{\lambda}_{0})\bigg{)}\mu d\theta\bigg{\}}(\mu_{1},\ldots,\mu_{j})d\tau\] \[=\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\bigg{\{}\bigg{(}\frac{\partial^{j+1}}{\partial\lambda^{j+1}}\mathcal{M}(\tau,\lambda_{0},\bar{\lambda}_{0})\bigg{)}\mu\bigg{\}}(\mu_{1},\ldots,\mu_{j})d\tau\] \[\quad+\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\tilde{\beta}(\tau)(\mu_{1},\ldots,\mu_{j})d\tau, \tag{34}\] where \[\tilde{\beta}(\tau)=\int_{0}^{1}\bigg{(}\frac{\partial^{j+1}}{\partial\lambda^{j+1}}\mathcal{M}(\tau,\lambda_{0}+\theta\mu,\bar{\lambda}_{0})\bigg{)}\mu d\theta-\bigg{(}\frac{\partial^{j+1}}{\partial\lambda^{j+1}}\mathcal{M}(\tau,\lambda_{0},\bar{\lambda}_{0})\bigg{)}\mu.\] It follows from (14), (15) and Lemma 5 that \[|\tilde{\beta}(\tau)|\leq\int_{0}^{1}\bigg{(}\int_{0}^{1}\bigg{\|}\frac{\partial^{j+2}}{\partial\lambda^{j+2}}\mathcal{H}(\tau,\lambda_{0}+\nu\theta\mu)\bigg{\|}\] \[+\bigg{\|}\bigg{\{}\frac{d^{j+2}}{d\lambda^{j+2}}f_{\lambda}(\tau,y_{\lambda}(\tau)+z_{\bar{\lambda}_{0}}(\tau))-\frac{d^{j+2}}{d\lambda^{j+2}}f_{\lambda}(\tau,y_{\lambda}(\tau))\bigg{\}}\bigg{|}_{\lambda_{0}+\nu\theta\mu}\bigg{\|}|\mu|d\nu\bigg{)}|\mu|d\theta\] \[\leq C\varepsilon(\tau)(1+NM_{*}^{j+2}\|z_{\bar{\lambda}_{0}}\|_{\infty})|\mu|^{2}.\] Then, we have that \[\frac{1}{|\mu|\prod_{\ell=1}^{j}|\mu_{\ell}|}\sup_{t\in\mathbb{R}}\bigg{|}\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\tilde{\beta}(\tau)(\mu_{1},\ldots,\mu_{j})d\tau\bigg{|}\leq CL(1+NM_{*}^{j+2}\|z_{\bar{\lambda}_{0}}\|_{\infty})|\mu|.\] This together with (34) implies that (30) holds at the point \((\lambda_{0},z_{\bar{\lambda}_{0}})\) for \(i=j+1\). Similarly, by induction one can show that \(\lambda\mapsto\mathcal{T}_{\lambda}z\) is also differentiable of order \(j+1\) at every point in the neighborhood \(\mathcal{V}\) of \((\lambda_{0},z_{\bar{\lambda}_{0}})\) and that the \((j+1)\)-th derivative has the same form as in (30). Hence, by induction, (30) is proved. Now, we show that \(\frac{\partial^{i}}{\partial\lambda^{i}}\mathcal{T}\) is continuous on \(\mathcal{V}\) for all \(1\leq i\leq k\). Without loss of generality, we only show the continuity at \((\lambda_{0},z_{\bar{\lambda}_{0}})\). 
In fact, fixing any \(1\leq i\leq k\), for every \(\mu\in\Sigma\) sufficiently small, using (6), the assumptions of Theorem 2 and Lemma 5, we have \[\left\|\frac{\partial^{i}}{\partial\lambda^{i}}\mathcal{T}_{\lambda_{0}+\mu}z_{\bar{\lambda}_{0}}-\frac{\partial^{i}}{\partial\lambda^{i}}\mathcal{T}_{\lambda_{0}}z_{\bar{\lambda}_{0}}\right\|=\sup_{\begin{subarray}{c}0\neq\mu_{j}\in\Sigma\\ 1\leq j\leq i\end{subarray}}\frac{1}{|\mu_{1}|\cdots|\mu_{i}|}\sup_{t\in\mathbb{R}}\bigg{|}\int_{-\infty}^{\infty}\mathcal{G}(t,\tau)\] \[\quad\bigg{\{}\frac{\partial^{i}}{\partial\lambda^{i}}\mathcal{M}(\tau,\lambda_{0}+\mu,\bar{\lambda}_{0})-\frac{\partial^{i}}{\partial\lambda^{i}}\mathcal{M}(\tau,\lambda_{0},\bar{\lambda}_{0})\bigg{\}}(\mu_{1},\ldots,\mu_{i})d\tau\bigg{|}\] \[\leq\sup_{t\in\mathbb{R}}\int_{-\infty}^{\infty}\|\mathcal{G}(t,\tau)\|\bigg{\{}\int_{0}^{1}\bigg{(}\bigg{\|}\frac{\partial^{i+1}}{\partial\lambda^{i+1}}\mathcal{H}(\tau,\lambda_{0}+\theta\mu)\bigg{\|}\] \[\quad+\bigg{\|}\bigg{\{}\frac{d^{i+1}}{d\lambda^{i+1}}f_{\lambda}(\tau,y_{\lambda}(\tau)+z_{\bar{\lambda}_{0}}(\tau))-\frac{d^{i+1}}{d\lambda^{i+1}}f_{\lambda}(\tau,y_{\lambda}(\tau))\bigg{\}}\bigg{|}_{\lambda_{0}+\theta\mu}\bigg{\|}\bigg{)}|\mu|d\theta\bigg{\}}d\tau\] \[\leq CL(1+NM_{*}^{i+1}\|z_{\bar{\lambda}_{0}}\|_{\infty})|\mu|. \tag{35}\] Similarly, we can prove that for \(\eta\in\mathcal{Y}\) sufficiently small \[\left\|\frac{\partial^{i}}{\partial\lambda^{i}}\mathcal{T}_{\lambda_{0}}(z_{\bar{\lambda}_{0}}+\eta)-\frac{\partial^{i}}{\partial\lambda^{i}}\mathcal{T}_{\lambda_{0}}z_{\bar{\lambda}_{0}}\right\|\leq CLNM_{*}^{i+1}\|\eta\|_{\infty}.\] Hence, it is shown that \(\frac{\partial^{i}}{\partial\lambda^{i}}\mathcal{T}\) is continuous at \((\lambda_{0},z_{\bar{\lambda}_{0}})\). Therefore, it is proved that the map \(\lambda\mapsto\mathcal{T}_{\lambda}z\) is \(C^{k}\) on \(\mathcal{V}\). The proof is completed. Finally, we observe that Theorem 2 follows readily from Lemmas 2, 3 and 4.

### Parameterized Hyers-Ulam stability

In this subsection, we discuss an important consequence of Theorem 2. We first recall the notion of exponential dichotomy. **Definition 1**.: _The equation (2) is said to admit an exponential dichotomy if there exist a family \(P(t)\), \(t\in\mathbb{R}\), of projections on \(X\) and constants \(D,\rho>0\) such that:_

* _for_ \(t,s\in\mathbb{R}\)_,_ \(T(t,s)P(s)=P(t)T(t,s)\)_;_
* _for_ \(t,s\in\mathbb{R}\)_,_ \(\|\mathcal{G}(t,s)\|\leq De^{-\rho|t-s|}\)_, where_ \[\mathcal{G}(t,s)=\begin{cases}T(t,s)P(s)&t\geq s;\\ -T(t,s)(\mathrm{Id}-P(s))&t<s.\end{cases}\]

The following result is a straightforward consequence of Theorem 1. **Corollary 1**.: _Suppose that (2) admits an exponential dichotomy and that (3) holds with \(c(t)=c\) for all \(t\in\mathbb{R}\), where \(c\geq 0\). Finally, assume that_ \[\tilde{q}:=\frac{2cD}{\rho}<1.\] _Given \(\varepsilon>0\), for each \(\lambda\in\Sigma\) let \(y_{\lambda}\colon\mathbb{R}\to X\) be a differentiable map satisfying_ \[|y_{\lambda}^{\prime}(t)-A(t)y_{\lambda}(t)-f_{\lambda}(t,y_{\lambda}(t))|\leq\varepsilon,\quad\text{for all }t\in\mathbb{R}. 
\tag{36}\] _Then, for each \(\lambda\in\Sigma\), there exists a unique solution \(x_{\lambda}\colon\mathbb{R}\to X\) of (4) such that_ \[\sup_{t\in\mathbb{R}}|x_{\lambda}(t)-y_{\lambda}(t)|\leq\frac{\tilde{L}}{1-\tilde{q}},\] _where \(\tilde{L}=\frac{2\varepsilon D}{\rho}\)._ Proof.: The desired conclusion follows readily from Theorem 1 applied to the case when \(\varepsilon(t)=\varepsilon\) and \(c(t)=c\) for all \(t\in\mathbb{R}\). The following result is a direct consequence of Theorem 2. **Corollary 2**.: _Suppose that the assumptions of Corollary 1 hold. Take \(\varepsilon>0\) and for each \(\lambda\in\Sigma\) let \(y_{\lambda}\colon\mathbb{R}\to X\) be a differentiable map satisfying (36). In addition, suppose there exists \(C>0\) such that the following conditions hold:_

* _the map_ \((\lambda,x)\mapsto f_{\lambda}(t,x)\) _is_ \(C^{k+1}\) _and for all_ \(2\leq i\leq k+1\) _and_ \(0\leq j\leq i\)_,_ \[\sup_{\lambda\in\Sigma}\sup_{x\in X}\left\|\frac{\partial^{i}}{\partial\lambda^{i-j}\partial x^{j}}f_{\lambda}(t,x)\right\|\leq C;\]
* _the maps_ \(\lambda\mapsto y_{\lambda}(t)\) _and_ \(\lambda\mapsto y_{\lambda}^{\prime}(t)\) _are both_ \(C^{k+1}\) _and satisfy_ \[\left|\frac{\partial^{i}}{\partial\lambda^{i}}y_{\lambda}^{\prime}(t)-A(t)\frac{\partial^{i}}{\partial\lambda^{i}}y_{\lambda}(t)-\frac{d^{i}}{d\lambda^{i}}f_{\lambda}(t,y_{\lambda}(t))\right|\leq C\] _for all_ \(1\leq i\leq k+1\)_, all_ \(t\in\mathbb{R}\) _and all_ \(\lambda\in\Sigma\)_. Furthermore, for every_ \(\lambda_{0}\in\Sigma\) _there exist_ \(M_{\lambda_{0}}>1\) _and a neighborhood_ \(\Sigma_{0}\) _of_ \(\lambda_{0}\) _satisfying (15)._

_Then, \(\lambda\mapsto x_{\lambda}(t)\) is \(C^{k}\) for each \(t\in\mathbb{R}\)._ Let us discuss a simple concrete example to which our results are applicable. **Example 1**.: _Take \(X=\mathbb{R}\) and \(A(t)=-1\) for \(t\in\mathbb{R}\). Then, (2) admits an exponential dichotomy with \(P(t)=\mathrm{Id}\), \(D=\rho=1\). Consider \(\Sigma=(0,1)\subset\mathbb{R}\) and set_ \[f_{\lambda}(t,x)=\lambda\sin t,\quad\text{for $\lambda\in\Sigma$ and $t,x\in\mathbb{R}$.}\] _It is straightforward to verify that \(f_{\lambda}(t,x)\) satisfies all the assumptions of Corollary 2 for every \(k\in\mathbb{N}\). Fix \(\varepsilon>0\) and for each \(\lambda\in\Sigma\) set_ \[y_{\lambda}(t)=\frac{\lambda}{2}(\sin t-\cos t)+\frac{\lambda\varepsilon}{2}(\sin t+\cos t).\] _Note that_ \[y_{\lambda}^{\prime}(t)+y_{\lambda}(t)-f_{\lambda}(t,y_{\lambda}(t))=\lambda\varepsilon\cos t. \tag{37}\] _In particular,_ \[\sup_{t\in\mathbb{R}}|y_{\lambda}^{\prime}(t)+y_{\lambda}(t)-f_{\lambda}(t,y_{\lambda}(t))|\leq\varepsilon,\] _that is, (36) is satisfied. Moreover,_ \[\left|\frac{\partial}{\partial\lambda}y_{\lambda}^{\prime}(t)-A(t)\frac{\partial}{\partial\lambda}y_{\lambda}(t)-\frac{d}{d\lambda}f_{\lambda}(t,y_{\lambda}(t))\right|=\left|\frac{\partial}{\partial\lambda}y_{\lambda}^{\prime}(t)+\frac{\partial}{\partial\lambda}y_{\lambda}(t)-\sin t\right|=\varepsilon|\cos t|\leq\varepsilon\] _and clearly the derivatives of order \(\geq 2\) with respect to \(\lambda\) of each term on the left-hand side of (37) are all zero. Furthermore, one can also easily show that_ \[\left|\frac{\partial}{\partial\lambda}y_{\lambda}(t)\right|\leq 1+\varepsilon\quad\text{ and }\quad\frac{\partial^{i}}{\partial\lambda^{i}}y_{\lambda}(t)=0\] _for every \(i\geq 2\). Therefore, all the hypotheses of Corollary 2 in this example are satisfied. 
This allows us to conclude that if \(x_{\lambda}\) is the map associated to \(y_{\lambda}\) by Corollary 1 then the map \(\lambda\to x_{\lambda}(t)\) is \(C^{k}\) for each \(t\in\mathbb{R}\)._

### Beyond exponential dichotomy

The purpose of this subsection is to illustrate that Theorem 2 is applicable to situations when (2) does not admit an exponential dichotomy. We consider a particular example. **Example 2**.: _Suppose that \(\rho\colon\mathbb{R}\to(0,\infty)\) is a continuously differentiable function such that \(\rho(0)=1\) and that \(\rho\) is increasing. Finally, we assume that \(\lim_{t\to\infty}\rho(t)=\infty\) and \(\lim_{t\to-\infty}\rho(t)=0\). We take \(X=\mathbb{R}\), \(\Sigma=(-1,1)\subset\mathbb{R}\) and set_ \[A(t)=-\frac{\rho^{\prime}(t)}{\rho(t)},\quad t\in\mathbb{R}.\] _In particular,_ \[T(t,s)=\frac{\rho(s)}{\rho(t)},\quad t,s\in\mathbb{R}.\] _Moreover, (2) has no nonzero bounded solution. Taking \(P(t)=\mathrm{Id}\) for \(t\in\mathbb{R}\), we have that_ \[\|\mathcal{G}(t,s)\|\leq 1,\quad t,s\in\mathbb{R}.\] _In addition, we choose \(\tilde{c}\colon\mathbb{R}\to\mathbb{R}\) of the form \(\tilde{c}(t)=e^{-a|t|}\), \(t\in\mathbb{R}\) where \(a>2\). Furthermore, we now choose a continuous function \(\varepsilon\colon\mathbb{R}\to\mathbb{R}\) with the property that there exists \(C>1\) such that \(\tilde{c}(t)\leq\varepsilon(t)\leq C\tilde{c}(t)\) for \(t\in\mathbb{R}\). Moreover, for \(\lambda\in\Sigma\), we set_ \[f_{\lambda}(t,x)=\lambda\tilde{c}(t)\rho(-|t|)x,\quad(t,x)\in\mathbb{R}^{2}.\] _Hence, (3) is satisfied with \(c(t)=\tilde{c}(t)\rho(-|t|)\) and_ \[q=\int_{-\infty}^{\infty}c(s)\|\mathcal{G}(t,s)\|\,ds\leq\int_{-\infty}^{\infty}\tilde{c}(s)\rho(-|s|)\,ds\leq\int_{-\infty}^{\infty}\tilde{c}(s)\,ds=\frac{2}{a}<1.\] _In particular, (5) holds. Similarly, we have that (6) is valid. For \(\lambda\in\Sigma\), we define \(y_{\lambda}\colon\mathbb{R}\to\mathbb{R}\) by_ \[y_{\lambda}(t)=\frac{\lambda}{2\rho(t)},\quad t\in\mathbb{R}.\] _Then,_ \[|y_{\lambda}^{\prime}(t)-A(t)y_{\lambda}(t)-f_{\lambda}(t,y_{\lambda}(t))|\leq c(t)|y_{\lambda}(t)|\leq\tilde{c}(t)\leq\varepsilon(t),\quad t\in\mathbb{R}.\] _Thus, (7) is satisfied. It is straightforward to verify that (13), (14) and (15) are satisfied for each \(k\in\mathbb{N}\). Hence, Theorem 2 implies that for each \(t\in\mathbb{R}\), the map \(\lambda\mapsto x_{\lambda}(t)\) is \(C^{k}\) for each \(k\in\mathbb{N}\). Finally, we observe that by choosing an appropriate function \(\rho\), for instance,_ \[\rho(t)=\begin{cases}1+t&t\geq 0;\\ \frac{1}{1+|t|}&t<0,\end{cases}\] _equation (2) does not admit an exponential dichotomy. Indeed, with this choice \(T(t,s)=\frac{1+s}{1+t}\) for \(t\geq s\geq 0\), which decays only at a polynomial rate as \(t-s\to\infty\)._

## 4. The discrete time case

In this section we present a discrete time version of Theorem 2. Let \(X=(X,|\cdot|)\), \(\Sigma=(\Sigma,|\cdot|)\) and \((\mathcal{B}(X),\|\cdot\|)\) be as in Section 2. Given a sequence \((A_{n})_{n\in\mathbb{Z}}\) of invertible operators in \(\mathcal{B}(X)\), let us consider the associated linear difference equation \[x_{n+1}=A_{n}x_{n},\qquad n\in\mathbb{Z}. 
\tag{38}\] For \(m,n\in\mathbb{Z}\), set \[\mathcal{A}(m,n)=\begin{cases}A_{m-1}\cdots A_{n}&\text{for $m>n$};\\ \operatorname{Id}&\text{for $m=n$};\\ A_{m}^{-1}\cdots A_{n-1}^{-1}&\text{for $m<n$}.\end{cases} \tag{39}\] Let \((P_{n})_{n\in\mathbb{Z}}\) be a sequence in \(\mathcal{B}(X)\) and define \[\hat{\mathcal{G}}(m,n)=\begin{cases}\mathcal{A}(m,n)P_{n}&\text{for $m\geq n$};\\ -\mathcal{A}(m,n)(\operatorname{Id}-P_{n})&\text{for $m<n$}.\end{cases} \tag{40}\] Moreover, for each \(\lambda\in\Sigma\) suppose we have a measurable map \(f_{\lambda}\colon\mathbb{Z}\times X\to X\) with the property that there exists a sequence \((c_{n})_{n}\) in \([0,\infty)\) such that \[|f_{\lambda}(n,x)-f_{\lambda}(n,y)|\leq c_{n}|x-y|, \tag{41}\] for every \(n\in\mathbb{Z}\), \(\lambda\in\Sigma\) and \(x,y\in X\). Finally, associated to these choices, for each \(\lambda\in\Sigma\) we consider the semilinear difference equation given by \[x_{n+1}=A_{n}x_{n}+f_{\lambda}(n,x_{n}),\quad n\in\mathbb{Z}. \tag{42}\] The following result is established in [8, Theorem 3]. **Theorem 3**.: _Assume that (38) admits no non-trivial bounded solution and_ \[\hat{q}:=\sup_{m\in\mathbb{Z}}\bigg{(}\sum_{n\in\mathbb{Z}}c_{n-1}\|\hat{ \mathcal{G}}(m,n)\|\bigg{)}<1. \tag{43}\] _Moreover, let \((\varepsilon_{n})_{n\in\mathbb{Z}}\) be a sequence in \((0,+\infty)\) such that_ \[\hat{L}:=\sup_{m\in\mathbb{Z}}\bigg{(}\sum_{n\in\mathbb{Z}}\varepsilon_{n-1} \|\hat{\mathcal{G}}(m,n)\|\bigg{)}<\infty. \tag{44}\] _Then, for each \(\lambda\in\Sigma\) and for each sequence \(\textbf{y}^{\lambda}:=(y^{\lambda}_{n})_{n\in\mathbb{Z}}\subset X\) satisfying_ \[|y^{\lambda}_{n+1}-A_{n}y^{\lambda}_{n}-f_{\lambda}(n,y^{\lambda}_{n})|\leq \varepsilon_{n}\quad\text{ for all $n\in\mathbb{Z}$}, \tag{45}\] _there exists a unique sequence \(\textbf{x}^{\lambda}:=(x^{\lambda}_{n})_{n\in\mathbb{Z}}\subset X\) satisfying (42) such that_ \[|x^{\lambda}_{n}-y^{\lambda}_{n}|\leq\frac{\hat{L}}{1-\hat{q}},\quad\text{ for every $n\in\mathbb{Z}$}. \tag{46}\] Given a sequence \((y^{\lambda}_{n})_{n\in\mathbb{Z}}\subset X\) satisfying (45), our objective now is to formulate sufficient conditions under which the map \(\lambda\mapsto x^{\lambda}_{n}\) is of class \(C^{k}\) with \(k\in\mathbb{N}\) for every \(n\in\mathbb{Z}\). As in the previous section, by \(\frac{\partial^{i}}{\partial x^{i}}f_{\lambda}(n,x)\) and \(\frac{\partial^{i}}{\partial\lambda^{i}}f_{\lambda}(n,x)\) we denote the \(i\)-th partial derivative of \(f_{\lambda}(n,x)\) with respect to \(x\) and \(\lambda\), respectively. **Theorem 4**.: _Let \(\mathbf{y}^{\lambda}:=(y^{\lambda}_{n})_{n\in\mathbb{Z}}\) be a sequence in \(X\) satisfying (45) and suppose that the assumptions of Theorem 3 hold. Let \(\mathbf{x}^{\lambda}:=(x^{\lambda}_{n})_{n\in\mathbb{Z}}\) be the sequence associated to \(\mathbf{y}^{\lambda}\) by Theorem 3 and take \(k\in\mathbb{N}\). 
Moreover, suppose there exists \(C>0\) such that the following conditions hold:_

* _for every_ \(n\in\mathbb{Z}\) _the map_ \((\lambda,x)\mapsto f_{\lambda}(n,x)\) _is_ \(C^{k+1}\) _and for all_ \(2\leq i\leq k+1\) _and_ \(0\leq j\leq i\)_,_ \[\sup_{\lambda\in\Sigma}\sup_{x\in X}\left\|\frac{\partial^{i}}{\partial\lambda^{i-j}\partial x^{j}}f_{\lambda}(n,x)\right\|\leq C\varepsilon_{n}; \tag{47}\]
* _the map_ \(\lambda\mapsto y^{\lambda}_{n}\) _is_ \(C^{k+1}\) _for every_ \(n\in\mathbb{Z}\) _and_ \[\left|\frac{\partial^{i}}{\partial\lambda^{i}}y^{\lambda}_{n+1}-A_{n}\frac{\partial^{i}}{\partial\lambda^{i}}y^{\lambda}_{n}-\frac{d^{i}}{d\lambda^{i}}f_{\lambda}(n,y^{\lambda}_{n})\right|\leq C\varepsilon_{n} \tag{48}\] _for every_ \(1\leq i\leq k+1\)_, all_ \(n\in\mathbb{Z}\) _and all_ \(\lambda\in\Sigma\)_. Furthermore, for every_ \(\lambda_{0}\in\Sigma\) _there exist_ \(\hat{M}_{\lambda_{0}}>1\) _and a neighborhood_ \(\Sigma_{0}\) _of_ \(\lambda_{0}\) _such that_ \[\max_{1\leq i\leq k+1}\sup_{n\in\mathbb{Z}}\left\|\frac{\partial^{i}}{\partial\lambda^{i}}y^{\lambda}_{n}\right\|\leq\hat{M}_{\lambda_{0}}\quad\text{for each}\quad\lambda\in\Sigma_{0}. \tag{49}\]

_Then, the map \(\lambda\mapsto x^{\lambda}_{n}\) is of class \(C^{k}\) for every \(n\in\mathbb{Z}\)._ The proof of this theorem is very similar to the proof of its continuous-time version, Theorem 2. For this reason, at some steps of the proof, we will only provide a sketch of the argument. Let us start with the proof of Theorem 4 by recalling some constructions from the proof of [8, Theorem 3]. Let \[\hat{\mathcal{Y}}:=\bigg{\{}\mathbf{z}=(z_{n})_{n\in\mathbb{Z}}\subset X:\ \|\mathbf{z}\|_{\infty}:=\sup_{n\in\mathbb{Z}}|z_{n}|<+\infty\bigg{\}}.\] Then, \((\hat{\mathcal{Y}},\|\cdot\|_{\infty})\) is a Banach space. For \(\mathbf{z}\in\hat{\mathcal{Y}}\), we set \[(\hat{\mathcal{T}}_{\lambda}\mathbf{z})_{n}=\sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)(A_{k-1}y^{\lambda}_{k-1}+f_{\lambda}(k-1,z_{k-1}+y^{\lambda}_{k-1})-y^{\lambda}_{k}),\] for \(n\in\mathbb{Z}\). It is observed in [8, Theorem 3] that \(\hat{\mathcal{T}}_{\lambda}\) is well-defined and, moreover, that it is a contraction on \[\hat{\mathcal{D}}:=\left\{\mathbf{z}\in\hat{\mathcal{Y}}:\|\mathbf{z}\|_{\infty}\leq\frac{\hat{L}}{1-\hat{q}}\right\},\] where \(\hat{L}\) and \(\hat{q}\) are given in (44) and (43) respectively. In particular, it has a unique fixed point \(\mathbf{z}^{\lambda}\in\hat{\mathcal{D}}\) such that \(\hat{\mathcal{T}}_{\lambda}\mathbf{z}^{\lambda}=\mathbf{z}^{\lambda}\). Finally, it is observed that \(\mathbf{x}^{\lambda}=\mathbf{y}^{\lambda}+\mathbf{z}^{\lambda}\) is a solution of (42) satisfying (46). In particular, in order to prove Theorem 4, it remains to show that \(\lambda\to\mathbf{z}^{\lambda}\) is of class \(C^{k}\). With this purpose in mind we present some auxiliary results. **Lemma 6**.: _Let \(k\in\mathbb{N}\). Suppose that the map \(\Sigma\times\hat{\mathcal{Y}}\ni(\lambda,\mathbf{z})\mapsto\hat{\mathcal{T}}_{\lambda}\mathbf{z}\) is \(C^{k}\) on an open set \(\hat{\mathcal{W}}\) containing the set \(\hat{\mathcal{S}}:=\{(\lambda,\mathbf{z}^{\bar{\lambda}}):\lambda,\bar{\lambda}\in\Sigma\}\). Then, \(\Sigma\ni\lambda\mapsto\mathbf{z}^{\lambda}\) is also \(C^{k}\)._ Proof.: This result follows from Lemma 2 and Remark 2. **Lemma 7**.: _Suppose that the assumptions of Theorem 4 hold. 
Then, \(\hat{\mathcal{Y}}\ni\boldsymbol{z}\mapsto\hat{\mathcal{T}}_{\lambda}\boldsymbol{z}\) is \(C^{k}\) on an open set \(\hat{\mathcal{W}}\) containing \(\hat{\mathcal{S}}\), where \(\hat{\mathcal{S}}\) is as in Lemma 6._ Proof.: The proof of this result is very similar to the proof of Lemma 3 and, for this reason, we only present a sketch of the argument. It suffices to prove that, for any \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\in\hat{\mathcal{S}}\), there exists an open set \(\hat{\mathcal{N}}\ni(\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\) on which the map \(\mathbf{z}\mapsto\hat{\mathcal{T}}_{\lambda}\mathbf{z}\) is \(C^{k}\). We use induction to prove that on \(\hat{\mathcal{N}}\), the \(i\)-th derivative of \(\mathbf{z}\mapsto\hat{\mathcal{T}}_{\lambda}\mathbf{z}\) is given by \[\left(\left(\frac{\partial^{i}}{\partial\mathbf{z}^{i}}\hat{\mathcal{T}}_{\lambda}\mathbf{z}\right)(\boldsymbol{\eta}^{1},\ldots,\boldsymbol{\eta}^{i})\right)_{n}= \sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)\left(\frac{\partial^{i}}{\partial x^{i}}f_{\lambda}(k-1,y_{k-1}^{\lambda}+z_{k-1})\right)(\eta_{k-1}^{1},\ldots,\eta_{k-1}^{i}) \tag{50}\] for \(1\leq i\leq k\), where \(\boldsymbol{\eta}^{j}=(\eta_{n}^{j})_{n\in\mathbb{Z}}\in\hat{\mathcal{Y}}\) for \(1\leq j\leq i\). By arguing as in (21), it follows from (41) that \[\sup_{\lambda\in\Sigma,x\in X}\left\|\frac{\partial}{\partial x}f_{\lambda}(n,x)\right\|\leq c_{n}\text{ for every }n\in\mathbb{Z}. \tag{51}\] We first show that (50) holds at the point \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\). Using the definition of \(\hat{\mathcal{T}}_{\lambda}\) and the first assumption of Theorem 4, we can show that for \(\boldsymbol{\eta}\in\hat{\mathcal{Y}}\) with sufficiently small \(\|\boldsymbol{\eta}\|_{\infty}\), \[\left(\hat{\mathcal{T}}_{\lambda_{0}}(\mathbf{z}^{\bar{\lambda}_{0}}+\boldsymbol{\eta})\right)_{n}-\left(\hat{\mathcal{T}}_{\lambda_{0}}\mathbf{z}^{\bar{\lambda}_{0}}\right)_{n}\\ =\sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)\left(\frac{\partial}{\partial x}f_{\lambda_{0}}(k-1,y_{k-1}^{\lambda_{0}}+z_{k-1}^{\bar{\lambda}_{0}})\right)\eta_{k-1}+\sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)\alpha_{k-1}, \tag{52}\] where \[\alpha_{k-1} =\int_{0}^{1}\frac{\partial}{\partial x}f_{\lambda_{0}}(k-1,y_{k-1}^{\lambda_{0}}+z_{k-1}^{\bar{\lambda}_{0}}+\theta\eta_{k-1})\eta_{k-1}d\theta-\frac{\partial}{\partial x}f_{\lambda_{0}}(k-1,y_{k-1}^{\lambda_{0}}+z_{k-1}^{\bar{\lambda}_{0}})\eta_{k-1}.\] Using again the first assumption of Theorem 4, \(|\alpha_{k-1}|\) can be estimated as \[|\alpha_{k-1}|\leq C\varepsilon_{k-1}|\eta_{k-1}|^{2}. 
\tag{53}\] By (44) and (53), we have that \[\frac{1}{\|\boldsymbol{\eta}\|_{\infty}}\left|\sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)\alpha_{k-1}\right|\leq C\sum_{k\in\mathbb{Z}}\|\hat{\mathcal{G}}(n,k)\|\varepsilon_{k-1}|\eta_{k-1}|\leq C\hat{L}\|\boldsymbol{\eta}\|_{\infty},\] for every \(n\in\mathbb{Z}\), and consequently \[\frac{1}{\|\boldsymbol{\eta}\|_{\infty}}\sup_{n\in\mathbb{Z}}\left|\sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)\alpha_{k-1}\right|\xrightarrow[]{\|\boldsymbol{\eta}\|_{\infty}\to 0}0.\] This fact combined with (52) implies that \(\mathbf{z}\mapsto\hat{\mathcal{T}}_{\lambda_{0}}\mathbf{z}\) is differentiable at \(\mathbf{z}^{\bar{\lambda}_{0}}\) and that its derivative is given by \[\left(\left(\frac{\partial}{\partial\mathbf{z}}\hat{\mathcal{T}}_{\lambda_{0}}\mathbf{z}^{\bar{\lambda}_{0}}\right)\boldsymbol{\eta}\right)_{n}=\sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)\left(\frac{\partial}{\partial x}f_{\lambda_{0}}(k-1,y_{k-1}^{\lambda_{0}}+z_{k-1}^{\bar{\lambda}_{0}})\right)\eta_{k-1} \tag{54}\] for \(n\in\mathbb{Z}\) and \(\mathbf{\eta}=(\eta_{n})_{n\in\mathbb{Z}}\in\hat{\mathcal{Y}}\). Similarly, we can show that \(\mathbf{z}\mapsto\hat{\mathcal{T}}_{\lambda}\mathbf{z}\) is also differentiable at every point in the neighborhood \(\hat{\mathcal{N}}\) of \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\) and the derivative has the same form as in (54). Assume that (50) holds for \(i=j\) with \(1\leq j\leq k-1\). Then, using the inductive assumption and the first hypothesis of Theorem 4, for \(\mathbf{\eta}\in\hat{\mathcal{Y}}\) with \(\|\mathbf{\eta}\|_{\infty}\) sufficiently small, we can show that \[\left(\frac{\partial^{j}}{\partial\mathbf{z}^{j}}\hat{\mathcal{T}}_{\lambda_{0}}(\mathbf{z}^{\bar{\lambda}_{0}}+\mathbf{\eta})(\mathbf{\eta}^{1},\ldots,\mathbf{\eta}^{j})\right)_{n}-\left(\frac{\partial^{j}}{\partial\mathbf{z}^{j}}\hat{\mathcal{T}}_{\lambda_{0}}\mathbf{z}^{\bar{\lambda}_{0}}(\mathbf{\eta}^{1},\ldots,\mathbf{\eta}^{j})\right)_{n}\] \[=\sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)\left(\frac{\partial^{j+1}}{\partial x^{j+1}}f_{\lambda_{0}}(k-1,y_{k-1}^{\lambda_{0}}+z_{k-1}^{\bar{\lambda}_{0}})\eta_{k-1}\right)(\eta_{k-1}^{1},\ldots,\eta_{k-1}^{j})\] \[\qquad+\sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)\left(\tilde{\alpha}_{k-1}\right)(\eta_{k-1}^{1},\ldots,\eta_{k-1}^{j}), \tag{55}\] where \(\mathbf{\eta}^{\ell}=(\eta_{n}^{\ell})_{n\in\mathbb{Z}}\in\hat{\mathcal{Y}}\) for all \(1\leq\ell\leq j\) and \[\tilde{\alpha}_{k-1}= \int_{0}^{1}\left(\frac{\partial^{j+1}}{\partial x^{j+1}}f_{\lambda_{0}}(k-1,y_{k-1}^{\lambda_{0}}+z_{k-1}^{\bar{\lambda}_{0}}+\theta\eta_{k-1})\eta_{k-1}\right)d\theta-\frac{\partial^{j+1}}{\partial x^{j+1}}f_{\lambda_{0}}(k-1,y_{k-1}^{\lambda_{0}}+z_{k-1}^{\bar{\lambda}_{0}})\eta_{k-1}.\] Using again the first hypothesis of Theorem 4 we can show that \[\|\tilde{\alpha}_{k-1}\|\leq C\varepsilon_{k-1}|\eta_{k-1}|^{2}.\] Then, we have that \[\frac{1}{\|\mathbf{\eta}\|_{\infty}\|\mathbf{\eta}^{1}\|_{\infty}\cdots\|\mathbf{\eta}^{j}\|_{\infty}}\sup_{n\in\mathbb{Z}}\left|\sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)\left(\tilde{\alpha}_{k-1}\right)(\eta_{k-1}^{1},\ldots,\eta_{k-1}^{j})\right|\leq C\hat{L}\|\mathbf{\eta}\|_{\infty}.\] This together with (55) implies that (50) holds at the point \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\) for \(i=j+1\). 
Similarly, we can show that \(\mathbf{z}\mapsto\hat{\mathcal{T}}_{\lambda}\mathbf{z}\) is also differentiable of order \(j+1\) at every point in the neighborhood \(\hat{\mathcal{N}}\) of \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\) and that the \((j+1)\)-th derivative has the same form as in (50). Thus, by induction, (50) is proved. Now, we show that \(\frac{\partial^{i}}{\partial\mathbf{z}^{i}}\hat{\mathcal{T}}\) is continuous on \(\hat{\mathcal{N}}\) for all \(1\leq i\leq k\). Without loss of generality, we only show the continuity at \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\). In fact, fixing any \(1\leq i\leq k\), using the assumptions of Theorem 4 and (44) we can show that there exists \(\hat{M}_{\lambda_{0}}>0\) such that for every \(\mathbf{\eta}\in\hat{\mathcal{Y}}\) and \(\mu\in\Sigma\) sufficiently small, \[\left\|\frac{\partial^{i}}{\partial\mathbf{z}^{i}}\hat{\mathcal{T}}_{\lambda_{0}+\mu}(\mathbf{z}^{\bar{\lambda}_{0}}+\boldsymbol{\eta})-\frac{\partial^{i}}{\partial\mathbf{z}^{i}}\hat{\mathcal{T}}_{\lambda_{0}}\mathbf{z}^{\bar{\lambda}_{0}}\right\|\leq C(\hat{M}_{\lambda_{0}}+1)\hat{L}(|\mu|+\|\mathbf{\eta}\|_{\infty}).\] Hence, it follows that \(\frac{\partial^{i}}{\partial\mathbf{z}^{i}}\hat{\mathcal{T}}\) is continuous at \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\) and it is proved that the map \(\mathbf{z}\mapsto\hat{\mathcal{T}}_{\lambda}\mathbf{z}\) is \(C^{k}\) on \(\hat{\mathcal{N}}\). This completes the proof of the lemma. **Lemma 8**.: _Suppose we are in the hypotheses of Theorem 4. Then, \(\Sigma\ni\lambda\mapsto\hat{\mathcal{T}}_{\lambda}\mathbf{z}\) is \(C^{k}\) on an open set \(\hat{\mathcal{W}}\) containing \(\hat{\mathcal{S}}\), where \(\hat{\mathcal{S}}\) is as in Lemma 6._ In order to prove this lemma we need the following auxiliary result whose proof is completely analogous to the proof of Lemma 5 and therefore we omit it. **Lemma 9**.: _Fix \(\lambda_{*}\in\Sigma\) and \(2\leq i\leq k+1\) and suppose that conditions (47) and (49) are satisfied. Then, there exists \(N>0\) such that for every \(\mu\in\Sigma\) with sufficiently small \(|\mu|\) and for all \(n\in\mathbb{Z}\),_ \[\bigg{\|}\bigg{\{}\frac{d^{i}}{d\lambda^{i}}f_{\lambda}(n,y_{n}^{\lambda}+z_{n})-\frac{d^{i}}{d\lambda^{i}}f_{\lambda}(n,y_{n}^{\lambda})\bigg{\}}\bigg{|}_{\lambda_{*}+\mu}\bigg{\|}\leq C\varepsilon_{n}N\hat{M}_{*}^{i}\|\boldsymbol{z}\|_{\infty}, \tag{56}\] _where \(\boldsymbol{z}=(z_{n})_{n\in\mathbb{Z}}\in\hat{\mathcal{Y}}\) and \(\hat{M}_{*}>1\) is a constant such that_ \[\max_{1\leq j\leq k+1}\sup_{n\in\mathbb{Z}}\bigg{\|}\frac{\partial^{j}}{\partial\lambda^{j}}y_{n}^{\lambda_{*}+\mu}\bigg{\|}\leq\hat{M}_{*} \tag{57}\] _for every \(\mu\in\Sigma\) with sufficiently small \(|\mu|\) whose existence is given by (49)._ Proof of Lemma 8.: The proof of this result is similar to that of Lemma 4 and again we provide only a sketch of the argument. 
For any \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\in\hat{\mathcal{S}}\), we use induction to prove that there exists an open neighborhood \(\hat{\mathcal{V}}\) of \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\) on which the \(i\)-th derivative of \(\lambda\mapsto\hat{\mathcal{T}}_{\lambda}\mathbf{z}\) is given by \[\bigg{(}\bigg{(}\frac{\partial^{i}}{\partial\lambda^{i}}\hat{ \mathcal{T}}_{\lambda}\mathbf{z}\bigg{)}\left(\mu_{1},\ldots,\mu_{i}\right) \bigg{)}_{n}=\sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)\Bigg{(}A_{k-1}\frac{ \partial^{i}}{\partial\lambda^{i}}y_{k-1}^{\lambda}\] \[\quad+\frac{d^{i}}{d\lambda^{i}}f_{\lambda}(k-1,y_{k-1}^{\lambda }+z_{k-1})-\frac{\partial^{i}}{\partial\lambda^{i}}y_{k}^{\lambda}\Bigg{)}( \mu_{1},\ldots,\mu_{i}) \tag{58}\] for \(1\leq i\leq k\), where \(\mu_{j}\in\Sigma\) for \(1\leq j\leq i\). We first show that (58) holds at \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\). In order to simplify notations, define \[\mathcal{H}(n,\lambda) :=A_{n}y_{n}^{\lambda}+f_{\lambda}(n,y_{n}^{\lambda})-y_{n+1}^{ \lambda},\] \[\mathcal{M}(n,\lambda,\bar{\lambda}) :=A_{n}y_{n}^{\lambda}+f_{\lambda}(n,y_{n}^{\lambda}+z_{n}^{\bar{ \lambda}})-y_{n+1}^{\lambda}.\] Using the definition of \(\hat{\mathcal{T}}_{\lambda}\) and the first assumption of Theorem 4, one can show that for \(\mu\in\Sigma\) with sufficiently small \(|\mu|\), \[\left(\hat{\mathcal{T}}_{\lambda_{0}+\mu}\mathbf{z}^{\bar{\lambda }_{0}}\right)_{n}-\left(\hat{\mathcal{T}}_{\lambda_{0}}\mathbf{z}^{\bar{ \lambda}_{0}}\right)_{n} =\sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)\frac{\partial}{\partial \lambda}\mathcal{M}(k-1,\lambda_{0},\bar{\lambda}_{0})\mu\] \[\quad+\sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)\beta_{k-1}, \tag{59}\] where \[\beta_{k-1}=\int_{0}^{1}\frac{\partial}{\partial\lambda}\mathcal{M}(k-1, \lambda_{0}+\theta\mu,\bar{\lambda}_{0})\mu d\theta-\frac{\partial}{\partial \lambda}\mathcal{M}(k-1,\lambda_{0},\bar{\lambda}_{0})\mu.\] By the assumptions of Theorem 4 and Lemma 9 we can get that \[|\beta_{k-1}|\leq C\varepsilon_{k-1}(1+N\hat{M}_{\lambda_{0}}^{2}\|\mathbf{z}^ {\bar{\lambda}_{0}}\|_{\infty})|\mu|^{2}. \tag{60}\] Moreover, by (44) and (60), we have \[\frac{1}{|\mu|}\left|\sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)\beta _{k-1}\right| \leq C(1+N\hat{M}_{\lambda_{0}}^{2}\|\mathbf{z}^{\bar{\lambda}_{0} }\|_{\infty})\sum_{k\in\mathbb{Z}}\|\hat{\mathcal{G}}(n,k)\|\varepsilon_{k-1}|\mu|\] \[\leq C(1+N\hat{M}_{\lambda_{0}}^{2}\|\mathbf{z}^{\bar{\lambda}_ {0}}\|_{\infty})\hat{L}|\mu|,\] for every \(n\in\mathbb{Z}\), which implies that \[\frac{1}{|\mu|}\sup_{n\in\mathbb{Z}}\left|\sum_{k\in\mathbb{Z}}\hat{\mathcal{ G}}(n,k)\beta_{k-1}\right|\xrightarrow{|\mu|\to 0}0.\] This fact combined with (59) implies that \(\lambda\mapsto\hat{\mathcal{T}}_{\lambda}\mathbf{z}^{\bar{\lambda}}\) is differentiable at \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\) and that its derivative is given by \[\left(\left(\frac{\partial}{\partial\lambda}\hat{\mathcal{T}}_{\lambda_{0}} \mathbf{z}^{\bar{\lambda}_{0}}\right)\mu\right)_{n}=\sum_{k\in\mathbb{Z}}\hat {\mathcal{G}}(n,k)\frac{\partial}{\partial\lambda}\mathcal{M}(k-1,\lambda_{0},\bar{\lambda}_{0})\mu \tag{61}\] for \(n\in\mathbb{Z}\) and \(\mu\in\Sigma\). Similarly, we can show that \(\lambda\mapsto\hat{\mathcal{T}}_{\lambda}\mathbf{z}\) is also differentiable at every point in the neighborhood \(\hat{\mathcal{V}}\) of \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\) and the derivative has the same form as in (61). 
Assume that (58) holds for \(i=j\) with \(1\leq j\leq k-1\). Then, using the inductive assumption and the hypotheses of Theorem 4, we can show that for \(\mu\in\Sigma\) with sufficiently small \(|\mu|\), \[\left(\frac{\partial^{j}}{\partial\lambda^{j}}\hat{\mathcal{T}}_{\lambda_{0}+\mu}\mathbf{z}^{\bar{\lambda}_{0}}(\mu_{1},\ldots,\mu_{j})\right)_{n}-\left(\frac{\partial^{j}}{\partial\lambda^{j}}\hat{\mathcal{T}}_{\lambda_{0}}\mathbf{z}^{\bar{\lambda}_{0}}(\mu_{1},\ldots,\mu_{j})\right)_{n}\] \[=\sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)\bigg{\{}\left(\frac{\partial^{j+1}}{\partial\lambda^{j+1}}\mathcal{M}(k-1,\lambda_{0},\bar{\lambda}_{0})\right)\!\!\mu\bigg{\}}(\mu_{1},\ldots,\mu_{j})\] \[\qquad+\sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)\left(\tilde{\beta}_{k-1}\right)(\mu_{1},\ldots,\mu_{j}), \tag{62}\] where \[\tilde{\beta}_{k-1} =\int_{0}^{1}\bigg{(}\frac{\partial^{j+1}}{\partial\lambda^{j+1}}\mathcal{M}(k-1,\lambda_{0}+\theta\mu,\bar{\lambda}_{0})\bigg{)}\mu d\theta-\bigg{(}\frac{\partial^{j+1}}{\partial\lambda^{j+1}}\mathcal{M}(k-1,\lambda_{0},\bar{\lambda}_{0})\bigg{)}\mu.\] Using the hypotheses of Theorem 4 and Lemma 9 we can show that \[|\tilde{\beta}_{k-1}|\leq C\varepsilon_{k-1}(1+N\hat{M}_{\lambda_{0}}^{j+2}\|\mathbf{z}^{\bar{\lambda}_{0}}\|_{\infty})|\mu|^{2}.\] Then, we have that \[\frac{1}{|\mu||\mu_{1}|\cdots|\mu_{j}|}\sup_{n\in\mathbb{Z}}\left|\sum_{k\in\mathbb{Z}}\hat{\mathcal{G}}(n,k)\left(\tilde{\beta}_{k-1}\right)(\mu_{1},\ldots,\mu_{j})\right|\leq C\hat{L}(1+N\hat{M}_{\lambda_{0}}^{j+2}\|\mathbf{z}^{\bar{\lambda}_{0}}\|_{\infty})|\mu|.\] This together with (62) implies that (58) holds at the point \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\) for \(i=j+1\). Similarly, by induction one can show that \(\lambda\mapsto\hat{\mathcal{T}}_{\lambda}\mathbf{z}\) is also differentiable of order \(j+1\) at every point in the neighborhood \(\hat{\mathcal{V}}\) of \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\) and that the \((j+1)\)-th derivative has the same form as in (58). Hence, by induction, (58) is proved. Now, we show that \(\frac{\partial^{i}}{\partial\lambda^{i}}\hat{\mathcal{T}}\) is continuous on \(\hat{\mathcal{V}}\) for all \(1\leq i\leq k\). Without loss of generality, we only show the continuity at \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\). In fact, fixing any \(1\leq i\leq k\), for every \(\boldsymbol{\eta}=(\eta_{n})_{n\in\mathbb{Z}}\in\hat{\mathcal{Y}}\) and \(\mu\in\Sigma\) sufficiently small, using (44) and the assumptions of Theorem 4, we can show that \[\left\|\frac{\partial^{i}}{\partial\lambda^{i}}\hat{\mathcal{T}}_{\lambda_{0}+\mu}\mathbf{z}^{\bar{\lambda}_{0}}-\frac{\partial^{i}}{\partial\lambda^{i}}\hat{\mathcal{T}}_{\lambda_{0}}\mathbf{z}^{\bar{\lambda}_{0}}\right\|\leq C\hat{L}(1+N\hat{M}_{\lambda_{0}}^{i+1}\|\mathbf{z}^{\bar{\lambda}_{0}}\|_{\infty})|\mu|\] and \[\left\|\frac{\partial^{i}}{\partial\lambda^{i}}\hat{\mathcal{T}}_{\lambda_{0}}(\mathbf{z}^{\bar{\lambda}_{0}}+\boldsymbol{\eta})-\frac{\partial^{i}}{\partial\lambda^{i}}\hat{\mathcal{T}}_{\lambda_{0}}\mathbf{z}^{\bar{\lambda}_{0}}\right\|\leq C\hat{L}N\hat{M}_{\lambda_{0}}^{i+1}\|\boldsymbol{\eta}\|_{\infty}.\] Hence, it is shown that \(\frac{\partial^{i}}{\partial\lambda^{i}}\hat{\mathcal{T}}\) is continuous at \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\). 
Therefore, it is proved that the map \(\lambda\mapsto\hat{\mathcal{T}}_{\lambda}\mathbf{z}\) is \(C^{i}\) at \((\lambda_{0},\mathbf{z}^{\bar{\lambda}_{0}})\in\hat{\mathcal{W}}\) for all \(1\leq i\leq k\). The proof is completed. Finally, Theorem 4 follows readily from Lemmas 6, 7 and 8 and its proof is then completed. **Remark 3**.: _As in Subsection 3.2, one can interpret Theorem 4 in the case when (38) admits an exponential dichotomy. In particular, it is straightforward to formulate a discrete time version of Corollary 2. Moreover, one can easily build discrete time versions of the examples given in Sections 3.2 and 3.3. Therefore, we refrain from doing so._ ## Acknowledgements L. Backes was partially supported by a CNPq-Brazil PQ fellowship under Grant No. 307633/2021-7. D.D. was supported in part by Croatian Science Foundation under the Project IP-2019-04-1239 and by the University of Rijeka under the Projects uniri-prirod-18-9 and uniri-prirod-19-16. X. Tang was supported by NSFC #12001537, the Start-up Funding of Chongqing Normal University 20XLB033, and the Research Project of Chongqing Education Commission CXQT21014.
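To complement the discrete-time reading of Theorem 4 in Remark 3, the following is a minimal numerical sketch (ours, not from the paper) of the fixed-point scheme behind the operator \(\hat{\mathcal{T}}_{\lambda}\). It takes a toy scalar difference equation \(x_{n+1}=ax_{n}+\lambda\varepsilon_{n}\cos(x_{n})\) with \(|a|<1\), so that the linear part trivially admits an exponential dichotomy with Green's function \(\hat{\mathcal{G}}(n,k)=a^{n-k}\) for \(k\leq n\) and \(0\) otherwise, truncates the index set to a finite window, iterates the operator to its fixed point, and probes the smoothness of \(\lambda\mapsto\mathbf{z}^{\lambda}\) by finite differences. All concrete choices (the nonlinearity, the weights \(\varepsilon_{n}\), the window size) are arbitrary illustrations.

```python
import numpy as np

# Toy sketch (not from the paper): bounded solutions of
#   x_{n+1} = a x_n + f_lambda(n, x_n),  f_lambda(n, x) = lam * eps_n * cos(x),
# as fixed points of the Lyapunov-Perron-type operator
#   (T_lam z)_n = sum_{k <= n} a^{n-k} f_lambda(k-1, z_{k-1}),
# with Green's function G(n, k) = a^{n-k} for k <= n and 0 otherwise.

N = 200
a = 0.5                       # |a| < 1: exponential dichotomy of x_{n+1} = a x_n
idx = np.arange(-N, N + 1)
eps = 2.0 ** (-np.abs(idx))   # summable weights eps_n, mimicking the hypotheses

def fixed_point(lam, tol=1e-12, max_iter=10_000):
    """Iterate z <- T_lam z on the truncated window until convergence."""
    z = np.zeros_like(eps)
    for _ in range(max_iter):
        g = lam * eps * np.cos(z)        # f_lambda(n, z_n) at every index n
        z_new = np.empty_like(z)
        acc = 0.0
        for j in range(len(idx)):        # recursion: z_new[j] = a*z_new[j-1] + g[j-1]
            acc = a * acc + (g[j - 1] if j > 0 else 0.0)
            z_new[j] = acc
        if np.max(np.abs(z_new - z)) < tol:
            return z_new
        z = z_new
    return z

# Finite-difference probe of the smooth dependence lam -> z^lam.
h = 1e-5
dz = (fixed_point(0.3 + h) - fixed_point(0.3 - h)) / (2 * h)
print("sup norm of z^0.3:          ", np.max(np.abs(fixed_point(0.3))))
print("sup norm of d z^lam / d lam:", np.max(np.abs(dz)))
```

For \(\lambda=0.3\) the operator is a contraction (its Lipschitz constant is at most \(\lambda\sum_{k\geq 0}a^{k}=0.6<1\)), so the iteration converges, and the finite-difference derivative stays bounded as \(h\) shrinks, consistent with the \(C^{k}\) dependence established above.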
2308.07533
The site frequency spectrum for coalescing Brownian motion
We consider an expanding population on the plane. The genealogy of a sample from the population is modelled by coalescing Brownian motion on the circle. We establish a weak law of large numbers for the site frequency spectrum in this model. A parallel result holds for a localized version where the genealogy is modelled by coalescing Brownian motion on the line.
Yubo Shuai
2023-08-15T02:32:10Z
http://arxiv.org/abs/2308.07533v1
# The site frequency spectrum for coalescing Brownian motion ###### Abstract We consider an expanding population on the plane. The genealogy of a sample from the population is modelled by coalescing Brownian motion on the circle. We establish a weak law of large numbers for the site frequency spectrum in this model. A parallel result holds for a localized version where the genealogy is modelled by coalescing Brownian motion on the line. + Footnote †: _Key words and phrases_. Coalescing Brownian motion, Site frequency spectrum ## 1 Introduction In population genetics, one is often interested in the mutations along the DNA sequences in a sample from a population. The site frequency spectrum is commonly used to summarize the mutational data. In a sample of size \(n\), the site frequency spectrum consists of \(M_{m}\) for \(m=1,2,\ldots,n-1\) where \(M_{m}\) is the number of mutations inherited by exactly \(m\) individuals in the sample. There is an extensive literature on the exact and asymptotic behavior of the site frequency spectrum for various population models. For models with fixed population size, Fu and Li [9] computed the expected site frequency spectrum for a population whose genealogy is given by Kingman's coalescent. This computation was generalized to \(\Lambda\)-coalescents by Birkner, Blath and Eldon [2]. In the special case of the Bolthausen-Sznitman coalescent, Diehl and Kersting obtained laws of large numbers for the site frequency spectrum [5]. The computation for the expected site frequency spectrum was further generalized to \(\Xi\)-coalescents by Spence, Kamm and Song [15] and Blath et al. [3]. For models with exponentially growing population size, the asymptotics of the expected site frequency spectrum were obtained by Durrett [7] and the exact formula when the whole population is sampled was obtained by Gunnarsson, Leder and Foo [11]. Schweinsberg and Shuai established the asymptotic normality for the site frequency spectrum in [14] based on the methods developed in [12]. The results mentioned above assume a well-mixed population and no spatial constraint is imposed. De and Durrett [4] considered the stepping stone model and observed that there are more high frequency mutations due to the spatial structure. In this paper, we consider a population whose genealogy is modelled by coalescing Brownian motion with Poissonian mutations along the branches and establish a weak law of large numbers for the site frequency spectrum. ### An expanding population model We think of a population on \(\mathbb{R}^{2}\). The ancestor is located at the origin and the \(k\)th generation lives on the circle with radius \(k\) centered at the origin. To take the spatial structure into account, the offspring of an individual in the \(k\)th generation are located in a neighborhood of the parent, in the sense that the angular parts of the offspring and the parent are close. More formally, we fix some non-decreasing function \(f:\mathbb{R}\to\mathbb{R}\) satisfying the periodic condition \(f(x+1)=f(x)+1\) and let \(\Theta_{1},\Theta_{2},\dots\) be a sequence of i.i.d. random variables, uniformly distributed on \([0,1)\). For an individual \(x\) in the \((k+1)\)st generation with angular part \(\theta_{x}\), its parent in the \(k\)th generation has angular part \(f_{\Theta_{k}}(\theta_{x})=f(\theta_{x}-\Theta_{k})+\Theta_{k}\). 
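Before passing to the scaling limit, it may help to see this parent map in action. The following is a small illustrative sketch (ours, not from the paper): it fixes one admissible choice of \(f\) (any non-decreasing \(f\) with \(f(x+1)=f(x)+1\) works; the \(\delta\)-perturbed identity below is an arbitrary example) and traces the angular parts of a few sampled individuals back through the generations by iterating the maps \(f_{\Theta_{k}}\), which is exactly the composition \(\Phi_{j,k}\) introduced next.

```python
import numpy as np

# Illustrative sketch (not from the paper): simulate ancestral angular lineages
# by iterating the parent map f_Theta(x) = f(x - Theta) + Theta. The choice
# f(x) = x + delta * sin(2*pi*x) / (2*pi) with delta = 0.5 < 1 is an arbitrary
# example of a non-decreasing function satisfying f(x + 1) = f(x) + 1.

rng = np.random.default_rng(0)
delta = 0.5

def f(x):
    return x + delta * np.sin(2 * np.pi * x) / (2 * np.pi)

def f_theta(theta, x):
    return f(x - theta) + theta

k = 1000                                   # number of generations to trace back
thetas = rng.random(k)                     # Theta_1, ..., Theta_k ~ Uniform[0, 1)
samples = np.linspace(0.0, 1.0, 8, endpoint=False)  # angular parts of the sample

lineage = samples.copy()
for g in range(k, 0, -1):                  # apply f_{Theta_g}: one generation back
    lineage = f_theta(thetas[g - 1], lineage)
print("ancestral angular parts (mod 1):", np.round(lineage % 1, 4))
```

Since each \(f_{\Theta}\) is non-decreasing, the traced lineages preserve their cyclic order, which is the discrete analogue of the non-crossing property of the coalescing Brownian limit discussed below.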
We are interested in the angular parts of the ancestral lineages of \(x\), namely, \(\Phi_{j,k}(\theta_{x})=f_{\Theta_{j+1}}\circ\dots\circ f_{\Theta_{k}}(\theta_ {x})\) for \(j\leq k\). It is worth noting that, if \(f\) is strictly increasing, then the angular parts of the ancestral lineages remain distinct. However, under proper scaling, these angular parts converge to the coalescing Brownian motion on the circle. Indeed, Norris and Turner [13] embedded \(\Phi_{j,k}\) in a continuous time setting. They showed that for any sequence of functions \(f_{n}\) that converges to the identity function appropriately, and for \((\theta_{x_{n}})_{n=1}^{\infty}\subset[0,1)\), the ancestral lineages of \((\theta_{x_{n}})_{n=1}^{\infty}\) converge weakly to the coalescing Brownian motion on the circle. Also, a localized version is shown to converge weakly to the coalescing Brownian motion on the line. We will therefore study the site frequency spectrum for population whose genealogy is given by coalescing Brownian motion on the circle or coalescing Brownian motion on the line. ### Coalescing Markov process and coalescing Brownian motion Coalescing Brownian motion was first studied by Arratia in his Ph.D. thesis [1] at the University of Wisconsin, Madison. More generally, coalescing Markov processes were introduced by Donnelly et al. in the study of the stepping stone model in [6]. Heuristically, we have \(n\) distinct particles located at \((e_{1},\dots,e_{n})\) in some state space \(E\). These particles evolve as independent Markov processes before two or more particles collide. When a collision occurs, those particles coalesce into one particle and then undergo the same dynamics. We will formalize this for coalescing Brownian motion on the real line or the circle in the next paragraph. For the general setting, we refer the reader to [6]. Throughout the rest of the paper, for each \(x\in\mathbb{R}\), we write \(W^{x}\) for the \(1\)-dimensional Brownian motion starting from \(x\). We also assume that \(\{W^{x}\}_{x\in\mathbb{R}}\) are independent. Let the state space \(E\) be either \(\mathbb{R}\) or \(\mathbb{S}^{1}=\{z\in\mathbb{C}:|z|=1\}\), which we call the linear and circular cases respectively. Using notation in Section 2 of [6], we take the initial positions \[\boldsymbol{e_{n}}=(e_{n,1},\dots,e_{n,n}):=\left\{\begin{array}{ll}(1/n, \dots,n/n),&\mbox{ if }E=\mathbb{R},\\ (\exp(2\pi i/n),\dots,\exp(2n\pi i/n)),&\mbox{ if }E=\mathbb{S}^{1}. \end{array}\right.\] If there is no collision, then the paths of these particles are \[\boldsymbol{Z^{e_{n}}}=(Z^{e_{n,1}},\dots,Z^{e_{n,n}}):=\left\{\begin{array}[ ]{ll}\left(W^{1/n},\dots,W^{n/n}\right),&\mbox{ if }E=\mathbb{R},\\ \left(\exp(2\pi iW^{1/n}),\dots,\exp(2\pi iW^{n/n})\right),&\mbox{ if }E=\mathbb{S}^{1}. \end{array}\right.\] To describe the dynamics with collisions, we introduce the coalescence times \(\tau_{n,k}\) and the partitions \(\Pi_{n,k}\) of \(\{1,\dots,n\}\), where \(i,j\) are in the same block of \(\Pi_{n,k}\) if the \(i\)th particle coalesces with the \(j\)th particle no later than \(\tau_{n,k}\). For each block of \(\Pi_{n,k}\), we use the particle with the smallest index as the representative. Formally, we define \(\tau_{n,k}\) and \(\Pi_{n,k}\) inductively for \(k=0,1,2,\dots,n-1\). We take \(\tau_{n,0}=0\) and \(\Pi_{n,0}=\{\{1\},\dots,\{n\}\}\). 
Given \(\tau_{n,k}\) and \(\Pi_{n,k}\), we define \[\tau_{n,k+1}:=\inf\left\{t>\tau_{n,k}:\exists A,A^{\prime}\in\Pi_{n,k},\ A\neq A^{\prime},\ Z_{t}^{e_{n,\min(A)}}=Z_{t}^{e_{n,\min(A^{\prime})}}\right\}.\] Let \(A,A^{\prime}\in\Pi_{n,k}\) be the blocks coalescing at \(\tau_{n,k+1}\), i.e. \(Z^{e_{n,\min(A)}}_{\tau_{n,k+1}}=Z^{e_{n,\min(A^{\prime})}}_{\tau_{n,k+1}}\). Then the partition \(\Pi_{n,k+1}\) is obtained from \(\Pi_{n,k}\) by merging the blocks \(A\) and \(A^{\prime}\): \[\Pi_{n,k+1}:=\left(\Pi_{n,k}\setminus\{A,A^{\prime}\}\right)\cup\{A\cup A^{\prime}\}.\] The actual position of the \(i\)th particle at time \(t\), denoted by \(\tilde{Z}^{e_{n,i}}_{t}\), is \[\tilde{Z}^{e_{n,i}}_{t}=Z^{e_{n,j}}_{t},\text{ if }\tau_{n,k}\leq t<\tau_{n,k+1},\ i\in A\in\Pi_{n,k},\text{ and }j=\min(A).\] For any \(A\subseteq\{1,2,\ldots,n\}\) with \(|A|\geq 2\), we define the first coalescence time of \(A\) as \[\tau_{n,A}:=\inf\{t\geq 0:\exists i,j\in A,\ i\neq j,\ \tilde{Z}^{e_{n,i}}_{t}=\tilde{Z}^{e_{n,j}}_{t}\}.\] If \(A\subseteq\{1,2,\ldots,n\}\) and \(|A|=1\), we set \(\tau_{n,A}=0\) by convention. **Remark 1.1**.: _Note that for any \(A\subset\{1,2,\ldots,n\}\), the actual positions of the particles in \(A\) evolve as independent Brownian motions before the time \(\tau_{n,A}\). That is, if we set_ \[\tau_{A}:=\inf\{t\geq 0:\exists i,j\in A,\ i\neq j,\ Z^{e_{n,i}}_{t}=Z^{e_{n,j}}_{t}\},\] _then \(\left(\tilde{Z}^{e_{n,i}}_{t},0\leq t\leq\tau_{n,A}\right)_{i\in A}\) has the same distribution as \(\left(Z^{e_{n,i}}_{t},0\leq t\leq\tau_{A}\right)_{i\in A}\), although they may not be equal because of coalescence with particles not in \(A\)._ ### The site frequency spectrum of coalescing Brownian motion Given the coalescing Brownian motion \((\tilde{Z}^{e_{n,i}})_{1\leq i\leq n}\), one can study the corresponding genealogical tree on \(E\times[0,+\infty)\), where the branches of the tree correspond to the trajectories of \((\tilde{Z}^{e_{n,i}})_{1\leq i\leq n}\). See Figure 1 for an example. The number of mutations inherited by \(m\) individuals \(M_{m}\) is directly related to the length of the branches supporting \(m\) leaves in the genealogical tree \(L_{n,m}\) (see Figure 1). Indeed, a mutation along a branch supporting \(m\) leaves will be inherited by \(m\) individuals in the sample. If we assume that mutations occur with rate \(\nu\) along the branches, independently of the Brownian motion, then the conditional distribution of \(M_{m}\) given \(L_{n,m}\) is Poisson with mean \(\nu L_{n,m}\). For this reason, we will focus on \(L_{n,m}\) in this paper. In the linear case, we say that the \(i\)th branch supports \(m\) leaves at time \(t\in[\tau_{n,k},\tau_{n,k+1})\) if \(i\) is a representative of a block of size \(m\). That is, \[\exists A\in\Pi_{n,k}\text{ such that }|A|=m,\min(A)=i. \tag{1}\] Figure 1: The genealogical tree for coalescing Brownian motion on \(E=\mathbb{R}\) and \(n=6\). Brownian trajectories are represented by straight lines. The red branch supports 3 leaves, namely 1, 2, and 3. The length of the portion of the \(i\)th branch that supports \(m\) leaves, denoted by \(L_{n,i,m}\), is the length of the time interval when (1) holds. Note that the only block \(A\) for which (1) could be true is \(\{i,i+1,\ldots,i+m-1\}\), provided \(i+m-1\leq n\). This block is in \(\Pi_{n,k}\) if and only if particle \(i\) coalesces with particle \(i+m-1\), and particles \(i\) and \(i+m-1\) do not coalesce with any other particles outside of the block. 
Writing \(x^{+}=\max(x,0)\), we have \[L_{n,i,m}=\left\{\begin{array}{ll}\left(\tau_{n,\{m,m+1\}}-\tau_{n,\{1,m\}}\right)^{+},&\qquad i=1,\\ \left(\tau_{n,\{i-1,i\}}\wedge\tau_{n,\{i+m-1,i+m\}}-\tau_{n,\{i,i+m-1\}}\right)^{+},&\qquad 2\leq i\leq n-m,\\ \left(\tau_{n,\{n-m,n-m+1\}}-\tau_{n,\{n-m+1,n\}}\right)^{+},&\qquad i=n-m+1,\\ 0,&\qquad i\geq n-m+2.\end{array}\right. \tag{2}\] In the circular case, we want to respect the symmetry of \(\mathbb{S}^{1}\) so that \(L_{n,i,m}\) has the same distribution for all \(i\). To do this, we identify an integer with its equivalence class in \(\{1,\ldots,n\}\) modulo \(n\) and define \(L_{n,i,m}\) to be the time during which \(\{i,i+1,\ldots,i+m-1\}\) is a block of the partition. Formally, we have \[L_{n,i,m}=\left(\tau_{n,\{i-1,i\}}\wedge\tau_{n,\{i+m-1,i+m\}}-\tau_{n,\{i,i+m-1\}}\right)^{+},\qquad 1\leq i\leq n. \tag{3}\] For both the linear and the circular cases, the total length of the branches that support \(m\) leaves, denoted by \(L_{n,m}\), is \[L_{n,m}:=\sum_{i=1}^{n}L_{n,i,m}. \tag{4}\] **Proposition 1.1**.: _In the linear case with \(2\leq i\leq n-m\), we have_ \[\mathbb{E}[L_{n,i,m}]=\frac{1}{n^{2}}.\] If we sum over \(2\leq i\leq n-m\), then we get \[\mathbb{E}\left[\sum_{i=2}^{n-m}L_{n,i,m}\right]=\frac{n-m-1}{n^{2}},\] which means we get a triangular shape for the expected site frequency spectrum if we ignore \(i=1\) and \(i=n-m+1\). For \(i=1\) and \(n-m+1\), the corresponding branch lengths have infinite mean and therefore \(\mathbb{E}[L_{n,m}]=\infty\). However, the next theorem says these branches do not have a major effect when we consider the typical behavior of the total branch length. **Theorem 1.1**.: _In both the linear and circular cases, let \(m\) be a fixed positive integer. Let \(L_{n,m}\) be defined as in (4). Then \(nL_{n,m}\) converges to 1 in probability as \(n\) goes to infinity._ We will focus on the proof of Theorem 1.1 in the linear case and then deduce the result for the circular case from the linear case. Throughout the rest of the paper, unless otherwise specified, \(C=C_{m}\) will be some constant which may depend on \(m\) and vary from line to line. ## 2 Results for Brownian motion In this section we summarize some results about Brownian motion. For Lemmas 2.1 and 2.2, we refer the reader to Sections 7.4 and 7.5 of [8]. **Lemma 2.1** (Reflection Principle).: _For every \(x>0\) and \(t\geq 0\),_ \[\mathbb{P}\left(\max_{0\leq s\leq t}W^{0}_{s}\geq x\right)=\mathbb{P}(|W^{0}_{t}|\geq x)\leq C\frac{\sqrt{t}}{x}\exp\left(-\frac{x^{2}}{2t}\right).\]
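As a quick illustrative sanity check of Lemma 2.1 (ours, not from the paper; the inequality is the standard Gaussian tail bound), one can compare the two sides of the identity on simulated discretized Brownian paths. The step count, sample size and level below are arbitrary, and the discretization slightly underestimates the running maximum.

```python
import numpy as np

# Monte Carlo check of the reflection principle:
#   P(max_{0<=s<=t} W_s >= x) = P(|W_t| >= x).
rng = np.random.default_rng(1)
t, x = 1.0, 1.5
n_steps, n_paths = 1000, 10_000

dW = rng.normal(0.0, np.sqrt(t / n_steps), size=(n_paths, n_steps))
W = np.cumsum(dW, axis=1)
p_max = np.mean(W.max(axis=1) >= x)     # left-hand side (biased slightly low)
p_abs = np.mean(np.abs(W[:, -1]) >= x)  # right-hand side
print(f"P(max >= x) ~ {p_max:.4f},  P(|W_t| >= x) ~ {p_abs:.4f}")
# Both should be close to 2 * (1 - Phi(1.5)) ~ 0.1336.
```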
Then, using the notation \(f(t)\sim g(t)\) to mean that \(\lim_{t\to\infty}f(t)/g(t)=1\), we have_ \[\mathbb{P}(T_{a,b}>t)\sim C_{a,b}t^{-\pi/(2\theta)},\qquad\text{ for some constant $C_{a,b}$.}\] **Remark 2.1**.: _Using Brownian scaling, we have_ \[\mathbb{P}(T_{a/n,b/n}>t)\sim C_{a,b}(n^{2}t)^{-\pi/(2\theta)},\qquad\text{ for some constant $C_{a,b}$.}\] **Lemma 2.4**.: _For real numbers \(x<y<z\), let \(W^{x}\), \(W^{y}\), and \(W^{z}\) be independent one-dimensional Brownian motions starting from \(x\), \(y\), and \(z\) respectively. Let \(\tau_{\{x,y\}}:=\inf\{t\geq 0:W^{x}_{t}=W^{y}_{t}\}\) and \(\tau_{\{y,z\}}:=\inf\{t\geq 0:W^{y}_{t}=W^{z}_{t}\}\). Then_ \[\mathbb{E}[\tau_{\{x,y\}}\wedge\tau_{\{y,z\}}]=(z-y)(y-x).\] Proof.: Define \[B_{t}:=(B^{1}_{t},B^{2}_{t})=(W^{y}_{t}-W^{x}_{t},W^{z}_{t}-W^{y}_{t})\text{ for }t\geq 0,\] and \[T_{l}:=\tau_{\{x,y\}}\wedge\tau_{\{y,z\}}\wedge\inf\left\{t\geq 0:B^{1}_{t}+B^{2 }_{t}=l\right\}\text{ for }l>z-x.\] Applying Ito's formula to \(X_{t}=B^{1}_{t}B^{2}_{t}(B^{1}_{t}+B^{2}_{t}-l)\), and using the fact that \(\langle B^{1}\rangle_{t}=\langle B^{2}\rangle_{t}=2t\), and \(\langle B^{1},B^{2}\rangle_{t}=-t\), we have \[X_{t} =X_{0}+\int_{0}^{t}\left(2B^{1}_{s}B^{2}_{s}+(B^{2}_{s})^{2}-lB^ {2}_{s}\right)\ dB^{1}_{s}+\int_{0}^{t}\left(2B^{1}_{s}B^{2}_{s}+(B^{1}_{s})^{2 }-lB^{1}_{s}\right)\ dB^{2}_{s}\] \[\qquad+\frac{1}{2}\int_{0}^{t}2B^{2}_{s}\ d\langle B^{1}\rangle_{ s}+\frac{1}{2}\int_{0}^{t}2B^{1}_{s}\ d\langle B^{2}\rangle_{s}+\int_{0}^{t}\left(2B^{1}_{s} +2B^{2}_{s}-l\right)\ d\langle B^{1},B^{2}\rangle_{s}\] \[=X_{0}+\int_{0}^{t}\left(2B^{1}_{s}B^{2}_{s}+(B^{2}_{s})^{2}-lB^ {2}_{s}\right)\ dB^{1}_{s}+\int_{0}^{t}\left(2B^{1}_{s}B^{2}_{s}+(B^{1}_{s})^{2 }-lB^{1}_{s}\right)\ dB^{2}_{s}\] \[\qquad+\int_{0}^{t}2B^{2}_{s}\ ds+\int_{0}^{t}2B^{1}_{s}\ ds-\int _{0}^{t}\left(2B^{1}_{s}+2B^{2}_{s}-l\right)\ ds\] \[=X_{0}+\int_{0}^{t}\left(2B^{1}_{s}B^{2}_{s}+(B^{2}_{s})^{2}-lB^ {2}_{s}\right)\ dB^{1}_{s}+\int_{0}^{t}\left(2B^{1}_{s}B^{2}_{s}+(B^{1}_{s})^{ 2}-lB^{1}_{s}\right)\ dB^{2}_{s}+lt.\] Since \[\mathbb{E}\left[\int_{0}^{t}\left(2B_{s}^{1}B_{s}^{2}+(B_{s}^{2})^{2}-lB_{s}^{2} \right)^{2}\ ds+\int_{0}^{t}\left(2B_{s}^{1}B_{s}^{2}+(B_{s}^{1})^{2}-lB_{s}^{1 }\right)^{2}\ ds\right]<\infty\qquad\text{ for all }t\geq 0,\] the process \[\int_{0}^{t}\left(2B_{s}^{1}B_{s}^{2}+(B_{s}^{2})^{2}-lB_{s}^{2}\right)\ dB_{s}^{1}+ \int_{0}^{t}\left(2B_{s}^{1}B_{s}^{2}+(B_{s}^{1})^{2}-lB_{s}^{1}\right)\ dB_{s}^{2}, \qquad t\geq 0,\] is therefore a martingale. By stopping \((X_{t})_{t\geq 0}\) at \(t\wedge T_{l}\) and taking expectations, we have \[\mathbb{E}\left[X_{t\wedge T_{l}}\right]=\mathbb{E}[X_{0}]+l\mathbb{E}\left[t \wedge T_{l}\right].\] Since \(\lim_{t\to\infty}X_{t\wedge T_{l}}=X_{T_{l}}=0\) a.s. and \(|X_{t\wedge T_{l}}|\leq l^{3}\), it follows from the bounded convergence theorem that \[\mathbb{E}[T_{l}]=-\frac{\mathbb{E}[X_{0}]}{l}=(y-x)(z-y)\cdot\frac{l-(z-x)}{ l}.\] Taking the limit as \(l\) goes to infinity gives \(\mathbb{E}[\tau_{\{x,y\}}\wedge\tau_{\{y,z\}}]=(z-y)(y-x)\), which concludes the proof. ## 3 A single branch ### The tail distribution Recall from (2) that in the linear case with \(2\leq i\leq n-m\), we have \[L_{n,i,m}=\left(\tau_{n,\{i-1,i\}}\wedge\tau_{n,\{i+m-1,i+m\}}-\tau_{n,\{i,i+m -1\}}\right)^{+}\leq\tau_{n,\{i-1,i\}}\wedge\tau_{n,\{i+m-1,i+m\}}\leq\tau_{n,\{i-1,i,i+m\}}.\] We give a bound on the tail of the distribution of \(\tau_{n,\{i-1,i,i+m\}}\). 
**Lemma 3.1**.: _In the linear case with \(2\leq i\leq n-m\), there exists a constant \(C=C_{m}\) such that_ \[\mathbb{P}\left(\tau_{n,\{i-1,i,i+m\}}\geq t\right)\leq Cn^{-3}t^{-3/2}\qquad\text{ for all }t>0. \tag{5}\] _In particular,_ \[\mathbb{P}\left(\tau_{n,\{i-1,i\}}\wedge\tau_{n,\{i+m-1,i+m\}}\geq t\right)\leq Cn^{-3}t^{-3/2}\qquad\text{ for all }t>0.\] Proof.: We consider the \(2\)-dimensional (correlated) Brownian motion \[B_{t}=(B_{t}^{1},B_{t}^{2})=\left(W_{t}^{i/n}-W_{t}^{(i-1)/n},W_{t}^{(i+m)/n}-W_{t}^{i/n}\right),\qquad\text{ for }t\geq 0.\] By Remark 1.1, writing \(\overset{d}{=}\) for equivalence in distribution, we have \[\begin{split}\tau_{n,\{i-1,i,i+m\}}&=\inf\left\{t\geq 0:\tilde{Z}_{t}^{e_{n,i-1}}=\tilde{Z}_{t}^{e_{n,i}}\text{ or }\tilde{Z}_{t}^{e_{n,i}}=\tilde{Z}_{t}^{e_{n,i+m}}\right\}\\ &\overset{d}{=}\inf\left\{t\geq 0:Z_{t}^{e_{n,i-1}}=Z_{t}^{e_{n,i}}\text{ or }Z_{t}^{e_{n,i}}=Z_{t}^{e_{n,i+m}}\right\}\\ &=\inf\left\{t\geq 0:W_{t}^{(i-1)/n}=W_{t}^{i/n}\text{ or }W_{t}^{i/n}=W_{t}^{(i+m)/n}\right\}\\ &=\inf\left\{t\geq 0:B_{t}^{1}=0\text{ or }B_{t}^{2}=0\right\}.\end{split} \tag{6}\] Consider the linear transformation of \(\mathbb{R}^{2}\) defined by \[\varphi(x,y):=\left(\sqrt{\frac{2}{3}}\left(x+\frac{1}{2}y\right),\sqrt{\frac{1}{2}}y\right).\] Note that the process \[\varphi(B_{t}^{1},B_{t}^{2})=\left(\sqrt{\frac{2}{3}}\left(\frac{1}{2}W_{t}^{(i+m)/n}+\frac{1}{2}W_{t}^{i/n}-W_{t}^{(i-1)/n}\right),\sqrt{\frac{1}{2}}\left(W_{t}^{(i+m)/n}-W_{t}^{i/n}\right)\right),\qquad\text{ for }t\geq 0,\] is a two-dimensional Brownian motion with independent components and unit variance in each component, starting from \(\varphi(B_{0}^{1},B_{0}^{2})=\varphi(1/n,m/n)\). Also, the image of the first quadrant under \(\varphi\) is \(\{(x,y)\in\mathbb{R}^{2}:x>0,0<y<\sqrt{3}x\}\), a cone with angle \(\pi/3\) up to a rotation. By Remark 2.1, there exists a constant \(C=C_{m}\) such that \[\mathbb{P}\left(\tau_{n,\{i-1,i,i+m\}}\geq t\right)\leq Cn^{-3}t^{-3/2},\qquad\text{ for all }t>0,\] which proves (5). ### The expected value We now give the proof of Proposition 1.1. Proof of Proposition 1.1.: Recall from (2) that for \(2\leq i\leq n-m\), we have \[L_{n,i,m}=\left(\tau_{n,\{i-1,i\}}\wedge\tau_{n,\{i+m-1,i+m\}}-\tau_{n,\{i,i+m-1\}}\right)^{+}.\] For \(m=1\), since \(\tau_{n,\{i\}}=0\) by convention, we have \[L_{n,i,1}=\left(\tau_{n,\{i-1,i\}}\wedge\tau_{n,\{i,i+1\}}-\tau_{n,\{i\}}\right)^{+}=\tau_{n,\{i-1,i\}}\wedge\tau_{n,\{i,i+1\}}.\] Then we have \(\mathbb{E}[L_{n,i,1}]=1/n^{2}\) by Lemma 2.4 with \(x=(i-1)/n\), \(y=i/n\) and \(z=(i+1)/n\). For \(m>1\), we write \(A_{m}=\{i-1,i,i+m-1,i+m\}\). If the first coalescent event in \(A_{m}\) is the coalescence of \(i-1\) and \(i\) or the coalescence of \(i+m-1\) and \(i+m\), then \(L_{n,i,m}=0\). Otherwise, the first coalescent event is the coalescence of \(i\) and \(i+m-1\). Starting from \(\tau_{n,A_{m}}=\tau_{n,\{i,i+m-1\}}\), the positions of the particles \(\tilde{Z}^{e_{n,i-1}}\), \(\tilde{Z}^{e_{n,i}}=\tilde{Z}^{e_{n,i+m-1}}\), and \(\tilde{Z}^{e_{n,i+m}}\) evolve as independent Brownian motions before the next coalescent event among them, which is the same dynamics as for the case \(m=1\). 
By Lemma 2.4 with \(x=\tilde{Z}^{e_{n,i-1}}_{\tau_{n,A_{m}}}\), \(y=\tilde{Z}^{e_{n,i}}_{\tau_{n,A_{m}}}\) and \(z=\tilde{Z}^{e_{n,i+m}}_{\tau_{n,A_{m}}}\), and the observation that one of the two factors is zero if the indicator fails to hold in the second equality, we have \[\mathbb{E}\left[L_{n,i,m}|\sigma\left(\mathbf{Z}^{e_{n}}_{t}:0\leq t\leq\tau_{n,A_{m}}\right)\right] =(\tilde{Z}^{e_{n,i}}_{\tau_{n,A_{m}}}-\tilde{Z}^{e_{n,i-1}}_{\tau_{n,A_{m}}})(\tilde{Z}^{e_{n,i+m}}_{\tau_{n,A_{m}}}-\tilde{Z}^{e_{n,i+m-1}}_{\tau_{n,A_{m}}})\mathbbm{1}_{\tau_{n,A_{m}}=\tau_{n,\{i,i+m-1\}}}\] \[=(\tilde{Z}^{e_{n,i}}_{\tau_{n,A_{m}}}-\tilde{Z}^{e_{n,i-1}}_{\tau_{n,A_{m}}})(\tilde{Z}^{e_{n,i+m}}_{\tau_{n,A_{m}}}-\tilde{Z}^{e_{n,i+m-1}}_{\tau_{n,A_{m}}}).\] Taking the expectation, we have \[\mathbb{E}\left[L_{n,i,m}\right]=\mathbb{E}\left[(\tilde{Z}^{e_{n,i}}_{\tau_{n,A_{m}}}-\tilde{Z}^{e_{n,i-1}}_{\tau_{n,A_{m}}})(\tilde{Z}^{e_{n,i+m}}_{\tau_{n,A_{m}}}-\tilde{Z}^{e_{n,i+m-1}}_{\tau_{n,A_{m}}})\right].\] In view of Remark 1.1 with \(A=A_{m}\), we consider the \(3\)-dimensional Brownian motion \[B_{t}=(B_{t}^{1},B_{t}^{2},B_{t}^{3})=\left(W_{t}^{i/n}-W_{t}^{(i-1)/n},W_{t}^{(i+m-1)/n}-W_{t}^{i/n},W_{t}^{(i+m)/n}-W_{t}^{(i+m-1)/n}\right),\qquad\text{ for }t\geq 0,\] and define \[T:=\inf\left\{t\geq 0:B_{t}^{1}=0\text{ or }B_{t}^{2}=0\text{ or }B_{t}^{3}=0\right\}.\] Then we have \[\mathbb{E}[L_{n,i,m}]=\mathbb{E}\left[B_{T}^{1}B_{T}^{3}\right].\] Applying Ito's formula to \(X_{t}=B_{t}^{1}B_{t}^{3}\), and using the fact that \(B_{t}^{1}\) and \(B_{t}^{3}\) are independent, we have \[X_{t}=X_{0}+\int_{0}^{t}B_{s}^{3}\ dB_{s}^{1}+\int_{0}^{t}B_{s}^{1}\ dB_{s}^{3}.\] Since \[\mathbb{E}\left[\int_{0}^{t}\left(B_{s}^{3}\right)^{2}\ ds+\int_{0}^{t}\left(B_{s}^{1}\right)^{2}\ ds\right]<\infty\qquad\text{ for all }t<\infty,\] the process \((X_{t})_{t\geq 0}\) is a martingale. Stopping \((X_{t})_{t\geq 0}\) at \(t\wedge T\) and taking expectations, we have \[\mathbb{E}[X_{t\wedge T}]=\mathbb{E}[X_{0}]=\frac{1}{n^{2}}.\] It remains to show that \(\mathbb{E}[X_{T}]=\lim_{t\to\infty}\mathbb{E}[X_{t\wedge T}]\). We have \[|\mathbb{E}[X_{T}]-\mathbb{E}[X_{t\wedge T}]|\leq\mathbb{E}[|X_{T}|\mathbbm{1}_{T>t}]+\mathbb{E}[|X_{t}|\mathbbm{1}_{T>t}]. \tag{7}\] Note that \(B_{t}^{1}=W_{t}^{i/n}-W_{t}^{(i-1)/n}\) and \(B_{t}^{3}=W_{t}^{(i+m)/n}-W_{t}^{(i+m-1)/n}\) are nonnegative for \(t\leq T\), so the process \((X_{T\wedge t})_{t\geq 0}\) is nonnegative. Applying Fatou's lemma, we have \[\mathbb{E}[X_{T}]\leq\liminf_{t\to\infty}\mathbb{E}[X_{T\wedge t}]=\frac{1}{n^{2}}<\infty.\] Then, by the dominated convergence theorem, \(\mathbb{E}[X_{T}\mathbbm{1}_{T>t}]\) goes to \(0\) as \(t\) goes to infinity. For \(\mathbb{E}[|X_{t}|\mathbbm{1}_{T>t}]\), since \[T\leq\inf\left\{t\geq 0:B_{t}^{1}=0\text{ or }B_{t}^{2}+B_{t}^{3}=0\right\}\overset{d}{=}\tau_{n,\{i-1,i,i+m\}},\] it follows from Lemma 3.1 that there exists a constant \(C=C_{m}\) such that \[\mathbb{P}(T>t)\leq Cn^{-3}t^{-3/2}\leq Ct^{-3/2}.\] With the constant \(C\) fixed, we show that for any event \(A\), \[\mathbb{P}(A)\leq Ct^{-3/2}\implies\mathbb{E}(|X_{t}|\mathbbm{1}_{A})\leq 3Ct^{-1/2}\log t+(2C-2C\log C)t^{-1/2}, \tag{8}\] which proves that the second term of (7) goes to \(0\) as \(t\) goes to infinity. 
Since \[\mathbb{P}(|X_{t}|>x)=\mathbb{P}(|B_{t}^{1}B_{t}^{3}|>x)\leq\mathbb{P}\left((B_{t}^{1})^{2}+(B_{t}^{3})^{2}>2x\right)=\exp\left(-\frac{x}{2t}\right),\] where the last equality holds because \(B_{t}^{1}\) and \(B_{t}^{3}\) are independent centered Gaussians with variance \(2t\), so \(((B_{t}^{1})^{2}+(B_{t}^{3})^{2})/(2t)\) has the \(\chi^{2}\)-distribution with two degrees of freedom, it follows that \(|X_{t}|\) is stochastically dominated by a random variable \(Y_{t}\) whose tail probability is \(\mathbb{P}(Y_{t}>x)=\exp(-x/(2t))\) for all \(x\geq 0\). For the random variable \(Y_{t}\), we choose \(x_{0}\) such that \(\mathbb{P}(Y_{t}>x_{0})=Ct^{-3/2}\), i.e. \(x_{0}=3t\log t-2t\log C\). We have \[\mathbb{E}(Y_{t}\mathbbm{1}_{Y_{t}>x_{0}}) =\int_{0}^{\infty}\mathbb{P}(Y_{t}\mathbbm{1}_{\{Y_{t}>x_{0}\}}>x)\ dx\] \[=\int_{0}^{x_{0}}\mathbb{P}(Y_{t}>x_{0})\ dx+\int_{x_{0}}^{\infty}\mathbb{P}(Y_{t}>x)\ dx\] \[=x_{0}\mathbb{P}(Y_{t}>x_{0})+2t\exp\left(-\frac{x_{0}}{2t}\right)\] \[=3Ct^{-1/2}\log t+(2C-2C\log C)t^{-1/2},\] which proves (8) because \(Y_{t}\) stochastically dominates \(|X_{t}|\). ## 4 Law of Large Numbers ### External branch lengths in sub-systems Now we consider the branch lengths in sub-systems. The reason for doing this is to exploit the independence of branch lengths in the sub-systems. Also, the lengths in the sub-systems agree with those in the whole system with sufficiently high probability. More precisely, let \(d_{\mathbb{S}^{1}}(\cdot,\cdot)\) be the metric of \(\mathbb{S}^{1}\) given by the arc length. For the coalescing Brownian motion with \(n\) particles and \(i,j\in\{1,2,\ldots,n\}\), we define \[d_{n}(i,j):=\left\{\begin{array}{cl}|e_{n,i}-e_{n,j}|=|i-j|/n,&\text{ if }E=\mathbb{R},\\ \frac{1}{2\pi}d_{\mathbb{S}^{1}}(e_{n,i},e_{n,j}),&\text{ if }E=\mathbb{S}^{1}.\end{array}\right.\] Fix some \(\epsilon>0\) sufficiently small, for example, \(\epsilon=0.01\). We define the neighborhood of \(i\) as \[N_{n}(i):=\left\{j\in\{1,\ldots,n\}:d_{n}(i,j)\leq n^{-2/3+\epsilon}\right\}.\] The coalescing Brownian motion in \(N_{n}(i)\) is obtained by considering only \(\mathbf{Z}^{\mathbf{e}_{N_{n}(i)}}=(Z^{e_{n,k}})_{k\in N_{n}(i)}\). Quantities in this system are subscripted with \(N_{n}(i)\) instead of \(n\). For example, the length of the portion of the \(i\)th branch that supports \(m\) leaves in this system is denoted by \(L_{N_{n}(i),i,m}\). We can recover the coalescing Brownian motion with \(n\) particles from the coalescing Brownian motion in \(N_{n}(i)\) by taking the Brownian motions \((Z^{e_{n,j}})_{j\notin N_{n}(i)}\) into account. Note that the distribution of the positions of particles in \(N_{n}(i)\) remains invariant, i.e. \[\left(\tilde{Z}^{e_{N_{n}(i),k}}\right)_{k\in N_{n}(i)}\overset{d}{=}\left(\tilde{Z}^{e_{n,k}}\right)_{k\in N_{n}(i)}.\] In particular, we have \(L_{N_{n}(i),i,m}\overset{d}{=}L_{n,i,m}\). **Lemma 4.1**.: _Let \(L_{N_{n}(i),i,m}\) be defined as above. In the linear case with \(2\leq i\leq n-m\), we have_ 1. \(\mathbf{Z}^{\mathbf{e}_{N_{n}(i)}}\) _and_ \(\mathbf{Z}^{\mathbf{e}_{N_{n}(j)}}\) _are independent if_ \(N_{n}(i)\cap N_{n}(j)=\emptyset\)_._ 2. _For all_ \(\epsilon>0\)_, there exists a constant_ \(C=C_{\epsilon}\) _such that_ \(\mathbb{P}(L_{N_{n}(i),i,m}\neq L_{n,i,m})\leq Cn^{-1-\epsilon/2}\)_._ Proof.: The first claim is straightforward. For the second claim, we define \[\underline{i}:=\inf\{j\in N_{n}(i):\tau_{N_{n}(i),\{i,j\}}\leq\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}\},\] the leftmost particle in \(N_{n}(i)\) that coalesces with the \(i\)th particle before time \(\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}\). 
Then \(L_{n,i,m}\neq L_{N_{n}(i),i,m}\) only if \(\tilde{Z}^{e_{n,\underline{i}}}\) coalesces with \(Z^{e_{n,j}}\), for some \(j\notin N_{n}(i)\), before \(\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}\), and we have \[\mathbb{P}(L_{n,i,m}\neq L_{N_{n}(i),i,m})\\ \leq\mathbb{P}\left(\exists j,t:\ j\notin N_{n}(i),j<i,\ t<\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}},\ \tilde{Z}^{e_{n,\underline{i}}}_{t}=Z^{e_{n,j}}_{t}\right). \tag{9}\] We now bound the right hand side of (9). Let \(i_{0}=\lfloor i-n^{1/3+\epsilon}/2\rfloor\), and let \(j\notin N_{n}(i),j<i\). If \(\underline{i}>i_{0}\), then \(Z^{e_{n,j}}\) coalesces with \(\tilde{Z}^{e_{n,i_{0}}}\) before it coalesces with \(\tilde{Z}^{e_{n,\underline{i}}}\). It follows that \[\mathbb{P}\left(\exists j,t:\ j\notin N_{n}(i),j<i,\ t<\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}},\ \tilde{Z}^{e_{n,\underline{i}}}_{t}=Z^{e_{n,j}}_{t}\right)\] \[\qquad\leq\mathbb{P}(\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}\geq n^{-4/3+\epsilon})\] \[\qquad\qquad+\mathbb{P}(\exists j,t:\ j\notin N_{n}(i),j<i,t\leq n^{-4/3+\epsilon},\tilde{Z}^{e_{n,\underline{i}}}_{t}=Z^{e_{n,j}}_{t})\] \[\qquad\leq\mathbb{P}(\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}\geq n^{-4/3+\epsilon})\] \[\qquad\qquad+\sum_{j\notin N_{n}(i),j<i}\mathbb{P}(\exists t\leq n^{-4/3+\epsilon}:\tilde{Z}^{e_{n,i_{0}}}_{t}=Z^{e_{n,j}}_{t})+\mathbb{P}\left(\underline{i}\leq i_{0}\right). \tag{10}\] By Lemma 3.1, we have \[\mathbb{P}(\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}\geq n^{-4/3+\epsilon}) =\mathbb{P}(\tau_{n,\{i-1,i\}}\wedge\tau_{n,\{i+m-1,i+m\}}\geq n^{-4/3+\epsilon})\leq Cn^{-1-3\epsilon/2}. \tag{11}\] For \(j\notin N_{n}(i)\), note that \(d_{n}(i_{0},j)\geq n^{-2/3+\epsilon}/2\). By Lemma 2.1, we have \[\mathbb{P}\left(\exists t\leq n^{-4/3+\epsilon}:\tilde{Z}^{e_{n,i_{0}}}_{t}=Z^{e_{n,j}}_{t}\right) =\mathbb{P}\left(\exists t\leq n^{-4/3+\epsilon}:\sqrt{2}W^{0}_{t}=d_{n}(i_{0},j)\right)\leq\mathbb{P}\left(\exists t\leq n^{-4/3+\epsilon}:W^{0}_{t}=n^{-2/3+\epsilon}/(2\sqrt{2})\right)=\mathbb{P}\left(|W^{0}_{n^{-4/3+\epsilon}}|\geq n^{-2/3+\epsilon}/(2\sqrt{2})\right)\leq Cn^{-\epsilon/2}\exp(-n^{\epsilon}/16)\leq Cn^{-2-3\epsilon/2}. \tag{12}\] Also, by (11) and the proof of (12), we have \[\mathbb{P}(\underline{i}\leq i_{0}) =\mathbb{P}(\tau_{N_{n}(i),\{i_{0},i\}}\leq\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}})\] \[\leq\mathbb{P}(\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}\geq n^{-4/3+\epsilon})+\mathbb{P}(\tau_{N_{n}(i),\{i_{0},i\}}\leq n^{-4/3+\epsilon})\] \[\leq Cn^{-1-3\epsilon/2}+Cn^{-2-3\epsilon/2}\] \[\leq Cn^{-1-3\epsilon/2}. \tag{13}\] Combining (10), (11), (12) and (13) gives us \[\mathbb{P}\left(\exists j,t:\ j\notin N_{n}(i),\ j<i,\ t<\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}},\ \tilde{Z}_{t}^{e_{n,\underline{i}}}=Z_{t}^{e_{n,j}}\right)\leq Cn^{-1-3\epsilon/2},\] which completes the proof. ### Proof of the main results in the linear case We now proceed to the proof of Theorem 1.1 in the linear case. Proof.: Recall from (2) that \[L_{n,1,m}=\left(\tau_{n,\{m,m+1\}}-\tau_{n,\{1,m\}}\right)^{+}\leq\tau_{n,\{m,m+1\}}.\] Since \(\tau_{n,\{m,m+1\}}\) is the hitting time of two Brownian particles which begin a distance \(1/n\) apart, it follows that \(n\tau_{n,\{m,m+1\}}\) converges to \(0\) in probability. Hence, \(nL_{n,1,m}\) converges to \(0\) in probability. Similarly, \(nL_{n,n-m+1,m}\) converges to \(0\) in probability. 
Therefore, it suffices to show that \[n\sum_{i=2}^{n-m}L_{n,i,m}\stackrel{{\mathbb{P}}}{{\longrightarrow}}1, \tag{14}\] where \(\stackrel{{\mathbb{P}}}{{\longrightarrow}}\) denotes convergence in probability. By Lemma 4.1, using the union bound, we have \[\mathbb{P}\left(\sum_{i=2}^{n-m}L_{n,i,m}\neq\sum_{i=2}^{n-m}L_{N_{n}(i),i,m}\right)\leq Cn^{-1-\epsilon/2}\cdot n=Cn^{-\epsilon/2}.\] Define a truncated version of \(L_{n,i,m}\) as \[\widetilde{L}_{n,i,m}:=\left(\tau_{n,\{i-1,i\}}\wedge\tau_{n,\{i+m-1,i+m\}}\wedge n^{-4/3+\epsilon}-\tau_{n,\{i,i+m-1\}}\right)^{+},\] and of \(L_{N_{n}(i),i,m}\) as \[\widetilde{L}_{N_{n}(i),i,m}:=\left(\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}\wedge n^{-4/3+\epsilon}-\tau_{N_{n}(i),\{i,i+m-1\}}\right)^{+}.\] By Lemma 3.1, we have \[\mathbb{P}\left(\sum_{i=2}^{n-m}L_{N_{n}(i),i,m}\neq\sum_{i=2}^{n-m}\widetilde{L}_{N_{n}(i),i,m}\right) \leq\sum_{i=2}^{n-m}\mathbb{P}(L_{N_{n}(i),i,m}\neq\widetilde{L}_{N_{n}(i),i,m})\] \[\leq\sum_{i=2}^{n-m}\mathbb{P}\left(\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}>n^{-4/3+\epsilon}\right)\] \[=\sum_{i=2}^{n-m}\mathbb{P}\left(\tau_{n,\{i-1,i\}}\wedge\tau_{n,\{i+m-1,i+m\}}>n^{-4/3+\epsilon}\right)\] \[\leq Cn\cdot n^{-1-3\epsilon/2}\] \[=Cn^{-3\epsilon/2}.\] Therefore, it suffices to show that \[n\sum_{i=2}^{n-m}\widetilde{L}_{N_{n}(i),i,m}\stackrel{{\mathbb{P}}}{{\longrightarrow}}1. \tag{15}\] For the expected value of (15), by Proposition 1.1, we have \[\mathbb{E}[L_{N_{n}(i),i,m}]=\mathbb{E}[L_{n,i,m}]=\frac{1}{n^{2}},\] and by Lemma 3.1, we have \[\mathbb{E}\left[L_{N_{n}(i),i,m}-\widetilde{L}_{N_{n}(i),i,m}\right] \leq\mathbb{E}\left[\left(\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}-n^{-4/3+\epsilon}\right)^{+}\right]\] \[=\mathbb{E}\left[\left(\tau_{n,\{i-1,i\}}\wedge\tau_{n,\{i+m-1,i+m\}}-n^{-4/3+\epsilon}\right)^{+}\right]\] \[=\int_{n^{-4/3+\epsilon}}^{\infty}\mathbb{P}\left(\tau_{n,\{i-1,i\}}\wedge\tau_{n,\{i+m-1,i+m\}}>t\right)\ dt\] \[\leq C\int_{n^{-4/3+\epsilon}}^{\infty}n^{-3}t^{-3/2}\ dt\] \[=Cn^{-7/3-\epsilon/2}.\] It follows that \[\lim_{n\to\infty}\mathbb{E}\left[n\sum_{i=2}^{n-m}\widetilde{L}_{N_{n}(i),i,m}\right]=1. \tag{16}\] For the variance of (15), by the first claim of Lemma 4.1, \(\widetilde{L}_{N_{n}(i),i,m}\) and \(\widetilde{L}_{N_{n}(j),j,m}\) are independent if \(N_{n}(i)\) and \(N_{n}(j)\) are disjoint. 
Therefore, we have \[\begin{split}\operatorname{Var}\left(n\sum_{i=2}^{n-m}\widetilde{L}_{N_{n}(i),i,m}\right)&=n^{2}\sum_{i=2}^{n-m}\sum_{j:N_{n}(j)\cap N_{n}(i)\neq\emptyset}\operatorname{Cov}\left(\widetilde{L}_{N_{n}(i),i,m},\widetilde{L}_{N_{n}(j),j,m}\right)\\ &\leq n^{2}\sum_{i=2}^{n-m}\sum_{j:N_{n}(j)\cap N_{n}(i)\neq\emptyset}\mathbb{E}\left[\left(\widetilde{L}_{N_{n}(i),i,m}\right)^{2}\right]\\ &=n^{2}\sum_{i=2}^{n-m}\sum_{j:N_{n}(j)\cap N_{n}(i)\neq\emptyset}\mathbb{E}\left[\left(\widetilde{L}_{n,i,m}\right)^{2}\right]\\ &\leq Cn^{2}\cdot n\cdot n^{1/3+\epsilon}\mathbb{E}\left[\left(\widetilde{L}_{n,i,m}\right)^{2}\right].\end{split} \tag{17}\] Using Lemma 3.1, we have \[\mathbb{E}\left[\left(\widetilde{L}_{n,i,m}\right)^{2}\right] =\int_{0}^{\infty}2t\mathbb{P}\left(\widetilde{L}_{n,i,m}\geq t\right)\ dt\] \[\leq\int_{0}^{n^{-4/3+\epsilon}}2t\mathbb{P}\left(L_{n,i,m}\geq t\right)\ dt\] \[\leq\int_{0}^{n^{-4/3+\epsilon}}2t\mathbb{P}\left(\tau_{n,\{i-1,i\}}\wedge\tau_{n,\{i+m-1,i+m\}}\geq t\right)\ dt\] \[\leq\int_{0}^{n^{-4/3+\epsilon}}2t\cdot Cn^{-3}t^{-3/2}\ dt\] \[\leq Cn^{-11/3+\epsilon/2}.\] Combining this with (17), we have \[\mathrm{Var}\left(n\sum_{i=2}^{n-m}\widetilde{L}_{N_{n}(i),i,m}\right)\leq Cn^{2}\cdot n\cdot n^{1/3+\epsilon}\cdot n^{-11/3+\epsilon/2}=Cn^{-1/3+3\epsilon/2}.\] It follows from Chebyshev's inequality that \[n\sum_{i=2}^{n-m}\widetilde{L}_{N_{n}(i),i,m}-\mathbb{E}\left[n\sum_{i=2}^{n-m}\widetilde{L}_{N_{n}(i),i,m}\right]\stackrel{{\mathbb{P}}}{{\longrightarrow}}0.\] Combining this with (16) gives the result. **Remark 4.1**.: _The same argument used to prove (14) implies that_ \[n\sum_{i=\lfloor n^{1/3+\epsilon}\rfloor}^{\lceil n-n^{1/3+\epsilon}\rceil}L_{n,i,m}\stackrel{{\mathbb{P}}}{{\longrightarrow}}1,\] _which we will use in the proof in the circular case._ ### Proof of the main result in the circular case Now, we deduce the result in the circular case from the linear case. In this section, quantities in the circular case are subscripted with \(\mathbb{S}^{1}\). For example, the length of the portion of the \(i\)th branch that supports \(m\) leaves is denoted by \(L_{\mathbb{S}^{1},n,i,m}\). The following lemma bounds the probability that the branch lengths differ in the linear and circular cases. **Lemma 4.2**.: _Consider \(n^{1/3+\epsilon}<i<n-n^{1/3+\epsilon}\) so that the neighborhood \(N_{n}(i)\) of \(i\) consists of the same particles in the linear and circular cases. Then we have_ \[\mathbb{P}\left(L_{N_{n}(i),i,m}\neq L_{\mathbb{S}^{1},N_{n}(i),i,m}\right)\leq Cn^{-1-3\epsilon/2} \tag{18}\] _and_ \[\mathbb{P}\left(L_{\mathbb{S}^{1},N_{n}(i),i,m}\neq L_{\mathbb{S}^{1},n,i,m}\right)\leq Cn^{-1-3\epsilon/2}. \tag{19}\] Proof.: For the proof of (18), let \(i^{\prime}\) and \(i^{\prime\prime}\) be the smallest and largest indices in \(N_{n}(i)\) respectively. Note that \(|i^{\prime\prime}-i^{\prime}|\leq Cn^{1/3+\epsilon}\). Suppose \(\tilde{Z}_{t}^{e_{n,i^{\prime\prime}}}-\tilde{Z}_{t}^{e_{n,i^{\prime}}}<1\) for all \(t\leq\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}\). 
Then the \(i^{\prime}\)th particle and the \(i^{\prime\prime}\)th particle do not coalesce, and we have \[L_{N_{n}(i),i,m}=L_{\mathbb{S}^{1},N_{n}(i),i,m}\] and \[\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}=\tau_{\mathbb{S}^{1},N_{n}(i),\{i-1,i\}}\wedge\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m-1,i+m\}}.\] Using Lemma 2.1 in the fourth line and Lemma 3.1 in the last line, we have \[\mathbb{P}\left(L_{N_{n}(i),i,m}\neq L_{\mathbb{S}^{1},N_{n}(i),i,m}\right)\] \[\qquad\leq\mathbb{P}\left(\exists t\leq\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}:\tilde{Z}_{t}^{e_{n,i^{\prime}}}=\tilde{Z}_{t}^{e_{n,i^{\prime\prime}}}-1\right)\] \[\qquad\leq\mathbb{P}\left(\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}>n^{-4/3+\epsilon}\right)+\mathbb{P}\left(\exists t\leq n^{-4/3+\epsilon}:\tilde{Z}_{t}^{e_{n,i^{\prime}}}=\tilde{Z}_{t}^{e_{n,i^{\prime\prime}}}-1\right)\] \[\qquad=\mathbb{P}\left(\tau_{n,\{i-1,i\}}\wedge\tau_{n,\{i+m-1,i+m\}}>n^{-4/3+\epsilon}\right)+\mathbb{P}\left(\sqrt{2}|W_{n^{-4/3+\epsilon}}^{0}|\geq 1-\frac{|i^{\prime\prime}-i^{\prime}|}{n}\right)\] \[\qquad\leq Cn^{-1-3\epsilon/2},\] which proves (18). The same reasoning gives the next equation, which we will use in the proof of (19): \[\mathbb{P}\left(\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}\neq\tau_{\mathbb{S}^{1},N_{n}(i),\{i-1,i\}}\wedge\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m-1,i+m\}}\right)\leq Cn^{-1-3\epsilon/2}. \tag{20}\] The proof of (19) is similar to the proof of Lemma 4.1. We define \[\underline{i}:=\inf\{j\in N_{n}(i):\tau_{\mathbb{S}^{1},N_{n}(i),\{i,j\}}\leq\tau_{\mathbb{S}^{1},N_{n}(i),\{i-1,i\}}\wedge\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m-1,i+m\}}\},\] and \[\overline{i}:=\sup\{j\in N_{n}(i):\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m,j\}}\leq\tau_{\mathbb{S}^{1},N_{n}(i),\{i-1,i\}}\wedge\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m-1,i+m\}}\}.\] Then \(L_{\mathbb{S}^{1},n,i,m}\neq L_{\mathbb{S}^{1},N_{n}(i),i,m}\) only if \(\tilde{Z}^{e_{\mathbb{S}^{1},n,\underline{i}}}\) or \(\tilde{Z}^{e_{\mathbb{S}^{1},n,\overline{i}}}\) coalesces with \(Z^{e_{\mathbb{S}^{1},n,j}}\), for some \(j\notin N_{n}(i)\), before time \(\tau_{\mathbb{S}^{1},N_{n}(i),\{i-1,i\}}\wedge\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m-1,i+m\}}\). Therefore, we have \[\mathbb{P}\left(L_{\mathbb{S}^{1},N_{n}(i),i,m}\neq L_{\mathbb{S}^{1},n,i,m}\right)\] \[\qquad\leq\mathbb{P}\left(\exists j,t:\ j\notin N_{n}(i),\ t<\tau_{\mathbb{S}^{1},N_{n}(i),\{i-1,i\}}\wedge\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m-1,i+m\}},\ \tilde{Z}_{t}^{e_{\mathbb{S}^{1},n,\underline{i}}}=Z_{t}^{e_{\mathbb{S}^{1},n,j}}\right)\] \[\qquad\qquad+\mathbb{P}\left(\exists j,t:\ j\notin N_{n}(i),\ t<\tau_{\mathbb{S}^{1},N_{n}(i),\{i-1,i\}}\wedge\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m-1,i+m\}},\ \tilde{Z}_{t}^{e_{\mathbb{S}^{1},n,\overline{i}}}=Z_{t}^{e_{\mathbb{S}^{1},n,j}}\right)\] We bound the first term on the right hand side and the same argument can be applied to the second term. 
Writing \(i_{0}=\left\lfloor i-n^{1/3+\epsilon}/2\right\rfloor\), we have \[\mathbb{P}\left(\exists j,t:\ j\notin N_{n}(i),\ t<\tau_{\mathbb{S}^{1},N_{n}(i),\{i-1,i\}}\wedge\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m-1,i+m\}},\ \tilde{Z}_{t}^{e_{\mathbb{S}^{1},n,\underline{i}}}=Z_{t}^{e_{\mathbb{S}^{1},n,j}}\right)\] \[\qquad\leq\mathbb{P}(\tau_{\mathbb{S}^{1},N_{n}(i),\{i-1,i\}}\wedge\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m-1,i+m\}}\geq n^{-4/3+\epsilon})\] \[\qquad\qquad+\mathbb{P}(\exists j,t:\ j\notin N_{n}(i),t\leq n^{-4/3+\epsilon},\tilde{Z}_{t}^{e_{\mathbb{S}^{1},n,\underline{i}}}=Z_{t}^{e_{\mathbb{S}^{1},n,j}})\] \[\qquad\leq\mathbb{P}(\tau_{\mathbb{S}^{1},N_{n}(i),\{i-1,i\}}\wedge\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m-1,i+m\}}\geq n^{-4/3+\epsilon})\] \[\qquad\qquad+\mathbb{P}(\exists j,t:\ j\notin N_{n}(i),t\leq n^{-4/3+\epsilon},\tilde{Z}_{t}^{e_{\mathbb{S}^{1},n,i_{0}}}=Z_{t}^{e_{\mathbb{S}^{1},n,j}})+\mathbb{P}\left(\underline{i}\leq i_{0}\right)\] \[\qquad\leq\mathbb{P}(\tau_{\mathbb{S}^{1},N_{n}(i),\{i-1,i\}}\wedge\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m-1,i+m\}}\geq n^{-4/3+\epsilon})\] \[\qquad\qquad+\sum_{j\notin N_{n}(i)}\mathbb{P}(\exists t\leq n^{-4/3+\epsilon}:\tilde{Z}_{t}^{e_{\mathbb{S}^{1},n,i_{0}}}=Z_{t}^{e_{\mathbb{S}^{1},n,j}})+\mathbb{P}\left(\underline{i}\leq i_{0}\right). \tag{21}\] To bound the first term on the right hand side of (21), we use (20) to get \[\mathbb{P}(\tau_{\mathbb{S}^{1},N_{n}(i),\{i-1,i\}}\wedge\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m-1,i+m\}}>n^{-4/3+\epsilon})\] \[\leq\mathbb{P}\left(\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}\neq\tau_{\mathbb{S}^{1},N_{n}(i),\{i-1,i\}}\wedge\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m-1,i+m\}}\right)\] \[\qquad\quad+\mathbb{P}(\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}\geq n^{-4/3+\epsilon})\] \[\leq\mathbb{P}\left(\tau_{N_{n}(i),\{i-1,i\}}\wedge\tau_{N_{n}(i),\{i+m-1,i+m\}}\neq\tau_{\mathbb{S}^{1},N_{n}(i),\{i-1,i\}}\wedge\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m-1,i+m\}}\right)\] \[\qquad\quad+\mathbb{P}(\tau_{n,\{i-1,i\}}\wedge\tau_{n,\{i+m-1,i+m\}}\geq n^{-4/3+\epsilon})\] \[\leq Cn^{-1-3\epsilon/2}. \tag{22}\] To bound the second term on the right hand side of (21), note that for two Brownian particles on the circle, it is more likely that they will coalesce through the smaller arc. Therefore, the same argument used to prove (12) gives \[\mathbb{P}(\exists t\leq n^{-4/3+\epsilon}:\tilde{Z}_{t}^{e_{\mathbb{S}^{1},n,i_{0}}}=Z_{t}^{e_{\mathbb{S}^{1},n,j}}) =\mathbb{P}(\exists t\leq n^{-4/3+\epsilon}:Z_{t}^{e_{\mathbb{S}^{1},n,i_{0}}}=Z_{t}^{e_{\mathbb{S}^{1},n,j}})\] \[\leq 2\mathbb{P}(\exists t\leq n^{-4/3+\epsilon}:Z_{t}^{e_{n,i_{0}}}=Z_{t}^{e_{n,j}})\] \[\leq Cn^{-2-3\epsilon/2}. \tag{23}\] To bound the last term on the right hand side of (21), again, since two Brownian particles on the circle are more likely to coalesce through the smaller arc, using (22) in the third line and the proof of (23) in the last line, we have \[\mathbb{P}(\underline{i}\leq i_{0}) \leq 2\mathbb{P}(\tau_{\mathbb{S}^{1},N_{n}(i),\{i_{0},i\}}\leq\tau_{\mathbb{S}^{1},N_{n}(i),\{i-1,i\}}\wedge\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m-1,i+m\}})\] \[\leq 2\mathbb{P}(\tau_{\mathbb{S}^{1},N_{n}(i),\{i-1,i\}}\wedge\tau_{\mathbb{S}^{1},N_{n}(i),\{i+m-1,i+m\}}\geq n^{-4/3+\epsilon})+2\mathbb{P}(\tau_{\mathbb{S}^{1},N_{n}(i),\{i_{0},i\}}\leq n^{-4/3+\epsilon})\] \[\leq Cn^{-1-3\epsilon/2}+2\mathbb{P}(\tau_{N_{n}(i),\{i_{0},i\}}\leq n^{-4/3+\epsilon})\] \[\leq Cn^{-1-3\epsilon/2}. 
\tag{24}\] Then equation (19) follows from (21), (22), (23) and (24). We now give the proof for Theorem 1.1 in the circular case. Proof.: By Lemmas 4.1 and 4.2, we have \[\mathbb{P}\left(\sum_{i=\lfloor n^{1/3+\epsilon}\rfloor}^{\lceil n-n^{1/3+\epsilon}\rceil}L_{\mathbb{S}^{1},n,i,m}\neq\sum_{i=\lfloor n^{1/3+\epsilon}\rfloor}^{\lceil n-n^{1/3+\epsilon}\rceil}L_{n,i,m}\right)\leq Cn^{-3\epsilon/2}.\] By Remark 4.1, it suffices to show that \[n\left(\sum_{i=1}^{\lfloor n^{1/3+\epsilon}\rfloor-1}L_{\mathbb{S}^{1},n,i,m}+\sum_{i=\lceil n-n^{1/3+\epsilon}\rceil+1}^{n}L_{\mathbb{S}^{1},n,i,m}\right)\stackrel{{\mathbb{P}}}{{\longrightarrow}}0.\] Since \(L_{\mathbb{S}^{1},n,i,m}\) has the same distribution for all \(1\leq i\leq n\), by Lemmas 4.1, 4.2 and Proposition 1.1, there exist random variables \(X_{n,1},\ldots,X_{n,\lfloor n^{1/3+\epsilon}\rfloor-1},X_{n,\lceil n-n^{1/3+\epsilon}\rceil+1},\ldots,X_{n,n}\) such that \(\mathbb{P}(L_{\mathbb{S}^{1},n,i,m}\neq X_{n,i})\leq Cn^{-1-3\epsilon/2}\) and \(\mathbb{E}[X_{n,i}]=1/n^{2}\) for all \(i\). Then we have \[\mathbb{E}\left[n\left(\sum_{i=1}^{\lfloor n^{1/3+\epsilon}\rfloor-1}X_{n,i}+\sum_{i=\lceil n-n^{1/3+\epsilon}\rceil+1}^{n}X_{n,i}\right)\right]=2n^{-2/3+\epsilon},\] which implies that \[n\left(\sum_{i=1}^{\lfloor n^{1/3+\epsilon}\rfloor-1}X_{n,i}+\sum_{i=\lceil n-n^{1/3+\epsilon}\rceil+1}^{n}X_{n,i}\right)\stackrel{{\mathbb{P}}}{{\longrightarrow}}0.\] Also, \[\mathbb{P}\left(\sum_{i=1}^{\lfloor n^{1/3+\epsilon}\rfloor-1}L_{\mathbb{S}^{1},n,i,m}+\sum_{i=\lceil n-n^{1/3+\epsilon}\rceil+1}^{n}L_{\mathbb{S}^{1},n,i,m}\neq\sum_{i=1}^{\lfloor n^{1/3+\epsilon}\rfloor-1}X_{n,i}+\sum_{i=\lceil n-n^{1/3+\epsilon}\rceil+1}^{n}X_{n,i}\right)\leq Cn^{-2/3-\epsilon/2}.\] The proof is completed. ## Acknowledgments The author thanks Professor Jason Schweinsberg for his patient guidance and helpful advice during the planning and development of this article.
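As an illustrative complement (ours, not part of the paper), Theorem 1.1 can be probed numerically with a crude Euler discretization of coalescing Brownian motion on the line. All numerical choices below (the value of \(n\), the step size, the horizon) are arbitrary, and the discretization can miss crossings, so this is only a rough sanity check that \(nL_{n,m}\) concentrates near 1.

```python
import numpy as np

# Illustrative Euler-scheme check of Theorem 1.1 (not from the paper): simulate
# coalescing Brownian motion on the line started from i/n, i = 1, ..., n, and
# accumulate L_{n,m} = total time during which some block of the partition has
# exactly m particles (blocks are intervals since paths preserve order).

rng = np.random.default_rng(2)
n, m = 200, 2
dt = 1e-3 / n**2           # time step, small relative to the 1/n^2 time scale
T = 200.0 / n**2           # horizon capturing the bulk of L_{n,m}

pos = np.arange(1, n + 1) / n       # one representative position per cluster
size = np.ones(n, dtype=int)        # cluster sizes (clusters stay ordered)
L, t = 0.0, 0.0
while t < T and len(pos) > 1:
    L += dt * np.count_nonzero(size == m)
    pos = pos + rng.normal(0.0, np.sqrt(dt), size=len(pos))
    while True:                     # merge neighbors whose paths crossed
        cross = np.nonzero(pos[1:] <= pos[:-1])[0]
        if len(cross) == 0:
            break
        j = cross[0]
        size[j] += size[j + 1]
        size = np.delete(size, j + 1)
        pos = np.delete(pos, j + 1)
    t += dt
print("n * L_{n,m} =", n * L)       # Theorem 1.1 predicts a value near 1
```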
2306.11825
DiNADO: Norm-Disentangled Neurally-Decomposed Oracles for Controlling Language Models
NeurAlly-Decomposed Oracle (NADO) is a powerful approach for controllable generation with large language models. It is designed to avoid catastrophic forgetting while achieving guaranteed convergence to an entropy-maximized closed-form optimal solution with reasonable modeling capacity. Despite the success, several challenges arise when applying NADO to a wide range of scenarios. Vanilla NADO suffers from gradient vanishing for low-probability control signals and is highly reliant on a regularization to satisfy the stochastic version of the Bellman equation. In addition, the vanilla implementation of NADO introduces a few additional transformer layers, suffering from a limited capacity especially compared to other finetune-based model adaptation methods like LoRA. In this paper, we propose an improved version of the NADO algorithm, namely DiNADO (norm-Disentangled NeurAlly-Decomposed Oracles), which improves the performance of the NADO algorithm through disentangling the step-wise global norm over the approximated oracle $R$-value for all potential next-tokens, allowing DiNADO to be combined with finetuning methods like LoRA. We discuss in depth how DiNADO achieves better capacity, stability and flexibility with both empirical and theoretical results. Experiments on formality control in machine translation and the lexically constrained generation task CommonGen demonstrate the significance of the improvements.
Sidi Lu, Wenbo Zhao, Chenyang Tao, Arpit Gupta, Shanchan Wu, Tagyoung Chung, Nanyun Peng
2023-06-20T18:36:52Z
http://arxiv.org/abs/2306.11825v2
# On Compositionality and Improved Training of NADO ###### Abstract NeurAlly-Decomposed Oracle (NADO) is a powerful approach for controllable generation with large language models. Unlike finetuning/prompt tuning, it has the potential to avoid catastrophic forgetting of the large base model and achieve guaranteed convergence to an entropy-maximized closed-form solution without significantly limiting the model capacity. Despite its success, several challenges arise when applying NADO to more complex scenarios. First, the best practice of using NADO for the composition of multiple control signals is under-explored. Second, vanilla NADO suffers from gradient vanishing for low-probability control signals and is highly reliant on the forward-consistency regularization. In this paper, we study the aforementioned challenges of using NADO theoretically and empirically. We show we can achieve guaranteed compositional generalization of NADO with a certain practice, and propose a novel alternative parameterization of NADO to perfectly guarantee the forward-consistency. We evaluate the improved training of NADO, i.e. NADO++, on CommonGen. Results show that NADO++ improves the effectiveness of the algorithm on multiple aspects. ## 1 Introduction Large pretrained generative transformers (Radford et al., 2019; Brown et al., 2020; Raffel et al., 2020) have achieved remarkable success in a wide range of natural language generation tasks, such as story generation, text summarization, and question answering. Such models benefit from their vast amount of training data, allowing them to learn powerful distributions that contain rich information about the underlying logic of human languages. One typical way to adapt such models for specific application scenarios is through fine-tuning. Fine-tuning refers to a second-stage training of pre-trained models, usually on a smaller, task-specific dataset that defines the target domain. However, there are a few problems associated with fine-tuning: 1) The computational cost of fine-tuning grows with the number of parameters of the model it uses. In that sense, fine-tuning some extremely large models (provided as services rather than open-sourced checkpoints) is hardly affordable for the majority of the community. 2) Fine-tuning on smaller datasets risks causing the catastrophic forgetting problem. A pretrained model can overfit to an under-represented task domain, while forgetting important knowledge it once learned during the pre-training stage. This is a particular problem when the model is evaluated on reasoning capabilities like compositional generalization and/or commonsense reasoning. Prompt-tuning and in-context learning (Dong et al., 2022) are recent approaches to addressing the challenges associated with fine-tuning large pretrained models. These approaches involve adding a few tokens, which can be discrete natural language tokens or continuous trainable vectors, to the input of a task in order to format it. Instead of modifying the parameters of the model, the model is typically fixed during the process, and the selection or embeddings of the added tokens are changed to maximize the probability of the model producing a specific desired output. This allows the model to adapt to unseen tasks and domains with minimal data, and can also help to avoid the catastrophic forgetting problem associated with traditional fine-tuning methods. 
However, with very limited capacity, its flexibility is also largely restricted to specific scenarios. In fact, successful in-context models like InstructGPT/ChatGPT usually require much more effort for prompt-based model adaptation than originally expected. Also, such paradigms require reading the prompt/instructive example thoroughly every time the model is executed. On one hand, this causes computational concerns when a significantly long prompt/instructive example is needed for a complex task. On the other hand, the added tokens or embeddings may not always be able to capture the nuances and complexities of a given task or domain, leading to suboptimal performance. The NADO algorithm is a unique approach that lies between fine-tuning and prompt-tuning/in-context learning. It adapts pretrained generative transformers by projecting the base distribution to the control signal space. To achieve this, NADO is usually implemented with a smaller transformer model, which helps to preserve some necessary model capacity while still avoiding direct modification of the original model. In the ideal case, NADO adapts the original distribution to the entropy-maximized optimal solution for the target control. It has shown initial success in various scenarios such as formal machine translation and lexically constrained text generation. However, there are still a few remaining questions when applying such models to solving the controlled generation problem in a compositional manner: 1) when the discrepancy between the controlled distribution and the original distribution is large, what is the best practice to train the NADO layers; 2) when can we expect compositionality of different control signals. These questions illustrate the challenges that still need to be addressed when working with the NADO algorithm to achieve optimal results in controlled generation. In this paper, we address the previous problems related to the NADO algorithm for better training of NADO and its generalization to the composition of multiple controls. Our main contributions are as follows: * We propose an improved parameterization of NADO that guarantees algorithmic consistency for the resulting (edited) distribution \(q(x)\), which improves training stability. * We demonstrate that it is possible to improve the sample/gradient estimation efficiency when training NADO by further exploiting the likelihood predictions from the base model \(p\). * We show that it is possible to achieve guaranteed compositional generalization with NADO under the assumption of _unitarily i.i.d._ constraints. We will explain this in detail in the methodology section. ## 2 Background **Controllable Generation with Autoregressive Models** There are multiple paradigms to achieve controllable generation with autoregressive models. According to Zhang et al. (2022), these paradigms can be classified into three general types: fine-tuning, refactor/retraining, and post-processing. Most previous attempts to achieve controllable generation have focused on the first two paradigms, including methods such as CTRL (Keskar et al., 2019) and prompt-based learning methods (Shin et al., 2020; Lester et al., 2021; Li and Liang, 2021). The post-processing paradigm includes methods such as constrained decoding (CD) (Anderson et al., 2017; Lu et al., 2021) and auxiliary model guided generation (AG) (Dathathri et al., 2020; Krause et al., 2021; Liu et al., 2021; Lin and Riedl, 2021; Yang and Klein, 2021).
These methods have shown some success in controlling the base model using signals like lexical constraints, but each of them has its own limitations. CD methods do not directly edit the model distribution, and they may struggle to handle sequence-level control signals that are not trivially factorizable into the token/step level. AG methods have the potential to handle sequence-level, abstract control signals, but they often require additional data or annotation. Moreover, most AG methods do not consider the distribution of the base model, causing distribution discrepancy and degraded performance in the decoding process. **NeurAlly-Decomposed Oracle (NADO)** NeurAlly-Decomposed Oracle (NADO) is a novel post-processing approach for controllable generation. Given the base distribution \(p(\mathbf{x})\) of the large pretrained model NADO controls and the target control signal function \(C(\mathbf{x})\), NADO aims to project the base distribution to the probability space with successful control, _i.e._, \(C(\mathbf{x})=1\). Formally, the target distribution \(q(\mathbf{x})\) NADO produces can be written as: \[q(x)=\begin{cases}\alpha p(\mathbf{x})&\text{if }C(\mathbf{x})=1\\ 0&\text{if }C(\mathbf{x})=0\end{cases} \tag{1}\] where \(\alpha\) is the re-normalizing factor that does not need to be calculated explicitly. NADO finds \(q(x)\) through learning the step-wise expected satisfaction ratio \(R^{C}(\mathbf{x}_{<t})\), which can be defined as: \[R^{C}(\mathbf{x}_{<t})=\mathbb{E}_{x\sim p(x|\mathbf{x}_{<t})}[C(\mathbf{x})] \tag{2}\] where \(p(x|\mathbf{x}_{<t})\) denotes the distribution of all sequences with \(\mathbf{x}_{<t}\) as the prefix. **Importance Sampling** Importance sampling tackles the problem of estimating an integral over an under-sampled distribution. The basic idea behind importance sampling is to reweight the samples generated from a different distribution (known as the _proxy_ distribution) so that they are consistent with the target distribution. This reweighting is done using the ratio of the target distribution to the proxy distribution. The formulation of importance sampling can be written as follows: \[\mathbb{E}_{\mathbf{x}\sim p}[f(\mathbf{x})]=\mathbb{E}_{\mathbf{x}\sim q}[ \frac{p(\mathbf{x})}{q(\mathbf{x})}f(\mathbf{x})] \tag{3}\] where \(p(\mathbf{x})\) is the target distribution, \(q(\mathbf{x})\) is the proxy distribution, and \(f(\mathbf{x})\) is the function whose expected value we want to estimate. In reinforcement learning, importance sampling is used to estimate the value of a policy under the target distribution by reweighting the returns generated by the proxy policy. ## 3 Methodology We discuss the challenges of applying the vanilla version of NADO in various scenarios. First, under the previous parameterization of NADO, the solution to a particular \(q(x)\) is not unique. This means that when using \(q(x)\) as a likelihood function to warm up the model, the optimization target is ill-defined. To address this issue, we propose a new parameterization that guarantees algorithmic consistency for the resulting distribution during warmup. Second, the original random sampling strategy of NADO is inefficient for training a model to tackle control signals with low satisfaction rates. To address this, we introduce importance sampling and discuss how our proxy distribution is constructed. Third, we analyze the challenges of directly using NADO in a compositional manner. To address this, we re-adjust the dependencies between different hierarchies of NADOs.
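Before proceeding, here is a minimal numerical sketch of the importance-sampling identity in Eq. (3); the toy distributions and target function below are illustrative assumptions, not part of the original method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete target p and proxy q over a 3-element space (illustrative values).
p = np.array([0.70, 0.20, 0.10])   # target distribution p(x)
q = np.array([0.30, 0.30, 0.40])   # proxy distribution q(x)
f = np.array([1.0, 5.0, 20.0])     # function whose expectation under p we want

exact = np.dot(p, f)               # E_{x~p}[f(x)], computable here in closed form

# Monte-Carlo estimate: draw from the proxy q, reweight each sample by p/q (Eq. 3).
x = rng.choice(len(q), size=100_000, p=q)
estimate = np.mean((p[x] / q[x]) * f[x])

print(exact, estimate)             # the two values should agree closely
```

The same reweighting logic carries over to sequences, where the per-sample weight becomes the ratio of sequence likelihoods under the two models.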
With some necessary approximations and assumptions, we can achieve guaranteed compositional generalization. ### Notations **General formulation** Following the notations in the original NADO paper, we use \(\mathbf{x}\in\mathcal{X}\) and \(\mathbf{y}\in\mathcal{Y}\) to denote the input and generated sequence, respectively. We assume the distributions are defined on the set of sequences of tokens in \(\Sigma\). We denote the \(i\)-th token in \(\mathbf{y}\) as \(y_{i}\) and the sequence prefix from the beginning to the \((i-1)\)-th token as \(\mathbf{y}_{<i}\). Thus, for the base AR language model, the step-wise distribution can be written as \(p(y_{i}|\mathbf{x},\mathbf{y}_{<i})\), and the joint distribution as \(p(\mathbf{y}|\mathbf{x})=\prod_{i}p(y_{i}|\mathbf{x},\mathbf{y}_{<i})\). **Formulation in NADO Hierarchies** We hereby consider potentially multiple NADO hierarchies. The sequence-level oracle in layer-\(i\) can be defined as a boolean function \(C_{i}:\mathcal{X}\times\mathcal{Y}\rightarrow\{0,1\}\). We also interchangeably use the notation \(C_{i}(\mathbf{x})\) or \(C_{i}(\mathbf{x},\mathbf{y})\). The resulting step-wise density ratio function can be written as \(R^{C}(\mathbf{x},\mathbf{y}_{<t})\) or simply \(R^{C}(\mathbf{y}_{<t})\). When we do a one-step enumeration of the next-step likelihood over the vocabulary, we also use the notation \(R^{C}_{\theta_{i}}(y_{t}|\mathbf{y}_{<t})=\{R^{C}_{\theta_{i}}(\mathbf{y}_{\leq t})\}_{\forall y_{t}\in\Sigma}\). ### Consistent parameterization of NADO In the original parameterization of NADO, \(R^{C}_{\phi}(\mathbf{x}_{<t})\) reflects the expected value of the decomposed oracle function \(C(\mathbf{x})\). For finding the optimal \(q(y_{i}|\mathbf{x},\mathbf{y}_{<i})\), it would be natural to assume that \(R^{C}(\mathbf{x}_{<t})\) is unique for a specific \(C(\mathbf{x})\). However, we have the following lemma that disproves it: **Lemma 1** (Ambiguity of the vanilla parameterization): Given the base distribution \(p(y_{i}|\mathbf{x},\mathbf{y}_{<i})\), there are infinitely many different unnormalized \(R^{C}(\mathbf{x}_{<t})\) (_i.e._, \(C(\mathbf{x}_{<t})\)) that lead to the same \(q(y_{i}|\mathbf{x},\mathbf{y}_{<i})\). **Proof.** For an arbitrary real number \(0<\tau<1\), we can construct a new \(C^{\tau}(\mathbf{x}_{<t})=\tau C(\mathbf{x}_{<t})\) and the corresponding \(R^{\tau C}(\mathbf{x}_{<t})\). We now consider the modified distribution \(q^{\tau}(y_{i}|\mathbf{x},\mathbf{y}_{<i})\) it induces. By definition, obviously: \[R^{\tau C}(\mathbf{x}_{<t})=\tau R^{C}(\mathbf{x}_{<t}) \tag{4}\] Since \[q(y_{i}|\mathbf{x},\mathbf{y}_{<i})\propto\frac{R^{C}(\mathbf{x},\mathbf{y}_{\leq i})}{R^{C}(\mathbf{x},\mathbf{y}_{\leq i-1})}p(y_{i}|\mathbf{x},\mathbf{y}_{<i}), \tag{5}\] and \[q^{\tau}(y_{i}|\mathbf{x},\mathbf{y}_{<i})\propto \frac{R^{\tau C}(\mathbf{x},\mathbf{y}_{\leq i})}{R^{\tau C}(\mathbf{x},\mathbf{y}_{\leq i-1})}p(y_{i}|\mathbf{x},\mathbf{y}_{<i}) \tag{6}\] \[= \frac{\tau R^{C}(\mathbf{x},\mathbf{y}_{\leq i})}{\tau R^{C}(\mathbf{x},\mathbf{y}_{\leq i-1})}p(y_{i}|\mathbf{x},\mathbf{y}_{<i})\] (7) \[= \frac{R^{C}(\mathbf{x},\mathbf{y}_{\leq i})}{R^{C}(\mathbf{x},\mathbf{y}_{\leq i-1})}p(y_{i}|\mathbf{x},\mathbf{y}_{<i}) \tag{8}\] This implies \(q^{\tau}(y_{i}|\mathbf{x},\mathbf{y}_{<i})=q(y_{i}|\mathbf{x},\mathbf{y}_{<i})\).
Since there are infinitely many choices of \(\tau\), this disproves the uniqueness of the original unnormalized \(R^{C}(\mathbf{x},\mathbf{y}_{\leq i})\) if we consider a certain \(q(y_{i}|\mathbf{x},\mathbf{y}_{<i})\). While this does not affect the major part of the original NADO algorithm (\(R^{C}(\mathbf{x},\mathbf{y}_{\leq i})\) is still unique given \(C\)), it leads to an inconsistent objective (and thus sub-optimal performance) for the warmup process with NADO. We hereby propose a new parameterization of NADO that tackles this problem. In the original NADO algorithm, we directly estimate the expected oracle with a parameterized model \(R^{C}_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})\). In the new parameterization, we eliminate the effect of different scalings (_i.e._, \(\tau\) in the previous formulation). Without loss of generality, by assuming that \(R^{C}(\mathbf{x},\emptyset)>0\) (otherwise it would be meaningless to control the base distribution anyway), we instead parameterize a probabilistic distribution \(r_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})\) that is in proportion to \(R^{C}(\mathbf{x},\mathbf{y}_{\leq i})\) in each step. Formally: \[\forall\mathbf{x},\mathbf{y}_{<i},y_{i}:R^{C}_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})=\beta(\mathbf{x},\mathbf{y}_{<i})r_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})\ \ s.t.\sum_{y_{i}\in\Sigma}r_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})=1.0\] For simplicity of formulation, we assume \(r_{\theta}(\mathbf{x},\emptyset)=1.0\) and model \(\beta(\mathbf{x},\emptyset)\) accordingly. With this assumption, we now show that we do not need to parameterize \(\beta\) in a sequential manner, but can instead compute it through induction, which eventually results in a more sound formulation of NADO. Consider the original regularization used in NADO to ensure the forward consistency: \[L_{reg}(\mathbf{x},\mathbf{y},R^{C}_{\theta})=f_{KL}\left(\sum_{y_{i}}R^{C}_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})p(y_{i}|\mathbf{x},\mathbf{y}_{<i}),R^{C}_{\theta}(\mathbf{x},\mathbf{y}_{\leq i-1})\right),\] which is perfectly satisfied if and only if: \[\sum_{y_{i}}R^{C}_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})p(y_{i}|\mathbf{x},\mathbf{y}_{<i})=R^{C}_{\theta}(\mathbf{x},\mathbf{y}_{\leq i-1})\] Substituting our new parameterization, we have: \[\beta(\mathbf{x},\mathbf{y}_{<i})\sum_{y_{i}\in\Sigma}r_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})p(y_{i}|\mathbf{x},\mathbf{y}_{<i})=\beta(\mathbf{x},\mathbf{y}_{<i-1})r_{\theta}(\mathbf{x},\mathbf{y}_{\leq i-1})\] Hence, \[\beta(\mathbf{x},\mathbf{y}_{<i})=\beta(\mathbf{x},\mathbf{y}_{<i-1})\frac{r_{\theta}(\mathbf{x},\mathbf{y}_{\leq i-1})}{\sum_{y_{i}}r_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})p(y_{i}|\mathbf{x},\mathbf{y}_{<i})}\] At inference time, for each step, we can omit \(\beta\) and only consider the following modified distribution: \[q_{\theta}(y_{i}|\mathbf{x},\mathbf{y}_{<i})\propto p(y_{i}|\mathbf{x},\mathbf{y}_{<i})r_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})\] Any likelihood-based warmup on this composed distribution will not have an impact on \(\beta(\mathbf{x},\mathbf{y}_{<i-1})\). It is trivial to prove that, given \(p(y_{i}|\mathbf{x},\mathbf{y}_{<i})\), \(q_{\theta}(y_{i}|\mathbf{x},\mathbf{y}_{<i})\) and \(r_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})\) form a bijection.
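The following is a schematic numerical sketch of this consistent parameterization on a single toy decoding step; the base distribution, the logits standing in for the learned model \(r_{\theta}\), and the unit boundary values of \(\beta\) and \(r\) are illustrative assumptions:

```python
import numpy as np

def softmax(z):
    z = z - z.max()            # stabilized softmax
    e = np.exp(z)
    return e / e.sum()

# Toy base-model next-token distribution p(y_i | x, y_<i) over a 4-token vocabulary.
p_step = np.array([0.5, 0.3, 0.1, 0.1])

# The network outputs logits that are normalized into r_theta(x, y_<=i), so that
# sum_{y_i} r_theta = 1 by construction; any overall scale tau is removed up front.
logits = np.array([0.2, 1.5, -0.3, 0.0])      # hypothetical network outputs
r_step = softmax(logits)

# Modified step-wise distribution: q(y_i | x, y_<i) ∝ p(y_i | x, y_<i) * r_theta(...).
q_step = p_step * r_step
q_step /= q_step.sum()

# beta never needs its own parameters: it follows by induction from the
# forward-consistency condition, beta_i = beta_{i-1} * r(prefix) / sum_y r * p.
beta_prev, r_prefix = 1.0, 1.0                # toy boundary values (assumption)
beta = beta_prev * r_prefix / np.dot(r_step, p_step)
print(q_step, beta)
```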
Note that, without loss of generality, we can assume in practice \(\forall\mathbf{x},\mathbf{y}_{<i},p(y_{i}|\mathbf{x},\mathbf{y}_{<i})>0,r_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})>0,q_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})>0\) since they are composed from outputs of neural networks, which always predict finite numbers on the log-scale. ### Distributional Discrepancy Induces Difficulties for a Successful Training of NADO We now consider the variance of the gradient w.r.t. \(R_{\theta}^{C}(\mathbf{x},\mathbf{y}_{<i})\). Suppose we are sampling from the original distribution \(p\): \[\text{Var}(\nabla\mathcal{L}(C;R_{\theta};\mathbf{x},\mathbf{y}_{\leq i}))\] \[= \mathbb{E}_{\mathbf{y}\sim p(\mathbf{y}|\mathbf{x},\mathbf{y}_{\leq i})}[(\nabla\mathcal{L}(C;R_{\theta};\mathbf{x},\mathbf{y}_{\leq i}))^{2}]-\mathbb{E}_{\mathbf{y}\sim p(\mathbf{y}|\mathbf{x},\mathbf{y}_{\leq i})}[\nabla\mathcal{L}(C;R_{\theta};\mathbf{x},\mathbf{y}_{\leq i})]^{2}\] \[> \mathbb{E}_{\mathbf{y}\sim p(\mathbf{y}|\mathbf{x},\mathbf{y}_{\leq i})}\left[\left(\frac{(R_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})-C(\mathbf{x},\mathbf{y}))}{R_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})}\right)^{2}\right]-\mathbb{E}_{\mathbf{y}\sim p(\mathbf{y}|\mathbf{x},\mathbf{y}_{\leq i})}\left[\frac{(R_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})-C(\mathbf{x},\mathbf{y}))}{R_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})(R_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})-1)}\right]^{2}\] \[= \mathbb{E}_{\mathbf{y}\sim p(\mathbf{y}|\mathbf{x},\mathbf{y}_{\leq i})}\left[\left(1-\frac{C(\mathbf{x},\mathbf{y})}{R_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})}\right)^{2}\right]-\mathbb{E}_{\mathbf{y}\sim p(\mathbf{y}|\mathbf{x},\mathbf{y}_{\leq i})}\left[\frac{(1-\frac{C(\mathbf{x},\mathbf{y})}{R_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})})}{(R_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})-1)}\right]^{2}\] \[= \sum_{\mathbf{y}}\left[p(\mathbf{y}|\mathbf{x},\mathbf{y}_{\leq i})\left(1-\frac{C(\mathbf{x},\mathbf{y})}{R_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})}\right)^{2}\right]-\left[\sum_{\mathbf{y}}p(\mathbf{y}|\mathbf{x},\mathbf{y}_{\leq i})\left(\frac{1-\frac{C(\mathbf{x},\mathbf{y})}{R_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})}}{R_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})-1}\right)\right]^{2}\] For cases where, with high probability, \(C(\mathbf{x},\mathbf{y})\gg R_{\theta}(\mathbf{x},\mathbf{y}_{\leq i})\), this term in the vanilla parameterization of NADO causes instability due to vanishing/exploding gradients during training. #### Tackling Insufficient Representation of the Distribution: Likelihood Re-weighting Importance Sampling We now consider a better strategy for choosing a proxy distribution and perform importance sampling to reduce the variance of the gradient. Recall that, in practice, we can only collect a very limited number of samples, far too few for the empirical distribution built from them to express potentially significant likelihood differences. See Figure 1. Specifically, we collect a _set_ (_i.e._, the collection of **unique** elements) of decoded data as the truncation basis, and use the original distribution \(p\) to assign a normalized weight to each of the unique basis samples. In practice, this can be achieved either by running random sampling multiple times until a sufficient number of unique samples has been collected, or simply by doing a beam search to approximately select the top-\(K\) (\(K\) is the sample size limit) of \(p\) as the truncation basis.
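A minimal sketch of the likelihood re-weighting construction follows; the decoded sequences and their log-likelihoods are fabricated stand-ins used only to illustrate the truncation-basis idea:

```python
import numpy as np

# Hypothetical pool of decoded sequences with base-model log-likelihoods log p(y|x).
decoded = ["seq_a", "seq_b", "seq_a", "seq_c", "seq_b", "seq_d"]
logp = {"seq_a": -2.0, "seq_b": -4.0, "seq_c": -9.0, "seq_d": -1.5}

# Truncation basis: the SET of unique samples (duplicates from sampling are merged).
basis = sorted(set(decoded))

# Likelihood re-weighting: each unique basis element is weighted by p(y|x) and the
# weights are renormalized; this defines the truncated proxy distribution.
w = np.array([np.exp(logp[y]) for y in basis])
proxy = w / w.sum()

# For contrast, the naive empirical distribution weights sequences by sample count
# and cannot express the large likelihood gap between, e.g., seq_c and seq_d.
empirical = np.array([decoded.count(y) for y in basis], dtype=float)
empirical /= empirical.sum()
print(dict(zip(basis, np.round(proxy, 4))))
print(dict(zip(basis, np.round(empirical, 4))))
```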
It is trivial to prove that this likelihood re-weighted truncation minimizes the \(f_{KL}\) between the truncated distribution and the original distribution. ### Compositional Generalization of Different Constraints In this subsection we consider using NADO to tackle the composition of a collection of constraints \(\{C_{i}(\mathbf{x},\mathbf{y})\}\). One straightforward approach is to directly construct a new constraint, denoted as \(C\), which is equivalent to taking the logical conjunction (or multiplication of values) of all \(C_{i}\). We can then train a single-layered NADO model to handle this composite constraint. This approach is intuitive and avoids the introduction of new assumptions and theoretical analyses. However, its major drawback lies in the implicit assumption of independent and identically distributed (_i.i.d._) constraints. In other words, the combination of constraints during the testing phase must exhibit high consistency with the combination of constraints during the training phase. Our primary focus is instead on individually training a NADO layer for each constraint \(C_{i}\) in the composition and effectively cascading them to form a structure that ensures a certain level of generalization. #### _Unitarily i.i.d._ Constraints and Theoretical Guarantee for Statistically-Independent Constraints We begin by relaxing the stringent assumption of independent and identically distributed (i.i.d.) constraints, which imposes significant limitations. Instead, we consider a relatively relaxed assumption that we refer to as _unitarily i.i.d._ Given a combination of constraints, we no longer require the constraint combination during the test phase to be fully _i.i.d._ with respect to the constraint combination during the training phase. However, we do expect that within each composition, the distribution of each **individual** constraint maintains a degree of statistical _i.i.d._ characteristics as observed during the training phase. We can interpret the impact of each constraint \(C_{i}\) on the data distribution as a mapping performed by a single-layered NADO module, which maps the constraint to a corrective term for the original distribution, denoted as \(R^{C_{i}}(\mathbf{x},\mathbf{y}_{<t})\). If each constraint within the composition is statistically independent, we can train an individual density ratio function \(R\) for each constraint using the base distribution \(p\), until we can bound the generalization error of \(R\) under the assumption of _unitarily i.i.d._ by \(\delta\) on the log-scale. Then, during the test phase, we can directly multiply the corresponding corrective terms induced by the constraints \(C_{i}\) with the base distribution \(p\). Mathematically, this can be expressed as: \[q(\mathbf{y}_{<t}|\mathbf{x})=p(\mathbf{y}_{<t}|\mathbf{x})\cdot\prod_{i=1}^{N}R^{C_{i}}(\mathbf{x},\mathbf{y}_{<t})\] where \(q(\mathbf{y}_{<t}|\mathbf{x})\) denotes the distribution of the partial sequence \(\mathbf{y}_{<t}\) given input \(\mathbf{x}\) during the test phase, and \(N\) represents the number of constraints in the composition. It is trivial to bound the error of the likelihood from the resulting composed distribution \(q\) by \(N\cdot\delta\) on the log-scale. This bound is effective for compositionally varied test samples. Figure 1: Illustration of the original example distribution, the truncated distribution using the likelihood re-weighting trick, and a direct empirical approximation of the distribution using the same number of random samples.
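Below is a minimal sketch of this multiplicative composition on a single decoding step, assuming statistically independent constraints; the base distribution and the two per-constraint corrective terms are illustrative numbers only:

```python
import numpy as np

# Toy next-token distribution from the base model, plus step-wise corrective terms
# R^{C_1} and R^{C_2} from two independently trained NADO layers (made-up values).
p_step = np.array([0.40, 0.30, 0.20, 0.10])
R_c1   = np.array([0.90, 0.10, 0.80, 0.50])   # expected satisfaction of constraint C_1
R_c2   = np.array([0.20, 0.95, 0.60, 0.40])   # expected satisfaction of constraint C_2

# Composed distribution: q ∝ p * prod_i R^{C_i}, renormalized at each step.
q_step = p_step * R_c1 * R_c2
q_step /= q_step.sum()
print(q_step)   # mass shifts toward tokens plausible under both constraints
```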
**Work-around for Weakly-Dependent Constraints** In datasets like CommonGen, although we still assume and aim for compositional generalization, it is important to note that for each set of constraints, the constraints themselves may not be completely statistically independent. Therefore, the aforementioned bound may not hold directly without adjustments. However, considering that in practice we often use the embedding outputs from the previous NADO layer as inputs for the current layer, we can train each NADO layer to adapt under the following three conditions: * To accept modified embeddings from the previous NADO layer as input and correctly contextualize them * To provide an effective embedding for the subsequent NADO layer's output * To ensure that the error across the entire embedding distribution can be bounded by \(\delta\) By satisfying these conditions, we can still roughly maintain the aforementioned bound. Although this analysis may not be entirely rigorous or comprehensive, given the highly data-specific nature of this problem, extensive assumptions and analyses would naturally be intractable. Nonetheless, this approach provides a significant improvement in terms of generalization guarantees compared to the naive solution of directly multiplying all constraints together. ## 4 Experiments We evaluate NADO++ on the supervised Lexically Constrained Generation (LCG) task using the CommonGen dataset (Lin et al., 2020). CommonGen is designed to evaluate the commonsense reasoning ability of neural text generation models, as well as to examine their compositional generalization ability. The training set consists of 32,651 unique key concepts, which serve as constraints, and a total of 67,389 annotated description sequences. Additionally, a validation set containing 993 concepts and 4,018 description sequences is provided. To ensure a comprehensive evaluation, the dataset maintains an open leaderboard for benchmarking different approaches on a withheld test set. We closely followed the original paper's data configurations to ensure consistency with prior work. We consider the problem as a composition of constraints under two setups: 1) _static_: lexical + commonsensical; 2) _dynamic_: the composition of each _concept_ as an independent constraint. For better comparison, we also report the results under a _limited_ setup, where we follow the data setups in vanilla NADO and only concern the algorithmic part of the improvements. See Figure 2. Figure 2: Illustration of the two variants of NADO++ models for CommonGen. In the following subsections, we introduce the motivations and technical details for each variant of NADO++. **NADO++-Static: Using an Additional NADO Hierarchy to Remedy Uncommonsensical Generations** The original version of NADO treats CommonGen as a lexically constrained generation problem; however, this perspective is not entirely accurate. Given a set of concepts, for a model that is not finetuned from a sufficiently large checkpoint, ensuring that all lexical conditions are satisfied is insufficient to guarantee that the generated sentences are fully commonsensical. Therefore, we introduce an additional NADO hierarchy to strengthen or, in other words, remedy the flaws of the lexical NADO. To some extent, the commonsensicality of a sentence and the presence of a specific set of lexical constraints are independent of each other.
However, considering that we only involve a maximum of two layers in the NADO++ cascaded structure, the aforementioned bound accurately describes the generalization ability between these two layers but does not significantly assist in evaluating the compositional generalization capabilities that CommonGen aims to test. To provide the \(C(\mathbf{x},\mathbf{y})\) for the commonsense layer, we use a semantic role label extractor (the AllenNLP (Gardner et al., 2018) implementation of Shi and Lin (2019)) to parse the given sentence into a series of actions. By using the training set as the positive data and using random replacement of the central verb in the extracted SRLs to construct negative data, we then finetune another T5 model to serve as an approximation of the golden commonsense predictor \(C_{\phi}(\mathbf{x},\mathbf{y})\). During training, we use a 0/1-quantized version of \(C_{\phi}(\mathbf{x},\mathbf{y})\) to serve as commonsense judgements. See Figure 3. For each generated sequence \(\mathbf{y}\), the soft-conjunction (_i.e._, multiplication/AND) of the commonsense prediction on all its extracted SRLs is taken as the approximated \(C(\mathbf{x},\mathbf{y})\). Figure 3: Illustration of the creation of the approximated commonsense \(C_{\phi}(\mathbf{x},\mathbf{y})\) model. **NADO++-Dynamic: A Model that Scales up in Complexity with the Task** Recall that in CommonGen, the difficulty of the task actually varies across instances, largely influenced by the number of constraint words involved. While there is a limit to the number of constraints provided during the test phase, attempting to address a task with varying levels of complexity using a fixed-size model is inherently less practical. This observation forms the basis of NADO++-Dynamic. In this approach, our base model becomes an unsupervised model responsible for learning the overall distribution of commonsensical sequences without any input concepts provided. Each NADO layer aims to address the control of a specific lexical constraint, building upon the weak contextualization introduced by the previous layer's modified distribution. **Ablation Study: Has NADO++-Dynamic captured the weakly-dependent nature of constraints?** While we implicitly injected the weak dependency between layers in a typical NADO structure by passing the hidden output of each hierarchy in the cascading structure, it is unclear whether this practice is sufficient for an effective learning of the mutual correlation of constraints. We add two degraded versions of the model as an ablation study on this matter. The quantitative results are shown in Table 1. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & BLEU & CIDEr & Coverage \\ \hline T5-Large Finetune & 32.81 & 16.4 & 92.3\% \\ \hline NADO & 34.12 & 17.4 & 96.7\% \\ \hline NADO++-Limited & 36.71 & 17.58 & 97.1\% \\ NADO++-Static & 38.42 & 17.59 & 96.7\% \\ NADO++-Dynamic (full T5-large NADO layers) & 39.91 & 17.71 & 98.2\% \\ \hline Naive Bayes & 16.4 & 9.81 & 45.7\% \\ Naive Bayes w/ Joint Prob. Aug. & 29.98 & 15.9 & 75.8\% \\ \hline \hline \end{tabular} \end{table} Table 1: Results on CommonGen ## 5 Conclusion This paper presents an improved training of the NADO algorithm, namely NADO++, and discusses associated theoretical insights. In addition, we discuss the achievable guarantees when using NADO to handle the composition of a set of constraints. The improved training involves a novel parameterization and a better sampling/gradient estimation strategy.
We demonstrate that NADO++ in general resolves the aforementioned robustness challenges of NADO, and is able to achieve promising improvements on CommonGen.
2304.00189
Investigation of giant dipole resonance in Mo isotopes within TDHF theory
The isovector giant dipole resonance (IVGDR) in the chain of even-even Mo isotopes is investigated within the time-dependent Hartree-Fock (TDHF) theory using the Skyrme force SLy6. The GDR strengths calculated in $^{92-108}\text{Mo}$ are presented, and compared with the available experimental data. An overall agreement between them is obtained. Moreover, the dipole strength in $^{102-108}\text{Mo}$ is predicted. The shape phase transition from spherical to oblate as well as shape coexistence (A $\sim$ 100) in Mo isotopes are also investigated in this work. In addition, the correlation between the deformation splitting $\Delta E$ and the quadrupole deformation parameter $\beta_{2}$ is studied. The results confirm that $\Delta E$ is proportional to the deformation of the nucleus. We also discuss the dependence of the GDR strength on some nuclear properties of Skyrme forces, such as the asymmetry energy $a_{s}$. We find that $a_{s} = 32$ MeV, corresponding to SLy6, is quite consistent with the experimental data.
A. Ait Ben Mennana, M. Oulne
2023-04-01T00:58:09Z
http://arxiv.org/abs/2304.00189v1
# Investigation of giant dipole resonance in Mo isotopes within TDHF theory ###### Abstract The isovector giant dipole resonance (IVGDR) in the chain of even-even Mo isotopes is investigated within the time-dependent Hartree-Fock (TDHF) theory using the Skyrme force SLy6. The GDR strengths calculated in \({}^{92-108}\)Mo are presented, and compared with the available experimental data. An overall agreement between them is obtained. Moreover, the dipole strength in \({}^{102-108}\)Mo is predicted. The shape phase transition from spherical to oblate as well as shape coexistence (A \(\sim\) 100) in Mo isotopes are also investigated in this work. In addition, the correlation between the deformation splitting \(\Delta E\) and the quadrupole deformation parameter \(\beta_{2}\) is studied. The results confirm that \(\Delta E\) is proportional to the deformation of the nucleus. We also discuss the dependence of the GDR strength on some nuclear properties of Skyrme forces, such as the asymmetry energy \(a_{s}\). We find that \(a_{s}=32\) MeV, corresponding to SLy6, is quite consistent with the experimental data. ## 1 Introduction Collective excitations in quantum many-body systems such as atomic nuclei are a common phenomenon. An example of these excitations is nuclear giant resonances (GRs), which are considered as a collective motion of many, if not all, particles in the nucleus [1]. Among all the possible modes of collective excitation, the isovector giant dipole resonance (IVGDR) is the first collective excitation observed in nuclei [2]. It has been explained as a collective vibration mode of neutrons oscillating against protons in the nucleus [3]. The GDR is the most pronounced characteristic of the excitation spectrum of all nuclei (except deuterons) in the nuclide chart, giving crucial clues for understanding nuclear structure and collective dynamics. It has been widely studied in spherical, transitional and deformed nuclei with many experimental investigations [4; 5; 6; 7; 8; 9] and theoretical methods [10; 11; 12; 13; 14]. The evolution of the GDR properties (centroid energy E, shape, width \(\Gamma\)...) as a function of the mass number A of nuclei has been extensively studied in many experimental and theoretical works (see, for example, Refs. [6; 15; 11; 16]). The centroid energy E, being inversely proportional to \(A^{1/3}\), is approximated as \(E\simeq 80A^{-1/3}\) MeV for spherical nuclei [17]. Therefore, the centroid energy of the GDR can provide direct information about the nuclear size. On the other hand, the width \(\Gamma\) represents an important characteristic of any giant resonance, as it provides valuable information about the excitation and decay of the giant resonance. Based on models, it is often expressed as the sum of three terms [1]: \(\Gamma=\Delta\Gamma+\Gamma^{\uparrow}+\Gamma^{\downarrow}\), where \(\Delta\Gamma\), called Landau damping, arises from the fragmentation of the initial dipole excitation into one-particle-one-hole (1p-1h) states, \(\Gamma^{\uparrow}\), called the escape width, results from the coupling of the correlated 1p-1h state to the continuum, and \(\Gamma^{\downarrow}\) represents the spreading width arising from the coupling with 2p-2h, 3p-3h,..., np-nh configurations. The correlation between the width \(\Gamma\) of the GDR and the deformation parameter \(\beta_{2}\) can be used as a direct experimental probe to measure the nuclear deformation at finite temperature and angular momentum over the entire mass region [18; 19].
The GDR strength can provide an idea about the shape of the nucleus. It has a single peak for heavier spherical nuclei, while it is more fragmented in light ones due to configuration splitting [20; 1]. For axially symmetric deformed nuclei, the GDR strength splits into two components, one corresponding to an oscillation along the symmetry axis (the K = 0 mode) and one to an oscillation perpendicular to it (the \(|K|=1\) mode) [21; 1; 22]. The strength ratio of the two components provides the sense of the deformation: for a prolate nucleus, 2:1 in favor of the high-energy resonance component, and 1:2 for an oblate nucleus. Deformed nuclei are of special interest for the GDR because the statically deformed ground state gives rise to a large splitting of the GDR in these nuclei. This correlation between nuclear deformation and splitting was first predicted theoretically by Okamoto [23] and Danos [24], and then detected experimentally by Fuller and Weiss [25]. The GDR in deformed nuclei has been studied extensively using different microscopic theoretical approaches based on Skyrme forces [26], such as time-dependent Hartree-Fock (TDHF) [27; 10], the Quasi-particle Random Phase Approximation (QRPA) [13], and the Separable Random-Phase-Approximation (SRPA) [28]. Experimentally, the GDR is induced in various ways such as photo-absorption [7; 29], inelastic scattering [30], and \(\gamma\)-decay [31]. The time-dependent Hartree-Fock (TDHF) theory [32] has become, in recent years, a sufficiently mature technique to be used in realistic situations describing a range of nuclear dynamics. In particular, it has been applied to giant resonances, giving satisfactory results (see, for example, Refs. [14; 10; 33]). In our previous studies [27], we have studied the GDR in Sm isotopes with the Sky3D code [34] based on the TDHF method using four Skyrme forces. All these forces gave an accurate description of the GDR in spherical and deformed nuclei, with a slight advantage for the parametrization SLy6 [35]. Many previous works have studied the GDR in Molybdenum (Mo) isotopes, where Z = 42: from the experimental point of view, see for example Refs. [15; 36; 37], and from the theoretical one, Ref. [38]. Beil et al. [15] studied the GDR in five stable even-even \({}^{92-100}\)Mo isotopes using a variable monochromatic photon beam. They observed a broadening of the GDR as the mass number A increases. The present work aims to study some properties of the GDR in even-even \({}^{92-108}\)Mo nuclei. We extended this study to the \({}^{108}\)Mo nucleus, i.e., nine Mo isotopes are considered in this work. This study is done with a fully-fledged TDHF approximation based on Skyrme effective interactions [26], without any symmetry restrictions. Due to the open-shell nature of these nuclei, one should take into account pairing and deformation properties in this study. In the static step, the ground states of these isotopes are calculated with the static Hartree-Fock (HF) [39] plus Bardeen-Cooper-Schrieffer (BCS) approach [40], where pairing is treated properly. In the dynamic step, the strength of the GDR is calculated by boosting the ground state with a dipole excitation. The structure of this paper is as follows. Section 2 gives a brief summary of the nuclear giant dipole resonance (GDR) in deformed nuclei. Section 3 describes briefly the TDHF approximation and the details of the numerical calculations. Our results and discussion are presented in Section 4. Finally, a summary is given in Section 5. ## 2 Giant dipole resonance in deformed nuclei
For the ground-state GDR, it is well known that for a deformed nucleus the resonance splits into several components. The centroid energy (frequency) of each component is inversely proportional to the radius \(R_{i}\) of the axis along which the vibration occurs [22]: \[E_{i}=\hbar\omega\sim R_{i}^{-1}\sim A^{-1/3}, \tag{1}\] since \(R\) is proportional to \(A^{1/3}\), where A is the mass number of the nucleus. This splitting has been observed experimentally [7; 8] and treated theoretically by different models [10; 11; 13]. The GDR spectra depend on the shape of the nucleus. Thus, for deformed prolate nuclei, the dipole strength splits into two components, where one is associated with oscillations of neutrons against protons along the symmetry axis, and another, at higher energy and carrying twice the strength of the first, is associated with oscillations perpendicular to the symmetry axis. The opposite situation occurs in the case of oblate nuclei. For triaxial nuclei, the GDR splits into three components with different oscillation frequencies, i.e., \(\omega_{x}\neq\omega_{y}\neq\omega_{z}\). In spherical nuclei, the GDR has a single peak, especially for medium and heavy ones, which represents a superposition of three resonances with the same frequencies \(\omega_{x}=\omega_{y}=\omega_{z}\). The energy splitting \(\Delta E\) is related to the nuclear deformation, in which the size of this splitting is proportional to the quadrupole deformation parameter \(\beta_{2}\) [23; 24]: \[\Delta E\propto\beta_{2}, \tag{2}\] where \(\beta_{2}\) is defined as [34] \[\beta_{2}=\sqrt{a_{0}^{2}+2a_{2}^{2}},\qquad\text{with}\qquad a_{m}=\frac{4\pi}{5}\frac{Q_{2m}}{AR^{2}}. \tag{3}\] where \[Q_{2m}=\int d\vec{r}\rho(\vec{r})r^{2}Y_{2m} \tag{4}\] are the quadrupole moments, and \(Y_{2m}\) are the spherical harmonics. On the other hand, there is a correlation between the GDR width \(\Gamma\) and \(\beta_{2}\), especially for medium and heavy nuclei [41]. Both are minimal for magic nuclei, and increase simultaneously as soon as the number of neutrons increases [6; 27]. For light nuclei (\(A<60\)), the nuclear deformation is not the only reason behind the GDR broadening. Many factors can affect the shape of the GDR, such as configuration and isospin splitting [20; 42]. For more details see, for example, Ref. [43]. ## 3 Theoretical framework and numerical details ### Time-dependent Hartree-Fock method The time-dependent Hartree-Fock (TDHF) is a self-consistent mean-field theory which was originally proposed by Dirac in 1930 [32]. It has been extensively applied in nuclear physics calculations like heavy-ion reactions [44; 45] and nuclear giant resonances [46; 47]. A detailed discussion of the TDHF approximation can be found in many papers (see, for example, Refs. [48; 49; 50]). Here, a brief introduction of the TDHF theory is presented as follows.
The TDHF equations are obtained from the variation of the action S defined as: \[S = \int_{t_{1}}^{t_{2}}dt\Big{(}\left\langle\Psi(t)\right|\left(i\hbar\frac{\partial}{\partial t}-\hat{H}\right)\left|\Psi(t)\right\rangle\Big{)} \tag{5}\] \[= \int_{t_{1}}^{t_{2}}dt\Big{(}i\hbar\sum_{i=1}^{N}\left\langle\psi_{i}\right|\frac{\partial}{\partial t}\left|\psi_{i}\right\rangle-E[\psi_{i}]\Big{)}\] with respect to the wave functions \(\psi_{i}^{*}\), where \(\left|\psi_{i}\right\rangle\) are the occupied single-particle states, \(t_{1}\) and \(t_{2}\) define the time interval between which the action S is stationary, and \(E=\left\langle\Psi\right|\hat{H}\left|\Psi\right\rangle\) is the total energy of the system. The variation of Eq. (5) leads to the coupled TDHF equations [51] \[i\hbar\frac{\partial\psi_{i}(t)}{\partial t}=\hat{h}[\rho(t)]\psi_{i}(t),\qquad i=1,.....,A, \tag{6}\] where \(\hat{h}\) is the single-particle Hartree-Fock Hamiltonian, which depends on the nucleon density \(\rho\), and A is the total number of nucleons. Since these equations (6) are nonlinear, they are solved _iteratively_ in time with a small time step increment \(\Delta t\). Our calculations were performed with \(\Delta t=0.2\) fm/c \(\simeq 6\times 10^{-25}\) s. Over the small intervals [t, t + \(\Delta t\)], the HF Hamiltonian is assumed to be constant. To conserve the total energy E, it is necessary to apply an algorithm that is symmetric under time reversal, and therefore to estimate the Hamiltonian at time \(t+\frac{\Delta t}{2}\) to evolve the system between times \(t\) and \(t+\Delta t\) [48] \[\left|\psi(t+\Delta t)\right\rangle\simeq e^{-i\frac{\Delta t}{\hbar}\hat{h}(t+\frac{\Delta t}{2})}\left|\psi(t)\right\rangle. \tag{7}\] More details regarding the numerical procedures for solving the TDHF equations can be found in Refs. [48; 51; 52; 50]. ### Details of Calculations In this work, all calculations were performed with the Sky3D code [34], which solves the static HF as well as the TDHF equations. These calculations are performed on a three-dimensional (3D) Cartesian coordinate grid, with no symmetry restrictions; the full Skyrme energy functional is used, including the spin-orbit and the most important time-odd terms. The Skyrme force SLy6 [35] was used in this study, as it was found to give a satisfactory description of the GDR for medium and heavy nuclei [53; 28; 14]. In order to perform a TDHF calculation, an initial state is required. This state is obtained by iteratively solving the static HF + BCS equations \[\hat{h}\psi_{i}(r)=\epsilon_{i}\psi_{i}(r),\qquad i=1,....,\Omega, \tag{8}\] where \(\hat{h}\) is the single-particle Hamiltonian, \(\epsilon_{i}\) is the single-particle energy of the state \(\psi_{i}(\mathbf{r})\), where the \(\mathbf{r}\) coordinate is chosen to represent all the spatial, spin and isospin coordinates for brevity, and \(\Omega\) denotes the size of the pairing-active space. In this work, we took the number of single-nucleon wave functions from the approximate relation [34; 54]: \[N_{q}+\frac{5}{3}N_{q}^{2/3}, \tag{9}\] where \(N_{q}\) refers to the number of protons (Z) or neutrons (N) in the nucleus under study. The static calculations are stopped when sufficient convergence is achieved, i.e., when a minimum energy is reached, which may represent only a local minimum (an isomeric state). In order to find the global minimum, we redo the calculation several times with different initial configurations. The state with the lowest energy among all these minima is the global minimum.
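Stepping back to the time evolution of Eq. (7), the following is a minimal 1-D, single-particle sketch of one propagation step, with the exponential approximated by a truncated Taylor series (a common practice); the grid, the toy potential well, and the fixed (non-self-consistent) Hamiltonian are simplifying assumptions and do not reproduce the full Skyrme mean field:

```python
import numpy as np

hbar_c, m_c2 = 197.327, 939.0                 # hbar*c in MeV fm, nucleon mass in MeV
x = np.linspace(-15.0, 15.0, 31)              # 1 fm grid spacing, as in the text
dx = x[1] - x[0]
V = -50.0 / (1.0 + np.exp(np.abs(x) - 4.0))   # toy Woods-Saxon-like well (MeV)

def apply_h(psi):
    """h psi = -(hbar^2/2m) psi'' + V psi, with a 3-point periodic Laplacian."""
    lap = (np.roll(psi, 1) - 2.0 * psi + np.roll(psi, -1)) / dx**2
    return -(hbar_c**2) / (2.0 * m_c2) * lap + V * psi

def step(psi, dt, order=6):
    """One step: psi(t+dt) ~ exp(-i dt h / hbar) psi(t), Taylor-expanded to 'order'."""
    out, term = psi.astype(complex), psi.astype(complex)
    for n in range(1, order + 1):
        term = (-1j * dt / hbar_c) * apply_h(term) / n
        out = out + term
    return out

psi = np.exp(-x**2 / 4.0).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)
for _ in range(4000):                         # nt = 4000 steps of dt = 0.2 fm/c
    psi = step(psi, 0.2)
print(np.sum(np.abs(psi)**2) * dx)            # norm stays ~1; drift gauges the error
```

Since the truncated Taylor expansion is not exactly unitary, the residual norm drift provides a simple diagnostic of the time-step error.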
We consider as the convergence criterion the average single-particle energy fluctuation given by the expression [34] \[\sqrt{\left\langle\psi\right|\hat{h}^{2}\left|\psi\right\rangle-\left\langle\psi\right|\hat{h}\left|\psi\right\rangle^{2}}, \tag{10}\] and we took \(10^{-5}\) as the convergence value for the isotopes under study in this work. We used grids with 24 \(\times\) 24 \(\times\) 24 points and a grid spacing of \(\Delta x=\Delta y=\Delta z=1\) fm in the x, y and z directions. Pairing is treated in the static calculation and frozen in the dynamic calculation, i.e., the BCS occupation numbers are kept at their initial values during the time evolution. In the dynamic calculations, the nucleus is excited by multiplying the ground-state wave functions obtained from the HF calculation by an instantaneous initial dipole boost operator applied in the same manner to each single-nucleon state, i.e., \[\psi_{i}(r)\longrightarrow\psi_{i}(r,t=0)=\mathrm{e}^{ib\hat{D}}\psi_{i}(r), \tag{11}\] where \(\psi_{i}(r)\) represents the stationary wave function before the boost, b is the boost amplitude of the studied mode, and \(\hat{D}\) is the dipole operator in our case, defined as \[\hat{D} = \frac{NZ}{A}\Big{(}\frac{1}{Z}\sum_{p=1}^{Z}\vec{r}_{p}-\frac{1}{N}\sum_{n=1}^{N}\vec{r}_{n}\Big{)} \tag{12}\] \[= \frac{NZ}{A}\Big{(}\vec{R}_{p}-\vec{R}_{n}\Big{)},\] where \(\vec{r}_{p}\) and \(\vec{r}_{n}\) are the position vectors of each proton and neutron respectively, and where \(\vec{R}_{p}\) and \(\vec{R}_{n}\) are the position vectors of the centers of mass of the protons and neutrons respectively. In order to obtain the three dipole modes (x, y, z) at the same time, we apply a diagonal excitation. For one mode (the x-mode, for example), the excitation is proportional to \(e^{ibx}\). During the propagation of the TDHF equations in time [51], the dipole moment \(D(t)=\sum_{i}\left\langle\psi_{i}(t)\right|\hat{D}\left|\psi_{i}(t)\right\rangle\) is recorded in the time domain. Finally, the Fourier transform of the signal D(t) into the frequency domain gives the spectral distribution of the dipole strength \(S_{D}(\omega)\), given by [17] \[S_{D}(\omega) = \sum_{\nu}\delta(E-E_{\nu})|\left\langle\nu\right|\hat{D}\left|0\right\rangle|^{2}. \tag{13}\] In order for the signal to vanish at the end of the simulation time, some filtering is necessary to avoid the artifacts in the spectra caused by cutting the signal at a finite time [55]. In practice, we use windowing in the time domain by damping the signal \(D(t)\) towards the final time with \(\cos(\frac{\pi t}{2T_{f}})^{n}\) [34]: \[D(t)\longrightarrow D_{fil}(t)=D(t)\cdot\cos\Big{(}\frac{\pi t}{2T_{f}}\Big{)}^{n}, \tag{14}\] where n represents the strength of the filtering and \(T_{f}\) is the final time of the simulation. In this work, we chose n = 6, which is sufficient to suppress the artifacts safely. More details can be found in Refs. [34; 56]. We kept the same number of grid points (24) and grid spacing (1 fm) as in the static calculation. We chose nt = 4000 time steps with a time step dt = 0.2 fm/c, so T\({}_{f}\) = 800 fm/c. ## 4 Results and Discussion ### Ground-state properties of \({}^{92-108}\)Mo isotopes The isotopic chain of the even-even \({}^{92-108}\)Mo nuclei under investigation lies in a region that exhibits a transition between spherical nuclei, where the neutron number N is close to the magic number 50, and deformed nuclei as N increases [7; 13; 10]. The deformation in this region is mainly due to the filling of the N = 50 shell gap.
The static calculations, described previously, were performed with the SLy6 parametrization [35]. BCS pairing is included using the volume-\(\delta\) interaction (VDI) [34], with pairing strengths \(V_{p}\) = 298.760 MeV and \(V_{n}\) = 288.523 MeV for protons and neutrons respectively. The two deformation parameters \(\beta_{2}\) and \(\gamma\) are among the most important ground-state properties, which give us an idea about the shape of the nucleus [17; 57]. They are treated as a probe to select the ground states of all nuclei under study. In Table 1, we summarize the numerical results for the couple (\(\beta_{2}\),\(\gamma\)) of the \({}^{92-108}\)Mo isotopes, including the available experimental data from Ref. [58] and the HFB calculations based on the D1S Gogny force [59] for comparison. In Fig.1, we plotted the variation of \(\beta_{2}\) as a function of the neutron number N. It should be noted that Fig.1 does not indicate the sign of the quadrupole deformation \(\beta_{2}\). From Fig.1, we can clearly see an agreement between our calculations within the HF plus BCS theory based on a Skyrme force (SLy6 in our case) and those of the HFB theory based on the D1S Gogny force [59]. In a recent publication, G. Colo et al. [60] have published a work on the ISGMR and ISGQR in \({}^{92-100}\)Mo using the QRPA approach based on Skyrme interactions. This study includes static constrained calculations with different Skyrme forces (SLy6 and others); we limit our comparison to the SLy6 case that we used in this work. They found, overall, an increase in softness, as well as the spherical minimum becoming more and more shallow as the number of neutrons N increases. In \({}^{92}\)Mo, a spherical minimum is observed, which confirms that it has an approximately spherical shape as we have predicted (\(\beta_{2}=0\)). In \({}^{94,96}\)Mo, the spherical minimum becomes shallow, especially in \({}^{96}\)Mo, which is roughly in agreement with our results (\({}^{94}\)Mo is nearly spherical and \({}^{96}\)Mo is weakly deformed). In the more neutron-rich isotopes, the potential energy curve (PEC) is very shallow. In particular, \({}^{100}\)Mo presents a kind of convex curve with a rather modest barrier at \(\beta_{2}\simeq 0\). In our calculations, we found two local minima in \({}^{100}\)Mo (one oblate and the other triaxial), as we will see later. On the other hand, the experimental results [58] are a bit different from our calculations, but we reproduce the overall trend of increasing quadrupole deformation \(\beta_{2}\) as the neutron number N increases from the magic number N = 50. We point out that one should be careful in a direct comparison between the theoretical \(\beta_{2}\) and the "experimental" one, especially for spherical nuclei, since the latter is extracted from actual experimental data (usually the B(E2) transition strength) under certain model assumptions, including that the nucleus is not spherical. According to the values of the couple (\(\beta_{2}\),\(\gamma\)), we can predict the shape of the nuclei. From Table 1, the semi-magic nucleus \({}^{92}\)Mo (N=50) has a spherical shape where \(\beta_{2}\simeq 0\). For the three nuclei \({}^{94}\)Mo, \({}^{96}\)Mo and \({}^{98}\)Mo, the value of \(\beta_{2}\) deviates slightly from 0, so they have an "approximately" spherical form.
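To make the connection to Eq. (3) concrete, the snippet below evaluates \(\beta_{2}\) for a toy axially deformed Gaussian density on a Cartesian grid; the density profile, the real-harmonic conventions for \(Y_{2m}\), and the use of the mean square radius for \(R^{2}\) are illustrative assumptions, not the Sky3D implementation itself:

```python
import numpy as np

# Toy axially deformed Gaussian density (not a real HF density), on a 3-D grid.
x = y = z = np.linspace(-12.0, 12.0, 61)      # fm
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
dv = (x[1] - x[0]) ** 3                       # volume element in fm^3

A = 100                                       # mass number (illustrative)
rho = np.exp(-(X**2 + Y**2) / (2 * 3.2**2) - Z**2 / (2 * 2.6**2))
rho *= A / (rho.sum() * dv)                   # normalize to A nucleons

# Quadrupole moments Q_2m = ∫ rho r^2 Y_2m dV, using real solid harmonics (m = 0, 2).
Y20 = np.sqrt(5.0 / (16 * np.pi)) * (2 * Z**2 - X**2 - Y**2)
Y22 = np.sqrt(15.0 / (32 * np.pi)) * (X**2 - Y**2)
Q20 = (rho * Y20).sum() * dv
Q22 = (rho * Y22).sum() * dv

# a_m = (4 pi / 5) Q_2m / (A R^2); here R^2 is taken as the mean square radius.
R2 = (rho * (X**2 + Y**2 + Z**2)).sum() * dv / A
a0 = 4 * np.pi / 5 * Q20 / (A * R2)
a2 = 4 * np.pi / 5 * Q22 / (A * R2)
beta2 = np.sqrt(a0**2 + 2 * a2**2)
print(beta2)   # ~0.2 for this oblate toy density (a0 < 0, a2 = 0 by axial symmetry)
```

The resulting value of order 0.2 is, incidentally, in the same range as the deformations reported for the heavier Mo isotopes.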
As we reach N = 58 (\({}^{100}\)Mo), there is a transition from a spherical to a permanently deformed ground state with an oblate shape (\(\gamma=60^{\circ}\)). We point out that we have not indicated the triaxiality parameter \(\gamma\) for the spherical nuclei \({}^{92-94}\)Mo because it is not defined for \(\beta_{2}=0\), and therefore its inclusion is meaningless. The neutron-rich nuclei with A\(\sim\)100 are known for their shape instability, which can lead to shape coexistence [61; 62]. For the nucleus \({}^{100}\)Mo, we found two minima in the energy of the ground state, as indicated in Table 2. The difference in energy \(\Delta E\) between the oblate and triaxial minima is very small, of the order of 0.05 MeV. This is a clear indication of shape coexistence for this nucleus. We will treat the two minima in the dynamic calculations in the next section. ### GDR in \({}^{92-108}\)Mo isotopes The response of the nucleus under study to the isovector dipole boost (Eq.11) is expressed by the evolution of the dipole moment \(D_{m}(t)\) (Eq.12) in the time domain. The three components \(D_{m}^{i}(t)\), where i = x, y, z, can inform us about the collective motions of the nucleons along the three directions x, y and z. Figures 2-3 display the time evolution of \(D_{m}^{i}(t)\) for the nuclei \({}^{92}\)Mo, \({}^{100}\)Mo and \({}^{108}\)Mo. We chose only three nuclei among the nine (one spherical, one oblate, and one triaxial). In Fig.2a, due to the spherical shape of \({}^{92}\)Mo predicted in the static calculation, the three components of the dipole moment (\(D_{m}^{x},D_{m}^{y},D_{m}^{z}\)) are identical, i.e., the oscillation frequencies along the three axes are equal, \(\omega_{x}=\omega_{y}=\omega_{z}\). The periodicity of \(D_{m}^{i}(t)\) allows us to estimate the excitation energies E\({}_{i}\) of the oscillations along each of the three axes x, y and z. For each \(D_{m}^{i}(t)\), the period of T \(\approx\) 75.5 fm/c gives an excitation energy of about E\({}_{i}\approx\) 16.42 MeV, which is very close to the experimental value \(E_{GDR}^{exp.}\) = 16.9\(\pm\)0.1 MeV [15]. In Fig.2b, the \(D_{m}^{z}(t)\) values differ from the \(D_{m}^{x}(t)\) and \(D_{m}^{y}(t)\) values, which are identical to each other, i.e., \(\omega_{x}=\omega_{y}\neq\omega_{z}\). This is expected for the axially deformed nucleus \({}^{108}\)Mo. For \(D_{m}^{z}(t)\) (resp. \(D_{m}^{x}(t)\)), the period is of the order of 71.64 fm/c (resp. 87.62 fm/c), which corresponds to a resonance energy of E\({}_{z}\approx\) 17.30 MeV (resp. E\({}_{x}\approx\) 14.15 MeV). Since E\({}_{z}\)\(>\) E\({}_{x}\) = E\({}_{y}\) and due to relation (1), the radius ordering \(R_{z}<R_{x}=R_{y}\) shows that \({}^{108}\)Mo has an oblate shape, as expected from the static calculation (\(\gamma=60^{\circ}\)). Fig.3 shows the time evolution of D(t) for the \({}^{100}\)Mo nucleus. Fig.3a presents the same case as in Fig.2b, where \({}^{100}\)Mo has an oblate shape with resonance energies along the z and x axes being respectively E\({}_{z}\approx\) 17.70 MeV and E\({}_{x}\approx\) 14.70 MeV. In Fig.3b, we notice that the oscillation frequencies \(\omega_{i}\) along the three axes are different from each other, \(\omega_{x}\neq\omega_{y}\neq\omega_{z}\), which is expected for a triaxial nucleus. The estimated resonance energies E\({}_{i}\) along the three axes x, y, z are respectively E\({}_{x}\approx\) 15.84 MeV, E\({}_{y}\approx\) 13.90 MeV, E\({}_{z}\approx\) 17.58 MeV. Fig.4 presents the calculated GDR spectra corresponding to the two minima (triaxial, oblate) together with the available experimental data.
It confirms this suggestion: the upper panel (Fig.4a) shows an oblate shape for \({}^{100}\)Mo due to oscillations along the shorter axis z (the K = 0 mode), which are characterized by higher energies than the oscillations along the longer axes x and y (the \(|K|=1\) mode) perpendicular to it, i.e., \(E_{x}=E_{y}<E_{z}\), while the lower panel (Fig.4b) shows a triaxial shape characterized by three different GDR peaks that correspond to the three different principal axes x, y and z, with resonance energies \(E_{x}\neq E_{y}\neq E_{z}\). On the other hand, we can see an overall agreement between both total strengths (oblate and triaxial) and the experimental data, with a slight advantage in the case of the triaxial shape. Thus, one can predict a static triaxial deformation in the ground state of \({}^{100}\)Mo. In a recent publication [63], the authors have found, in the light of studying the ISGMR in \({}^{100}\)Mo, that it is triaxially deformed in the ground state. \begin{table} \begin{tabular}{l c c} \hline \hline Properties & Oblate minimum & Triaxial minimum \\ \hline Binding energy (B.E) & -857.52 MeV & -857.47 MeV \\ Root mean square (r.m.s) & 4.467 fm & 4.470 fm \\ Quadrupole deformation \(\beta_{2}\) & 0.219 & 0.242 \\ Triaxiality parameter \(\gamma\) & 60\({}^{\circ}\) & 23\({}^{\circ}\) \\ \hline \hline \end{tabular} \end{table} Table 2: The ground-state properties of the two minima for the \({}^{100}\)Mo nucleus. Figure 2: (Color online) The three components of the dipole moment \(D_{m}^{i}(t)\), plotted as a function of the simulation time t (fm/c), calculated with the SLy6 parametrization for: (a) \({}^{92}\)Mo, and (b) \({}^{108}\)Mo nuclei. In order to obtain the GDR strengths S(E) (13), we calculated the Fourier
On the other hand, as the neutron number increases from spherical \({}^{92}\)Mo to \({}^{108}\)Mo deformed nucleus, we notice that the width \(\Gamma\) of GDR is related to deformation parameter \(\beta_{2}\) as indicated on the right of two panels in Fig.5. From the calculated strengths GDR, we can estimate some resonance characteristics like centroid energy \(E\), deformation splitting \(\Delta E\) and width \(\Gamma\), and the trends of these characteristics with the mass number A. In deformed nuclei, GDR strength splits into two components (k=0, k=1), which each of them has resonance energy \(E_{i},i=1,2\). An estimate of the energy centroid \(E\) is given by [64] \[E=\frac{\int_{0}^{+\infty}S(E)EdE}{\int_{0}^{+\infty}S(E)dE}, \tag{15}\] where S(E) (13) represents the GDR strength function. Table 3 shows the calculated resonance energy \(E\) with SLy6 parametrization, compared with the available experimental data from Ref.[15], and the empirical estimates based on Berman-Fultz (BF) model defined as [8, 21] \[E=(31.2A^{-1/3}+20.6A^{-1/6})MeV \tag{16}\] The Berman-Fultz model takes into account both surface and volume contributions and treats GDR as a combination of Goldhaber-Teller[3] (\(E\sim A^{-1/6}\)) and Steinwedel-Jensen[65] (\(E\sim A^{-1/3}\)) models. In Fig.6, we plotted the resonance energy as a function of the neutron number N. It can be seen that our results generally agree well with experimental data, and we reproduce the overall trend of decreasing resonance energy with increasing mass number A as shown by the sloped line obtained from the BF formula (Eq.16). The deformation splitting \(\Delta E=E_{2}-E_{1}\) represents the "distance" in MeV between two peak positions of GDR strength in deformed nuclei. It can give an idea about the magnitude of deformation. Fig.7 displays the deformation splitting \(\Delta E\) as a function of the mass number A of Mo isotopes. It should be noted that the resonance energies \(E_{1}\) and \(E_{2}\) are calculated by using the relation (15). It is seen that \(\Delta E\) is too small for \({}^{94-98}\)Mo nuclei, which confirms that these nuclei are very weakly deformed. It vanishes (\(\Delta E=0\)) for the semi-magic nucleus \({}^{92}\)Mo (N = 50), which is not surprising because for a spherical nucleus the three peaks of GDR coincide, i.e, \(E_{x}=E_{y}=E_{z}\). From \({}^{98}\)Mo to \({}^{100}\)Mo, there is a shape transition from an approximate spherical to permanently deformed nuclei, where the deformation splitting \(\Delta E\) increases suddenly to \(\Delta E\approx 3MeV\). It confirms that GDR Figure 5: (Color online) GDR strengths in the chain of \({}^{92-108}\)Mo calculated with SLy6 parametrization. Left side: our calculations (red line) are compared with experimental data (blue square) from Ref.[15]. Right side: The solid(red), dashed(blue) and dotted-dashed(green) lines denote the dipole strengths: total, along the long axis and the short axis(multiplied by 2) respectively. splitting is caused by the deformation structure of nuclei, i.e., \(\Delta E\sim\beta_{2}\)[24]. In order to study the GDR dependence on nuclear matter properties of Skyrme forces such as asymmetry energy \(a_{s}\), we chose three Skyrme interactions with \(a_{s}\) different as shown in Table 4. Fig.8 displays GDR strength for \({}^{92}\)Mo and \({}^{100}\)Mo calculated with the Skyrme forces SLy6, SkI3 and SVbas, including experimental data [15] for comparison. 
We can see that there is a slight difference in the peak position between these forces, with a shift of the GDR strength towards higher energy for SVbas, while the opposite occurs for SkI3. Among these forces, the Skyrme force SLy6 best reproduces the experimental data. This can be clearly seen in Fig. 9, showing the resonance energy as a function of the neutron number N. One can see that all the forces reproduce the overall trend of decreasing energy with increasing N (and hence mass number A), with an advantage for SLy6, whose results are closest to the experimental data. Fig. 10 displays the variation of the resonance energy, for the \({}^{92}\)Mo and \({}^{100}\)Mo strengths plotted in Fig. 8, as a function of \(a_{s}\). It is clearly seen that as the asymmetry energy \(a_{s}\) decreases from 34.80 MeV (for SkI3) to 30 MeV (for SVbas), the peak position moves towards the high-energy region. Thus, there is a clear correlation between the energy of the IVGDR and the asymmetry energy \(a_{s}\). In addition, the value \(a_{s}=32\,MeV\) is well consistent with the experimental data for studying the GDR in deformed nuclei in the framework of the TDHF method. This correlation should be taken with care, because our study is restricted to only three Skyrme forces. We point out that other nuclear matter properties of the Skyrme forces affect the GDR energy, such as the sum rule enhancement factor \(\kappa\) [66; 68] and the isovector effective mass \(m_{1}^{*}/m\), which are related by \(m_{1}^{*}/m=1/(1+\kappa)\) (for more details see Refs. [28; 27]). In Ref. [12], Colò et al. found, for a set of 20 Skyrme interactions, a strong linear correlation between the value of the mean energies and the square root of S(\(\rho\))(1+\(\kappa\)) at \(\rho_{0}=0.1\) fm\({}^{-3}\), where S(\(\rho\)) is the asymmetry energy and \(\kappa\) is the enhancement factor. But they did not find a direct correlation between \(E_{GDR}\) and S(\(\rho\))\(\equiv a_{s}\).

\begin{table} \begin{tabular}{l c} \hline \hline Forces & \(a_{s}(MeV)\) \\ \hline SVbas [66] & 30.00 \\ SLy6 [35] & 32.00 \\ SkI3 [67] & 34.80 \\ \hline \hline \end{tabular} \end{table} Table 4: The asymmetry energy \(a_{s}\) for the different Skyrme forces.

\begin{table} \begin{tabular}{c c c c} \hline \hline Nucleus & \(E_{sly6}(MeV)\) & \(E_{BF}(MeV)\)[Eq. 16] & \(E_{exp.}(MeV)\)[15] \\ \hline \({}^{92}\)Mo & 16.63 & 16.60 & 16.90 \(\pm\) 0.1 \\ \({}^{94}\)Mo & 16.35 & 16.52 & 16.40 \(\pm\) 0.1 \\ \({}^{96}\)Mo & 16.10 & 16.44 & 16.20 \(\pm\) 0.1 \\ \({}^{98}\)Mo & 15.88 & 16.36 & 15.80 \(\pm\) 0.1 \\ \({}^{100}\)Mo & 15.82 & 16.28 & 15.70 \(\pm\) 0.1 \\ \({}^{102}\)Mo & 15.68 & 16.20 & — \\ \({}^{104}\)Mo & 15.89 & 16.13 & — \\ \({}^{106}\)Mo & 15.44 & 16.06 & — \\ \({}^{108}\)Mo & 15.34 & 15.99 & — \\ \hline \hline \end{tabular} \end{table} Table 3: The calculated resonance energies (\(E_{sly6}\)) for the \({}^{92-108}\)Mo nuclei, compared with the experimental data (\(E_{exp.}\)) extracted from Ref. [15] and the estimates (\(E_{BF}\)) [Eq. 16].

Figure 6: (Color online) The IVGDR energies as a function of the neutron number N for the \({}^{92-108}\)Mo nuclei. Comparison between our calculations (TDHF), experimental data [15] and the semi-empirical BF formula [Eq. 16].

Figure 8: (Color online) The GDR strengths for the \({}^{92,100}\)Mo nuclei calculated with the SLy6 (red line), SkI3 (green line) and SVbas (blue line) parametrizations, compared with the experimental data [15].

Figure 9: (Color online) The resonance energies for the \({}^{92-108}\)Mo nuclei calculated with the Skyrme forces SLy6, SkI3 and SVbas, compared with the available experimental data [15].

## 5 Conclusion

The isovector giant dipole resonance (IVGDR) has been investigated in the \({}^{92-108}\)Mo nuclei. The investigations have been done within the framework of the TDHF method based on the Skyrme functional.
All calculations (static and dynamic) were performed with the Skyrme force SLy6. We have calculated the ground-state properties of these nuclei, especially the deformation parameters (\(\beta_{2}\), \(\gamma\)), and compared them with the available experimental data [15] and theoretical calculations [59]. We have found a shape phase transition from the spherical \({}^{92-98}\)Mo nuclei to the axially deformed \({}^{100-108}\)Mo ones. For the \({}^{100}\)Mo isotope, we have predicted a shape coexistence between an oblate (\(\gamma=60^{\circ}\)) and a triaxial (\(\gamma=24^{\circ}\)) shape.

In the dynamic calculations, we have computed some properties of the GDR in the nuclei under study. The evolution of the dipole moment D\({}_{m}\)(t) in the time domain showed that the oscillation frequencies along the three axes are the same for the spherical \({}^{92}\)Mo nucleus, \(\omega_{x}=\omega_{y}=\omega_{z}\). For the deformed nuclei \({}^{100-108}\)Mo, the oscillation frequency along the major axis (\(\omega_{z}\)) is higher than that along the minor axis (\(\omega_{x}=\omega_{y}\)). Also, from the curve of D\({}_{m}\)(t) we have calculated the resonance energies \(E_{i}\) and compared them with the available experimental results. In addition, the calculated GDR strength is compared with the available experimental data. The results showed that the TDHF method can perfectly reproduce the shape of the GDR strengths. The calculated resonance energies \(E_{i}\) agree well with the experimental data and reproduce the general trend of decrease with mass number A. Furthermore, the correlation between \(\Delta E\) (shown in Fig. 7) and the deformation parameter \(\beta_{2}\) is considered. It confirms that the deformation splitting \(\Delta E\) is proportional to the deformation of the nucleus. Finally, we discussed the effect of the asymmetry energy \(a_{s}\) on the GDR strength. Three forces, SkI3, SVbas and SLy6, are considered. The results showed that the calculation is well consistent with the experimental data for the value \(a_{s}=32\,MeV\), which corresponds to the force SLy6. This confirms that the SLy6 parametrization, characterized by an intermediate value of \(a_{s}\), is the best among these forces.

## Acknowledgments

Azzeddine Ait Ben Mennana would like to thank P. D. Stevenson from the University of Surrey (UK) and V. O. Nesterenko from the University "Dubna", Moscow (Russia) for helpful discussions. This work was supported by the High Energy Physics and Astrophysics Laboratory, Faculty of Science Semlalia University, Marrakesh, Morocco.
2306.16192
Phase diagram of the chiral SU(3) antiferromagnet on the kagome lattice
Motivated by the search for chiral spin liquids (CSL), we consider a simple model defined on the kagome lattice of interacting SU(3) spins (in the fundamental representation) including two-site and three-site permutations between nearest neighbor sites and on triangles, respectively. By combining analytical developments and various numerical techniques, namely exact Lanczos diagonalizations and tensor network variational approaches, we find a rich phase diagram with non-topological (``trivial") and topological (possibly chiral) gapped spin liquids (SLs). Trivial spin liquids include an Affleck-Kennedy-Lieb-Tasaki (AKLT)-like phase and a trimerized phase, the latter breaking the inversion center between the up and down triangles of the kagome lattice. A topological SL is stabilized in a restricted part of the phase diagram by the time-reversal symmetry breaking (complex) 3-site permutation term. Analyzing the chiral edge modes of this topological SL on long cylinders or on finite disks, we have come up with two competing scenarios, either a CSL or a double Chern-Simon SL characterized by a single or by two counter-propagating Wess-Zumino-Witten SU(3)$_1$ chiral mode(s), respectively. In the vicinity of the extended ferromagnetic region we have found a magnetic phase corresponding either to a modulated canted ferromagnet or to a uniform partially magnetized ferromagnet.
Yi Xu, Sylvain Capponi, Ji-Yao Chen, Laurens Vanderstraeten, Juraj Hasik, Andriy H. Nevidomskyy, Matthieu Mambrini, Karlo Penc, Didier Poilblanc
2023-06-28T13:16:32Z
http://arxiv.org/abs/2306.16192v1
# Phase diagram of the chiral SU(3) antiferromagnet on the kagome lattice ###### Abstract Motivated by the search for chiral spin liquids (CSL), we consider a simple model defined on the kagome lattice of interacting SU(3) spins (in the fundamental representation) including two-site and three-site permutations between nearest neighbor sites and on triangles, respectively. By combining analytical developments and various numerical techniques, namely exact Lanczos diagonalizations and tensor network variational approaches, we find a rich phase diagram with non-topological ("trivial") and topological (possibly chiral) gapped spin liquids (SLs). Trivial spin liquids include an Affleck-Kennedy-Lieb-Tasaki (AKLT)-like phase and a trimerized phase, the latter breaking the inversion center between the up and down triangles of the kagome lattice. A topological SL is stabilized in a restricted part of the phase diagram by the time-reversal symmetry breaking (complex) 3-site permutation term. Analyzing the chiral edge modes of this topological SL on long cylinders or on finite disks, we have come up with two competing scenarios, either a CSL or a double Chern-Simon SL characterized by a single or by two counter-propagating Wess-Zumino-Witten SU(3)\({}_{1}\) chiral mode(s), respectively. In the vicinity of the extended ferromagnetic region we have found a magnetic phase corresponding either to a modulated canted ferromagnet or to a uniform partially magnetized ferromagnet. + Footnote †: These authors contributed equally to this work. + Footnote †: These authors contributed equally to this work. ## I Introduction The electronic and magnetic properties of materials in Nature arise from the interactions between SU(2)-symmetric fermions - the electrons, on the background of nuclei. When the interactions are strong, such materials can be well described by the electronic Hubbard model, which has been extensively studied in the field of condensed matter physics. Recent developments venture beyond this SU(2) paradigm. In one form, through emergent SU(4) symmetry conjectured for instance in strong spin-orbit materials [1] or twisted bilayer graphene [2]. Yet a more direct approach, facilitated by the continuous progress in ultracold atom platforms, is to emulate the physics of the SU(N)-symmetric Hubbard model by loading N-color atoms onto optical lattices [3; 4]. These experimental platforms provide an ideal environment for exploring and engineering exotic phases that have yet to be discovered in real materials. Among exotic phases of matter, topological spin liquids (TSL) have been the subject of intense experimental and theoretical research activities since the seminal SU(2)-symmetric Resonating Valence Bond (RVB) proposal by Anderson and Fazekas [5; 6]. Later on, the topological nature [7] of the RVB state on non-bipartite lattices has been noticed, and turned out to be transparent within the tensor network framework [8]. However, parent Hamiltonians for such states, although local, are not physical involving complicated multi-site interactions [9]. Hence it is not clear whether (non-chiral) TSL can be hosted in simple physical models. On the experimental side, platforms of Rydberg atoms [10; 11; 12] offer beautiful implementations e.g. of the RVB physics of (hardcore) dimers. The realisation of synthetic gauge fields in cold atom platforms sets the stage to experimental observations of fractional Chern insulators or chiral spin liquids (CSL) [13; 14; 15]. 
CSL define a broad class of TSL that break (spontaneously or not) time reversal and reflection (R) symmetries while preserving their product. Simple constructions of CSLs by Wen [16] and Kalmeyer and Laughlin [17] established a more precise connection to the physics of the fractional quantum Hall (FQH) effect. Evidence for spin-1/2 CSL has been provided in (frustrated) Heisenberg models in the presence of an additional chiral 3-site interaction, e.g. on the kagome lattice [18; 19], on the triangular lattice [20; 21; 22] or on the square lattice [23; 24], and also in Hubbard insulators [25; 26]. Preliminary investigations by one of the authors (K.P.) [27] have shown that a simple time reversal symmetric Hamiltonian on the kagome lattice consisting of (real) permutations of SU(3) spins (in the fundamental **3** irreducible representation) on the triangular units admits an Affleck-Kennedy-Lieb-Tasaki (AKLT)-like [28] state as the exact ground state. Interestingly, such a state bears a particularly simple tensor network repre sentation (typical of AKLT states) involving \(\overline{\mathbf{3}}\) virtual particles fusing into singlets on every triangle. Also, the gapped AKLT phase has been shown to be stable under the addition of a nearest-neighbor Heisenberg coupling (two-site permutation of either positive or negative amplitude), limited by two critical points defined by equal amplitudes of the two and three site permutations [27]. Motivated by the recent quest for TSL and, in particular, for novel CSL we have extended KP's model by including time reversal symmetry-breaking (pure imaginary) triangular permutations providing a two-dimensional parameter manifold. A thorough investigation of the phase diagram of this model has been undertaken. Note that the same model has been studied using parton wavefunctions in a restricted domain of parameter space [29] claiming the existence of an Abelian CSL. The paper is organized as follow; first, the model and the numerical methods are described in Sec. II. Then, the overall phase diagram is depicted in Sec. III with a description of the various phases coming into play. In a second step, the ground states and low-energy excitations of interesting phases of the phase diagram - obtained by complementary numerical techniques - are analysed in Sec. IV. Interestingly, the model is shown to host a topological spin liquid phase in an extended domain of parameters. To characterize the nature of this TSL we have investigated the physics of the edge modes by different means, in particular using an accurate tensor network ansatz of the ground state. Details on analytical methods and numerical techniques such as Lanczos exact diagonalisations (ED), Matrix Product States (MPS) on cylinders and tensor network techniques using Projected Entangled Pair States (PEPS) and Projected Entangled Simplex States (PESS) are provided in Appendix A, Appendix B, Appendix C and Appendix D, respectively. ## II Model and numerical tools The local Hilbert space on each site \(i\) of the two-dimensional kagome lattice consists of the three states \(|\alpha\rangle_{i}=\{A,B,C\}\) representing the fundamental (defining) representation \(\mathbf{3}\) of SU(3) (see Appendix A.2 for details on the SU(3) group). 
The interaction between these SU(3) "spins" is described by the SU(3)-symmetric Hamiltonian as follows: \[H = J\sum_{\langle i,j\rangle}P_{ij}+K_{R}\sum_{\triangle ijk}(P_{ ijk}+{P_{ijk}}^{-1})\] \[+ iK_{I}\sum_{\triangle ijk}(P_{ijk}-P_{ijk}^{-1}),\] where the first term corresponds to two-site permutations over all nearest-neighbor bonds, and the second and third terms are the three-site permutations on all triangles, clockwise (\(P_{ijk}\)) and counterclockwise (\(P_{ijk}^{-1}\)). Written explicitly, \(P_{ij}\) and \(P_{ijk}\) are defined through their action on the local basis states, \(P_{ij}|\alpha\rangle_{i}|\beta\rangle_{j}=|\beta\rangle_{i}|\alpha\rangle_{j}\) and \(P_{ijk}|\alpha\rangle_{i}|\beta\rangle_{j}|\gamma\rangle_{k}=|\gamma\rangle_{i} |\alpha\rangle_{j}|\beta\rangle_{k}\), for a fixed orientation of the triangle \(i\), \(j\), \(k\), say clockwise. The \(K_{I}\) term is "chiral", in the sense that it breaks time reversal and reflection symmetries without breaking their product. For convenience, in the following we use the parametrization on a sphere: \[J = \cos\theta\cos\phi,\] \[K_{R} = \cos\theta\sin\phi, \tag{2}\] \[K_{I} = \sin\theta,\] where \(0\leq\phi<2\pi\) and it is sufficient to consider \(\theta\) in the interval \([0,\pi/2]\) because of the symmetry \(K_{I}\leftrightarrow-K_{I}\). Hence only the upper hemisphere of the parameter space is considered. We have addressed the phase diagram of this model by complementary numerical tools. Lanczos ED on small periodic 21-site and 27-site clusters (torus geometry) have been performed to obtain the low-energy spectrum from which useful information on the nature of the phase can be extracted. Such clusters accommodate the SU(3) singlet subspace (and hence can describe spin liquids) and all available space group symmetries are used to label the many-body eigenstates. MPS calculations with explicit SU(3) symmetry on infinitely-long cylinders with finite circumference have also been performed to compute ground state properties [30], entanglement spectra [31] and construct excitation ansatze [32]. PEPS [33] and PESS [9; 34] tensor networks have been considered and contracted on the infinite lattice using Corner Transfer Matrix Renormalization Group (CTMRG) [35; 36]. Both unconstrained and symmetric PEPS/PESS have been employed and variationally optimized e.g. using conjugate gradient optimization schemes [37; 38]. While fully unconstrained or Abelian-symmetric (employing just \(\mathrm{U}(1)\times\mathrm{U}(1)\) subgroup of SU(3) symmetry group) ansatze are less "biased" by construction, their optimization is more difficult and greatly benefits from an automatic differentiation procedure [39; 40]. In contrast, SU(3)-symmetric PEPS/PESS [41] depends on a much smaller number of variational parameters which can be optimized by numerically estimating the gradient vector [42; 24]. Symmetric PEPS/PESS encoding SU(3)-rotation symmetry and translation invariance are particularly well-suited to describe singlet phases of matter [43; 44], where the SU(3) symmetry (implemented with the QSpace library [45; 46]) also allows us to reach unusually large bond dimensions. ## III Phase diagram ### Preliminaries The model studied here exhibits a very rich phase diagram as shown in Fig. 1. The parameter space defined on a hemisphere is conveniently mapped onto a two-dimensional (2D) disk using a stereographic projection (see Appendix A.1). 
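To make the building blocks of Eq. (1) concrete, the sketch below constructs \(P_{ij}\) and \(P_{ijk}\) as explicit permutation matrices on a single triangle; the basis ordering is our own convention, chosen for illustration only.

```python
import numpy as np

# Three SU(3) "colors" per site; a triangle state |a,b,c> gets index 9a+3b+c
P_ij = np.zeros((27, 27))    # two-site permutation:   |a,b,c> -> |b,a,c>
P_ijk = np.zeros((27, 27))   # three-site permutation: |a,b,c> -> |c,a,b>
for a in range(3):
    for b in range(3):
        for c in range(3):
            src = 9*a + 3*b + c
            P_ij[9*b + 3*a + c, src] = 1.0
            P_ijk[9*c + 3*a + b, src] = 1.0

# For a real permutation matrix, P^{-1} is simply the transpose
H_R = P_ijk + P_ijk.T        # K_R term: Hermitian, time-reversal even
H_I = 1j*(P_ijk - P_ijk.T)   # K_I term: Hermitian only thanks to the factor i
assert np.allclose(H_I, H_I.conj().T)

# P_ijk has order 3, so P+P^{-1} has eigenvalues {2, -1} while
# i(P-P^{-1}) has eigenvalues {0, +sqrt(3), -sqrt(3)}
print(np.unique(np.round(np.linalg.eigvalsh(H_R), 6)))
print(np.unique(np.round(np.linalg.eigvalsh(H_I), 6)))
```

The assertion makes explicit why the factor \(i\) in front of \(K_I\) is needed: \(P_{ijk}-P_{ijk}^{-1}\) alone is anti-Hermitian.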
In this section we will describe qualitatively the phase diagram and the various phases we have encountered. More thorough descriptions, reports of the numerical results and discussions will be given in the subsequent sections.

Figure 1: Semi-quantitative phase diagram of the SU(3) chiral antiferromagnet on the kagome lattice using a stereographic projection (mapping the \((\theta,\phi)\) hemisphere onto a planar disc – see Appendix A.1). Shaded areas of different colors represent the various phases discussed in the text. The center of the plot (\(\theta=\pi/2\)) defines the “North Pole” and the outer circle (\(\theta=0\)) the “equator” parametrized by the azimuthal angle \(\phi\), with the corresponding model parameters defined in Eq. 2. The dashed (dash-dotted) line corresponds to the exact single-magnon (two-magnon) instability of the ferromagnetic phase. It is unclear whether the two-magnon instability gives the onset of the trimer phase.

To start with, it is useful to distinguish two types of phases: (i) the spin liquids (SL) whose ground states preserve both SU(3) rotational invariance (SU(3) singlets) and invariance under lattice translations - like the AKLT or chiral SL (CSL) phases - but may break point group symmetry (like the trimer phase); and (ii) the magnetically-ordered phases breaking the SU(3) rotational symmetry. The uniform fully-polarized ferromagnet is a trivial example of the latter type, but more complicated magnetic phases breaking lattice translation symmetry are also realised here. Since the unit cell of the kagome lattice contains three sites and on each site there is a \(\mathbf{3}\) spin, the Lieb-Schultz-Mattis (LSM) theorem [47], extended to higher spatial dimensions by Oshikawa [48] and Hastings [49], and its generalization to SU(N) [50] does not apply to Hamiltonian (1), so that spin liquids in the phase diagram may or may not possess topological order.

Generically, the SL ground states can be conveniently defined/represented by a simple tensor network [41; 8; 9]. On the kagome lattice, the simplest version is a Projected Entangled Simplex State (PESS) (see Fig. 2) involving a rank-3 tensor in each triangle (green sphere in Fig. 2) and a rank-3 tensor with two virtual and one physical leg (blue sphere) on each site. Each bond connecting a site to the center of a triangle carries virtual SU(3) particles. The corresponding virtual Hilbert space \(\mathcal{V}\) is therefore a direct sum of a certain number (labelled throughout by \(D^{*}\)) of SU(3) irreducible representations (irreps). On all triangles, the three virtual particles fuse into a singlet, and the trivalent tensor enforces the projection \(\mathcal{V}^{\otimes 3}\to\mathbf{1}\). On the sites, two virtual particles fuse to the physical state, and the site tensor enforces the projection \(\mathcal{V}^{\otimes 2}\to\mathbf{3}\). Here and throughout, we use the standard notation for the SU(3) irreps labeled by their dimension or, equivalently, by their Dynkin labels - see Table 1 in Appendix C for details. Besides the representation of spin liquids, the PESS formalism (associated to PEPS) turns out to be also extremely useful to investigate magnetic phases and phases breaking translation symmetry - as will be discussed later on. Details about the PESS/PEPS constructions are given in Appendix D.

Figure 2: PESS construction on the kagome lattice of corner-sharing triangles. Site-centered rank-3 tensors carrying the physical leg (in red) are represented in blue, while triangle-centered tensors represented in green fuse three virtual legs into an SU(3) singlet. In the case of SLs, the tensor network is uniform. PESS can also describe all other phases in the phase diagram with proper modifications to be discussed in the text and Appendix D.

### AKLT phase

It has been shown by one of the authors (K.P.) that the non-chiral Hamiltonian defined by \(K_{I}=0\) (i.e. \(\theta=0\)) in Eq. (1) has an exact ground state (GS) of the AKLT type in the range \(\pi/4\leq\phi\leq 3\pi/4\)[27]. It is closely related to the simplex solid of Arovas constructed in Ref. [51] which breaks no discrete lattice symmetry, but we write the singlet creation operators using fermions, not bosons. This state can be represented by the simplest possible PESS representation just involving the unique irrep (\(D^{*}=1\)) \(\mathcal{V}=\bar{\mathbf{3}}\) on all virtual bonds. Hence, on all triangles three virtual \(\mathbf{\overline{3}}\) particles fuse into a singlet, \(\mathbf{\overline{3}}^{\otimes 3}\to\mathbf{1}\) while, on the sites, two virtual \(\bar{\mathbf{3}}\) particles fuse into the physical irrep, \(\mathbf{\overline{3}}^{\otimes 2}\to\mathbf{3}\). This construction provides a unique ground state and, since the AKLT phase is likely to be gapped (see later), we deduce that the AKLT phase is a featureless (or “trivial”) SL with no topological order. Being gapped, the AKLT phase survives when a sufficiently small chiral perturbation is introduced (i.e. \(\theta<\theta_{\text{crit}}(\phi)\), see later). To describe the AKLT ground state in these new regions of the phase diagram, one has to extend the previous PESS construction by increasing the (Hilbert space) dimension \(D=\dim(\mathcal{V})\) of the virtual space. The singlet character of the GS implies that \(\mathcal{V}\) is a direct sum of SU(3) irreps. The irreps appearing in this direct sum must fulfill a strict requirement to keep the same featureless (i.e., non-topological) character of the ground state. In fact, each irrep \(\mathbf{I}\) of SU(3) is characterized by its \(\mathbb{Z}_{3}\) charge \(Q(\mathbf{I})\) defined by the number of boxes of its Young tableau modulo 3 (e.g. \(Q=2\) is equivalent to \(Q=-1\)). The AKLT PESS can only contain irreps \(\mathbf{I}\) with the same charge as \(\mathbf{\overline{3}}\), i.e. \(Q(\mathbf{I})=Q(\mathbf{\overline{3}})=2\). Of course, the optimal choice of those irreps can only be determined numerically by a variational optimization scheme. Restricting to \(D^{*}\leq 4\) irreps in the virtual space, we have found that \(\mathcal{V}=\mathbf{\overline{3}}+\mathbf{\overline{3}}+\mathbf{6}+\mathbf{\overline{15}}\) is the best choice.

### Trimer phase

The SU(3) Heisenberg antiferromagnet on the kagome lattice (i.e. the \(\phi=\theta=0\) point in our phase diagram) exhibits a trimer phase [52], i.e. a simplex solid [51] with different energy densities on the two types of up-pointing and down-pointing triangles (named _up_ and _down_ triangles hereafter). Hence, such a phase spontaneously breaks the (site) inversion center, resulting in a doubly-degenerate SU(3) singlet groundstate manifold. We have shown that this phase survives in a rather extended region of our phase diagram.
Similar to the AKLT phase, a particularly simple prescription exists for constructing PESS ansatze of the trimer phase by using different virtual spaces \(\mathcal{V}_{\text{up}}\) and \(\mathcal{V}_{\text{down}}\) for the up and down triangles, respectively. Let us start with the extreme case of decoupled SU(3) singlets on, say, up triangles. An exact PEPS representation is given by \(\mathcal{V}_{\text{up}}=\mathbf{3}\) and \(\mathcal{V}_{\text{down}}=\mathbf{1}\) and the corresponding (unique) \(C_{3\text{v}}\)-symmetric trivalent tensors on the up and down triangles encode the fusion rules \(\mathbf{3}^{\otimes 3}\to\mathbf{1}\) and \(\mathbf{1}^{\otimes 3}\to\mathbf{1}\), respectively. Also, the site tensors encode the trivial fusion \(\mathbf{3}\otimes\mathbf{1}\to\mathbf{3}\). We note that the two irreps of the up and down virtual spaces have different \(\mathbb{Z}_{3}\) charges, \(Q_{\text{up}}=1\) and \(Q_{\text{down}}=0\), respectively. This suggests that the PESS ansatz of a generic trimer state in which the up triangles are entangled can be constructed by simply adding more irreps of the same \(\mathbb{Z}_{3}\) charge in \(\mathcal{V}_{\text{up}}\) and \(\mathcal{V}_{\text{down}}\). Restricting to \(D^{*}=2\) irreps we found that \(\mathcal{V}_{\text{up}}=\mathbf{3}+\mathbf{\overline{6}}\) and \(\mathcal{V}_{\text{down}}=\mathbf{1}+\mathbf{8}\) provide the best ansatz. Note that, in such a construction, the inversion center is obviously broken and a pair of ground states is obtained by simply switching the virtual spaces between the up and down triangles. ### Topological spin liquid We have also found a gapped topological spin liquid (TSL) stabilized in a significant region of the phase diagram provided the chiral \(K_{I}\) term is present (\(\theta\neq 0\)) in Eq. (1), as shown in Fig. 1. The region of stability of this phase includes the parameters \(K_{R}/J\simeq 0.6\), \(K_{I}/J\simeq 0.45\) (i.e \(\theta\sim 0.13\pi\) and \(\phi\sim 0.17\pi\)) proposed in [29] as the optimal parameters for the stability of an Abelian CSL. Such a phase does not break any lattice symmetry (it is perfectly uniform), nor does it break the SU(3) symmetry. Moreover, it possesses topological order as defined by Wen [7; 53]. Interestingly, the existence of topological order is not _a priori_ guaranteed in SU(3) kagome spin liquids since the LSM theorem does not apply here, as already noted above. Then, the ground state degeneracy is expected to be associated to the 3 possible values of the \(\mathbb{Z}_{3}\) charge. Indeed, ED (MPS) on a torus (on an infinite cylinder) reveals 3 quasi-degenerate ground states in some extended region of the phase diagram. A faithful description of such a phase in terms of PESS is in fact possible. A necessary (but not always sufficient) condition for the existence of (at least) 3 topological sectors on the infinite cylinder is that the virtual space should contain at least one irrep within each of the 3 \(\mathbb{Z}_{3}\) charge sectors. A minimal choice would then be \(\mathcal{V}=\mathbf{1}+\mathbf{3}+\mathbf{\overline{3}}\). Below we show that increasing the virtual space to \(\mathcal{V}=\mathbf{1}+\mathbf{3}+\mathbf{\overline{3}}+\mathbf{6}+\mathbf{8}\), with additional irreps of charge \(Q=2\) and \(Q=0\), provides an optimal low-energy ansatz of the TSL. As reported below in Section IV, we find that this TSL phase exhibits chiral edge modes, as revealed by its entanglement spectrum (ES). 
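The \(\mathbb{Z}_3\) bookkeeping used for the AKLT, trimer and TSL ansatze above is easily automated. The following sketch uses the tableau rule quoted in Appendix A.2 (an irrep with Dynkin labels (x, y) has a two-row tableau with x+y and y boxes, hence x+2y boxes in total) together with the standard SU(3) dimension formula; the helper names are ours.

```python
def dim_su3(p, q):
    """Dimension of the SU(3) irrep with Dynkin labels (p, q)."""
    return (p + 1) * (q + 1) * (p + q + 2) // 2

def z3_charge(p, q):
    """Z3 charge: number of Young-tableau boxes (p + 2q) modulo 3."""
    return (p + 2 * q) % 3

# AKLT virtual space 3b+3b+6+15b: every irrep carries Q = Q(3b) = 2
for p, q in [(0, 1), (2, 0), (1, 2)]:
    print((p, q), dim_su3(p, q), z3_charge(p, q))   # dims 3, 6, 15; all Q = 2

# Trimer ansatz: V_up = 3 + 6b has Q = 1, V_down = 1 + 8 has Q = 0
print([z3_charge(p, q) for p, q in [(1, 0), (0, 2)]])   # -> [1, 1]
print([z3_charge(p, q) for p, q in [(0, 0), (1, 1)]])   # -> [0, 0]
```

The minimal TSL virtual space \(\mathcal{V}=\mathbf{1}+\mathbf{3}+\overline{\mathbf{3}}\) then corresponds to exactly one irrep in each of the three charge sectors \(Q=0,1,2\).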
The content of the edge modes is described in terms of a SU(3)\({}_{1}\) Wess-Zumino-Witten (WZW) Conformal Field Theory (CFT), and should fully characterize the nature of this Abelian TSL. The results obtained with our PESS ansatz show edge modes of both right- and left-moving chiralities (and different velocities) consistent with a SU(3)\({}_{1}\) doubled Chern-Simons (DCS) Topological Field Theory (TFT) [54; 43]. On the other hand, the ED and MPS results rather point towards a chiral spin liquid (CSL) phase exhibiting a single _chiral_ edge mode. Later in Section IV we shall discuss further the pros and cons for the CSL or for the DCS phase.

### Ferromagnetic phase

The ferromagnet is a lattice-symmetric state which spontaneously breaks the internal SU(3) symmetry, where both the two-site and the three-site permutations act trivially with eigenvalues \(+1\). Hence the ground state energy per site is simply \(e_{F}=2J+4K_{R}/3\). To determine the phase boundary of the ferromagnetic state, we calculate the dispersion of a single magnon. By this, we mean a configuration such that the flavors are \(A\) on all sites except one, which is, say, a \(B\). The Hamiltonian then hops the \(B\) flavor to neighboring sites, giving it a dispersion determined by the eigenvalues of the three-by-three matrix (15) in Appendix A.3. The matrix (15) describes three magnon branches. If the dispersion of the magnon, measured from the energy of the ferromagnetic state, is negative, the ferromagnetic phase becomes unstable and ceases to exist. Scrutinizing the dispersions, it turns out that they are functions of \(J+K_{R}\) and \(K_{I}^{2}\) only. The maximum and minimum of the dispersions are always at momentum \(\mathbf{q}=0\), where the energies are \(0\) and \(-6(J+K_{R})\pm 2\sqrt{3}K_{I}\). We get a positive dispersion for \[J+K_{R}<-|K_{I}|/\sqrt{3}. \tag{3}\] Conversely, the ferromagnetic state is unstable for \(J+K_{R}>-|K_{I}|/\sqrt{3}\). On the boundary, when \(J+K_{R}=-|K_{I}|/\sqrt{3}\), the \(0\) energy band becomes flat. Localized modes on hexagons, common to the kagome lattice, appear, but with amplitudes \(e^{ij\pi/3}\) as we go around the hexagon - the complex amplitudes reflect the chiral nature of the model. The one-magnon instability line is shown as a dashed line in Fig. 1. Interestingly, we find numerically that, within the one-magnon stability region, the ferromagnetic state is nevertheless unstable with respect to the trimer phase. We have also envisioned the possibility of a two-magnon instability by considering a two-magnon excitation of the ferromagnetic state, where two spins are flipped with different flavors. Details of the calculations are given in section A.4 of Appendix A. The two-magnon calculation aims to reveal whether the interaction between the magnons could be attractive and lead to a bound state. In that case, the boundary of the ferromagnetic phase would shrink further. This is indeed what has been found as shown in the phase diagram of Fig. 1. Numerically, we found that this two-magnon instability line marks (approximately) the instability to the trimer phase.

### SU(3)-broken phase

When crossing the one-magnon instability line in the region roughly \(\pi/4<\phi<3\pi/4\) (magenta color in Fig. 1), the ferromagnetic state becomes unstable and gives rise to a partially magnetized phase (with magnetization \(0<m<1\)), hence breaking SU(3) symmetry. Such spontaneous SU(3)-breaking may occur while preserving or (spontaneously) breaking translation symmetry.
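As a simple worked consequence of Eq. (3) and the parametrization (2), the exact end point of the ferromagnetic phase on the \(\phi=3\pi/2\) meridian follows in one line:

\[
J+K_{R}=-\cos\theta,\qquad |K_{I}|=\sin\theta
\;\;\Longrightarrow\;\;
-\cos\theta<-\frac{\sin\theta}{\sqrt{3}}
\;\Longleftrightarrow\;
\tan\theta<\sqrt{3}
\;\Longleftrightarrow\;
\theta<\frac{\pi}{3},
\]

so the fully polarized ferromagnet terminates exactly at \((\phi,\theta)=(3\pi/2,\pi/3)\), the transition point quoted in the caption of Fig. 4.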
The translation symmetry-broken phase is characterized by a spin canting occurring on three triangular sublattices separately, which requires a 9-site unit cell. The canted spins (all of the same length) on each sub-lattice form either a stripy pattern or a C3-rotation symmetric pattern, with all three sub-lattices having the same overall spin direction (see Appendix D). In our calculations, the so-called C3-2 \(\sqrt{3}\times\sqrt{3}\) C3-rotation symmetric pattern, in which the site SU(3)-color occupations are identical in each (say) up triangle - see Fig. 3 - seems to be energetically favored over the other C3-rotation symmetric or stripe patterns discussed in section D.3 of Appendix D. The second competing magnetic phase can be described by a uniform translationally-invariant (3-site) PEPS, as discussed in section D.2 of Appendix D. After energy optimization, the magnetization in such an _a priori_ unrestricted ansatz turns out to be uniform and significantly lower than in the modulated C3-2 phase. Note also that the magnetizations on the three sites within the unit cell are not canted but rather collinear. Interestingly, the numerics point towards a jump of the magnetization at the boundary to the fully polarized phase. This is indeed expected from the analytic calculation of the one-magnon instability in section A.3 of Appendix A, predicting infinite compressibility at the transition.

Figure 3: 9-site \(\sqrt{3}\times\sqrt{3}\) unit cells tiled in C3-rotation symmetric patterns. The colors indicate which one of the three SU(3) colors has the dominant weight. Note that the colors in each, say, up triangle are identical and have the same dominant weight magnitude.

## IV Ground states and low-energy excitations

A crude determination of the phase diagram in Fig. 1 and of the boundaries of the various phases was first obtained by inspecting the low-energy ED spectra on a periodic 21-site torus (see Appendix B for details). These results were then extended to a 27-site torus (for a much smaller set of parameters) and compared against the results obtained by tensor network methods (MPS, iPESS, iPEPS) to refine the phase diagram. For simplicity, we shall here focus on three different cuts - the \(\phi=0\) [mod \(\pi\)] and the \(\phi=\pi/2\) [mod \(\pi\)] meridians, together with a portion of the \(\theta=\pi/8\) latitude - which contain all the phases we have encountered.

### Energetics

The top panels in Figs. 4, 5 and 6 show comparisons of the energy per site obtained by ED, iMPS, iPESS and iPEPS, along the aforementioned vertical, horizontal and circular cuts, respectively. The ED ground state energies have all been obtained from the same periodic 21-site cluster preserving all the symmetries of the infinite lattice. In Figs. 4 and 5 the iMPS energy has been obtained on a finite-width (\(L_{y}=4\)) cylinder with SU(3) symmetry. We believe the ED and iMPS energies provide a (non-rigorous) lower bound of the energy due to strong finite-size effects. We have used translationally invariant SU(3)-symmetric iPESS calculations to target SU(3) singlet phases, like the AKLT phase (virtual spaces \(\mathcal{V}=\overline{\mathbf{3}},\overline{\mathbf{3}}+\mathbf{6}\), \(\overline{\mathbf{3}}+\overline{\mathbf{3}}+\mathbf{6}\), \(\overline{\mathbf{3}}+\overline{\mathbf{3}}+\mathbf{6}+\overline{\mathbf{15}}\)) shown in Figs. 4 and 5 and the TSL phase (\(\mathcal{V}=\mathbf{1}+\mathbf{3}+\overline{\mathbf{3}}+\mathbf{6}+\mathbf{8}\)) shown in Fig. 5. To describe the trimer phase (see Fig. 5)
one has to extend the symmetric iPESS by allowing two different virtual spaces \(\mathcal{V}_{\mathrm{up}}\) and \(\mathcal{V}_{\mathrm{down}}\) on the up and down triangles, characterized by different \(\mathbb{Z}_{3}\) charges \(Q_{\mathrm{up}}=1\) and \(Q_{\mathrm{down}}=0\), respectively. In the region of stability of the trimer phase, we indeed observe that the \([\mathcal{V}_{\mathrm{up}}=\mathbf{3}+\overline{\mathbf{6}}]:[\mathcal{V}_{\mathrm{down}}=\mathbf{1}+\mathbf{8}]\) ansatz provides a very good variational energy, comparable to e.g. that of a generic \(D=8\) iPEPS ansatz (whose properties will be discussed later on). In Fig. 4, moving away from the AKLT phase towards the (fully-polarized) ferromagnetic phase, we see that the optimization of an unconstrained (1-triangle) iPEPS provides comparable, or even better, energies than the SU(3)-symmetric ansatze, although with a modest bond dimension (\(D=7,8,9\) compared to e.g. \(D=27\) for one of the SU(3)-symmetric ansatze), suggesting the existence of an intermediate phase breaking the SU(3) symmetry. This also happens in a limited region of Fig. 5. In Fig. 6, in the vicinity of the (fully-polarized) ferromagnetic phase, the optimization of an unconstrained \(D=7\) C3-2 iPESS provides good variational energies, comparable to or even better than the previously mentioned 1-triangle iPEPS. This suggests that in the magnetic SU(3)-broken phase, the lattice translation symmetry may be spontaneously broken to form a 3-triangle \(\sqrt{3}\times\sqrt{3}\) unit cell order, corresponding to the modulation specifically encoded in the C3-2 PESS ansatz.

Figure 4: Top: Energetics of ED, iMPS, iPESS and iPEPS wave functions along the \(\phi=\pi/2\) [mod \(\pi\)] meridian where \(J=0\), i.e., along the vertical diameter of Fig. 1 as highlighted in the top right corner. The dashed line corresponds to the exact ferromagnet energy. The phase boundaries are approximate except for the canted ferro-ferro transition at \((\phi,\theta)=(3\pi/2,\pi/3)\). Middle: Uniform magnetization of the unit cell \(m\) in units of \(m_{0}\). Bottom: ED low-energy (excitation) spectrum of a periodic 21-site cluster. Open (closed) symbols show the singlet (non-singlet) eigenstates and the GS energy has been subtracted. Different symbols correspond to different momenta as shown in the legend. The black circles on the right correspond to the largest SU(3) irrep. Inset panel: ED on a 27-site torus shows the disappearance of the gapped region close to the pole.

### Order parameters

To further characterize the magnetic SU(3)-broken phase, we define the uniform magnetization of the unit cell \(m=|\sum_{i\in\text{unit cell}}\vec{\lambda}_{i}|\), where \(\lambda_{i}^{\alpha}\), \(\alpha=1,\ldots,8\), are the generators of SU(3) acting on the site \(i\), giving the fraction \(m/m_{0}\) of the magnetization \(m_{0}\) of the fully polarized ferromagnet. As shown in the middle panels of Figs. 4, 5 and 6, the SU(3)-broken phase can be determined more accurately from the onset of finite values of \(m\), which reaches its maximal value \(m_{0}=2/\sqrt{3}\) in the fully polarized ferromagnet. We have also defined the average magnitude as \(\tilde{m}=\sum_{i\in\text{unit cell}}|\vec{\lambda}_{i}|\) (not shown). We observe that \(\tilde{m}\) is different from \(m\) only for the 3-triangle iPEPS, signaling canting of the site polarizations.
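For concreteness, \(m\) can be evaluated from single-site reduced density matrices as in the sketch below; the per-unit-cell normalization is our reading of the definition (it reproduces \(m_0=2/\sqrt{3}\) for the fully polarized state), and the input density matrices are illustrative.

```python
import numpy as np

# Gell-Mann generators of SU(3)
l1 = np.array([[0,1,0],[1,0,0],[0,0,0]], dtype=complex)
l2 = np.array([[0,-1j,0],[1j,0,0],[0,0,0]])
l3 = np.diag([1.,-1.,0.]).astype(complex)
l4 = np.array([[0,0,1],[0,0,0],[1,0,0]], dtype=complex)
l5 = np.array([[0,0,-1j],[0,0,0],[1j,0,0]])
l6 = np.array([[0,0,0],[0,0,1],[0,1,0]], dtype=complex)
l7 = np.array([[0,0,0],[0,0,-1j],[0,1j,0]])
l8 = np.diag([1.,1.,-2.]).astype(complex)/np.sqrt(3)
gens = [l1, l2, l3, l4, l5, l6, l7, l8]

def uniform_m(rhos):
    """m = |sum_i <lambda_i>| per site, from single-site density matrices."""
    vecs = [np.array([np.trace(r @ g).real for g in gens]) for r in rhos]
    return np.linalg.norm(np.sum(vecs, axis=0)) / len(rhos)

rho_A = np.diag([1., 0., 0.]).astype(complex)   # site fully polarized on |A>
print(uniform_m([rho_A]*3), 2/np.sqrt(3))       # both equal m0 = 2/sqrt(3)
```

A single \(|A\rangle\)-polarized site has \(\langle\lambda_3\rangle=1\) and \(\langle\lambda_8\rangle=1/\sqrt{3}\), whose norm is exactly \(2/\sqrt{3}\), which is why aligned sites saturate \(m\) at \(m_0\).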
In contrast, the 1-triangle iPEPS shows aligned magnetizations on the 3 sites of the unit cell. Interestingly, the D=7 (1-triangle) iPEPS data point to a jump in the magnetization at the boundary to the fully polarized ferromagnet.

The product of three physical \(\mathbf{3}\)-irreps can be decomposed into a direct sum of four irreps, given by the fusion rule \(\mathbf{3}^{\otimes 3}=\mathbf{1}+\mathbf{8}+\mathbf{8}+\mathbf{10}\). Hence, for the three SU(3) spins on each triangle, one can define the projection operators onto the corresponding irreps in the direct sum (weights of irreps), \(w_{\mathbf{1},\mathbf{8},\mathbf{8},\mathbf{10}}\), which satisfy the completeness relation \(w_{\mathbf{1}}+w_{\mathbf{8}}+w_{\mathbf{8}}+w_{\mathbf{10}}=1\). As the trimer states spontaneously break the inversion symmetry and form SU(3) trimers on either up or down triangles, we define the trimer order parameter as the difference between the projections \(w_{\mathbf{1}}^{\nabla,\Delta}\) within the down and up triangles onto the **1**-irrep (weight of the **1**-irrep) to identify the trimer phase. This trimer order parameter is shown in the middle panel of Fig. 5 (right scale). Interestingly, the unconstrained \(D=8\) iPEPS calculation gives a very similar order parameter in the trimer phase to the SU(3)-symmetric PESS ansatze specially designed to describe the trimer phase. It also shows an abrupt jump at the onset of the TSL phase, while the iMPS results give a more continuous curve of the trimer order parameter when going from the trimer phase to the TSL phase.

Figure 5: Top: Energetics of ED, iMPS, iPESS and iPEPS wave functions on the \(\phi=0\) [mod \(\pi\)] meridian where \(K_{R}=0\), i.e. from the leftmost point on the equator to the rightmost point on the equator via the North Pole in Fig. 1 as highlighted in the top right corner. The iPESS ansatz for the trimer phase is indicated in the legend as \([\mathcal{V}_{\text{up}}]:[\mathcal{V}_{\text{down}}]\). Middle: Order parameters of iPEPS wave functions. The uniform magnetization \(m\) (open green squares); a non-zero value identifies the SU(3)-broken phase. The trimer phase order parameter indicated by the arrow is shown on the right scale for various ansatze. Bottom: ED low-energy (excitation) spectrum of a periodic 21-site cluster. The same symbols are used as in Fig. 4.

Figure 6: Top: Energetics of ED, iPESS and iPEPS wave functions along part of the \(\theta=\pi/8\) latitude as highlighted in the top right corner. Bottom: ED low-energy (excitation) spectrum of a periodic 21-site cluster on the same latitude, but on a larger arc from \(\phi=0\) (in the trimer phase).

### Low-energy excitations

To refine the determination of the phase diagram we have computed the low-energy spectra obtained by ED on a 21-site torus along the selected cuts, as shown in the bottom panels of Figs. 4, 5 and 6. For a restricted set of parameters, the spectrum has also been computed on a 27-site torus for better accuracy (see inset of Fig. 4). The spectrum for \((\phi,\theta)=(\pi/2,0)\), on the left of Fig. 4, clearly shows a unique ground state and a gap of order 0.3, characteristic of the AKLT phase, but the rest of the spectrum seems to come down quickly when increasing \(\theta\). We can obtain a complementary estimate of the excitation gap by the MPS excitation ansatz (see section C.6 of Appendix C), shown in Fig. 7, which confirms that the gap decreases very quickly.
The right side of Fig. 4 shows the finite gap (due to finite-size effects) of the fully polarized ferromagnetic phase for \(\theta<\pi/3\) (at \(\phi=3\pi/2\)). Around the pole, a gapped phase is visible on the 21-site cluster. However, the larger 27-site cluster reveals low-energy (non-singlet) excitations compatible with the magnetic SU(3)-broken phase discussed above. On the right-hand side of Fig. 5, the even and odd (under inversion) lowest singlets (labeled \(\Gamma A\) and \(\Gamma B\)) are the precursors of the two-fold degenerate trimerized ground state. Between the AKLT and the trimerized region, we see two new low-energy singlets (\(\Gamma E_{1a}\)) coming down, suggesting a three-fold degenerate GS typical of the CSL phase. As discussed before, the small gap seen around the pole is an artifact of the 21-site cluster, no longer present in the larger 27-site cluster. The ED data in the bottom panel of Fig. 6 are shown on a larger interval along the \(\theta=\pi/8\) latitude than the two other panels above, from \(\phi=0\) to \(\phi=\pi\). It shows the same characteristics encountered in Fig. 5 (described in the above paragraph), corresponding to the same sequence (in reverse order) of phases, i.e. the trimer, the TSL, the magnetic and the fully polarized ferromagnetic phases, from left (right) to the right (left) in Fig. 6 (5). Again the trimer (TSL) phase shows two (three) low-energy singlets at the bottom of the spectrum, and a spurious gap appears in the magnetic SU(3)-broken phase (identified by complementary means).

### Edge modes in the TSL phase

We shall now discuss further the nature of the topological spin liquid phase on the kagome lattice: our results suggest two candidates, (i) a CSL - characterized by \(\mathbb{Z}_{3}\) topological order with three sectors - and (ii) a DCS phase - characterized by \(D(\mathbb{Z}_{3})\) topological order with nine sectors. Three different routes have been followed to identify the edge modes: i) first, by studying the system on an open disk, whose low-energy spectrum should follow that of some \((1+1)d\) CFT; ii) second, we optimize iMPS on an infinite YC4 cylinder in three different topological sectors, from which the entanglement spectrum can be straightforwardly deduced [18; 31]; iii) third, using a faithful TSL representation via symmetric PESS. Note that states with chiral topological order can be faithfully represented by PEPS or PESS [55; 24; 56], where an infinite correlation length is artificially introduced to generate the chiral features in the entanglement spectrum - the latter provides a useful diagnostic [57; 58] for the nature of the TSL. Fig. 8 shows the low-energy spectrum in the TSL phase, computed by ED, on a \(N_{s}=24\)-site disk. We observe a linearly dispersing chiral mode as a function of the angular momentum associated with the C\({}_{6}\) symmetry of the cluster. The quantum numbers of the SU(3) multiplets are found to be in good agreement with the WZW SU(3)\({}_{1}\) tower of states issued from the singlet (\(\mathbf{1}\)) ground state (see the theoretical expectation in Table 2); namely, all multiplets are in perfect agreement up to \(\ell=3\), while there are a few extra multiplets for larger \(\ell\) (1 extra \(\mathbf{1}\) and 1 extra \(\mathbf{8}\) level at \(\ell=4\); 1 extra \(\mathbf{1}\), 2 extra \(\mathbf{8}\) and 1 extra \(\mathbf{10}\) levels at \(\ell=5\)). This small discrepancy could be attributed to the small cluster size. This suggests that the TSL phase is a chiral SL.
Similarly, the MPS ES computed on a YC4 infinite cylinder (see Appendix C for further details on the MPS construction) and shown in Fig. 9 reveal similar features, also supporting the CSL phase. This can be seen in all three sectors corresponding to different \(Q\)'s. The CFT predictions for these cases can be found e.g. in Tables 6 and 7 of Ref. [59]. To construct the symmetric PESS, we have followed Ref. [43] to implement the relevant symmetries in the PESS, including \(C_{3}\) rotation symmetry and SU(3) spin rotation symmetry. Moreover, by choosing the appropriate local tensors, the PESS undergoes complex conjugation under reflection, fulfilling the symmetry requirement of both the CSL and DCS phases breaking time-reversal symmetry. One important ingredient in the symmetric PESS construction is the representations carried by the virtual index of the local tensors. With \(\mathbb{Z}_{3}\) gauge symmetry in mind, a minimal virtual space would be \(\mathcal{V}=\mathbf{1}+\mathbf{3}+\overline{\mathbf{3}}\), which was shown to support a \(\mathbb{Z}_{3}\) toric code type topological phase in the parameter space. It turns out that doing variational optimization in this class of PESS always runs into a simplex solid phase, where the up and down triangles become inequivalent. This could be understood from spontaneous symmetry breaking at the PESS level. Therefore, one has to consider a larger virtual space for representing the SU(3) CSL phase. For that, we have used an SU(3)-symmetric simple update algorithm (implemented with the QSpace library [45; 46]) combined with variational optimization, and found that the virtual irreps \(\mathcal{V}=\mathbf{1}+\mathbf{3}+\overline{\mathbf{3}}+\mathbf{6}\) and \(\mathcal{V}=\mathbf{1}+\mathbf{3}+\overline{\mathbf{3}}+\mathbf{6}+\mathbf{8}\) could provide a good description of the TSL. The entanglement spectrum (ES) with virtual space \(\mathcal{V}=\mathbf{1}+\mathbf{3}+\overline{\mathbf{3}}+\mathbf{6}\) is computed and shown in Fig. 10. Using the \(\mathbb{Z}_{3}\) gauge symmetry, we group the ES into three sectors with \(\mathbb{Z}_{3}\) charge \(Q=0,1,2\), respectively. The momentum \(K\) around the circumference of the cylinder is a good quantum number and is used to label the ES. As shown in Fig. 10, we have obtained linearly dispersing chiral branches in the low-energy part of the ES in all three sectors. A close look at the content of the chiral branch in the \(Q=0\) sector, compared to the predictions of the WZW SU(3)\({}_{1}\) CFT in Table 2, reveals that the relevant SU(3) multiplets were captured up to the 7th Virasoro level, apart from minute deviations (see figure caption). However, zooming in on the level content of the \(Q=1\) and \(Q=2\) sectors, one finds that the quasi-degenerate SU(3) multiplet structure differs from the simple tower of states given by the WZW CFT. Instead, the low-energy spectrum in the \(Q=1\) sector can be explained by the tensor product of \(\overline{\mathbf{3}}\) with the \(Q=2\) SU(3)\({}_{1}\) CFT tower. Similar considerations apply to the \(Q=2\) sector, giving the conjugate irreps of the \(Q=1\) sector. A comparison with Tables 3 and 4 shows that the counting is indeed consistent with the \(\overline{3}\)-tower [\(\otimes\overline{3}\)] and its conjugate, respectively, up to the 4th Virasoro level (for \(Q=1\), at \(L_{0}=3\) one \(\overline{\mathbf{6}}\) irrep lies a bit away from the compact group of the remaining SU(3)\({}_{1}\) irreps).
Similar features have been found in a simpler PESS on the kagome lattice with virtual space \(\mathcal{V}=\mathbf{1}+\mathbf{3}+\overline{\mathbf{3}}\)[43]. It was further established that this PESS belongs to a SU(3)\({}_{1}\times\overline{\text{SU(3)}}_{1}\) double Chern-Simons phase characterized by a slow SU(3)\({}_{1}\) chiral mode and a fast counter-propagating \(\overline{\text{SU(3)}}_{1}\) chiral mode [54]. Our findings suggest that our \(D^{*}=4\) (\(D=13\)) PEPS ansatz also belongs to the same phase. However, it is unclear whether the presence of a second fast mode is a genuine feature of the phase or a finite-\(D\) effect. Note that the ED results do not show evidence of the DCS phase: for instance, in Fig. 5 a 3-fold quasi-degenerate ground state manifold is seen on the torus around \((\phi,\theta)=(0,\pi/4)\), in agreement with the expectation for a chiral SL (while a 9-fold degenerate ground state is expected in a DCS phase). Similarly, the ES obtained in the MPS simulations in 3 topological sectors (see Appendix C.5 for details) differ from the ones obtained in the PEPS framework, as shown in Fig. 9, being compatible with a SU(3)\({}_{1}\) CSL. It would be interesting to analyze whether the level splittings in the ES can be described by a generalized Gibbs ensemble [54]. Very recently [60], such a refined analysis of the ES made it possible to strengthen the evidence for the CSL nature of a SU(3) PEPS on the square lattice [44]. In that case, it was shown that the splittings between conjugate irreps in the same Virasoro level of the \(Q=0\) sector and between the \(Q=1\) and \(Q=2\) sectors should vanish (and were numerically found to be extremely small compared to the scale of other level splittings), due to the absence of certain conserved quantities which are not allowed by symmetry. In the present case (Fig. 10), the splitting between the conjugate irreps \((3,0)\) and \((0,3)\) at \(L_{0}=4,5\) in the \(Q=0\) sector is noticeable, and the entanglement energies between conjugate irreps in the \(Q=1\) and \(Q=2\) sectors also have small but visible differences. On the other hand, the entanglement spectrum from the MPS calculation, shown in Fig. 9, agrees with the level counting of a chiral SL, but also has a splitting between conjugate irreps at the same Virasoro level (in the \(Q=0\) sector or between the \(Q=1\) and \(Q=2\) sectors). A full analysis of the splittings in Fig. 9 and Fig. 10 is left for future work.

Figure 8: Low-energy spectrum computed with ED on a 24-site kagome cluster with open boundary conditions for \(\theta=\pi/4\), \(\phi=0\). Relative energies are plotted vs the angular momentum \(\ell\) with respect to the C\({}_{6}\) rotation symmetry. All symbols agree with the CFT prediction, see Tab. 2.

Figure 9: Entanglement spectra of MPS on a straight \(L_{y}=4\) cylinder (YC4) optimized at \(\theta=\pi/4,\phi=0\) in the three topological sectors, with largest MPS bond dimensions around \(D_{\text{MPS}}\approx 6000\). The three panels correspond to the three topological sectors associated to different total \(\mathbb{Z}_{3}\) charge \(Q\) on the boundaries, \(Q=0\) (\(a\)), \(Q=1\) (\(b\)) and \(Q=2\) (\(c\)). The contents of the chiral modes are consistent with a SU(3) CSL – see Table 2 and Table 7 of Ref. [59].

Figure 10: Entanglement spectra of a \(D=13\) chiral PESS at \(\chi=169\) placed on a \(L_{y}=6\) infinite cylinder partitioned in two halves, computed at \(\theta=\pi/4,\phi=0\). The virtual space is \(\mathcal{V}=\mathbf{1}\oplus\mathbf{3}\oplus\overline{\mathbf{3}}\oplus\mathbf{6}\). The three panels correspond to the three topological sectors associated to different total \(\mathbb{Z}_{3}\) charge \(Q\) on the boundaries, \(Q=Q(\mathbf{1})=0\) (\(a\)), \(Q=Q(\mathbf{3})=1\) (\(b\)) and \(Q=Q(\overline{\mathbf{3}})=2\) (\(c\)). The content of the chiral modes agrees with predictions based on the SU(3)\({}_{1}\) WZW CFT up to the Virasoro level \(L_{0}=7\) for \(Q=0\) (apart from minute deviations) – see Table 2 – and based on the SU(3)\({}_{1}\times\overline{\text{SU(3)}}_{1}\) DCS theory up to \(L_{0}=4\) otherwise – see Tables 3 and 4. Note that in the \(Q=0\) sector one 8-dimensional irrep is missing from the compact group of low-energy states of the \(L_{0}=6\) Virasoro level. Also, all relevant irreps of the \(L_{0}=7\) level are grouped together below energy \(\sim 7.2\), except three missing (1,1), (0,3), and (2,2) irreps which appear slightly away at energies \(\sim 7.32\), \(\sim 7.92\) and \(\sim 7.52\), respectively.

## V Conclusions

In this work, the investigation of the complete phase diagram of the SU(3) chiral Heisenberg model on the kagome lattice has been carried out. Physical spins transforming according to the fundamental irrep of SU(3) are considered on all lattice sites. The Hamiltonian includes the generic two-site and three-site couplings on nearest-neighbor sites and triangular units, respectively. To map out the phase diagram we have combined analytical and numerical tools such as magnon expansions, exact diagonalisations and tensor network (iMPS and iPEPS) techniques. In addition to the AKLT phase predicted by one of the authors (KP) [27] and the (expected) fully polarized ferromagnet (at large enough ferromagnetic couplings), we have found two gapped singlet phases and a magnetic phase that spontaneously breaks the SU(3) symmetry. One of the singlet phases, the trimer phase, spontaneously breaks the inversion center exchanging up and down triangles, as observed in the pure SU(3) Heisenberg model [52], a special point in our 2D parameter space. We have also found an enigmatic topological spin liquid. Although our numerical results show evidence for a gap and \(\mathbb{Z}_{3}\) gauge symmetry (via MPS and PEPS constructions), the exact nature of this TSL phase is still controversial, with two possible candidates: either a SU(3)\({}_{1}\) CSL (proposed in Ref. [29]) or a double SU(3)\({}_{1}\times\overline{\text{SU}(3)}_{1}\) Chern-Simons spin liquid (discussed in Refs. [43, 54]). The not fully polarized SU(3)-broken magnetic phase is, most likely, a uniform partially polarized ferromagnet with collinear spins in the 3-site unit cell. Another competing non-uniform phase, with spin canting occurring on three triangular sub-lattices separately (requiring a 9-site unit cell), seems to be slightly disfavored energetically.

_Acknowledgments_ -- We acknowledge the participation of Seydou-Samba Diop (Ecole Normale Superieure de Lyon) and Matthew W. Butcher (Rice University) at an early stage of the project and support from the TNTOP ANR-18-CE30-0026-01 grant awarded by the French Research Council, and the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 101001604). This work was granted access to the HPC resources of CALMIP center under the allocations 2022-P1231 and 2022-P0677 as well as GENCI (grant x2021050225). We also acknowledge Jan von Delft for providing part of the computational resource. J.-Y.C.
was supported by Open Research Fund Program of the State Key Laboratory of Low-Dimensional Quantum Physics (project No. KF202207), Fundamental Research Funds for the Central Universities, Sun Yatsen University (project No. 23qmy60), a startup fund from Sun Yat-sen University (No. 74130-12230034), and the Innovation Program for Quantum Science and Technology 2021ZD0302100. L.V. is supported by the Research Foundation Flanders. Y.X. and A.H.N. were supported by the Division of Materials Research of U.S. National Science Foundation under the Award DMR-1917511. The iPESS and iPEPS calculations at Rice University were supported in part by the Big-Data Private-Cloud Research Cyber infrastructure MRI-award funded by NSF under grant CNS-1338099 and by Rice University's Center for Research Computing (CRC). K.P. acknowledges support from the Hungarian NKFIH OTKA Grant No. K142652. ## Appendix A Analytical developments ### Stereographic projection The stereographic projection (see Fig. 11) maps the parameter space (see Eq. (2) for \(0\leq\theta\leq\pi/2\) and \(0\leq\phi<2\pi\) to a planar disk delimited by the image of the equator (a circle of radius 2). The image coordinates of \((\theta,\phi)\) points on the projection plane are given by \[X = \frac{\cos(\theta)\cos(\phi)}{\sin(\theta)+1},\] \[Y = \frac{\cos(\theta)\sin(\phi)}{\sin(\theta)+1}.\] ### SU(3) irreducible representations and conformal towers Irreducible representations (irreps) of the SU(3) group can be labeled differently. We show in Table 1 the one-to-one correspondence between Dynkin labels and Young tableaus. An irrep labeled by \((x,y)\) corresponds to a Young tableau with two rows containing \(x+y\) and \(y\) boxes. Conformal towers of various WZW CFTs mentioned in the main text, originating from \(\mathbf{1}\), \(\mathbf{\overline{3}}\)\(\left[\otimes\mathbf{\overline{3}}\right]\) and \(\mathbf{3}\)\(\left[\otimes\mathbf{3}\right]\), are shown in Tables 2,3 and 4, respectively. See also Ref. [59] for the towers originating from \(\mathbf{3}\) and \(\mathbf{\overline{3}}\). ### Single magnon dispersion The dispersion of a single magnon is determined by the eigenvalues of the matrix in reciprocal space given in Eq. 15 below, where the energy is measured from the energy of the ferromagnetic state and \(\mathbf{q}=(q_{x},q_{y})\) is the momentum of the magnon. The \(J+K_{R}\) appear together in the matrix above, so the dispersion depends only on two free parameters \(J+K_{R}\) and \(K_{I}\). 
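These statements are easy to verify numerically; the sketch below diagonalizes the matrix of Eq. (15) (displayed below) over the Brillouin zone and checks the \(\mathbf{q}=0\) band edges and the flat-band condition on the boundary. The parameter values are chosen for illustration.

```python
import numpy as np

def magnon_bands(qx, qy, J, KR, KI):
    """Eigenvalues (ascending) of the one-magnon matrix, Eq. (15)."""
    s = J + KR
    c1 = np.cos((qx - np.sqrt(3)*qy)/2)
    c2 = np.cos((qx + np.sqrt(3)*qy)/2)
    c3 = np.cos(qx)
    M = np.array([[-4*s,           2*(s-1j*KI)*c1, 2*(s+1j*KI)*c2],
                  [2*(s+1j*KI)*c1, -4*s,           2*(s-1j*KI)*c3],
                  [2*(s-1j*KI)*c2, 2*(s+1j*KI)*c3, -4*s]])
    return np.linalg.eigvalsh(M)

# At q=0 the extrema are 0 and -6(J+K_R) +/- 2*sqrt(3)*K_I,
# as quoted in the ferromagnetic-phase analysis of Sec. III
print(magnon_bands(0.0, 0.0, -0.5, 0.0, 0.2))

# On the boundary J+K_R = -|K_I|/sqrt(3), the zero-energy band is flat:
# the lowest branch stays at ~0 over the whole Brillouin zone
J, KR, KI = 0.0, -1.0, np.sqrt(3)
qs = np.linspace(-np.pi, np.pi, 101)
print(min(magnon_bands(qx, qy, J, KR, KI)[0] for qx in qs for qy in qs))
```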
\[\left(\begin{array}{ccc}-4(J+K_{R})&2(J+K_{R}-iK_{I})\cos\frac{q_{x}-\sqrt{3}q_{y}}{2}&2(J+K_{R}+iK_{I})\cos\frac{q_{x}+\sqrt{3}q_{y}}{2}\\ 2(J+K_{R}+iK_{I})\cos\frac{q_{x}-\sqrt{3}q_{y}}{2}&-4(J+K_{R})&2(J+K_{R}-iK_{I})\cos q_{x}\\ 2(J+K_{R}-iK_{I})\cos\frac{q_{x}+\sqrt{3}q_{y}}{2}&2(J+K_{R}+iK_{I})\cos q_{x}&-4(J+K_{R})\end{array}\right) \tag{10}\]

\(N-1\) states belong to the \({\bf d}=(N-1)(N+1)\)-dimensional Young diagram with Dynkin label \((N-2,1)\), and the state with zero energy and \({\bf q}=0\) to the \({\bf d}=(N+1)(N+2)/2\)-dimensional fully symmetrical irreducible representation of the ferromagnetic state, represented by the Young diagram \((N,0)\).

Table 1: Correspondence between Dynkin labels and Young tableaux for all SU(3) irreps discussed in the paper: (0,0), (1,0), (0,1), (2,0), (0,2), (1,1), (3,0), (0,3), (4,0), (0,4), (1,3), (3,1), (2,2), (4,1), (1,4), (3,2), (2,3), (2,4), (4,2), and others. The plot symbols used in the various figures and the Young-tableau drawings of the original table are graphics and are not reproduced here. Labeling of the irreps follows conventions from the LieArt package [61].

Figure 11: Stereographic projection of the North hemisphere from the South pole, mapping every point of the sphere such that \(0\leq\theta\leq\pi/2\) and \(0\leq\phi<2\pi\) to its image on the upper planar disk.

### Two-magnon spectra

This section considers two-magnon excitations of the fully symmetrical (ferromagnetic) state, where we introduce two spins with different flavors, following the calculation of Refs. [62] and [63] for the SU(2) case. Starting from \(|AA\ldots A\rangle\) as a vacuum, the two-magnon wave function is \[\Psi=\sum_{i,j\in\Lambda}c_{i,j}|A\ldots AB_{i}A\ldots AC_{j}A\ldots A\rangle\;. \tag{11}\] The dimension of the Hilbert space spanned by the \(|A\ldots AB_{i}A\ldots AC_{j}A\ldots A\rangle\) basis is \(N(N-1)\), as \(i=1,\ldots,N\) and \(j=1,\ldots,N\), but the \(B\) and \(C\) cannot occupy the same site (\(i\neq j\)). Furthermore, we symmetrize and antisymmetrize the wave functions so that \[c^{e}_{i,j}=c_{i,j}+c_{j,i}\,,\qquad c^{o}_{i,j}=c_{i,j}-c_{j,i}\,.\] The dimensions of the symmetric "e" (even) and of the antisymmetric "o" (odd) subspaces are the same and equal to \(N(N-1)/2\). The even subspace is composed of the \((N,0)\), \((N-2,1)\), and \((N-4,2)\) Young diagrams (see Fig. 12), having multiplicities \(1\), \(N-1\), and \(N(N-3)/2\), respectively. The irreducible representations in the odd subspace are the \((N-2,1)\) and \((N-3,0)\) Young diagrams with multiplicities \(N-1\) and \((N-2)(N-1)/2\). Using the Casimir operator, we separate the energies of the \((N-4,2)\) and \((N-3,0)\) irreducible representations.
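As a quick consistency check (our addition), the quoted multiplicities indeed saturate the dimensions of the even and odd subspaces:
\[\underbrace{1}_{(N,0)}+\underbrace{(N-1)}_{(N-2,1)}+\underbrace{\tfrac{N(N-3)}{2}}_{(N-4,2)}=\tfrac{N(N-1)}{2}\,,\qquad \underbrace{(N-1)}_{(N-2,1)}+\underbrace{\tfrac{(N-1)(N-2)}{2}}_{(N-3,0)}=\tfrac{N(N-1)}{2}\,.\]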
The symmetric, even sector would also appear in the SU(2) case, since symmetrization is equivalent to taking two \(B\)-type spins instead of \(B\) and \(C\). The odd (antisymmetric) sector is unique to SU(3). We diagonalized the Hamiltonian matrix for up to 81-site clusters numerically. Since this is a two-body problem, one might derive analytic expressions in principle, but they would be quite cumbersome. In Sec. III.5, we derived the conditions for the stability of one-magnon excitations. When the energy of the one-magnon is larger than that of the ferromagnetic state, the ferromagnetic phase is the ground state. If the energy needed to create a magnon is negative, the ferromagnetic phase is not a ground state anymore. The importance of the two-magnon calculation is to reveal the interaction between the magnons: the ferromagnetic phase shrinks if the magnons attract each other and form bound states. Fig. 13 summarizes the result of our calculation. It shows the Young diagram (YD) having the lowest energy for two magnons. We can distinguish three regions: the \((N,0)\) ferromagnetic phase (the gray area), the \((N-4,2)\) for the symmetric combination of the two magnons (the blue area), and the red-colored area where the antisymmetric combination of the \((N-3,0)\) Young diagram is the ground state. The boundary between the \((N,0)\) and the \((N-4,2)\) follows the one-magnon instability line for negative values of \(J\), but at \(\theta=\pi/3\) and \(\phi=3\pi/2\), corresponding to \(J=0\), \(K_{R}=-1\) and \(K_{I}=\sqrt{3}\), the three regions meet, and the \((N-3,0)\) antisymmetric combination becomes the ground state. The boundary between the ferromagnetic phase and the \((N-3,0)\) no longer follows the one-magnon instability line. This hints at the formation of a bound state. To get further insight, we plot in Fig. 14(a) the energies of the different irreducible representations along the \(J+K_{R}=\pm\sqrt{3}K_{I}\) one-magnon instability line for a 27-site cluster. The lowest energies of the ferro state \((N,0)\), the one magnon \((N-2,1)\), and the two magnons in a symmetric combination \((N-4,2)\) are all equal (we note that the energies in these irreducible representations depend on the \(J+K_{R}\) combination only). In the \((N-3,0)\) antisymmetric sector, a band of bound states appears with lower energy for \(J\gtrsim 0\) in the figure, where we keep \(J+K_{R}=-1\) constant (in the thermodynamic limit, the triple point is at \(J=0\), \(K_{R}=-1\), and \(K_{I}=\sqrt{3}\), i.e. \(\theta=\pi/3\) and \(\phi=3\pi/2\)). The number of bound states is 18, which is equal to the number of triangles in the 27-site kagome cluster. We also confirmed that the number of bound states is \(2N/3\) in other clusters.

Figure 12: The Young diagrams appearing in the \([N_{A},N_{B},N_{C}]=[N-2,1,1]\) sector of the two-magnon calculations, labeled by their Dynkin indices. \(N_{A}\) is the number of sites having \(A\) spins, and so on, so that \(N_{A}+N_{B}+N_{C}=N\).

Figure 13: The ground-state diagram in the two-magnon sector. The blue line denotes the 1-magnon instability line. The two magnons added to the system can form a symmetrical [Young diagram with Dynkin label \((N-4,2)\), blue shaded region] or antisymmetrical [Young diagram \((N-3,0)\), red shaded region] combination; the shaded regions indicate where these are the lowest-energy states. The boundaries for system sizes from 27 to 81 are drawn as red lines.

In Fig.
14(b), we plot the energy gap to the ferromagnetic state around the full circle, keeping \(K_{I}=0\) (i.e., \(\theta=0\)). At the special point \(\phi=-\pi/4\), which corresponds to \(J=-K_{R}\) with \(J\) positive, the spectrum of the \((N-3,0)\) states greatly simplifies: we get a \(2N/3\)-fold degenerate manifold at \(E=-6J\), and all the other energy levels collapse at \(E=0\). The explanation is simple: the \(B\), the \(C\) and an \(A\) form a localized SU(3) singlet on a triangle, while the remaining \(N-3\) \(A\) spins constitute the ferromagnetic background. The singlets can form on any of the \(2N/3\) elementary triangles in the kagome lattice, and this is the origin of the degeneracy. The result of a finite-size scaling for the boundary between the ferromagnetic state and the \((N-3,0)\) state is presented in Fig. 15 for \(K_{I}=0\). We get \(K_{R}/J=-2.7532\), which corresponds to \(\phi=1.611\pi\). Figure 16 shows the energy gap in the full parameter space. In the region where the gap is finite, the ground state in the \(N_{A}=N_{B}=N_{C}=N/3\) sector is the trimerized state. We can think of it as a condensation of the local SU(3) singlets with a repulsive interaction between them. The dispersion of a single magnon in the ferromagnetic background is flat along the one-magnon instability line. The flat bands are connected with modes localized on hexagons, as well as with delocalized modes, since a dispersing band touches the flat band. When we diagonalize the two-magnon spectrum along the instability line \(J+K_{R}=\pm\sqrt{3}K_{I}\), the number of states degenerate with the ferromagnet is \(\binom{N_{\rm hex}-1}{2}\) for the symmetric \({\rm YD}=(N-4,2)\) and \(\binom{N_{\rm hex}}{2}\) for the antisymmetric \({\rm YD}=(N-3,0)\) combination, where \(N_{\rm hex}=N/3\) is the number of hexagons.

Figure 14: The one- and two-magnon spectra for the 27-site cluster relative to the energy of the ferromagnetic state with the \((27,0)\) Young diagram. Shown are the energies of the one-magnon states [green, (25,1) Young diagram] and the symmetric [blue, (23,2)] and antisymmetric [red, (24,0)] two-magnon states. (a) Varying \(J\) and \(K\) while keeping \(J+K_{R}=K_{I}/\sqrt{3}=-1\) constant follows the one-magnon instability line. The energies of the lowest unbound states are equal; for positive values of \(J\), 18 two-magnon bound states detach from the continuum (red lines with \(E_{(24,0)}<E_{(27,0)}\)). (b) The one- (continuous green curves) and two-magnon (open symbols) energies for \(K_{I}=0\) (i.e. \(\theta=0\)). The two-magnon bound states appear for \(-\pi/2<\phi<0\).

Figure 15: The finite-size scaling of the boundary between the \((N,0)\) ferromagnetic state and the \((N-3,0)\) state for \(K_{I}=0\). The boundary for the \(3L^{2}\)- and \(9L^{2}\)-type clusters goes to the same \(K_{R}/J=-2.7532\) value in the thermodynamic limit, though the slopes in \(1/N^{3}\) are quite different.

## Appendix B Lanczos Exact Diagonalization

We have performed ED on various lattices using space-group symmetries as well as color conservation (equivalent to the conservation of the 2 U(1) Cartan generators of SU(3)). By using a Lanczos algorithm, we are able to converge a few low-energy states in each symmetry sector. A typical energy plot is shown, for instance, in the bottom panel of Fig. 4.
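As an illustration of the procedure (a minimal sketch, not the authors' code; a random sparse symmetric matrix stands in for one symmetry-resolved Hamiltonian block), a few lowest states can be converged with an implicitly restarted Lanczos solver:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Placeholder Hamiltonian block: in practice this is the sparse matrix of
# the Hamiltonian restricted to one color / space-group symmetry sector.
rng = np.random.default_rng(0)
A = sp.random(2000, 2000, density=1e-3, random_state=rng)
H = (A + A.T) / 2  # symmetrize to obtain a real symmetric matrix

# Lanczos iteration for the few lowest states; 'SA' selects the
# smallest algebraic eigenvalues.
energies, vectors = spla.eigsh(H, k=5, which="SA")
print(np.sort(energies))
```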
In order to sketch a tentative phase diagram using a single system size, we have computed systematically the low-energy spectrum on the 21-site kagome cluster with periodic boundary conditions on a fine \((\phi,\theta)\) parameter grid. For each set of parameters, we have attempted in Fig. 17 to determine its ground-state (GS) properties using the following criteria:

* ferromagnetic phase: the finite-size GS belongs to the fully symmetric irrep and its energy is known exactly.
* AKLT: the ground state is non-degenerate and there is an apparent large gap to the first excitation.
* Trimerized: there are two low-energy singlet states in the \(\Gamma\).A and \(\Gamma\).B irreps, as expected if the inversion symmetry is broken in the thermodynamic limit.
* CSL: there are three low-energy singlet states at momentum \(\Gamma\), as expected for a chiral spin liquid on a torus.
* SU(3)-broken: either the GS is not an SU(3) singlet, or there is a small gap to a non-singlet state. This could be a critical state or a canted ferromagnet.

By using these rules, we are able to plot a qualitative phase diagram in Fig. 17. Note that finite-size effects have been shown to be important in some regions. For instance, our ED data on the 21-site cluster are rather similar at the exact AKLT point (\(\phi=\pi/2,\theta=0\)) and close to the North pole (see Fig. 4), so that both regions are labelled in the same way on the phase diagram in Fig. 17. However, the situation is radically different on the 27-site cluster (see inset of Fig. 4), which rather indicates an SU(3)-broken phase in a large region around the North pole. Hence, it is crucial to combine different numerical methods in order to get reliable results.

Figure 16: The value of the energy gap \(E(N-3,0)-2E(N-2,1)+E(N,0)\) for the 75-site cluster. The bound state appears when the gap is negative; this is the green area in the plot.

Figure 17: Stereographic projection (same parametrisation as in Fig. 1) of the phase diagram of the SU(3) chiral antiferromagnet on the kagome lattice obtained from ED on a 21-site periodic cluster. The various phases discussed in the text are represented by dots of different colors. The dashed (dash-dotted) line corresponds to the single (two) magnon instability of the ferromagnetic phase.

## Appendix C Cylinder MPS simulations

### Geometry

There are several possible geometries for putting the kagome lattice on an infinite cylinder. The most common one is the YC cylinder, shown in Fig. 18(a). Here we choose a slightly different geometry, shown in Fig. 18(b), where we have shifted the periodic boundary conditions by a single triangle in the horizontal direction. This shifted geometry has the advantage that the resulting one-dimensional Hamiltonian has a single-triangle unit cell.

### MPS ground states

The shifted geometry has the additional advantage that the MPS ground-state approximation can also be chosen to have a single-triangle unit cell. We can represent this MPS as \[|\Psi(A)\rangle=\ \text{(tensor-network diagram not reproduced)}\,, \tag{10}\] where for convenience we have grouped the three physical \(\mathbf{3}\) spins in a single MPS tensor – in the simulations we always keep three different MPS tensors. We impose SU(3) symmetry on the MPS, which implies that the virtual degrees of freedom in the MPS can be labeled by SU(3) irreps. The different blocks in the MPS tensor need to obey the SU(3) fusion rules, i.e.
we need virtual irreps \(I_{v}\) and \(I^{\prime}_{v}\) \[\includegraphics[width=85.358268pt]{MPS ground states}\quad. \tag{11}\] Now we use the \(\mathbb{Z}_{3}\) property of the SU(3) fusion rules, where we can group the irreps in three different groups with a \(\mathbb{Z}_{3}\) charge: \[\begin{cases}\mathbf{\overline{3}},\mathbf{6},\cdots:Q=-1\\ \mathbf{1},\mathbf{8},\cdots:Q=0\\ \mathbf{3},\mathbf{\overline{6}},\cdots:Q=+1\end{cases} \tag{12}\] The three physical spins transform jointly as \(Q=0\) irreps, so the \(\mathbb{Z}_{3}\) property of the fusion rules dictates that \(I_{v}\) and \(I^{\prime}_{v}\) can only contain irreps from one and the same group, and, therefore, that we have three classes of MPS labeled by the \(\mathbb{Z}_{3}\) charge of the irreps on the bonds. Depending on the phase we are simulating, the optimal iMPS ground state will be found in one or more of these classes. The diagnostics for deciding whether an optimal iMPS is found within a certain \(\mathbb{Z}_{3}\) sector is through the entanglement spectrum and transfer matrix spectrum. In the following, we illustrate this procedure for the different SU(3) singlet phases in the phase diagram. In Figs. 19, 20 and 21, we plot the entanglement spectrum and transfer matrix spectrum for three different parameter choices, each time with iMPS optimized in the three different classes. For the entanglement spectrum we plot the different entanglement eigenvalues or Schmidt values with magnitude on the axis; the different colors in Figs. 19-21 correspond to the different SU(3) quantum numbers, which are labeled on the horizontal axis by their Dynkin label. For the transfer matrix spectrum (unit circles in the complex plane exhibited in the insets), we show a few dominant eigenvalues in the first two SU(3) sectors (again denoted by their Dynkin label) in the complex plane. ### AKLT phase Let us first consider the exact AKLT state, which is represented as a PESS with virtual irrep \(\mathbf{\overline{3}}\). On an infinite cylinder, this state can be represented as a snake-like iMPS by dragging the virtual links along the MPS around the cylinder. The virtual links of the iMPS then contain a number of \(\mathbf{\overline{3}}\) irreps that scales with the circumference of the cylinder, which are then fused into a number of SU(3) irreps on the virtual leg of the iMPS. Therefore, the \(\mathbb{Z}_{3}\) quantum number of the MPS depends on the cylinder circumference. We have now optimized iMPSs in the three different classes for the \(L_{y}=4\) cylinder. The resulting entanglement spectra and transfer matrix spectra can be found in Fig. 19. As one can see from these figures, only one of the three choices of virtual irreps gives rise to an injective MPS upon optimization, in this case the \(Q=1\) irreps. When choosing the other virtual irreps we find a non-injective MPS, which can be seen from the degeneracies in the entanglement spectrum and the fact that we find a transfer-matrix eigenvalue on the unit circle in a non-trivial sector (depicted as insets in Fig. 19). ### Trimerized phase We can play the same game in the trimerized phase. In this phase, the ground state has a strong inclination to form singlets on the triangles, and the iMPS geometry will clearly favor the trimerization to happen on the up triangles. The fully-trimerized state (product state of trimers on up-triangles) is represented by the above MPS, with \(I_{v}=I^{\prime}_{v}=\mathbf{1}\) on the bonds. 
Therefore, all iMPSs in this phase are adiabatically connected with this product state and will have virtual irreps with \(Q=0\), irrespective of the cylinder circumference. Figure 18: Different geometries for the kagome lattice on an infinite cylinder with a three-triangle unit cell in the periodic direction; the grey-shaded triangles are identified. As an illustration, in Fig. 20 we have plotted the MPS entanglement and transfer matrix spectra for the MPS ground state in the point \(\theta,\phi=0\) for circumference \(N=4\). Clearly, only the choice of \(Q=0\) irreps leads to the correct MPS ground state, whereas choosing \(Q=\pm 1\) leads to non-injective iMPSs. Figure 19: MPS entanglement spectra of the AKLT state (\(\theta=0\), \(\phi=\pi/2\)) on an \(L_{y}=4\) cylinder. The pseudo-energies of the ES are sorted according to the \(\mathbb{Z}_{3}\) charge (0 to 2 from top to bottom) and the SU(3) irreps (defined by Dynkin labels) and displayed with decreasing magnitude along the horizontal axis (arbitrary units). The insets show the transfer matrix spectra in the complex plane – the real (imaginary) part being associated to the correlation length (spatial oscillations of the correlation function). We have imposed the three different groups of irreps on the bonds. Only the middle spectrum corresponds to an injective MPS (irreps with \(Q=1\)), whereas the top and bottom correspond to non-injective MPS obtained by artificially adding a virtual \(\mathbf{3}\) or \(\mathbf{\overline{3}}\) bond. The exact degeneracies in the top and bottom entanglement spectra (in different SU(3) sectors) are the signatures of adding this extra virtual irrep. In addition, the occurrence of a transfer matrix eigenvalue in the \((1,1)\) sector on the unit circle points to a non-injective MPS. Figure 20: MPS entanglement (main) and transfer matrix (inset) spectra of an optimized MPS in the trimer phase (\(\theta=0\), \(\phi=0\)) on an \(L_{y}=4\) cylinder (same display as in Fig. 19), where we have imposed the three different groups of irreps on the bonds. Only the top spectrum corresponds to an injective MPS (irreps with \(Q=0\)), whereas the lower two panels correspond to non-injective MPS obtained by artificially adding a virtual \(\mathbf{3}\) or \(\mathbf{\overline{3}}\) bond. Again, the exact degeneracies in these two entanglement spectra are the signatures of adding this extra virtual irrep. The bond dimension of the \(Q=0\) MPS is around \(\chi\approx 7000\). ### Topological spin liquid phase In the spin-liquid phase, the situation is different because we expect three distinct ground states on the infinite cylinder, which are labeled by the \(\mathbb{Z}_{3}\) charge. Indeed, we find that the leading eigenvalues of the MPS entanglement spectrum are the same up to the fourth significant digit in the three charge sectors \(Q=0,1\) and \(2\) (see Fig. 21). The degeneracy is exponential in the circumference of the cylinder. The 3-fold degenerate nature of the TSL ground state is also corroborated by the leading eigenvalue of the iMPS transfer matrix, which lies on the unit circle in the imaginary plane and is degenerate among all three charge sectors (see insets in Fig. 21). ### Estimating the gap In order to estimate the gap, we apply the quasiparticle excitation ansatz. 
In this context, this boils down to applying as a variational ansatz the state \[|\Phi_{q}^{s}(B)\rangle=\sum_{n}\mathrm{e}^{iqn}\ \text{(tensor-network diagram not reproduced)}\,, \tag{30}\] which has well-defined momentum \(q\) and SU(3) quantum number \(s\). Note that the SU(3) fusion rules dictate that \(s\) should have a quantum number with \(Q=0\), i.e. we have \(s=\mathbf{1},\mathbf{8},\dots\). We can variationally optimize the tensor \(B\) for every momentum \(q\) and in each sector \(s\), yielding a variational dispersion relation. By choosing the shifted boundary conditions, the momentum quantum number follows one continuous line through the 2-D Brillouin zone. Note that, if we have multiple ground states (in the spin liquid phase), we can build domain-wall states that carry fractional quantum numbers (i.e., other quantum numbers \(s\)). The description of these spinon excitations is not further pursued here.

## Appendix D Projected Entangled Simplex States (PESS) and Pair States (PEPS)

### General formulation

_PESS_: The wavefunction for the 1-triangle unit cell is defined as a product of 3 site projectors and 2 trivalent tensors, \((B_{a})_{ip}^{s_{a}}\), \((B_{b})_{ql}^{s_{b}}\), \((B_{c})_{rs}^{s_{c}}\), \((T_{d})_{pqr}\), \((T_{u})_{sjk}\), given by \[|\psi(s_{a},s_{b},s_{c})\rangle=(B_{a})_{ip}^{s_{a}}(T_{d})_{pqr}(B_{b})_{ql}^{s_{b}}(B_{c})_{rs}^{s_{c}}(T_{u})_{sjk}\]

_PEPS_: In the PEPS construction, each down (or up) triangle is considered as the basic building block which tiles the entire lattice. The wavefunction for the three sites in each down (or up) triangle is given by a single rank-5 PEPS tensor which fuses all the physical degrees of freedom together, \[|\psi(s_{a},s_{b},s_{c})\rangle=a^{S}_{uldr},\ S=s_{a}s_{b}s_{c}. \tag{49}\]

Figure 21: MPS entanglement (main) and transfer matrix (inset) spectra of an optimized MPS in the TSL phase (\(\theta=\frac{\pi}{4}\), \(\phi=0\)) on an \(L_{y}=4\) cylinder (same display as in Fig. 19), where we have imposed the three different groups of irreps on the bonds. All three choices give rise to injective MPS (as expected in a TSL phase), and no artificial degeneracies are observed in the entanglement spectra or transfer matrix spectra. The energy densities are almost equal: \(e_{Q=0}=-0.7318557278\), \(e_{Q=+1}=-0.7317589213\), \(e_{Q=-1}=-0.7318473342\), and the total bond dimensions are all around \(\chi\approx 12000\).

### 1-triangle PEPS phase diagram

In this section, we describe the phase diagram obtained using a 1-triangle iPEPS (see Fig. 22), revealing significant differences compared to the one shown in Fig. 17. The criteria for identifying the different phases are as follows. First, states in the SU(3)-broken phase (in magenta) have non-zero magnetization (a threshold of \(m_{1}=0.01\) is chosen, while the maximum allowed value is \(m_{0}=2/\sqrt{3}\) for the Ferro states in red). Then, one computes the projection operators onto \(\mathbf{1}\), \(\mathbf{8}\), \(\mathbf{8}\), \(\mathbf{10}\), respectively, for down and up triangles. If there is an inversion symmetry breaking on up and down triangles, the state is identified as the trimer state. For those states preserving the inversion symmetry, if the two dominant irreducible representations are the two \(\mathbf{8}\), the state is identified as the AKLT phase. Otherwise, if the second dominant irreducible representation is \(\mathbf{1}\), the state is identified as the CSL state.
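To fix the index conventions of the 1-triangle PESS defined in the General formulation above, a minimal contraction sketch (ours; the physical dimension is 3 for the SU(3) fundamental, and the virtual dimension \(D\) is a placeholder) reads:

```python
import numpy as np

d, D = 3, 4  # physical and virtual dimensions (D is a placeholder value)
Ba = np.random.rand(d, D, D)  # (B_a)^{s_a}_{i p}
Bb = np.random.rand(d, D, D)  # (B_b)^{s_b}_{q l}
Bc = np.random.rand(d, D, D)  # (B_c)^{s_c}_{r s}
Td = np.random.rand(D, D, D)  # (T_d)_{p q r}
Tu = np.random.rand(D, D, D)  # (T_u)_{s j k}

# Contract the simplex indices p, q, r, s; the open virtual indices
# i, l, j, k connect the unit cell to its neighbors on the kagome lattice.
psi = np.einsum("aip,pqr,bql,crs,sjk->abciljk", Ba, Td, Bb, Bc, Tu)
print(psi.shape)  # (3, 3, 3, D, D, D, D)
```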
### 3-triangle unit cell PESS and PEPS

For the 3-triangle unit cell PESS and PEPS ansatzes, the unit cell is extended along one direction to contain 9 sites (3 triangles). Neither of them, if not explicitly pointed out, imposes constraints of point group symmetry on the tensors. For the 3-triangle PESS ansatz, there are three **independent** sets of PESS tensors for the three triangles, i.e., \(5\times 3=15\) PESS tensors - \(\{T^{\alpha}_{d},T^{\alpha}_{u},B^{\alpha}_{a},B^{\alpha}_{b},B^{\alpha}_{c}\}\), \(\alpha=1,2,3\). For the 3-triangle PEPS ansatz, a product of three **independent** 1-triangle PEPS tensors, \((a^{\alpha})^{S}_{uldr}\), \(\alpha=1,2,3\), is used to represent the physical wavefunction for the 9 sites in the 3 triangles. For the 3-triangle unit cell, there are three choices of the lattice vectors to tile the entire kagome lattice, while two of them are related by mirror symmetry, as shown in Fig. 23. The tiling with equal-length lattice vectors has the C3 rotation symmetry, and thus is denoted the \(\sqrt{3}\times\sqrt{3}\) tiling. The other two are simply referred to as \(3\times 1\) tilings. The C3 PESS is a constrained kind of 3-triangle PESS with \(\sqrt{3}\times\sqrt{3}\) tiling whose physical wavefunction has C3 lattice rotation symmetry, and is constructed using only one 1-triangle PESS by rotating the PESS tensors of the first triangle to obtain the PESS tensors of the other two. There are two different ways of rotation, which give rise to two different patterns, as shown in Fig. 24; the corresponding relations between the PESS tensors in different triangles are depicted there.

Figure 23: Three choices of lattice vectors that tile the kagome lattice by the 3-triangle unit cell.

Figure 22: Stereographic projection of the phase diagram (same parametrisation as in Figs. 1 and 17) of the SU(3) chiral antiferromagnet on the kagome lattice obtained by a 1-triangle PEPS ansatz. The various phases discussed in the text are represented by dots of different colors. The dashed (dash-dotted) line corresponds to the single (two) magnon instability of the ferromagnetic phase.

### Lattice symmetry breaking in SU(3)-broken phase?

In Fig. 6 of the main text, we have shown that the C3-2 iPESS (red triangles), while having fewer variational parameters than the 1-triangle iPEPS ansatz with the same bond dimension, can establish lower-energy states with additional lattice-symmetry-breaking patterns. Here, we make a more detailed scaling analysis of the energetics at one point, \((\phi,\theta)=(\frac{3\pi}{4},\frac{\pi}{8})\), where the potential lattice symmetry breaking happens, as shown in Fig. 25. First, one can see that the extrapolation of the energies from the 1-triangle iPEPS wave function already gives a value which is very close to the \(N=21\) ED result (with a difference smaller than \(3\times 10^{-3}\)). Second, as the bond dimension increases, one can see that the energy gap between the uniform states and the lattice-symmetry-broken states decreases. Based on these facts, we tend to attribute the lattice symmetry breaking we observe to finite-bond-dimension effects. In other words, with small bond dimensions (low entanglement), the states gain more energy by breaking the lattice symmetry. A similar phenomenon has also been observed in an SU(3) model on the honeycomb lattice [64]. But still, to clear up the issue, one needs to go to larger bond dimensions, which unfortunately goes beyond our current computational capability.
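As a cartoon of the C3 construction described above (our sketch; the actual construction also rotates the site projectors and tracks SU(3) quantum numbers), the trivalent tensors of the second and third triangles are obtained by cyclically permuting the virtual legs of the first one:

```python
import numpy as np

# A C3 lattice rotation permutes the three virtual legs of a trivalent
# PESS tensor cyclically; building T2 and T3 from T1 enforces the pattern.
T1 = np.random.rand(4, 4, 4)      # (T)_{pqr} of the first triangle
T2 = np.transpose(T1, (2, 0, 1))  # legs rotated once
T3 = np.transpose(T1, (1, 2, 0))  # legs rotated twice

# A tensor is itself C3-symmetric if it is invariant under the cyclic
# permutation of its legs (generically False for a random tensor):
print(np.allclose(T1, np.transpose(T1, (2, 0, 1))))
```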
### SU(3)-symmetric PESS

The SU(3)-symmetric PESS is a constrained family of 1-site PESS, where each tensor is invariant under SU(3) symmetry. In addition, for the AKLT phase and the CSL phase, the SU(3) PESS is further constrained by the lattice point group symmetry. In practice, first the relevant SU(3) irreps in the virtual spin are found using the simple update method [65]. Then a tensor classification is carried out for both the site projectors and the trivalent tensors. The classification scheme follows from Refs. [66, 41], and was recently adapted to the kagome lattice [67].

Figure 25: Scaling of energetics for iPEPS and iPESS wave functions with respect to bond dimensions at \((\phi,\theta)=(\frac{3\pi}{4},\frac{\pi}{8})\). The corresponding spatial patterns of the different ansatzes only appear at large bond dimensions (indicated by opaque markers). The 1-triangle iPEPS wave function (green squares) gives a uniform pattern with all the spins pointing in the same direction. The \(\sqrt{3}\times\sqrt{3}\) iPEPS and C3-2 iPESS (blue and red triangles) wave functions both give a pattern with broken C3-rotation symmetry (with respect to the center of each hexagon). The \(3\times 1\) iPEPS wave function (brown diamonds) gives a pattern with partially broken lattice translation symmetry along the direction in which the unit cell is enlarged.

Figure 24: The patterns of the PESS tensors for (a) C3-1 PESS and (b) C3-2 PESS. The bond tensors (site projectors) in the same color are forced to be identical, with the same index always contracted with the same type of trivalent tensor (either \(T_{d}\) or \(T_{u}\)). For each trivalent tensor, legs of different lengths stand for different indices. For down or up trivalent tensors in different triangles, the legs of the same length correspond to each other.

Table 3: Conformal tower in the \(Q=1\) topological sector: tower originating from \(\mathbf{\overline{3}}\ \left[\otimes\mathbf{\overline{3}}\right]\). The original table lists the irreps and their multiplicities at each Virasoro level \(L_{0}\); its entries are Young-tableau graphics and are not reproduced here.

Table 4: Conformal tower in the \(Q=2\) topological sector: tower originating from \(\mathbf{3}\ \left[\otimes\mathbf{3}\right]\). The original table lists the irreps and their multiplicities at each Virasoro level \(L_{0}=0,\ldots,7\); its entries are Young-tableau graphics and are not reproduced here.
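For readers cross-checking the towers against the tables, we note (our addition, assuming the standard WZW normalization) that the conformal weight of an SU(3)\({}_{k}\) primary with Dynkin label \((x,y)\) is
\[h_{(x,y)}=\frac{C_{2}(x,y)}{k+3}\,,\qquad C_{2}(x,y)=\frac{x^{2}+y^{2}+xy+3x+3y}{3}\,,\]
so that at level \(k=1\) the primaries \(\mathbf{3}=(1,0)\) and \(\overline{\mathbf{3}}=(0,1)\) both carry \(h=1/3\), while the identity \((0,0)\) has \(h=0\); the states of a tower then appear at \(L_{0}=h+n\) for non-negative integers \(n\).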
2303.13634
Physics-informed PointNet: On how many irregular geometries can it solve an inverse problem simultaneously? Application to linear elasticity
Regular physics-informed neural networks (PINNs) predict the solution of partial differential equations using sparse labeled data but only over a single domain. On the other hand, fully supervised learning models are first trained usually over a few thousand domains with known solutions (i.e., labeled data) and then predict the solution over a few hundred unseen domains. Physics-informed PointNet (PIPN) is primarily designed to fill this gap between PINNs (as weakly supervised learning models) and fully supervised learning models. In this article, we demonstrate that PIPN predicts the solution of desired partial differential equations over a few hundred domains simultaneously, while it only uses sparse labeled data. This framework benefits fast geometric designs in the industry when only sparse labeled data are available. Particularly, we show that PIPN predicts the solution of a plane stress problem over more than 500 domains with different geometries, simultaneously. Moreover, we pioneer implementing the concept of remarkable batch size (i.e., the number of geometries fed into PIPN at each sub-epoch) into PIPN. Specifically, we try batch sizes of 7, 14, 19, 38, 76, and 133. Additionally, the effect of the PIPN size, symmetric function in the PIPN architecture, and static and dynamic weights for the component of the sparse labeled data in the loss function are investigated.
Ali Kashefi, Leonidas J. Guibas, Tapan Mukerji
2023-03-22T06:49:34Z
http://arxiv.org/abs/2303.13634v3
Physics-informed PointNet: On how many irregular geometries can it solve an inverse problem simultaneously? Application to linear elasticity ###### Abstract Regular physics-informed neural networks (PINNs) predict the solution of partial differential equations using sparse labeled data but only over a single domain. On the other hand, fully supervised learning models are first trained usually over a few thousand domains with known solutions (i.e., labeled data) and then predict the solution over a few hundred unseen domains. Physics-informed PointNet (PIPN) is primarily designed to fill this gap between PINNs (as weakly supervised learning models) and fully supervised learning models. In this article, we demonstrate that PIPN predicts the solution of desired partial differential equations over a few hundred domains simultaneously, while it only uses sparse labeled data. This framework benefits fast geometric designs in the industry when only sparse labeled data are available. Particularly, we show that PIPN predicts the solution of a plane stress problem over more than 500 domains with different geometries, simultaneously. Moreover, we pioneer implementing the concept of remarkable batch size (i.e., the number of geometries fed into PIPN at each sub-epoch) into PIPN. Specifically, we try batch sizes of 7, 14, 19, 38, 76, and 133. Additionally, the effect of the PIPN size, symmetric function in the PIPN architecture, and static and dynamic weights for the component of the sparse labeled data in the loss function are investigated. **Keywords:** Physics-informed deep learning; PointNet; Irregular geometries; Linear elasticity; Inverse problems ## Highlights 1. Similar to supervised deep learning, we test big data (i.e., number of geometry) and relatively large batch sizes (i.e., number of geometries per epoch) for the framework of physics-informed neural networks for the first time. 2. We specifically use PIPN as an advanced version of physics-informed neural networks. 3. Using PIPN, an inverse problem of linear elasticity is solved over 532 irregular geometries simultaneously. 4. Batch sizes (i.e., number of geometries per epoch) of 7,14, 19, 28, 38, 76, and 133 are tested. ## 1 Introduction and motivation Physics-informed Neural Networks (PINNs), introduced by Raissi, et al. [42] in 2019, are recognized as a promising tool for solving inverse problems in a variety of scientific and industrial fields such as solid mechanics [15, 43, 21, 47, 49, 54, 12, 8, 52, 3, 16, 5, 35, 45, 19, 41, 11], incompressible and compressible flows [22, 25, 32, 18, 17, 42, 37, 40, 36, 6], chemistry [51, 20], heat transfer [25, 50, 7], flow in porous media [26, 2, 57], etc. The main idea of PINNs for solving inverse problems can be briefly explained as follows. Given sparse observations of a field of interest as well as the partial differential equations (PDEs) governing the physics of the field, train "a neural network" such that its predictions (outputs) minimize the residuals of the PDEs as well as the distance between the predictions and sparse observations at sensor locations, in a certain norm (such as \(L^{2}\) norm or other norms). Other partial information such as boundary or initial conditions may be included in this minimization problem. Our focus, here, is on the term "a neural network". We believe that the choice of the neural network significantly affects the ability and capacity of a PINN configuration. A common choice is a fully connected neural network (e.g., see Fig. 1 in Ref. [43], Fig. 
3 in Ref. [15], Fig. 2 in Ref. [55], Fig. 1 in Ref. [32], Fig. 3 in Ref. [58], Fig 1 in Ref. [56], Fig. 1 in Ref. [33], Fig. 4 in Ref. [31], Fig. 1 in Ref. [10], etc.). An immediate consequence of this choice is that a PINN with a fully connected neural network is only able to predict the solution of an inverse problem on a single geometry. Hence, for any computational domain with a new geometry, one needs to train a PINN from scratch. This scenario necessitates high computational expenses, specifically when the goal is the investigation of a wide range of geometric parameters for optimizing an industrial design. This issue has been first addressed by Gao, et al. [13] in 2021 and later by Kashefi and Mukerji [25] in 2022. To resolve this issue, Gao, et al. [13] first proposed the PhyGeoNet [13] framework and later Kashefi and Mukerji [25] introduced the PIPN [25] framework. PIPN [25] also successfully overcame the shortcomings of PhyGeoNet [13]. We later compare PIPN [25] versus PhyGeoNet [13], but for now, let us focus on the main theme of our article. What was one of the main motivations for introducing PIPN? To answer this question, let us discuss PINNs from another perspective. PINNs can be categorized as weakly supervised learning frameworks in the sense that the associated neural networks predict the solution on a computational domain using some sparse labeled data, which are available on that specific domain. On the other hand, neural networks used in supervised learning methods for computational mechanics are first trained on a large number of computational domains with different geometries, where the solutions are known, and then they predict the solution on new computational domains with new geometries. Supervised learning methods are commonly trained on a few hundred to a few thousand geometries (e.g., 2595 in Ref. [27], 1505 in Ref. [48], 880 in Ref. [46], 1600 in Ref. [9], etc.). One of the goals of introducing PIPN was to fill the gap between the weakly-supervised learning and supervised learning models in terms of the number of geometries for which we can predict the solution simultaneously (see Fig. 1). Now, the question is given the current capacity of available commercial graphics processing units (GPUs) to the public, how many computational domains (with different geometries) is PIPN able to predict the solution of an inverse problem on, simultaneously? The answer to this question depends on different parameters. But definitely one of them is the level of complexity of a problem, which has a direct relevancy to its governing PDEs. When Kashefi and Mukerji [25] introduced PIPN, they [25] showed its applications for incompressible flows and natural convection, where the governing PDEs are nonlinear (see Eqs. 1-3 of Ref. [25]). Due to the nonlinearity of the problem, they had to select high resolutions (number of inquiry points) per computational domain. Specifically for the natural convection problem (see Sect. 4.2 of Ref. [25]), they run PIPN on 108 geometries, while each geometry had a resolution of 5000 inquiry points. They [25] further showed the effect of resolution on the accuracy of the PIPN prediction (see Fig. 18 of Ref. [25]). Because of the current limitations on GPU memories, they were not able to explore a "big" data set comparable to those used in supervised learning. One way to reduce the complexity level is to select a set of linear PDEs to examine the PIPN capacity. 
A reasonable choice is the set of equations of two-dimensional linear elasticity, which have a wide range of applications in the industry. Hence, to examine the capacity of PIPN in terms of simultaneously handling the maximum possible number of geometries, we focus on two-dimensional linear elasticity problems. An important hyperparameter that needs to be tuned for training neural networks in supervised learning of computational mechanics is "batch size". The concept of "batch size" is defined under the category of mini-batch gradient descent [14], where the training data set is split into smaller data sets as mini-batches. The size of each mini-batch is then called the "batch size". The batch size plays a critical role in the training convergence as well as the ability of a trained neural network to generalize predictions [23, 28, 4, 34]. In supervised deep learning of quantities of interest as a function of geometric features of domains, the term batch size refers to the number of domains, with different geometries, that are fed to a neural network at each epoch. In regular PINNs, however, the term batch size refers to the number of inquiry points belonging to a sub-domain of a single domain that is fed to a neural network at each epoch. In PIPN, Kashefi and Mukerji [25] used the term batch size to address the number of domains at each epoch, similar to the scenario for supervised learning. Nevertheless, the maximum batch size reported in their research article [25] was 13, again due to GPU memory limitations and the nonlinearity of their considered PDEs. In this article, we investigate a wide range of batch sizes and their influence on the training convergence and generalizability of PIPN, with application to a linear elasticity problem. Lastly, we study another important hyperparameter, the network size. In the framework of regular PINNs, where fully connected layers are used, the network size is tuned by changing the number of layers and the number of neurons in each layer. Contrary to regular PINNs, PIPN has a more complicated architecture, and its network size can practically vary in different ways. In this work, we define a global scalar variable to control the network size of PIPN. Afterward, we study the performance of the PIPN framework as a function of this global variable. For the sake of completeness, and to explain to potential readers why we choose PIPN [25] rather than PhyGeoNet [13] in this article, we compare PIPN [25] with PhyGeoNet [13]. In PhyGeoNet [13], a convolutional neural network (CNN) is used as the "neural network" in a PINN, instead of a fully connected neural network. CNNs are able to extract geometric features of input computational domains and represent those features in a hidden space. Hence, all the parameters of CNNs (e.g., weights and biases) become a function of the geometric features. As a result, PhyGeoNet [13] is able to solve an inverse problem on multiple sets of computational domains. Notwithstanding these successes, PhyGeoNet [13] and its later version [44] come with several shortcomings. These shortcomings have been addressed in detail in Sect. 1 of Ref. [25]. Here, we summarize them. First, PhyGeoNet [13] uses a finite difference discretization method for computing the loss function of the desired PDEs rather than using automatic differentiation technology [1].
In this way, PhyGeoNet [13] is limited to the accuracy order of the chosen finite difference scheme and faces challenges for treating near boundaries of computational domains in the case of using high-order finite difference methods. Second, PhyGeoNet [13] is not able to handle irregular "non-parameterizable" geometries. Third, even on irregular parameterizable geometries, PhyGeoNet [13] is strictly limited to handling domains with up to only five \(C_{0}\) continuous boundaries. To obviate the limitations of PhyGeoNet [13], Kashefi and Mukerji [25] introduced physics-informed PointNet (PIPN). In PIPN, PointNet [38] carries out the role of the "neural network" in PINNs. Similar to CNNs, PointNet [38] is able to also encode the geometric features of input computational domains; however, using a different mathematical methodology. Details of the PointNet architecture can be found in Refs. [38, 27, 25, 24]. We also briefly review the PointNet structure in Sect. 2.2 of this research paper. The advantages of PIPN compared to PhyGeoNet have been explained in detail by Kashefi and Mukerji [25]. We list them here for a quick review. First and most important, PIPN [25] can handle non-parametrizable irregular geometries without any restriction. Second, PIPN [25] uses automatic differentiation technology [1] and thus the spatial derivative of output components with respect to the corresponding input component can be conveniently programmed and computed over entire irregular geometries in a data set (with no need for a finite difference/element discretization). Third, in the PIPN framework, interior and boundary points of irregular geometries are explicitly identified, and it allows twofold benefits: first, a smooth representation of boundaries (e.g., an airfoil surface), and second, explicit identification of their corresponding components in the loss function (with no need for implicit labeling of boundary points versus interior points and an artificial mixing up near boundaries, especially with sharp corners). The rest of this research paper is structured as follows. We mathematically formulate the linear elasticity problem in Sect. 2.1. An illustration of PIPN from computer science and applied mathematics perspectives are given in Sect. 2.2. Details about generating data for validation of the PIPN framework are described in Sect. 2.3. We elaborate on the computational setup for training PIPN in Sect. 2.4. A general analysis of the PIPN performance as well as the effect of batch size and neural network size is discussed in Sect. 3. Concluding remarks are listed in Sect. 4. ## 2 Problem statement and methodologies ### Problem formulations In this article, we focus on elastic materials with the constitutive model of Hooke's law, relating the Cauchy stress tensor to the infinitesimal strain tensor and we restrict our studies to isotropic materials under plane stress assumptions. Specifically, we consider a two-dimensional domain \(V\) having a cavity space characterized by various shapes. The domain \(V\) is placed under a thermal conductivity loading. 
The static linear elasticity equations governing the displacement fields in the domain \(V\) are given by \[-\frac{\partial}{\partial x}\Big(\frac{E}{1-\nu^{2}}\frac{\partial u}{\partial x}+\frac{E\nu}{1-\nu^{2}}\frac{\partial v}{\partial y}\Big)-\frac{\partial}{\partial y}\Big(\frac{E}{2(1+\nu)}\Big(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\Big)\Big)=-\frac{E\alpha}{1-\nu^{2}}\frac{\partial T}{\partial x}, \tag{1}\] \[-\frac{\partial}{\partial y}\Big(\frac{E\nu}{1-\nu^{2}}\frac{\partial u}{\partial x}+\frac{E}{1-\nu^{2}}\frac{\partial v}{\partial y}\Big)-\frac{\partial}{\partial x}\Big(\frac{E}{2(1+\nu)}\Big(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\Big)\Big)=-\frac{E\alpha}{1-\nu^{2}}\frac{\partial T}{\partial y}, \tag{2}\] where \(T\) indicates the temperature variable. Displacements in the \(x\) and \(y\) directions are shown by \(u\) and \(v\), respectively. \(\alpha\) denotes the thermal expansion coefficient, \(E\) is the elastic Young's modulus, and \(\nu\) is the Poisson ratio. We assume that the material of the domain \(V\) remains in the elastic regime under thermal loading. Mathematically, our goal is to solve an inverse problem using PIPN on a set of irregular domains \(\Phi=\{V_{i}\}_{i=1}^{m}\), formulated as follows: given the temperature field of the domains \(\Phi=\{V_{i}\}_{i=1}^{m}\) and a set of sparse observations of the displacement fields of the domains of the set \(\Phi=\{V_{i}\}_{i=1}^{m}\), find the full displacement solutions for all the domains of the set \(\Phi=\{V_{i}\}_{i=1}^{m}\). We explicitly illustrate the geometric variations of the set \(\Phi=\{V_{i}\}_{i=1}^{m}\) in Sect. 2.3.

Figure 1: A conceptual comparison between regular physics-informed neural networks (PINNs), physics-informed PointNet (PIPN), and fully supervised deep learning algorithms for computational mechanics in terms of the number of geometries and the batch size definition.

### Physics-informed PointNet (PIPN)

As we discussed in Sect. 1, the idea of PIPN was first proposed by Kashefi and Mukerji [25] in 2022. Here, we briefly review the PIPN methodology and adjust it for the governing equations of our interest (Eqs. 1-2). PIPN is built on the combination of two critical pieces: PointNet [38] and a physics-based loss function. In simple words, the PIPN mechanism can be explained in two steps. In the first step, PointNet [38] makes the predicted outputs a function of the geometric features of each \(V_{i}\in\Phi\). In the second step, the PIPN loss function is computed by taking these outputs and manipulating them using automatic differentiation [1] to build up the equations describing the physics of the problem. Hence, the PIPN loss function is not only aware of the physics but also aware of the geometry of each \(V_{i}\in\Phi\). In this way, by training PIPN, the predicted solutions for the displacement fields on a domain \(V_{i}\) become a function of the geometric characteristics of \(V_{i}\). In this sense, PIPN is able to predict the solutions of the governing PDEs (Eqs. 1-2) on multiple computational domains with various geometries, simultaneously.

#### 2.2.1 Architecture

The PIPN architecture is exhibited in Fig. 2. In PIPN, each \(V_{i}\) is represented by a point cloud \(\mathcal{X}_{i}\) with \(N\) points. \(\mathcal{X}_{i}\) is defined as \(\mathcal{X}_{i}=\{\mathbf{x}_{j}\in\mathbb{R}^{d}\}_{j=1}^{N}\), where \(d\) is the spatial dimension and we set \(d=2\) in this research study.
Thus, each \(\mathbf{x}_{j}\) has two components \(x_{j}\) and \(y_{j}\) as the spatial coordinates. PIPN maps \(\mathcal{X}_{i}\) to \(\mathcal{Y}_{i}\) via a function \(f\), where \(\mathcal{Y}_{i}\) is the prediction of the PDE (Eqs. 1-2) solutions. \(\mathcal{Y}_{i}\) is defined as \(\mathcal{Y}_{i}=\{\mathbf{y}_{j}\in\mathbb{R}^{n_{\text{PDE}}}\}_{j=1}^{N}\), where \(n_{\text{PDE}}\) indicates the number of fields in the solution. Here, we set \(n_{\text{PDE}}=2\) as the displacement fields (\(u\) and \(v\)) are the unknowns. Hence, each \(\mathbf{y}_{j}\) has two components, \(u_{j}\) and \(v_{j}\) as the network outputs. Mathematically, it can be written as \[(u_{j},v_{j})=f((x_{j},y_{j}),g(\mathcal{X}_{i}));\forall(x_{j},y_{j})\in \mathcal{X}_{i}\text{ and }\forall(u_{j},v_{j})\in\mathcal{Y}_{i}\text{ with }1\leq i\leq m \text{ and }1\leq j\leq N, \tag{3}\] where \(f\) is the mapping function approximated by the PointNet [38] neural network and \(g\) is a symmetric function representing the geometric feature of a point cloud \(\mathcal{X}_{i}\) and can be approximated as \[g(\mathcal{X}_{i})=s(h(x_{1},y_{1}),\ldots,h(x_{N},y_{N}));\forall(x_{j},y_{j} )\in\mathcal{X}_{i}\text{ with }1\leq i\leq m\text{ and }1\leq j\leq N, \tag{4}\] where \(s\) is a symmetric function and \(h\) is a function representing the two shared Multilayer Perceptrons (MLPs) [30, 38, 39] in the first branch of PointNet [38] (see Fig. 2). In simple words, \(g(\mathcal{X}_{i})\) can be thought as the global feature in the PIPN architecture shown in Fig. 2. We further explain the role of shared MLPs and the symmetric function in the following. We define \(n_{s}\) as a global scaling variable controlling the size of the PIPN. In the following, we denote batch size by \(B\), and reemphasize that "batch size" in this study is the number of domains fed into PIPN at each epoch. In practice, the input of PIPN is a three-dimensional tensor (i.e., numeric array) of size \(B\times N\times 2\). Afterward, there are two sequential shared MLPs with sizes of \((n_{s}\times 64,n_{s}\times 64)\) and \((n_{s}\times 64,n_{s}\times 128,n_{s}\times 1024)\), as displayed in Fig. 2. In the next step, the symmetric operator (\(s\)) forms the global feature with a size of \(n_{s}\times 1024\). As exhibited in Fig. 2, the global feature is concatenated to the intermediate feature tensor of size \(B\times N\times(n_{s}\times 64)\), resulting in a tensor of size \(B\times N\times(n_{s}\times 1088)\). Next, two other sequential shared MLPs, respectively, with sizes of \((n_{s}\times 512,n_{s}\times 256,n_{s}\times 128)\) and \((n_{s}\times 128,n_{\text{PDE}})\) operate in the PIPN architecture. The output of the previous action is a tensor of size \(B\times N\times n_{\text{PDE}}\), as shown in Fig. 2. Using automatic differentiation [1], the governing PDEs (Eqs. 1-2) are built and fed into the PIPN loss function. Note that reasonable choices for \(n_{s}\) could be 0.125, 0.25, 0.5, 1.0, 2.0, and 4.0 such that a positive integer number represents the size of resulting shared MLPs. **Shared MLPs and symmetric functions** The PIPN is invariant with respect to \(N!\) permutations of the input vector (see Fig. 2). In other words, if we randomly permute the input vector (\(\mathcal{X}_{i}\)), the constructed geometry of the domain does not change and thus the solution (\(\mathcal{Y}_{i}\)) should not change. PIPN becomes permutation invariant by means of two features: shared MLPs and symmetric functions. 
The concept of shared MLPs should not be confused with regular fully connected layers, or so-called "dense" layers in the TensorFlow [1] terminology. Here we explain the concept of shared MLPs with a simple example. Let us consider the first MLP layer in the first branch of PIPN, with a size of \((n_{s}\times 64,n_{s}\times 64)\), as shown in Fig. 2, and let us take \(n_{s}=1\) for a moment. More specifically, we focus on the first shared layer, with a size of \(64\). The transpose of the input vector \(\mathcal{X}_{i}\) can be written as \[\mathcal{X}_{i}^{tr}=\begin{bmatrix}x_{1}&x_{2}&\ldots&x_{N}\\ y_{1}&y_{2}&\ldots&y_{N}\end{bmatrix}. \tag{5}\] After applying the first shared layer to \(\mathcal{X}_{i}\), the output is a matrix of size \(64\times N\) and can be written as \[\begin{bmatrix}\mathbf{a}_{64\times 1}^{(1)}&\mathbf{a}_{64\times 1}^{(2)}&\ldots&\mathbf{a}_{64\times 1}^{(N)}\end{bmatrix}, \tag{6}\] where \(\mathbf{a}_{64\times 1}^{(1)}\), \(\mathbf{a}_{64\times 1}^{(2)}\), ..., \(\mathbf{a}_{64\times 1}^{(N)}\) are vectors, which are computed as \[\mathbf{a}_{64\times 1}^{(1)}=\sigma\Big(\mathbf{W}_{64\times 2}\begin{bmatrix}x_{1}\\ y_{1}\end{bmatrix}+\mathbf{b}_{64\times 1}\Big),\quad\mathbf{a}_{64\times 1}^{(2)}=\sigma\Big(\mathbf{W}_{64\times 2}\begin{bmatrix}x_{2}\\ y_{2}\end{bmatrix}+\mathbf{b}_{64\times 1}\Big),\quad\ldots,\quad\mathbf{a}_{64\times 1}^{(N)}=\sigma\Big(\mathbf{W}_{64\times 2}\begin{bmatrix}x_{N}\\ y_{N}\end{bmatrix}+\mathbf{b}_{64\times 1}\Big), \tag{7}\] where \(\mathbf{W}\) and \(\mathbf{b}\) are the shared weight matrix and bias vector, respectively. The nonlinear activation function is shown by \(\sigma\), which acts elementwise. As can be realized from Eq. 7, the same (shared) \(\mathbf{W}\) and \(\mathbf{b}\) are applied to each spatial point of the domain with the corresponding vector \([x_{j}\;\;y_{j}]^{tr}\) (\(1\leq j\leq N\)); that is why these layers are called shared MLPs. A similar procedure is conducted at the rest of the layers. From this strategy, it immediately follows that each point is processed independently in the PIPN framework, and the only place where points meet each other is where the global feature is formed (see Eq. 4). Concerning the symmetric function, we consider two formats: one being the "maximum" function, such that \[g(\mathcal{X}_{i})=\max(h(x_{1},y_{1}),\ldots,h(x_{N},y_{N}));\forall(x_{j},y_{j})\in\mathcal{X}_{i}\text{ with }1\leq i\leq m\text{ and }1\leq j\leq N, \tag{8}\] and the other being the "average" function, such that \[g(\mathcal{X}_{i})=\text{average}(h(x_{1},y_{1}),\ldots,h(x_{N},y_{N}));\forall(x_{j},y_{j})\in\mathcal{X}_{i}\text{ with }1\leq i\leq m\text{ and }1\leq j\leq N. \tag{9}\] We compare the PIPN performance using these two symmetric functions. One may refer to Ref. [25] for a further description of the details of PIPN, and to Ref. [38] for a further discussion of the computer science aspects of PointNet [38].
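For concreteness, here is a minimal sketch (ours, not the authors' implementation) of the first PIPN branch: a shared MLP can be realized as a 1D convolution with kernel size 1, which applies the same \(\mathbf{W}\) and \(\mathbf{b}\) of Eq. (7) to every point, and the maximum symmetric function of Eq. (8) as a global max pooling over the \(N\) points. Permutation invariance then follows immediately.

```python
import tensorflow as tf

ns = 1  # the global scaling variable n_s of the text

# Shared MLPs as kernel-size-1 convolutions over the point dimension.
inputs = tf.keras.Input(shape=(None, 2))  # (B, N, 2) point clouds
h = tf.keras.layers.Conv1D(ns * 64, 1, activation="tanh")(inputs)
local = tf.keras.layers.Conv1D(ns * 64, 1, activation="tanh")(h)  # intermediate feature
h = tf.keras.layers.Conv1D(ns * 64, 1, activation="tanh")(local)
h = tf.keras.layers.Conv1D(ns * 128, 1, activation="tanh")(h)
h = tf.keras.layers.Conv1D(ns * 1024, 1, activation="tanh")(h)
g = tf.keras.layers.GlobalMaxPooling1D()(h)  # symmetric max function, Eq. (8)
model = tf.keras.Model(inputs, g)

# Shuffling the N points leaves the global feature unchanged.
x = tf.random.normal((1, 100, 2))
perm = tf.random.shuffle(tf.range(100))
diff = tf.reduce_max(tf.abs(model(x) - model(tf.gather(x, perm, axis=1))))
print(float(diff))  # 0.0
```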
#### 2.2.2 Loss function

For any pair of \(\mathcal{X}_{i}\) and \(\mathcal{Y}_{i}\) (\(1\leq i\leq m\)), the residuals of the linear momentum in the \(x\) direction (\(\mathcal{J}_{i}^{\text{momentum}_{x}}\)) and the linear momentum in the \(y\) direction (\(\mathcal{J}_{i}^{\text{momentum}_{y}}\)), along with the residuals of the sparse observations of the displacement field (\(\mathcal{J}_{i}^{\text{displacement}_{\text{sensor}}}\)), are respectively defined as follows: \[\mathcal{J}_{i}^{\text{momentum}_{x}}=\frac{1}{N}\sum_{k=1}^{N}\left(-\frac{\delta}{\delta x_{k}}\Big(\frac{1}{1-\nu}\frac{\delta\tilde{u}_{k}}{\delta x_{k}}+\frac{\nu}{1-\nu}\frac{\delta\tilde{v}_{k}}{\delta y_{k}}\Big)-\frac{\delta}{\delta y_{k}}\Big(\frac{1}{2}\Big(\frac{\delta\tilde{u}_{k}}{\delta y_{k}}+\frac{\delta\tilde{v}_{k}}{\delta x_{k}}\Big)\Big)+\frac{\alpha}{1-\nu}\frac{\partial T_{k}}{\partial x_{k}}\right)^{2}, \tag{10}\] \[\mathcal{J}_{i}^{\text{momentum}_{y}}=\frac{1}{N}\sum_{k=1}^{N}\left(-\frac{\delta}{\delta y_{k}}\Big(\frac{\nu}{1-\nu}\frac{\delta\tilde{u}_{k}}{\delta x_{k}}+\frac{1}{1-\nu}\frac{\delta\tilde{v}_{k}}{\delta y_{k}}\Big)-\frac{\delta}{\delta x_{k}}\Big(\frac{1}{2}\Big(\frac{\delta\tilde{u}_{k}}{\delta y_{k}}+\frac{\delta\tilde{v}_{k}}{\delta x_{k}}\Big)\Big)+\frac{\alpha}{1-\nu}\frac{\partial T_{k}}{\partial y_{k}}\right)^{2}, \tag{11}\] \[\mathcal{J}_{i}^{\text{displacement}_{\text{sensor}}}=\frac{1}{M}\sum_{k=1}^{M}\left(\Big(\tilde{u}_{k}-u_{k}^{\text{sensor}}\Big)^{2}+\Big(\tilde{v}_{k}-v_{k}^{\text{sensor}}\Big)^{2}\right), \tag{12}\] where \(M\) is the number of sensors located at each point cloud for a sparse measurement of the displacement fields. The automatic differentiation operator is shown by \(\delta\). The \(x\) and \(y\) components of the displacement fields measured at the sensor locations are shown by \(u_{k}^{\text{sensor}}\) and \(v_{k}^{\text{sensor}}\), respectively, while the network outputs are denoted by \(\tilde{u}_{k}\) and \(\tilde{v}_{k}\). Note that because we assume that the temperature field is known to us, the temperature gradient is also known.

Figure 2: Architecture of physics-informed PointNet (PIPN); \(B\) indicates the batch size and \(n_{s}\) is a global scaling parameter for controlling the network size. The spatial derivatives are computed using automatic differentiation to build up the residuals of the governing PDEs as the PIPN loss function.

The hyperbolic tangent activation function, defined as \[\sigma(\gamma)=\frac{e^{2\gamma}-1}{e^{2\gamma}+1}, \tag{13}\] is implemented in all the layers of PIPN, similar to Refs. [25, 26]. Note that due to the presence of the second-order spatial derivatives of the displacement fields in the governing PDEs (Eqs. 1-2), the second-order derivative of the activation function used in PIPN must be well-defined. As can be realized from Eq. 3, because the displacement fields are a function of \(g(\mathcal{X}_{i})\), the spatial derivatives of the displacements also become a function of \(g(\mathcal{X}_{i})\) in PIPN. For example, \[\frac{\delta u_{j}}{\delta x_{j}}=\frac{\delta f((x_{j},y_{j}),g(\mathcal{X}_{i}))}{\delta x_{j}};\forall(x_{j},y_{j})\in\mathcal{X}_{i}\text{ and }\forall u_{j}\in\mathcal{Y}_{i}\text{ with }1\leq i\leq m\text{ and }1\leq j\leq N.
Similar expressions can be written for \(\frac{\delta u_{j}}{\delta y_{j}}\), \(\frac{\delta v_{j}}{\delta x_{j}}\), \(\frac{\delta}{\delta x_{j}}\big(\frac{\delta u_{j}}{\delta x_{j}}\big)\), etc. Hence, all the components appearing in the loss function of PIPN contain the geometric information of the point clouds of the set \(\Phi=\{V_{i}\}_{i=1}^{m}\), a specific feature that is not available in regular PINNs [42]. Eventually, the PIPN loss function is written as \[\mathcal{J}=\frac{B}{m}\sum_{b=1}^{m/B}\Bigg(\frac{1}{B}\sum_{i=1+(b-1)B}^{bB}\Big(\omega_{\text{momentum}}\big(\mathcal{J}_{i}^{\text{momentum}_{x}}+\mathcal{J}_{i}^{\text{momentum}_{y}}\big)+\omega_{\text{sensor}}\mathcal{J}_{i}^{\text{displacement}_{\text{sensor}}}\Big)\Bigg), \tag{15}\] where \(\omega_{\text{momentum}}\) and \(\omega_{\text{sensor}}\) are the weights of the corresponding residuals; their units are the inverse of the units of their associated residuals, such that the total loss function (\(\mathcal{J}\)) is unitless. We elaborate on the choice of these weight factors in the following section.

**Component weights in the PIPN loss function**

Based on our machine learning experiments, the choice of \(\omega_{\text{momentum}}\) and \(\omega_{\text{sensor}}\) markedly affects the convergence rate of the training procedure. Here, we propose a few simple but practical functions for \(\omega_{\text{momentum}}\) and \(\omega_{\text{sensor}}\) and discuss their effects in Sect. 3. The first, trivial selection is setting an equal weight for both the momentum and sensor components of the loss function such that \[\begin{cases}\omega_{\text{momentum}}=1\text{ m}\\ \omega_{\text{sensor}}=1\text{ m}^{-1}.\end{cases} \tag{16}\] The next choice is setting a higher weight for the sensor component than for the momentum one in the PIPN loss function (see Eq. 15) such that \[\begin{cases}\omega_{\text{momentum}}=1\text{ m}\\ \omega_{\text{sensor}}=\omega_{0}\text{ m}^{-1},\text{ with }\omega_{0}>1.\end{cases} \tag{17}\] Note that \(\omega_{0}\) remains constant during the training and can be thought of as a hyperparameter. Additionally, our machine learning experiments show that setting a higher weight for the momentum component than for the sensor one significantly reduces the convergence speed, and thus we do not propose it here. Our third proposal is setting a higher weight for the sensor component that exponentially decreases during the training process such that \[\begin{cases}\omega_{\text{momentum}}=1\text{ m}\\ \omega_{\text{sensor}}=\max\Big(\omega_{1}\times\exp\big(\tfrac{-\text{epoch}}{r_{1}}\big),1.0\Big)\text{ m}^{-1},\text{ with }\omega_{1}>1\text{ and }r_{1}>0,\end{cases} \tag{18}\] where "epoch" refers to the training iteration. Again, \(\omega_{1}\) and \(r_{1}\) are hyperparameters. Our last suggestion is similar to the third one, with the difference that the weight decreases logarithmically such that \[\begin{cases}\omega_{\text{momentum}}=1\text{ m}\\ \omega_{\text{sensor}}=\max\Big(\omega_{2}\times\ln\big(-\text{epoch}+r_{2}\big),1.0\Big)\text{ m}^{-1},\text{ with }\omega_{2}>1\text{ and }r_{2}>0,\end{cases} \tag{19}\] where \(\omega_{2}\) and \(r_{2}\) are hyperparameters. Note that in all our proposals, \(\omega_{\text{sensor}}\) never becomes less than one.
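For reference, the four schedules in Eqs. 16-19 translate into a few lines of Python. This is a minimal sketch; the default hyperparameter values are those used later in the experiments of Sect. 3.4 and are shown purely for illustration.

```python
import numpy as np

def omega_sensor_constant(epoch, omega0=50.0):
    # Eqs. 16-17: constant weight for the sensor component (omega0=1 gives Eq. 16)
    return omega0

def omega_sensor_exponential(epoch, omega1=50.0, r1=800.0):
    # Eq. 18: exponentially decaying weight, floored at 1
    return max(omega1 * np.exp(-epoch / r1), 1.0)

def omega_sensor_logarithmic(epoch, omega2=50.0 / 8.0, r2=3002.0):
    # Eq. 19: logarithmically decaying weight, floored at 1 (valid for epoch < r2)
    return max(omega2 * np.log(r2 - epoch), 1.0)
```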
Moreover, the units of \(\omega_{\text{momentum}}\) and \(\omega_{\text{sensor}}\) are set such that \(\mathcal{J}\) becomes unitless. At the end of this subsection, we note that setting optimal weights in the loss functions of deep learning models is an active research area by itself (see e.g., Ref. [53]); however, this is the first time that this concept is introduced into the PIPN configuration.

### Data generation

Computational domains of the set \(\Phi=\{V_{i}\}_{i=1}^{m}\) are thin square plates with a cavity. The side length of the square plates takes three different values, while the cavity takes the shape of a square, regular pentagon, regular hexagon, regular octagon, or regular nonagon. We further enlarge the number of domains by rotating the cavity with respect to the fixed thin square plates. Details of the geometric features of the set \(\Phi=\{V_{i}\}_{i=1}^{m}\) are listed in Table 1. In total, 532 geometries (i.e., \(m=532\)) are the input of PIPN. In this way, we establish the set \(\Phi=\{V_{i}\}_{i=1}^{532}\). Three examples of geometries of this set are shown in Fig. 3. In our machine learning investigations, we consider batch sizes (\(B\)) of 7, 14, 19, 28, 38, 76, and 133, which are all divisors of 532. We recall that the maximum batch size (\(B\)) implemented by Kashefi and Mukerji [25] was 13 and the maximum number of input data (\(m\)) considered by Kashefi and Mukerji [25] was 108. Another difference is that the side length of the outer boundaries takes the three values of 1.6 m, 1.8 m, and 2.0 m in the current study, whereas Kashefi and Mukerji [25] exclusively considered a fixed value of 2.0 m for the side length of the outer boundaries when they studied natural convection in a square enclosure with a cylinder (see Fig. 12 and Table 7 of Ref. [25]). In this way, we introduce more variation in the geometric features of the input point clouds, and thus a more challenging task is defined for PIPN.

The MATLAB PDE toolbox is used for the two purposes of validating the PIPN predictions and generating the sparse labeled data at virtual sensor locations. The ground truth data is generated using the toolbox as follows. The domains of the set \(\Phi=\{V_{i}\}_{i=1}^{532}\) are loaded by thermal conduction with Dirichlet boundary conditions such that the temperatures on the outer and inner boundaries are set to 1 and 0, respectively, in SI units. Note that the thermal conductivity does not affect the solution of the Laplace equation governing the steady-state thermal conduction. Moreover, we impose zero-displacement boundary conditions on the boundaries of the domains of the set \(\Phi=\{V_{i}\}_{i=1}^{532}\). We use the Adam optimizer [29] with the hyperparameters \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and \(\hat{\epsilon}=10^{-6}\). The mathematical explanation of \(\beta_{1}\), \(\beta_{2}\), and \(\hat{\epsilon}\) is given in Ref. [29]. A constant learning rate of 0.0003 is set for all the machine learning experiments. For a fair comparison, all the simulations are run on an NVIDIA A100 SXM4 graphics card with a memory clock rate of 1.41 GHz and 40 gigabytes of RAM.
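Before turning to the results, we sketch how the residuals of Eqs. 10-11 can be assembled with automatic differentiation. This is a minimal, illustrative TensorFlow sketch: the two-output `model`, the material constants, and the supplied temperature gradients are placeholders and not the released PIPN implementation.

```python
import tensorflow as tf

nu, alpha = 0.3, 1.0e-5  # placeholder material constants

def momentum_residuals(model, xy, T_x, T_y):
    """Mean-squared residuals of Eqs. 10-11 for one point cloud.
    `model` is assumed to map (N, 2) coordinates to (N, 2) outputs
    (u, v); the temperature gradients T_x, T_y are known inputs."""
    x, y = xy[:, 0:1], xy[:, 1:2]
    with tf.GradientTape(persistent=True) as t2:
        t2.watch([x, y])
        with tf.GradientTape(persistent=True) as t1:
            t1.watch([x, y])
            u, v = tf.split(model(tf.concat([x, y], axis=1)), 2, axis=1)
        u_x, u_y = t1.gradient(u, x), t1.gradient(u, y)
        v_x, v_y = t1.gradient(v, x), t1.gradient(v, y)
        # bracketed terms of Eqs. 10-11 before the outer derivatives
        s1 = u_x / (1.0 - nu) + nu * v_y / (1.0 - nu)
        s2 = 0.5 * (u_y + v_x)
        s3 = nu * u_x / (1.0 - nu) + v_y / (1.0 - nu)
    res_x = -t2.gradient(s1, x) - t2.gradient(s2, y) + alpha / (1.0 - nu) * T_x
    res_y = -t2.gradient(s3, y) - t2.gradient(s2, x) + alpha / (1.0 - nu) * T_y
    return tf.reduce_mean(res_x**2), tf.reduce_mean(res_y**2)
```

The nested tapes provide the second-order spatial derivatives; this is why the activation function must have a well-defined second derivative, as noted above.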
## 3 Results and discussion

### General analysis

In Table 2, we tabulate the average, maximum, and minimum errors of the predicted displacement fields in the \(x\) and \(y\) directions for the domains of the set \(\Phi=\{V_{i}\}_{i=1}^{532}\) with a batch size of \(B=28\), a network size of \(n_{s}=1.0\), and a weight of \(\omega_{\text{sensor}}=50\) m\({}^{-1}\) for the sensor component in the PIPN loss function (see Eq. 15). According to the data tabulated in Table 2, the average pointwise relative error (\(L^{2}\) norm) over all 532 geometries of the set \(\Phi\) is less than 9%, indicating a successful outcome for the PIPN framework. Moreover, the maximum pointwise relative errors (\(L^{2}\) norm) do not exceed 14%, demonstrating a reasonable accuracy for engineering applications. A comparison between the ground truth (i.e., the finite element solutions) and the PIPN predictions is made in Figs. 4-6 for six different geometries taken from the set \(\Phi=\{V_{i}\}_{i=1}^{532}\). As can be observed in Figs. 4-6, there is an excellent agreement between the ground truth and the displacement fields predicted by PIPN. For each geometry, the maximum local pointwise error occurs on the boundaries of the inner cavity of the thin plates. This outcome is expected because, first, PIPN is not informed of the boundary conditions (i.e., zero displacements), and second, most of the variation from one geometry to another takes place on the boundaries of the domains of the set \(\Phi=\{V_{i}\}_{i=1}^{532}\). In fact, predicting boundary values is the most difficult task for PIPN during the training procedure. This fact can be realized by looking at Fig. 7, where we exhibit the PIPN prediction of the displacement fields for one geometry taken from the set \(\Phi=\{V_{i}\}_{i=1}^{532}\) after 3, 500, and 1500 epochs. Based on what can be seen in Fig. 7, the PIPN prediction is inaccurate after 3 epochs. After 500 epochs, the displacement fields are accurately predicted by PIPN except on the boundaries of the domain. An improvement in the prediction of zero displacements on the boundaries of the domain is observed in the PIPN outcome after 1500 epochs. Hence, we conclude that the most time-consuming part of training for PIPN is predicting the right values on the domain boundaries.

The absolute pointwise errors (\(L^{2}\) norm) for the geometries with the maximum and minimum relative pointwise errors of the displacement fields in the \(x\) and \(y\) directions are shown in Fig. 8. According to Fig. 8, the extremum errors occur for domains with different geometries of the set \(\Phi=\{V_{i}\}_{i=1}^{532}\), revealing that PIPN is not overfitted to one specific geometry. Moreover, we observe that the maximum pointwise errors occur for the domains with a side length of 1.6 m, which is the smallest side length available in the data set \(\Phi=\{V_{i}\}_{i=1}^{532}\). On the other hand, the minimum pointwise errors occur for geometries with a square side length of 2.0 m, which is the largest available side length in the data set \(\Phi=\{V_{i}\}_{i=1}^{532}\). Based on our deep learning experiments, the ideal situation is when the spatial coordinates of the input geometries span the range \([-1,\,1]\); geometries stretched beyond \([-1,\,1]\) or compressed well inside it lead to an increase in error. Therefore, the domains with a side length of 1.6 m end up having higher levels of error compared to those with larger side lengths.
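The error metric used throughout Tables 2-4 and the coordinate scaling recommended above are simple to reproduce; a minimal NumPy sketch (the function names are ours, for illustration only):

```python
import numpy as np

def relative_l2_error(pred, true):
    # pointwise relative error in the L2 norm, ||u_hat - u|| / ||u||
    return np.linalg.norm(pred - true) / np.linalg.norm(true)

def scale_to_unit_box(xy):
    # linearly map each spatial coordinate of a point cloud into [-1, 1]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    return 2.0 * (xy - lo) / (hi - lo) - 1.0
```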
Figure 4: The first group of examples taken from the set \(\Phi=\{V_{i}\}_{i=1}^{532}\), comparing the finite element solutions to the PIPN predictions for the displacement fields.

Figure 5: The second group of examples taken from the set \(\Phi=\{V_{i}\}_{i=1}^{532}\), comparing the finite element solutions to the PIPN predictions for the displacement fields.

Figure 6: The third group of examples taken from the set \(\Phi=\{V_{i}\}_{i=1}^{532}\), comparing the finite element solutions to the PIPN predictions for the displacement fields.

### Effect of batch size and network size

We first explore the effect of the batch size (\(B\)) on the PIPN performance in this subsection. It is worthwhile to note that at each epoch, the geometries are shuffled and randomly divided into mini-batches, exactly as in fully supervised learning models. We collect the average relative pointwise errors (\(L^{2}\) norm) of the displacement fields for the batch sizes (\(B\)) of 7, 14, 19, 28, 38, 76, and 133 and three different network sizes (\(n_{s}\)) of 1.0, 0.5, and 0.25 in Table 3. Note that due to memory limitations, a few of these machine learning experiments are not feasible, such as the combination of a batch size of \(B=133\) with the network size of \(n_{s}=1.0\). According to Table 3, for a fixed network size of \(n_{s}=1.0\), the batch size (\(B\)) does not significantly affect the prediction accuracy. A similar story holds when the network size of \(n_{s}=0.5\) is taken, except for the batch size of \(B=76\), where the accuracy of the PIPN prediction notably decreases and the average relative pointwise error (\(L^{2}\) norm) of the displacement fields becomes approximately 37%. The optimal batch size (\(B\)) is 28 for this network size (\(n_{s}=0.5\)). By reducing the network size to \(n_{s}=0.25\), the overall performance of PIPN sharply decreases. For batch sizes (\(B\)) of 7, 14, 19, 76, and 133, the PIPN prediction is completely off. For the batch sizes (\(B\)) of 28 and 38, although the average relative pointwise errors (\(L^{2}\) norm) are less than 50%, the PIPN solution is not reliable. The lack of PIPN performance for the choice of \(n_{s}=0.25\) is further discussed at the end of this subsection. We conclude that it is important to first select a PIPN of suitable size before investigating the effect of the batch size (\(B\)); in other words, the network size influences the PIPN performance significantly more than the batch size does.
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Batch size (\(B\)) & \(n_{s}=1.0\) & \(n_{s}=1.0\) & \(n_{s}=0.5\) & \(n_{s}=0.5\) & \(n_{s}=0.25\) & \(n_{s}=0.25\) \\ \hline & Average & Average & Average & Average & Average & Average \\ & \(||\hat{u}-u||/||u||\) & \(||\hat{v}-v||/||v||\) & \(||\hat{u}-u||/||u||\) & \(||\hat{v}-v||/||v||\) & \(||\hat{u}-u||/||u||\) & \(||\hat{v}-v||/||v||\) \\ \hline 7 & 8.56410E\(-2\) & 8.58942E\(-2\) & 8.40571E\(-2\) & 8.90152E\(-2\) & 1.00000 & 1.00000 \\ 14 & 8.24455E\(-2\) & 7.84251E\(-2\) & 8.71806E\(-2\) & 7.68771E\(-2\) & 9.99999E\(-1\) & 1.00000 \\ 19 & 8.34842E\(-2\) & 8.42466E\(-2\) & 8.03585E\(-2\) & 8.33552E\(-2\) & 5.08574E\(-1\) & 1.00000 \\ 28 & 8.11108E\(-2\) & 8.60237E\(-2\) & 7.44200E\(-2\) & 7.40376E\(-2\) & 3.24522E\(-1\) & 3.45531E\(-1\) \\ 38 & 8.69939E\(-2\) & 9.05297E\(-2\) & 8.45176E\(-2\) & 8.20743E\(-2\) & 3.68147E\(-1\) & 3.69285E\(-1\) \\ 76 & \(\times\) & \(\times\) & 3.77084E\(-1\) & 3.78790E\(-1\) & 1.00000 & 9.99997E\(-1\) \\ 133 & \(\times\) & \(\times\) & \(\times\) & \(\times\) & 1.00000 & 5.05933E\(-1\) \\ \hline \hline \end{tabular} \end{table} Table 3: Error analysis of the displacement fields for the domains of the set \(\Phi=\{V_{i}\}_{i=1}^{532}\) for different batch sizes (\(B\)) and three different network sizes of \(n_{s}=1.0\), \(n_{s}=0.5\), and \(n_{s}=0.25\) when \(\omega_{\text{sensor}}=50\) m\({}^{-1}\) and \(\omega_{\text{momentum}}=1\) m. \(||\ldots||\) denotes the \(L^{2}\) norm. The cross symbol (\(\times\)) indicates that the machine learning experiment is not doable due to a memory limitation. Figure 7: A comparison between the ground truth and prediction of PIPN for the displacement fields after 3, 500, and 1500 epochs The evolution of the PIPN loss function (see Eq. 15) for the batch sizes (\(B\)) of 7, 14, 19, 28, 38, and 76 for the network size of \(n_{s}=0.5\) is shown in Fig. 9a. According to Fig. 9a, for the batch sizes (\(B\)) of 7, 14, 19, 28, and 38, although the loss function eventually converges to approximately the same value, the machine learning experiment with the batch size of \(B=19\) reaches this convergence with a significantly smaller number of epochs. Moreover, the PIPN loss associated with the batch size of \(B=76\) converges to a value larger than the other investigated batch sizes, as can be realized from Fig. 9a. This fact can be seen in another form by looking at the relative errors tabulated in Table 3 as already discussed. Additionally, the effect of network size is carried out and outcomes are listed in Table 4. Based on the information listed in Table 4, for relatively small network sizes (\(n_{s}=0.125\), \(n_{s}=0.25\)) and relatively large network sizes (\(n_{s}=2.0\), \(n_{s}=2.5\)), the PIPN configuration experiences a higher level of errors compared to the moderate sizes of the network (\(n_{s}=0.5\), \(n_{s}=1.0\), \(n_{s}=1.5\)). For a relatively small PIPN, the network is too simple and a bias (i.e., underfitting) takes place. For a relatively large PIPN, the number of data is not sufficient to well determine the weight matrix (**W**) and bias vector (**b**) of the network, and hence, the PIPN predictions suffer from the lack of accuracy. Furthermore, the evolution of the total loss value over all the geometries of the set \(\Phi=\{V_{i}\}_{i=1}^{532}\) is displayed for the network sizes (\(n_{s}\)) of 0.125, 0.25, 0.5, 1.0, 1.5, 2.0, and 2.5 in Fig. 9b. As can be seen in Fig. 
9b, for \begin{table} \begin{tabular}{l l l} \hline \hline Network size (\(n_{s}\)) & Average \(\frac{\|\tilde{u}-u\|}{\|u\|}\) & Average \(\frac{\|\tilde{v}-v\|}{\|\tilde{v}\|}\) \\ \hline 0.125 & 1.00001 & 1.00000 \\ 0.25 & 5.08574E\(-1\) & 1.00000 \\ 0.5 & 8.03585E\(-2\) & 8.33552E\(-2\) \\ 1.0 & 8.34842E\(-2\) & 8.42466E\(-2\) \\ 1.5 & 8.51296E\(-2\) & 7.35883E\(-2\) \\ 2.0 & 9.25092E\(-2\) & 1.00152E\(-1\) \\ 2.5 & 1.51389E\(-1\) & 1.52311E\(-1\) \\ \hline \hline \end{tabular} \end{table} Table 4: Error analysis of the displacement fields for the domains of the set \(\Phi=\{V_{i}\}_{i=1}^{532}\) for different network sizes (\(n_{s}\)) when \(B=19\), \(\omega_{\text{sensor}}=50\) m\({}^{-1}\), and \(\omega_{\text{momentum}}=1\) m. \(||\ldots||\) indicates the \(L^{2}\) norm. Figure 8: Distribution of absolute pointwise error when the relative pointwise error (\(L^{2}\) norm) becomes (**a**) maximum for \(\tilde{u}\), (**b**) maximum for \(\tilde{v}\), (**c**) minimum for \(\tilde{u}\), and (**d**) minimum for \(\tilde{v}\) the network size (\(n_{s}\)) of 0.125, the training loss does not converge, leading to 100% relative errors as tabulated in Table 4. As discussed earlier, for the network sizes (\(n_{s}\)) of 0.5, 1.0, and 1.5, the relative errors of PIPN are approximately the same and smaller than other network size choices. Among these three selections, the network size (\(n_{s}\)) of 1.5 converges approximately after 800 epochs, while PIPN with the network size (\(n_{s}\)) of 0.5 converges after approximately 2800 epochs. For the network size (\(n_{s}\)) of 2.0 and 2.5, although the training loss converges, we observe a tremendous tendency for divergence and instability in the optimization process. The reason comes back to the fact that the number of data is not adequate for this choice of the PIPN size as explained in the previous paragraph. ### Choice of a symmetric function in PIPN We investigate the effectiveness of two different symmetric functions (see Eqs. 8-9) in the setting of the PIPN configuration. Specifically, we set the batch size of \(B=19\) and the network size of \(n_{s}=1.0\) with the weight of \(\omega_{\text{sensor}}=50\) m\({}^{-1}\) for the sensor component in the PIPN loss function for this machine learning experiment. Figure 9c illustrates the evolution of the PIPN loss function (see Eq. 15) for the maximum and average as two symmetric functions for extracting global features of point clouds. For a fair comparison, the network size and the batch size are set the same in both experiments. As can be observed in Fig. 9c, the max function shows a better final performance compared to the average functions, though the loss with the average function initially drops faster than the loss function with the max function. However, a sharp decrease in the PIPN loss value with the maximum function occurs around approximately 1800 epochs such that this loss value becomes smaller than Figure 9: Evolution of the PIPN loss function \(\mathbf{a}\) for different batch sizes (\(B\)); \(\mathbf{b}\) for different network sizes (\(n_{s}\)); \(\mathbf{c}\) for two different symmetric functions (see Eqs. 8–9); \(\mathbf{d}\) for different setups of the weight of the sensor component in (\(\omega_{\text{sensor}}\)) in the loss PIPN function (see Eqs. 16–19) the corresponding value with the average function. 
Quantitatively, the average relative pointwise errors (\(L^{2}\) norm) of the displacement fields (\(u\), \(v\)) in the \(x\) and \(y\) directions are 4.69055E\(-1\) and 4.39773E\(-1\), respectively, for the average function, while these errors are 8.34842E\(-2\) and 8.42466E\(-2\) for the maximum function. We note that Qi et al. [38] and Qi et al. [39] used the maximum function for the purpose of object classification and segmentation in the area of computer graphics, and we observe here a higher performance with the maximum function in the area of computational mechanics as well. The reason can be explained as follows. As shown by Kashefi et al. [27] for supervised learning, with the maximum function, the boundary points of the point clouds of the domains always contribute to the global feature in PointNet [38], and it is precisely on the boundaries that most of the variation from one geometry to another occurs. In this way, PIPN can more clearly distinguish the PDE solution depending on the geometric features of each domain of the set \(\Phi=\{V_{i}\}_{i=1}^{532}\).

### Effect of component weights in the PIPN loss function

We investigate four different formulas for the weight component of the sensor data in the PIPN loss function. More specifically, we set \(\omega_{0}=50\) m\({}^{-1}\) in Eq. 17, \(\omega_{1}=50\) m\({}^{-1}\) and \(r_{1}=800\) in Eq. 18, and \(\omega_{2}=\frac{50}{8}\) m\({}^{-1}\) and \(r_{2}=3002\) in Eq. 19. We display the evolution of the weight of the sensor component (\(\omega_{\text{sensor}}\)) during training in Fig. 10 for these four different setups (Eqs. 16-19). The relative pointwise errors (\(L^{2}\) norm) are tabulated in Table 5. In general, the best PIPN performances are obtained when the sensor component (\(\omega_{\text{sensor}}\)) has a constant but higher value compared to the PDE component (\(\omega_{\text{momentum}}\)) (see Eq. 17 and Fig. 9d) and when the sensor component (\(\omega_{\text{sensor}}\)) logarithmically decreases while still having a significantly higher value than the PDE component (\(\omega_{\text{momentum}}\)) during most of the training procedure (see Eq. 19 and Fig. 9d). Additionally, we plot the evolution of the PIPN loss function during training for these four different formulas (see Eqs. 16-19). These results demonstrate that setting a reasonably higher weight for the sensor component of the loss function (i.e., \(\omega_{\text{sensor}}\)) compared to the PDE component of the loss function (i.e., \(\omega_{\text{momentum}}\)), even during the entire training procedure, leads to more accurate outputs. Mathematically, because PIPN is not informed of any boundary conditions in this problem, there are infinitely many solutions to the governing PDEs, and thus obtaining the right solution is challenging for PIPN. By setting a higher weight for the sensor data, PIPN is forced to prioritize minimizing the mismatch between the network output and the sensor data; this enforcement, in turn, makes it easier to also satisfy the governing PDEs, because PIPN looks for the PDE solutions in the space constrained by the sensor data (rather than in a largely unconstrained space).

Figure 10: Evolution of the weight of the sensor component (\(\omega_{\text{sensor}}\)) in the PIPN loss function during the training procedure for four different formulas (see Eqs.
16–19) Additionally, we plot the evolution of the total PIPN loss over all geometries of the set \(\Phi=\{V_{i}\}_{i=1}^{532}\) for four different setups of the sensor component weight (see Eqs. 16-19) in Fig. 9d. Comparing the deep learning experiment with the constant value of \(\omega_{\text{sensor}}=50\) m\({}^{-1}\) (see Eq. 17) and the logarithmic function (see Eq. 19), a convergence with fewer epochs happens for the logarithmic function, as can be seen in Fig. 9d. Note that according to Table 5, PIPN demonstrates a higher performance for the two mentioned choices of \(\omega_{\text{sensor}}\) in comparison with the constant value of \(\omega_{\text{sensor}}=1\) m\({}^{-1}\) (see Eq. 16) and the exponential function (see Eq. 18). Moreover, the PIPN loss evolution with the exponential function for presenting \(\omega_{\text{sensor}}\) (see Eq. 18) has a smooth decay and converges to a value, which is slightly higher than the convergence values of deep learning experiments with the constant value of \(\omega_{\text{sensor}}=50\) m\({}^{-1}\) (see Eq. 17) and the logarithmic function (see Eq. 19). For the choice of the constant value of \(\omega_{\text{sensor}}=1\) m\({}^{-1}\) (see Eq. 16), the initial loss is smaller than the other three options simply because the weight of the sensor component in the loss function (see Eq. 15) is scaled down by a factor of 50. However, the initial loss decreases with a very gentle slope up to 3000 epochs and thus there is a 38% average relative error as reported in Table 5. ## 4 Summary and conclusions Generally speaking, there are two different deep learning categories in the area of computational mechanics: weakly supervised learning models (requiring sparse labeled data) (see e.g., [42]) and fully supervised learning models (requiring plentiful labeled data) [48]. A specific class of supervised learning models is physics-informed neural networks (PINNs) [42]. From an industrial point of view, an ideal deep learning framework should contain the ability for predicting desired fields over hundreds (or even thousands) of domains with various geometries for the goal of swiftly optimizing geometric designs. Regular PINNs are applicable to the prediction of desired fields on a single geometry, whereas fully supervised learning models are used to predict desired fields on a few hundred geometries. In this sense, there is a gap between the weakly supervised learning and supervised learning models. Physics-informed PointNet (PIPN) [25] is a novel class of physics-informed deep learning algorithms that fills this gap. PIPN requires sparse label data but potentially is able to predict desired fields on a few hundred geometries. In 2022, Kashefi and Mukerji [25] proposed PIPN and employed it to solve incompressible flow and thermal fields over 108 domains with different geometries, simultaneously. Furthermore, they only investigated the batch sizes of 1, 2, 3, and 4 for the natural convection problem and the batch size of 13 for the method of manufactured solutions [25]. In the current article, we tried to explore the underlying capacity of PIPN in terms of the number of geometries that desired fields can be predicted on, simultaneously, and also to investigate if PIPN (as a weakly supervised learning model) is able to compete with fully supervised learning models from this point of view or not. We answered this question by considering a linear elasticity problem and more specifically plane stress conditions. 
Given our computational resources, PIPN was able to successfully predict the displacement fields over 532 domains with different geometries simultaneously. The average relative pointwise error (\(L^{2}\) norm) was approximately 9% over the set. For the first time, we comprehensively explored the effect of the batch size on the PIPN performance. By the term "batch size", we mean the number of geometries fed into PIPN at each sub-epoch. In particular, we examined batch sizes of 7, 14, 19, 28, 38, 76, and 133. In addition, for the first time we introduced a global parameter for controlling the network size of PIPN and investigated the effect of this parameter on the accuracy of the PIPN predictions. It was concluded that the network size plays a more important role in controlling the PIPN performance than the batch size, and that the network size should be compatible with the size of the data (i.e., the number of geometries). In fact, we observed that when a suitable size for PIPN was selected, the effect of the batch size on the PIPN predictions was insignificant; however, the batch size affected the convergence rate. Furthermore, we studied the accuracy of the displacement fields predicted by the PIPN methodology for different constant and dynamic (i.e., epoch-dependent) weights of the partial-differential-equation component and the sparse-labeled-data component of the loss function. It was concluded that setting a constant, higher weight for the component of the sparse labeled data compared to the component of the partial differential equations leads to higher prediction accuracy.

## Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

## Data availability

The data and the developed Python code are available on the following GitHub repository: [https://github.com/Ali-Stanford/PhysicsInformedPointNetElasticity](https://github.com/Ali-Stanford/PhysicsInformedPointNetElasticity)

## Acknowledgements

The authors acknowledge funding by the Shell-Stanford Collaborative Project on Digital Rock Physics 2.0 for supporting this research project. Additionally, we would like to thank the Stanford Research Computing Center for supporting our studies by providing computing resources.
2307.10812
Constraining stellar and orbital co-evolution through ensemble seismology of solar-like oscillators in binary systems -- A census of oscillating red-giants and main-sequence stars in Gaia DR3 binaries
Binary systems constitute a valuable astrophysics tool for testing our understanding of stellar structure and evolution. Systems containing an oscillating component are interesting as asteroseismology offers independent parameters for the oscillating component that aid the analysis. About 150 such systems are known in the literature. To enlarge the sample of these benchmark objects, we crossmatch the Two-Body-Orbit Catalogue (TBO) of Gaia DR3 with catalogs of confirmed solar-like oscillators on the main sequence and in the red-giant phase from NASA Kepler and TESS. We obtain 954 new binary system candidates hosting solar-like oscillators, of which 45 stars are on the main sequence and 909 are red giants, including 2 new red giants in eclipsing systems. 918 oscillators in potentially long-periodic systems are reported. We increase the sample size of known solar-like oscillators in binary systems by an order of magnitude. We present the seismic properties of the full sample and conclude that the vast majority of the orbital elements in the TBO are physically reasonable. 82% of all TBO binary candidates observed multiple times with APOGEE are confirmed from radial-velocity measurements. However, we suggest that due to instrumental noise of the TESS satellite the seismically inferred masses and radii of stars with $\nu_\textrm{max}$$\lesssim$30$\mu$Hz could be significantly overestimated. For 146 giants the seismically inferred evolutionary state has been determined and shows clear differences in their distribution in the orbital parameters, which are attributed to the cumulative effect of the equilibrium tide acting in these evolved binary systems. For another 146 systems hosting oscillating stars, values for the orbital inclination were found in the TBO. From testing the TBO on the SB9 catalogue, we obtain a completeness factor of 1/3.
P. G. Beck, D. H. Grossmann, L. Steinwender, L. S. Schimak, N. Muntean, M. Vrard, R. A. Patton, J. Merc, S. Mathur, R. A. Garcia, M. H. Pinsonneault, D. M. Rowan, P. Gaulme, C. Allende Prieto, K. Z. Arellano-Córdova, L. Cao, E. Corsaro, O. Creevey, K. M. Hambleton, A. Hanslmeier, B. Holl, J. Johnson, S. Mathis, D. Godoy-Rivera, S. Símon-Díaz, J. Zinn
2023-07-19T17:46:20Z
http://arxiv.org/abs/2307.10812v3
Red-giant and main-sequence solar-like oscillators in binary systems revealed by ESA _Gaia_ Data Release 3+

###### Abstract

Context: Binary systems constitute a valuable astrophysics tool for testing our understanding of stellar structure and evolution. Systems containing at least one oscillating component are interesting because asteroseismology offers independent parameters for the oscillating component that aid the analysis. Of particular interest are systems with known inclinations. With \(\sim\)0.8 million binary candidates, the Two-Body-Orbit Catalogue (TBO) of _Gaia_ Data Release 3 (DR3) substantially increases the number of known binaries and the quality of the astrometric data available for them.

Aims: To enlarge the sample of these astrophysically valuable benchmark objects, we search for new binary system candidates identified in the _Gaia_ DR3 TBO for which one component has a detection of solar-like oscillations in the literature.

Methods: We cross-match the TBO and the eclipsing binary catalog in _Gaia_ DR3 with catalogs of confirmed solar-like oscillators in the main-sequence and red-giant phase from the NASA _Kepler_ mission and stars in the Southern Continuous Viewing Zone of NASA TESS. The wealth of seismic information is used to characterize the oscillating star. To test the completeness and robustness of the values reported in the TBO catalog, we perform a similar analysis on stars of the \(9^{th}\)_catalog of spectroscopic binary orbits_ (SB9).

Results: The analysis of the SB9 reveals an overall completeness factor for the _Gaia_ TBO catalog of up to \(\sim\)30%, providing reliable orbital parameters for \(\gtrsim\)90% of the systems below \(P_{\rm orb,SB9}\lesssim\)250 d. We obtain 954 new unique binary system candidates from _Gaia_ DR3 that host solar-like oscillators, of which we find 45 stars in binary candidates to be on the main sequence and 909 in the red-giant phase. Additionally, 918 oscillators in potentially long-periodic systems are reported. We present the seismic properties of the full sample and test whether the reported orbital periods are physically possible. For 146 giants the evolutionary state has been determined from their mixed-mode period spacing, showing clear differences in their distribution in the orbital parameters. Two new eclipsing binary systems, hosting a red-giant primary, were found. For another 146 systems hosting oscillating stars, values for the orbital inclination were found in the TBO. Of 181 TBO candidate systems observed multiple times with APOGEE, 149 (82%) are confirmed as binaries from radial-velocity measurements.

Conclusions: We conclude that the vast majority of the orbital elements reported in the TBO catalog are physically reasonable and realistic. This finding increases the sample of known solar-like oscillators in binary systems by an order of magnitude. The large fraction of binaries confirmed from APOGEE RV measurements indicates that the TBO catalog is robust. We suggest that, due to instrumental noise, the seismically inferred masses and radii of stars observed with the TESS satellite and with an excess of oscillation power at \(\nu_{\rm max}\lesssim 30\,\mu\)Hz could be significantly overestimated. The differences in the distributions of the orbital period and eccentricity are due to the cumulative effect of the equilibrium tide acting in these evolved binary systems.

## 1 Introduction

Among the 5000 stars that are visible to the naked eye, about 2000 are known to be multiple-star systems.
Naked-eye stars are a small fraction of the Milky Way but are reasonably representative of the incidence of binarity, which is estimated to be between 50 and almost 100% (e.g., Eggleton, 2006). Some systems are close enough to be in contact; most are far apart enough to evolve almost independently. Binary systems are known with orbital periods as short as 0.2 days or as long as thousands of years. Stars in multiple systems are also precious test benches for testing our understanding of stellar structure and evolution (Moe & Di Stefano, 2017; Offner et al., 2022). While their components may vary significantly in temperature, luminosity, radius, and lithium abundance, both components are identical in their initial conditions, age, and distance (e.g. Prsa, 2018). The advent of space telescopes, such as CoRoT (_Convection_, _Rotation et Transits planeaires_, Baglin et al., 2006), _Kepler_(Borucki et al., 2010), its refurbished K2 mission (Howel et al., 2014), and TESS (_Transiting Exoplanet Survey Satellite_, Ricker et al., 2014) allowed for the detection of solar-like oscillations in late-type stars. These convection-driven oscillations provide a frequency pattern that allows direct identification of the spherical degree of the oscillation mode (see monograph by Aerts, Christensen-Dalsgaard, & Kurtz, 2010, and references therein), providing optimal input information for stellar modeling. In the event where individual mode fitting is not feasible, the information content from the power spectrum can be summarized by two characteristic frequencies. These frequencies can then be combined through scaling relations to yield stellar mass and radius (Brown et al., 1991; Kjeldsen & Bedding, 1995). The detection and exploitation of non-radial modes in giant stars by De Ridder et al. (2009) and later of dipole-mixed modes (Beck et al., 2011, 2012; Bedding et al., 2011; Mosser et al., 2011) unlocked the full potential of the asteroseismic analysis of red-giant stars. The constraint set by stellar binarity, amended with the independent information on stellar structure and its properties, offers a unique opportunity for testing the complex microscopic and macroscopic physics involved in building stellar models (e.g. Beck et al., 2018; Li et al., 2018; Johnston et al., 2021). The number of known binaries with solar-like oscillating components is still limited. From photometry, binaries are only detectable if their orbital geometry leads to eclipses or if the hydrostatic readjustment due to the tidal interaction of the stars at periastron gives rise to significant flux modulations (Kumar et al., 1995). The latter systems are colloquially referred to as "heartbeat stars". In a recent effort, Beck et al. (2022) made an inventory of the red-giant oscillators that belong to binary systems (eclipsing and not) in the literature and searching for more in the \(9^{th}\)_catalog of spectroscopic binary orbits_ (SB9, Pourbaix et al., 2004), leading to a total of 182 oscillating red giants in binary systems with resolved orbital parameters (see Fig. 1). Following a different approach, Hon et al. (2022) provided a catalog of seismic parameters of members of 99 binary systems, of which the red-giant primary is spatially resolved from the secondary. Because of their very long orbits, orbital periods are hardly known for these systems. 
Of particular interest are double-lined spectroscopic binaries (SB2) \(-\) spectral lines from both components are detectable \(-\) with a known inclination angle of 90 degrees, for which stellar masses and radii can be accurately determined. Gaulme et al. (2016) used 10 of these systems to test the accuracy and precision of the asteroseismic scaling relations, suggesting an offset of up to 15% in mass and 5% in radius. Themessl et al. (2018) and Benbakoura et al. (2021) added another four eclipsing systems, increasing the count of these high-value targets to 14 (see Beck et al., 2022, for a complete list). For a robust analysis of the comparison of dynamical and seismic masses, a substantially larger number of systems with known inclination angle is necessary. The third data release (DR3) of the ESA _Gaia_ mission (Gaia Collaboration et al., 2016, 2023) is the first _Gaia_ DR to include a specific analysis for non-single stars (see Gaia Collaboration et al., 2023; Holl et al., 2023; Halbwachs et al., 2023; Mowlavi et al., 2023; Siopis et al. subm., Damerdji et al. subm., and Gosset et al. subm., for catalogs and details of the analysis). In the _Gaia_ data, a multiple-star system can be identified with one or several of the following manners, (_i_) astrometric measurements, (_ii_) detecting periodic photometric dimming caused by eclipses or phase effects, (_iii_) radial velocity measurements obtained by the high-resolution spectrometric channel and (_iv_) by the low-res spectro-photometry from SED fitting. From 34 months or 1034 days of observations of the whole sky, Gaia Collaboration et al. (2023) reports about 814 000 binaries in _Gaia_ DR3, resulting from _Gaia_ astrometric (Halbwachs et al., 2023; Holl et al., 2023), and radial velocity orbital fitting (Gosset et al., 2023; Damerdji et al., 2023). This catalog contains only a small subset of the 2.2 million eclipsing binaries1 detected by photometry Mowlavi et al. (2023), for which an orbital solution has been computed (Siopis et al. subm.). In the optimal case, astrometric solutions offer the possibility to obtain the inclination of the orbital plane in the sky, which allows us to measure the masses of their components and utilize non-eclipsing systems for calibrating asteroseismology. This new catalog of non-single stars released by _Gaia_ constitutes a major change in the inventory of such type of systems. Footnote 1: see the more complete gaiadr3.vari_eclipsing_binary catalog It will take a long time to go through the data from the past CoRoT, _Kepler_, and K2 missions, the current TESS, and the forthcoming Chinese _Earth 2.0_(Ge et al., 2022), ESA PLATO (Rauer et al., 2014) and the NASA _Roman_(Johnson et al., 2020; Huber et al., 2023) missions, which are expected to be operational by the end of the decade. Before exploring the archived photometric data, a natural first step consists of checking which of the known stellar pulsators are listed as non-single stars in the _Gaia_ DR3 catalog. This work presents the successful search for solar-like oscillating stars in binary systems, revealed through photometric, spectroscopic, and astrometric solutions in the _Gaia_ DR3. The paper is structured in the following logic. In Sec. 2, we describe the selection of the sample red-giant stars and our search for orbital solutions reported in the _Gaia_ DR3. The general asteroseismic properties of the found sample of binary candidates with oscillating red-giant primaries are described in Sec. 3. 
The history of tidal interaction and stellar activity and rotation is analyzed in Sec. 4. In Sec. 5, we present an eclipsing binary system, listed in the _Gaia_ DR3 eclipsing binary catalog, and report oscillations and an updated orbital period from TESS photometry. Furthermore, we present 146 solar-like oscillators in systems with _Gaia_ inclinations. In this chapter, we also discuss why eclipsing binaries are difficult to find, resulting in their small number compared to the known binary population. Independent confirmation of 181 reported binary candidates through RV variations, mainly from APOGEE (Majewski et al. 2017), is provided in Sec. 6. Section 7 discusses symbiotic binaries and giants with anomalous peaks in the power-spectral density (PSD), two related science cases, including red-giant binaries, on which the data from _Gaia_ DR3 sheds new light.

Figure 1: Distribution of the sample in the seismic HRD with the frequency of the oscillation-power excess on the left vertical axis. The colored symbols show binary candidates reported in _Gaia_ DR3 hosting giant or main-sequence solar-like oscillators. Red marks red giants with asteroseismic solutions obtained from TESS data. Blue denotes red giants with asteroseismic solutions from the _Kepler_ mission. Yellow shows oscillating solar-like main-sequence stars and subgiants, observed with _Kepler_. The black triangles depict the literature sample. Only systems with APOGEE temperatures are shown. For the literature sample, all targets with asteroseismic solutions are shown, independent of whether a solution in the _Gaia_ DR3 TBO catalog exists. The grey lines show the evolutionary paths for stars of 1 M\({}_{\odot}\), 2 M\({}_{\odot}\), and 3 M\({}_{\odot}\) with solar metallicity. The grey probability distribution underlying the scatter and curves is the distribution of all targets with asteroseismic solutions of the above-mentioned catalogs.

## 2 Sample selection and binary statistics

Using the _Simbad_ module in the _AstroPy_ package (Astropy Collab. et al. 2018, and references therein), we created a table of cross-identifiers. This list was then used to query the inventory of the Two-Body-Orbit (TBO) catalog of _Gaia_ DR3 (Gaia Collaboration et al. 2023) on the _Gaia_ data archive2.

Footnote 2: [https://gea.esac.esa.int/archive/](https://gea.esac.esa.int/archive/)

### Completeness and robustness of Gaia DR3 binary candidate solutions

The documentation of the _Gaia_ DR3 TBO stresses that the reported systems are 'only' candidates for binary systems. To better understand the completeness, robustness, and quality metrics of the orbital solutions provided in the catalog, particularly for the outliers, we explore the TBO inventory for a well-known sample of binaries. A commonly chosen gold-standard sample is the \(9^{th}\)_Catalogue of Spectroscopic Binary Orbits_ (SB9), created and curated by the team of Pourbaix et al. (2004), which we used to assess the quality and completeness of the _Gaia_ DR3 solutions. In this subsection, we only review the take-away results of our comparison. For the details of the full analysis, the reader is referred to App. A. In the SB9, we identified 3413 unique objects. Because the efficiency of the methods to detect binaries depends on the (integrated) object's magnitude (see Fig. 10 in Gaia Collaboration et al.
2023), we constructed a magnitude-limited sample between the _Gaia_ magnitudes \(4\lesssim G\) [mag] \(\lesssim 13\) to assess a completeness factor, bias-corrected for objects outside the detection range of _Gaia_. Because many SB9 systems have orbits longer than the baseline of _Gaia_ DR3, we further limited the sample to \(P_{\rm orb,SB9}\lesssim 1\,100\) d. Of the 2343 systems in the period-magnitude-limited SB9 sample, 668 were identified as binary systems (Fig. 2, middle panel). This corresponds to a completeness factor of 28.5%. An additional 241 SB9 systems are present in the catalog of non-linear solutions (Gaia Collaboration et al. 2023), whose orbital elements could not be resolved from the current data in _Gaia_ DR3 and which most likely indicate systems with orbital periods longer than the baseline of _Gaia_ DR3. Indeed, the comparison in Fig. 2 (top panel) confirms this expectation. Taking these detections into account increases the overall completeness factor to 38.8%. The middle panel of Fig. 2 shows the _e–P_ plane of all 715 systems for which both parameters, eccentricity and period, are provided in the SB9 and the _Gaia_ DR3 catalog. We consider a solution reliable if the values listed in SB9 and _Gaia_ DR3 (bottom panel of Fig. 2) differ by less than 10%. For such solutions, we find a good agreement for the eccentricities, with a mean residual of 0.011 and a standard deviation of 0.104.

Figure 3: Distribution of the _ruwe_ value of the SB9 systems in _Gaia_ in logarithmic scale as a function of the _Gaia_ parallax and the orbital period from the SB9 catalog.

Figure 2: Comparison of period and eccentricity from _Gaia_ DR3 TBO and the SB9 catalog. The top panel depicts the SB9 values of the orbital parameters of systems for which only non-linear (violet dots) or acceleration solutions (purple triangles) are reported in the _Gaia_ TBO. The middle panel shows 729 systems that have a full solution for the orbital parameters in both catalogs. The solution reported in the SB9 catalog is shown in purple. The DR3 solutions for the same set of systems are separated based on their _ruwe_ value (_ruwe_\(<1.2\): grey dots; _ruwe_\(>1.2\): cyan circles). Triple or higher-order systems are marked with a cross. The bottom panel depicts the absolute residuals for the period and eccentricity between the two sets of solutions.

A more complex picture is found for the orbital periods. In the ranges \(P_{\rm orb,SB9}\lesssim\)250 d and 250 \(\lesssim P_{\rm orb,SB9}\) [d] \(\lesssim\)500, we find that \(\sim\)90% and \(\sim\)99% of all systems have periods with residuals better than 10%, respectively. Interestingly, with a completeness factor of 41.0%, the longer period range yields a higher detection rate than the short-periodic range with 24.9%. The borders of the intervals were chosen to resemble one-fourth and half of the time base of _Gaia_ DR3. The lower percentage of reliable solutions in the short-periodic range can be explained by a larger number of short-periodic solutions for long-periodic systems (Fig. A.1). For \(P_{\rm orb,DR3}\geq\) 500 d, the number of reliable solutions drops to \(\sim\)74%, which contains a significant number of substantially underestimated orbital periods of systems with \(P_{\rm orb,SB9}\geq\) 1100 d. The comparison of the orbital elements in the SB9 and _Gaia_ DR3 catalogs is depicted in the bottom panel of Fig. 2 and in Fig. A.2, and discussed in more detail in App. A.
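For readers who wish to reproduce such cross-matches, the TBO catalog can be queried directly from the public _Gaia_ archive. A minimal sketch using astroquery is given below; the table and column names follow the DR3 schema, while the single source_id shown is an arbitrary placeholder, not a system from our sample.

```python
from astroquery.gaia import Gaia

# Placeholder list of Gaia DR3 source_ids from a cross-identifier table.
source_ids = [4657280635327480192]  # illustrative only

query = f"""
    SELECT source_id, nss_solution_type, period, eccentricity, inclination
    FROM gaiadr3.nss_two_body_orbit
    WHERE source_id IN ({",".join(str(s) for s in source_ids)})
"""
tbo = Gaia.launch_job(query).get_results()
```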
The _ruwe_ parameter represents the _Renormalised Unit Weight Error_ of a star's astrometry (for details see Gaia Collaboration et al., 2023). The analysis of the SB9 sample taught us that \(\sim\)40% of all systems in the period-magnitude-limited sample have _ruwe_ values below the threshold value of \(\sim\)1.4 (Fig. A.3), which is often discussed as a good general indicator for a binary detection. As shown in Fig. 3, these systems are typically far away, and their components' displacement is therefore small with respect to the astrometric precision of the _Gaia_ satellite.

### Solar-like oscillators in _Gaia_ DR3 binary candidates

To guide our search for solar-like oscillators in binary star systems, we compiled a list of known solar-like oscillators whose global seismic parameters were reported in the literature (Table 1). Due to its characteristic shape, the PSD of solar-like oscillations is typically described through the center frequency of the excess of oscillation power, \(\nu_{\mathrm{max}}\). This quantity correlates with the stellar surface gravity, \(\log g\), and the stellar luminosity \(L\). The structure of oscillation modes in the power excess is very regular (Tassoul, 1980; Mosser et al., 2011). The average frequency separation between modes of the same spherical degree \(\ell\) but consecutive radial orders is described through the large-frequency separation, \(\Delta\nu\). The large-frequency separation is a proxy of the average sound speed in the oscillating star. We refer to the review by Aerts (2021) for a recent overview of the two global seismic parameters.

The largest input sample comprises over eighteen thousand oscillating red giants, measured and identified from _Kepler_ photometry. The _Kepler-giant sample_ was compiled from Yu et al. (2018). Because of the long time base of the _Kepler_ photometry, the frequency resolution allows for a clear identification of the mixed-mode pattern. Evolutionary states were extracted from seismology by Vrard et al. (2016). In this sample, 17 544 systems have valid astrometric solutions listed in the _Gaia_ DR3 catalog.

The second sample is the _TESS-giant sample_, provided by the giants in the TESS mission's Southern Continuous Viewing Zone (SCVZ). Of the fifteen thousand giants identified from photometric and spectroscopic calibration, Mackereth et al. (2021) detected oscillations in about two-thirds of the stars. It has been a matter of discussion in the literature (e.g. Silva Aguirre et al., 2020) that, while individual and well-proven asteroseismic pipelines agree on the position of the power excess within a few percent, there is an increased scatter in the large-frequency separation determined from TESS data. Therefore, we also accept stars with missing or uncertain \(\Delta\nu\) values into our catalog. We consider oscillations to be detected in a giant if at least two of the three pipelines used by Mackereth et al. (2021) report a power excess. For our analysis, we use the mean value of \(\nu_{\mathrm{max}}\) reported in their paper. For 12 016 oscillating giants, an astrometric solution exists in _Gaia_ DR3. We follow the evolutionary states for this sample determined by Vrard et al. (2021).

Main-sequence solar-like stars and subgiants with solar-like oscillations complement the giant samples (hereafter referred to as the _MS+SG sample_). Here we use the list of stars observed by _Kepler_ compiled by Mathur et al. (2022).
Of the 624 oscillating dwarfs, an astrometric solution exists for 595 stars in _Gaia_ DR3. The distribution of the 30 155 oscillating stars from the _Kepler_, TESS and MS+SG samples (Table 1), with an astrometric solution in _Gaia_ DR3 in the seismic Hertzsprung-Russell diagram (HRD) is depicted as the probability density map shown in the grey-scale of Fig. 1. The orbital and seismic parameters of the detected binary candidates are presented in Table 1. For reference and orientation of the reader, we show evolutionary tracks for a star of the solar metallicity of 1, 2, and 3 M\({}_{\odot}\), calculated with the MESA evolutionary code (Paxton et al., 2018; Jermyn et al., 2023, and references therein). For our queries of the _Gaia_ data archive, we produced cross-identifier tables between the solution identifier of _Gaia_ DR3, the target input catalogs for the _Kepler_ and TESS missions (KIC and TIC, resp.), and the Two-_Micron All Sky Survey_ (2MASS). For the entire sample of oscillating stars in _Gaia_ DR3, we found 970 unique orbital solutions for binary system candidates in the _Gaia_ DR3 TBO catalog (Table 1). Between the input sample and the literature sample we find a cross section of 44 systems, of which 16 are detected. These are listed in the bottom panel of Table 1. Therefore, we report the detection of 954 solar-like oscillators in binary systems. The position of these binary systems in the seismic HR diagram is shown in Fig. 1. For 22 candidate systems, the _Gaia_ catalog provides multiple solutions of the DR3 data productions that did not converge to a single solution. We have chosen not to keep them in our final sample. Figure 4: Distribution of the apparent G magnitude in the input samples for the search (top panel) and found binary candidate systems (bottom panel). The bin size is one-fourth of a magnitude. The grey-shade regions mark magnitudes outside the magnitude-limited sample. The dash-dotted lines indicate the reported candidates with acceleration and non-linear solutions for the respective samples. ### Potentially long-periodic systems Our search of the acceleration and non-linear solutions in TBO sub-catalogs delivered in total 918 results for the giants- and dwarf-star samples (see Table 1 and Fig. 4). These solutions could indicate additional binary systems. However, no orbital parameters are given, as the orbital periods of these systems could substantially exceed the time base of _Gaia_ DR3. Under the assumption that all these detections represent actual binaries, we would roughly double the binary detection rate. Confirming these suggested binary detections will require more extended time bases of _Gaia_ observations with forthcoming data releases or ground-based RV monitoring. Both are beyond the scope of this paper. Therefore, we only list these systems in Table 2 and do not further explore their binary characteristics. ### Binary detection rates Differences between the input samples are found in the detection rate. Interestingly, the sample for giants seismically characterized with _Kepler_ (2.2%) and TESS (4.6%) differ by about a factor of 2. Several observational aspects can explain these differences. The main difference between the samples is the distribution of the apparent magnitude of their targets. Figure 4 compares these data sets by depicting the distributions of the mean _Gaia_ G band magnitude for the three input catalogs (Table 1). Indeed, the peak of the _Kepler_ giant sample is about 3.5 magnitudes fainter than that of the TESS giants. 
The _Kepler_ main-sequence sample peaks at about 11.5 mag. Therefore, we correct the fractional count of the yields in Table 1, calculated from the subsample of targets within the brightness limits (4 \(\leq\) G [mag] \(\leq\) 13) described in Sec. A. As shown in Fig. 4, correcting the sample sizes for the magnitude range mainly affects the sample size of the _Kepler_ giant sample, as about 40% of all targets are fainter than 13\({}^{\rm th}\) mag. Because the TESS telescope also shows significant saturation effects around 4\({}^{\rm th}\) mag and the sample of Mackereth et al. (2021) was limited to TESS magnitudes brighter than 11\({}^{\rm th}\) mag, all targets fall into the magnitude-limited range. To make the comparison of the binary rate more compatible between the space telescopes, we recalculated the fractional yields for a magnitude-limited sample, which leads to 3.6% and 4.6% for the _Kepler_ and TESS sample, respectively. Therefore, the corrected yields bring the results from the two giant samples into a closer agreement. The remaining difference could originate from the differences in the scanning law of the _Gaia_ satellite. As shown by Gaia Collaboration et al. (2023a, Fig. 7), the _Kepler_ field of view has been substantially less intensively covered than the Southern Continuous Viewing Zone of TESS. The _Kepler_ main-sequence sample achieves the highest detection rate with 8.1%. On first look, this is surprising, as their distribution mainly ranges between the bright TESS and the fainter _Kepler_ giant sample and suffers from the same lower number of scanning events as the _Kepler_ giants. This sample contains more stars on shorter orbital periods that are easily detected by the _Gaia_ satellite. These orbital periods are also located at periods untypical or physically impossible for giants in binary systems due to their extended radii. Therefore the difference could already point to a different binary fraction as a function of the evolutionary state. As discussed in the literature, based on the analysis of the large samples from the SB9 catalog and APOGEE data (e.g. Van Eylen et al. 2016; Badenes et al. 2018; Beck et al. 2022) most systems in the red-giant phase are found at periods longer than several hundred to a few thousand days, which is beyond the baseline of _Gaia_ DR3. Because we do not know the initial distribution of orbital periods, we cannot compile a magnitude-period limited sample, as done for the SB9 comparison in Sec. A.2 and correct the yields. However, we can estimate the approximate detection rates by including the 890 and 28 acceleration and non-linear solutions for giants and dwarfs (see Table 1), respectively, in the calculation of the magnitude-corrected binary yields. Therefore, we gain tentative binary detection rates of \(\sim\)6.6% for the _Kepler_ giants, \(\sim\)11.0% for the TESS giants, and \(\sim\)14.1% for the _Kepler_ dwarfs. These numbers are still low when compared with the expected binary fraction for the mass range of solar-like stars of about \(\sim\)50% known from stellar population studies (see the reviews by Moe & Di Stefano 2017; Offner et al. 
Table 1: Input samples for oscillating main-sequence and red-giant stars and the corresponding binary yields.

| Sample | Size | Oscillators in _Gaia_ DR3 | Magnitude limited | NSS TBO (uni.) | NSS TBO (alt.) | Binary fraction (full) | Binary fraction (mag. lim.) | Astrom. inclin. | NSS TBO (Acc+NL) |
|---|---|---|---|---|---|---|---|---|---|
| _Kepler_ giants | 18 824 | 17 544 | 10 374 | 376 | 3 | 2.2% | 3.6% | 31 | 257 |
| TESS giants | 15 405 | 12 016 | 12 016 | 549 | 18 | 4.6% | 4.6% | 105 | 633 |
| MS + SG | 624 | 595 | 595 | 45 | 1 | 8.1% | 8.1% | 14 | 28 |
| Lit. giants | 190 | 190 | 53 | 53 | 1 | 27.9% | 100% | 1 | 116 |
| SB9 sample | 3 413 | \(-\) | 2 343\({}^{\star}\) | 668 | 0 | 19.6% | 28.5%\({}^{\star}\) | 68 | 241 |

Notes. Columns 1 and 2: the name of the input sample for the search and the number of all targets in the catalog; 3: number of oscillating targets within this sample that have a _Gaia_ DR3 solution; 4: number of targets in the magnitude-limited sample within 4 \(\leq\) G [mag] \(\leq\) 13. The next columns present the results of the search of the Non-Single-Star (NSS) catalog in _Gaia_ DR3. 5: number of unique solutions returned by the ADQL query for the sample of oscillators with _Gaia_ solutions (col. 3); 6: number of alternative orbital solutions in case the TBO contains multiple solutions; 7: binary detection fraction, calculated from the sample of all oscillating targets with a _Gaia_ DR3 solution (col. 3); 8: binary detection fraction, calculated from the magnitude-limited sample (col. 4) and binaries within the magnitude limit; 9: number of systems hosting an oscillating component with an inclination reported in the TBO; 10: number of sources with acceleration and non-linear solutions in the TBO that could point to longer-period binary systems. \({}^{\star}\): Because the SB9 sample extends significantly further in period than the _Gaia_ TBO sample, the magnitude-limited sample for SB9 has been further corrected for a cut-off period of P \(\leq\) 1100 days.

In Sec. A.2, we showed that the binary yields are at least a factor of 3 too low. We argue that this high fraction of nondetections is due to insufficient data or the extensive orbital periods expected for these binaries. Correcting these binary fractions by this factor for incompleteness puts us in the ballpark of about \(\sim\)30% to \(\sim\)40%, which is close to the expected value.

## 3 General properties of the sample

From searching the catalogs in Table 1, we find a total of 954 new binary-system candidates in the TBO hosting a solar-like oscillating red giant (909) or either a main-sequence or a subgiant (45) star. We note that Gaia Collaboration et al. (2023) also presented a study of red-giant binaries. That _Gaia_ sample was drawn from a cut of all TBO solutions, whereby the giants were identified from their 2MASS colors (\(J-K>0\) mag and \(M_{K}<0\) mag). Because our sample was selected on the premise of detected oscillations, the sample's "purity", size, and distribution are different in these two works. Our sample is predefined by the detection of oscillations in the target source. This additional selection criterion allows us to select giants with high accuracy and to apply seismic techniques to exploit the properties of the sample.
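To illustrate the TBO search described above, the following is a minimal sketch of such a query, assuming astroquery's Gaia module; the `source_id` list is a placeholder standing in for our full cross-identifier tables, and the selected column names follow the _Gaia_ DR3 NSS documentation.

```python
# Minimal sketch of an ADQL query of the Gaia DR3 TBO (non-single-star)
# catalog for a list of oscillating targets. The source_ids below are
# placeholders; a real run would upload the full cross-match table.
from astroquery.gaia import Gaia

source_ids = [2079109044266147328, 5490280956749756928]  # hypothetical

query = f"""
SELECT tbo.source_id, tbo.nss_solution_type,
       tbo.period, tbo.period_error,
       tbo.eccentricity, tbo.eccentricity_error
FROM gaiadr3.nss_two_body_orbit AS tbo
WHERE tbo.source_id IN ({",".join(str(s) for s in source_ids)})
"""

job = Gaia.launch_job_async(query)   # asynchronous ADQL job
orbits = job.get_results()           # astropy Table of TBO orbital solutions
print(orbits)
```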
### Masses and radii from asteroseismology

We calculate the mass and radius of an oscillating star in the binary candidates with the standard asteroseismic scaling relations (Brown et al., 1991; Kjeldsen & Bedding, 1995). Using the solar values as a reference, this homological formalism allows estimating the masses and radii from the measured global seismic parameters, the peak frequency of the global power excess \(\nu_{\rm max}\) and the large frequency separation \(\Delta\nu\), and the spectroscopically measured effective temperature,

\[\frac{R_{\star}}{R_{\odot}} = \left(\frac{\nu_{\rm max}}{\nu_{\rm max,\odot}}\right)\cdot\left(\frac{\Delta\nu}{\Delta\nu_{\odot}}\right)^{-2}\cdot\left(\frac{T_{\rm eff,\star}}{T_{\rm eff,\odot}}\right)^{1/2}\,, \tag{1}\]

\[\frac{M_{\star}}{M_{\odot}} = \left(\frac{\nu_{\rm max}}{\nu_{\rm max,\odot}}\right)^{3}\cdot\left(\frac{\Delta\nu}{\Delta\nu_{\odot}}\right)^{-4}\cdot\left(\frac{T_{\rm eff,\star}}{T_{\rm eff,\odot}}\right)^{3/2}\,. \tag{2}\]

We use the reference values of the A2Z pipeline (Mathur et al., 2010), \(\nu_{\rm max,\odot}\) = 3100 \(\mu\)Hz, \(\Delta\nu_{\odot}\) = 135.2 \(\mu\)Hz, and \(T_{\rm eff,\odot}\) = 5777 K, to describe the oscillations in the Sun.

The seismic scaling relations assume the star's oscillations to be in the asymptotic regime of high radial-order modes, where oscillation modes are equally spaced in frequency by \(\Delta\nu\) (Tassoul, 1980). It has been shown that with decreasing frequency of the oscillation power excess, stars oscillate in lower radial orders. Additionally, non-adiabatic effects in the stellar atmosphere have a stronger impact on the oscillation modes with increasing luminosity and decreasing density as the star approaches the luminous regime of the red-giant branch (RGB). These effects lead to a departure of the measured global seismic properties from the asymptotic assumptions built into the scaling relations. To correct for these effects, numerous techniques have been developed (e.g. Sharma et al., 2017). For the giants, we use the formalism of Mosser et al. (2013) with the suggested value of \(\zeta\)=0.038 to correct the observed large frequency separation (\(\Delta\nu_{\rm obs}\)) to its asymptotic value,

\[\Delta\nu_{\rm asy}=(1+\zeta)\Delta\nu_{\rm obs}\,. \tag{3}\]

A short computational sketch of these relations is given below.

Figure 5: Characterisation of the set of binary candidates from _Gaia_ DR3. The top panel shows the distribution of the binary candidates in the radius-mass plane. The probability density map in the background shows the distribution of all oscillating stars in the _Kepler_ sample. The bottom panel provides a test of the feasibility of the orbital period as a function of the stellar radius. The black line resembles the estimated minimum period for a system to fill the Roche lobe of its giant. The dark and light shaded areas mark the ranges in which RC and 2RC stars are expected, respectively. The meaning of the yellow, blue, and red symbols is, similar to Fig. 1, the dwarfs and giants from the _Kepler_ sample and the giants from the TESS sample, respectively. The literature sample is represented by grey triangles. The horizontal lines indicate the full, half, and one fourth of the time base of _Gaia_ DR3.
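The sketch below evaluates Eqs. (1)-(3) with the A2Z solar reference values quoted above; the function name and the example input values (loosely modeled on a low-luminosity RGB star, with an assumed effective temperature) are illustrative.

```python
# Sketch of Eqs. (1)-(3): seismic mass and radius from the scaling
# relations, including the Mosser et al. (2013) Delta nu correction.
NU_MAX_SUN = 3100.0   # muHz (A2Z reference)
DNU_SUN    = 135.2    # muHz
TEFF_SUN   = 5777.0   # K
ZETA       = 0.038    # suggested correction factor for giants

def seismic_mass_radius(nu_max, dnu_obs, teff, giant=True):
    """Return (M/Msun, R/Rsun) from nu_max [muHz], dnu [muHz], Teff [K]."""
    # Eq. (3): correct Delta nu to its asymptotic value for giants only;
    # for main-sequence stars the observed value is used (see text).
    dnu = (1.0 + ZETA) * dnu_obs if giant else dnu_obs
    f_nu, f_dnu, f_t = nu_max / NU_MAX_SUN, dnu / DNU_SUN, teff / TEFF_SUN
    radius = f_nu * f_dnu**-2 * f_t**0.5      # Eq. (1)
    mass   = f_nu**3 * f_dnu**-4 * f_t**1.5   # Eq. (2)
    return mass, radius

# Example: nu_max = 47 muHz, dnu = 4.8 muHz, assumed Teff = 4800 K
print(seismic_mass_radius(47.0, 4.8, 4800.0))
```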
Figure 6: Identification of the evolutionary stage of the primary of the system. Teal, purple, and orange indicate RC, 2RC, and RGB stars, respectively. Dots indicate stars observed with the _Kepler_ mission. Squares mark stars observed by TESS. The points in grey mark the full input sample of Vrard et al. (2021), whereby the greyscale value indicates the mass of the star, determined by seismology.

For stars on the main sequence, we use the uncorrected value, as their structural properties, and consequently their global seismic parameters, are close to the solar reference values.

The top panel of Fig. 5 shows the distribution of the obtained masses and radii of the binary candidates. From the comparison with the masses and radii for giants and dwarfs from the _Kepler_ mission (Table 1), we see that stars reported from the TESS mission that have radii larger than the clump show an excess of massive stars (\(2\lesssim M/M_{\odot}\lesssim 4\)). This trend is more pronounced if \(\nu_{\rm max}\) and \(\Delta\nu\) values from a single pipeline are used. For the TESS giants, we therefore use the mean values reported by Mackereth et al. (2021). We understand this as a problem of accurately determining the large frequency separation from TESS data if \(\nu_{\rm max}\) is low. In such circumstances, it is challenging to resolve the comb-like pattern of the power excess because of the contaminating signal from the granulation background, systematic effects introduced through multiple sectors with multiple CCD/pixel combinations, the smaller number of modes excited in the power excess, and the lower frequency resolution of the shorter time base.

### Evolutionary states from asteroseismology

The evolution of a solar-like star includes several structurally distinct phases. It is straightforward to separate main-sequence stars and subgiants from the red-giant stars based on the peak frequency of their excess oscillation power. The red-giant phase, however, is a succession of several structurally distinct phases (e.g. Kippenhahn et al. 2013; Pinsonneault & Ryden 2023, and references therein). The first phase is the RGB. Once a star has consumed its core hydrogen content, the core will contract, and the envelope will expand. In this phase, the only energy source of the star is the fusion of hydrogen in a shell around the He core. Consequently, its luminosity will rise until the inert He core ignites. Depending on the star's mass, this will happen under degenerate conditions (for stars with \(M_{\star}\lesssim 2\,M_{\odot}\)), and the star settles into quiescent core-He burning in the red clump (RC). For more massive stars, the ignition temperature of He will be reached before the formation of a degenerate core. Core-helium ignition then proceeds under non-degenerate conditions, with the star settling on the less luminous secondary clump (2RC). These phases are followed by the asymptotic giant branch (AGB), once the helium in the core has been exhausted. These structural readjustments force substantial changes in the stellar radius and luminosity. We discuss the impact of the radius on the tidal forces in detail in Sec. 4. Figures 1 and 5 indicate the location of the MS, RGB, RC, and 2RC in the respective parameter space.
Because the frequency pattern of mixed modes is sensitive to the density contrast between the surface and the stellar core, the spacing of these modes in the dipole forest can be used to unambiguously determine the evolutionary state of a giant (Bedding et al. 2011). The approach of Mosser et al. (2015) reconstructs the value of the asymptotic period spacing of dipole gravity modes, \(\Delta\Pi_{\ell=1}\) (Tassoul 1980). To identify binary candidates whose primaries have a determined evolutionary state, we correlated the published values of \(\Delta\Pi_{1}\) for _Kepler_ and TESS giants (Vrard et al. 2016, 2021) with the binary-candidate lists. The identification of RGB stars from TESS data was quite uncertain due to problems similar to those discussed for \(\Delta\nu\) for the same mission. Therefore, we excluded any H-shell burning stars identified in the TESS sample. In total, we have four samples composed of 45 stars on the main sequence and 41 on the RGB (H-shell burning). For the He-core burning phase, we get 80 stars in the RC and 25 in the 2RC. The colored dots in Fig. 6 illustrate the identified stars in binary candidates from _Gaia_ DR3. Because the RGB evolution is a continuous process, we believe the gap of binary candidates between \(8\lesssim\Delta\nu\,[\mu\mathrm{Hz}]\lesssim 9\) is purely a statistical artifact, as it is not found in the larger sample of single stars.

Figure 7: Distribution of the sample of binary-candidate systems from _Gaia_ DR3, hosting evolved stars as their primary stellar component. Colored dots mark systems with detected solar-like oscillations in the primaries (blue and red mark stars from the samples of giants in _Kepler_ and TESS, yellow indicates main-sequence stars observed with _Kepler_). Systems from the published _Kepler_ sample with an oscillating red-giant primary are depicted as black triangles. Systems hosting two oscillating red-giant components (PB2) are shown as grey diamonds. The background density plot represents the distribution of all systems in the _Gaia_ DR3 TBO with full orbital solutions. The thin and thick black contour lines show the isocontours of densely populated regions and the envelope of the distribution of SB9 binary systems, respectively. The vertical solid, dashed, and dotted lines represent the 1034 d baseline of _Gaia_ DR3 as well as one half and one fourth of it.

### Validation and distribution of orbital parameters

From the seismically inferred radius, we can test whether the orbital period reported for a binary candidate in the _Gaia_ DR3 TBO is physical, as larger stellar radii require wider binary systems; a very short period (\(P_{\rm orb}\lesssim 10\,\)d) for a giant indicates a possibly problematic solution for the period. If the period is too short, the giant star would eventually fill up its Roche lobe, the boundary within which material is gravitationally bound to the star. Mass transfer and Roche-lobe overflow (RLOF) would lead to altered orbital evolution on very short time scales (Soberman et al., 1997). As a red-giant-branch star further expands its envelope, RLOF would soon be followed by a common-envelope phase, which ends in the ejection of the common envelope on even shorter time scales (Han et al., 2002). To identify such non-physically short-periodic binary systems, we show in Fig. 5 the Roche-lobe limit in the radius-period plane in the formulation of Gaia Collaboration et al. (2023),
\[\log\left(\frac{P_{\rm d}^{\rm Roche}}{365.25}\right) = \frac{3}{2}\log\left(\frac{R_{1}}{216}\right)-\frac{1}{2}\log(M_{1}+M_{2})-\frac{3}{2}\log\left(0.38+0.2\log\frac{M_{1}}{M_{2}}\right), \tag{4}\]

whereby \(R_{1}\) is the radius of the primary and \(M_{1}\) and \(M_{2}\) are the primary and secondary masses, respectively. In Fig. 5 (bottom panel), this criterion is depicted for a hypothetical system with \(M_{1}\)=1.3 M\({}_{\odot}\) and \(M_{2}\)=1 M\({}_{\odot}\). For the primary's potential radius, we assumed a range from 1 to 100 R\({}_{\odot}\). While Gaia Collaboration et al. (2023) find systems close to the Roche limit, the periods of most binary-candidate systems in our sample are substantially longer than the indicative Roche-lobe limit shown in Fig. 5. This selection effect is explained by our selection criterion, as each primary component of the systems depicted in the diagram oscillates. As shown in numerous papers (Gaulme et al., 2014; Beck et al., 2018; Tayar et al., 2022), strong tidal interaction and the induced stellar activity suppress oscillation modes. Tidal forces are enhanced as a star increasingly fills its Roche lobe, leading to the shown selection effect.

A few systems fall below the Roche-lobe limit. While systems with orbits of less than 10 days are close to the limit of being realistic (but short-lived), these candidates would have an orbital trajectory inside or very close to the giant. While these systems, on average, have a lower _ruwe_ value, this parameter is insufficient to identify such problematic systems. The documentation of the TBO reports an excess of orbital periods around four days, which could be connected to aliases that induce a spurious period while the binary detection itself is valid. Therefore, we list these systems in the paper but flag them as unreliable and exclude them from our additional analysis. A sketch of this Roche-lobe sanity check is given below.

Figure 7 presents the orbital period and eccentricity of binary candidates that host a main-sequence or a red-giant star. The same figure also compares them with the distribution of the full TBO catalog. For the _Gaia_ systems, we find the same distribution as in the SB9 sample with two clear overdensities, whereby the one at shorter periods is populated with hot main-sequence stars (Torres et al., 2010). It was shown by Beck et al. (2022) that oscillating giants mainly fall into the second overdensity between 500 and 1000 days.

Figure 8: Distributions of the Galactic space velocities U, V, and W for the red giants in the TESS sample, and the red giants as well as main-sequence stars in the _Kepler_ and literature samples, shown in the left and right Toomre diagram, respectively. The color of the markers indicates the metallicity. We note that 9 candidate systems have velocity values that place them outside the shown space-velocity range in the left panel and 11 in the right panel. The small grey symbols represent the stars without a solution for the metallicity in _Gaia_ DR3. The thick black line marks the approximate separation between the galactic thin (X \(\lesssim\) 100 km/s) and thick disk (X \(\gtrsim\) 100 km/s). The form of the markers corresponds to the different samples as presented in Fig. 1. The grey scale in the background shows the distribution of all stars with seismic values in the _Kepler_ and TESS samples.
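The following is a minimal sketch of the Roche-lobe criterion of Eq. (4), assuming the hypothetical masses used in Fig. 5 (\(M_{1}\) = 1.3 M\({}_{\odot}\), \(M_{2}\) = 1 M\({}_{\odot}\)); the function name and sample radii are illustrative.

```python
import numpy as np

def roche_period_days(r1_rsun, m1_msun, m2_msun):
    """Minimum orbital period (days) before the primary fills its Roche
    lobe, following Eq. (4); R1 in solar radii, masses in solar masses."""
    log_p_yr = (1.5 * np.log10(r1_rsun / 216.0)
                - 0.5 * np.log10(m1_msun + m2_msun)
                - 1.5 * np.log10(0.38 + 0.2 * np.log10(m1_msun / m2_msun)))
    return 365.25 * 10.0**log_p_yr

# Hypothetical system from the text: M1 = 1.3 Msun, M2 = 1.0 Msun
for r1 in (1.0, 10.0, 100.0):
    print(f"R1 = {r1:6.1f} Rsun -> P_Roche = "
          f"{roche_period_days(r1, 1.3, 1.0):8.2f} d")
```

A reported TBO period shorter than this limit for the seismically inferred radius flags the solution as potentially unphysical.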
### Metallicity, distance and space velocities

The numerous detections provided by _Gaia_ allow us to view the rich dataset of the binary candidates in the much broader context of galactic archaeology. A standard tool to study the membership of stars in a particular galactic structure is the Toomre diagram. It projects the distribution of the Galactic space velocities \(U\), \(V\), and \(W\), where \(V\) describes the absolute value of the velocity \(|V|\) in the direction of the rotation of the Milky Way. The velocity \(U\) represents the component of the motion in the direction from the Sun toward the Galactic center, and \(W\) is the component perpendicular to the Galactic plane. To illustrate \(U\), \(V\), \(W\) in two dimensions, one calculates \(\sqrt{U^{2}+W^{2}}\), corresponding to the velocity perpendicular to \(V\). To collapse the diagram into one quadrant, we show \(|V|\). \(U\), \(V\), and \(W\) were corrected for the local standard of rest (LSR) (\(U_{\odot}=-8.63\), \(V_{\odot}=4.76\), \(W_{\odot}=7.26\) in km s\(^{-1}\)), taken from Ding et al. (2019). The combined velocity, to which we refer as \(X=\sqrt{U^{2}+V^{2}+W^{2}}\), describes the radial distance to the origin of the plot and indicates whether a star tends to belong to the younger thin disk, the older thick disk, or the halo of the Galaxy.

The left and right panels of Figure 8 depict the position of the binary candidates and, as the background density maps, the distribution of the TESS and _Kepler_ input samples, respectively, in the parameter space of the Toomre diagram. As input parameters, we used the astrometry provided by _Gaia_ DR3 and the radial velocity determined from spectra taken with the _Gaia_ Radial Velocity Spectrometer (RVS) (Gaia Collaboration et al., 2023; Katz et al., 2023, respectively). We note that the published RVS velocities are the average over all visits and not the true systemic velocity. For simplicity, we used the inverse of the parallax as the proxy for the distance.

Most binary candidates are located in the thin disk (X \(\lesssim\) 100 km/s). This result is in agreement with the location of the input sample and other ensemble studies (e.g. Hon et al., 2021). We note that a particular bias is set through the selection criterion of oscillations. Because detecting modes requires good signal-to-noise ratios in the frequency analysis, the sample is biased toward closer and brighter objects. The larger mirror size of _Kepler_ allows for the detection of oscillations in fainter stars, which results in a richer population of stars located in the thick disk and in the halo. In total, 159 binary candidates are located in the range of 100 \(\lesssim\) X [km/s] \(\lesssim\) 200, which indicates membership in the thick disk. We also find 22 candidate systems that are likely halo stars (X \(\gtrsim\) 200 km/s). Giants in the halo are typically of spectral type K, as such stars produce the amplitudes needed to be detectable over such large distances (Mathur et al., 2016; Hon et al., 2021).
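A minimal sketch of this kinematic classification is given below; the sign convention of the LSR correction and the placeholder input velocities are assumptions, and the thresholds follow the 100 and 200 km/s limits quoted above.

```python
import numpy as np

# Sketch of the Toomre-diagram classification: correct heliocentric U, V, W
# for the solar peculiar motion (Ding et al. 2019 values) and classify by
# the combined velocity X = sqrt(U^2 + V^2 + W^2).
U_SUN, V_SUN, W_SUN = -8.63, 4.76, 7.26   # km/s

def classify_kinematics(u, v, w):
    """Return (X in km/s, population label). Sign convention assumed:
    the solar peculiar motion is added to the heliocentric velocities."""
    u, v, w = u + U_SUN, v + V_SUN, w + W_SUN
    x = float(np.sqrt(u**2 + v**2 + w**2))
    if x < 100.0:
        return x, "thin disk"
    if x < 200.0:
        return x, "thick disk"
    return x, "halo"

print(classify_kinematics(35.0, -45.0, 12.0))   # placeholder velocities
```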
The binary yields as a function of distance and metallicity are shown in Fig. 9. As expected, we find a clear trend in the binary detection rate (top panels) as a function of the distance (left panels in Fig. 9). As described before, this is connected to the decreasing integrated brightness of the sources. This trend is best seen in the combined and _Kepler_ samples. The large increase of the binary yields for the TESS and _Kepler_ dwarfs is due to small-number statistics. The histograms of the binary detections (bottom panels) also show that, as expected from the apparent magnitudes (Fig. 4), the TESS sample contains stars generally closer to Earth (within \(\sim\)1 kpc), while the _Kepler_ giant sample is rich in stars in the kiloparsec range. Naturally, dwarfs have a low luminosity. All dwarfs that are bright enough to allow for the detection of solar-like oscillations are therefore close (\(\lesssim\) 700 pc).

Previous works have reported a strong trend in the binary occurrence rate as a function of the stellar metallicity, whereby metal-poor F, G, and K stars are more likely to be found in binary systems (e.g. Moe and Di Stefano, 2017; Badenes et al., 2018; Offner et al., 2022, and references therein). For further interpretation of the sample, we used the metallicity ([M/H]) derived from the _Gaia_ RVS spectra (mh_gspspec; Recio-Blanco et al., 2023; Creevey et al., 2023; Fouesneau et al., 2023). The right panels of Fig. 9 show the binary yields and the histograms of the detections (top and bottom, respectively). For the _Kepler_ giants, a weak trend towards higher binary rates at lower metallicities is found, with a peak at [M/H] \(\simeq\) \(-\)0.7 dex. For TESS, we find the same low-metallicity peak as for the _Kepler_ giants; otherwise, however, the TESS giants show a flat distribution. The increase for [M/H] \(\gtrsim\) 0.2 dex is likely due to small-number statistics. The binary yields of the _Kepler_ dwarf sample appear particularly noisy, again due to small-number statistics. We, therefore, do not find the expected trend as clearly as it was described in the literature.

Figure 9: Binary fractions and distribution of the samples as a function of the distance (left panels) and metallicity (right panels). The top panels visualize the ratio of the number of binary candidates over the whole sample for each bin of the histogram in the bottom panels. The colors represent the samples of giants detected with TESS and the giants and main-sequence stars observed with _Kepler_ as red, blue, and yellow, respectively. The thick grey line indicates the total binary fraction. The bottom panels depict the distribution of the distances and metallicities in a histogram. The filled bars correspond to the number of binary solutions of the samples. The samples have the same color as in the top panels, and the black bars represent the literature sample. For comparison, the dashed lines show the normalized density of the corresponding input samples in the same color. The first and last bars of the histogram in the bottom-right panel represent all stars with a metallicity lower than \(-\)1 and higher than 0.4 dex, respectively. For the histogram of the distance, the last bar indicates all systems with a distance larger than 4200 pc.

## 4 Orbital evolution through tidal interaction and stellar activity

Stars in binary systems provide many constraints that simplify the ill-constrained parameter space for stellar modeling. However, for a detailed analysis, the effects of the interaction between the two stars need to be considered. The dissipation of tidal energy in the stellar structure influences the parameters of the system and the interior of its stellar components (for comprehensive reviews of tidal theory, the reader is referred to Zahn, 2013; Ogilvie, 2014; Mathis, 2019, and references therein). The tidal forces lead to the circularization of the orbit and the alignment and synchronization of the orbital and rotational spins.
In the case of strong tidal interaction, the additional heat induced by dissipating the kinetic energy of the tides into the stellar structure might lead to an inflation of the stellar radius (Mathis, 2013). From first principles, such extra heat will force the star to adjust its radius to stay in thermal equilibrium and lead to an expansion of the stellar radius. Because the seismic scaling relations are based on the unperturbed solar case, such a departure could lead to overestimating the seismically inferred stellar mass and radius. Therefore, it is essential to test that the systems used for calibrating the seismic scaling relations have negligible levels of tidal interaction.

### Strength of the equilibrium tide

Because giant stars have deep convective envelopes and reach large stellar radii, the dominating mechanism for the dissipation of tidal energy is expected to be the equilibrium tide (e.g. Mathis, 2015; Remus et al., 2012; Gallet et al., 2017). It was confirmed by Beck et al. (2018) that the dynamical tide could have a small contribution in the subgiant phase and on the very low-luminosity RGB but is overall negligible for the orbital evolution of the binary system. Using the formalism of Verbunt and Phinney (1995) (based on Zahn, 1977; Hut, 1981) to quantify the efficiency of the dissipation of the equilibrium tide, we calculated the rate of the eccentricity reduction in a binary system, which we refer to as \(\varepsilon_{r}\),

\[\varepsilon_{r}=\log\left[-\frac{\Delta\ln e}{f}\right]\,. \tag{5}\]

In this notation, the parameter \(f\) is an unknown normalization factor on the order of unity. The change in eccentricity,

\[\frac{\Delta\ln e}{f} = \frac{-1.7}{10^{5}}\cdot\left(\frac{M_{1}}{M_{\odot}}\right)^{-11/3}\cdot\frac{q}{(1+q)^{5/3}}\cdot I(t)\cdot\left(\frac{P_{\rm orb}}{\rm day}\right)^{-16/3}\,, \tag{6}\]

with \(q\)=\(M_{2}/M_{1}\) as the mass ratio between the system's secondary and primary components, incorporates the third Keplerian law and the circularization function,

\[I\left(t\right)=\int_{0}^{t}\left(\frac{T_{\rm eff}}{4500\,{\rm K}}\right)^{4/3}\cdot\left(\frac{M_{\rm env}(t^{\prime})}{M_{\odot}}\right)^{2/3}\cdot\left(\frac{R_{1}(t^{\prime})}{R_{\odot}}\right)^{8}dt^{\prime}\;\left[{\rm yr}\right]. \tag{7}\]

This function depends on the effective temperature (\(T_{\rm eff}\)), the mass of the convective envelope (\(M_{\rm env}\)), and, most importantly, the stellar radius (\(R_{1}\)) of the primary. Because the circularization function depends on the eighth power of the primary radius, the equilibrium tide will become increasingly important as a star advances on the red-giant branch. Following the approach of Beck et al. (2018), we used the asteroseismic masses and radii, calculated as described in Section 3.1, for this analysis.

The top panel of Fig. 10 depicts the distribution of the binary systems as a function of the change in eccentricity. By construction, large values of \(\varepsilon_{r}\) indicate strong tidal interaction in a system. Verbunt and Phinney (1995) suggested that \(\varepsilon_{\rm crit}\) = 0.478 approximates the segregation between systems with strong and less efficient tides. As \(\varepsilon_{\rm crit}\) is not a sharp limit, eccentric binaries with values slightly above this value could be short-lived systems that are about to circularize. Kroupa (1995) coined the term _forbidden binaries_ for this group. Previous studies (e.g. Verbunt and Phinney, 1995; Beck et al., 2018; Benbakoura et al., 2021) did not show eccentric systems at values of \(\varepsilon_{r}\gtrsim 3\).
These outliers are very likely unphysical and originate from significantly underestimated orbital periods in the TBO. Most of the system candidates depicted in Fig. 10, therefore, have reported periods in the TBO that are physically meaningful. The lack of systems with values \(\varepsilon_{r}>\varepsilon_{\rm crit}\) in Fig. 10 is a result of the selection of only oscillating objects. Several papers have demonstrated (Gaulme et al., 2016; Beck et al., 2018; Mathur et al., 2019; Benbakoura et al., 2021) that the tidally driven spin-up of the outer stellar layers increases the dynamo action. The increased magnetic activity suppresses solar-like oscillations. Beck et al. (2018) indeed showed that this limit also separates systems hosting oscillating (\(\varepsilon_{r}\lesssim\varepsilon_{\rm crit}\)) from non-oscillating giant primaries (\(\varepsilon_{r}\gtrsim\varepsilon_{\rm crit}\)). Furthermore, a dependency on the orbital eccentricity can be assumed, due to the increased tidal strength and additional effects if the system encounters Roche-lobe overflow at periastron. The resulting distribution of \(\varepsilon_{r}\) supports the test using the approximation of the Roche-lobe radius, shown in Fig. 5.

Figure 10: Orbital eccentricity over the parameterized time scale of the tidally driven orbital circularization, \(\varepsilon_{r}\). The top panel shows the position of all binary candidates in the parameter plane. The histogram in the bottom panel depicts the distribution of \(\varepsilon_{r}\) for the systems with an oscillating primary, for which the evolutionary state could be determined from seismology. The color distinguishes between the evolutionary states of secondary clump (2RC), red clump (RC), and red-giant branch (RGB) in purple, teal, and orange, respectively. Because of their small number and nearby evolutionary state, main-sequence and subgiant primaries are shown as one group in yellow (MS+SG). The grey dashed line marks the proposed value of \(\varepsilon_{\rm crit}\).

The main-sequence stars are typically located at intermediate to shorter periods (Fig. 7). For this class of stars, the formalism presented in Eqs. 5 to 7 separates well the circularized from the eccentric systems. It is interesting to note that in Fig. 10 the main-sequence dwarfs are the group of stars that extends to the lowest values of \(\varepsilon_{r}\). This is a consequence of the much smaller radii and convective envelopes of solar-like dwarfs compared to red giants. In such stars, radiative structures become significant, necessitating the inclusion of the dynamical tide to accurately describe the tidal budget of the system (Ahuir et al., 2021; Barker, 2022). For stellar objects cooler than the Kraft break and orbital periods less than \(\sim\)10 days, the dynamical tide is so efficient that systems are quickly circularized and synchronized (Offner et al., 2022, and references therein). A sketch evaluating \(\varepsilon_{r}\) follows below.
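The sketch below evaluates Eqs. (5)-(6) for given system parameters; the base-10 logarithm in Eq. (5), the placeholder value of the circularization integral \(I(t)\) (which in practice comes from evolutionary tracks, Eq. 7), and the function name are assumptions for illustration.

```python
import numpy as np

def epsilon_r(m1_msun, q, p_orb_days, circ_integral, f=1.0):
    """Eqs. (5)-(6): parameterized circularization strength epsilon_r.
    circ_integral is I(t) of Eq. (7), assumed precomputed here (e.g.
    from tracks of Teff, convective-envelope mass, and radius)."""
    dlne_over_f = (-1.7e-5 * m1_msun**(-11.0 / 3.0)
                   * q / (1.0 + q)**(5.0 / 3.0)
                   * circ_integral
                   * p_orb_days**(-16.0 / 3.0))
    return float(np.log10(-dlne_over_f / f))   # base-10 log assumed

# Illustrative RGB primary: M1 = 1.3 Msun, q = 0.8, P_orb = 500 d,
# with a placeholder circularization integral I(t) = 1e9
print(epsilon_r(m1_msun=1.3, q=0.8, p_orb_days=500.0, circ_integral=1e9))
```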
### Distribution of orbital eccentricities and periods

Binary systems hosting solar-like stars are expected to be born with a flat distribution over a wide range of eccentricities (Moe and Di Stefano, 2017; Mirouh et al., 2023). The subsequent tidal evolution is a function of the dwell time and the evolutionary advances of both binary components. Therefore, we can study the impact of tidal forces from the distributions of the parameters in distinct evolutionary stages. The efficiency of the dissipation of tidal energy has been debated in the literature. In the current understanding, eccentric binaries with an evolved component should have circularized during phases of their evolution when they had expanded to large radii.

Figure 11: Orbital periods and eccentricities of the binary systems hosting a primary with identified evolutionary stages. The color and the shape of the data points indicate the seismically inferred evolutionary stage and the space mission this star has been observed with, respectively. The light-blue lines indicate the arcs of constant angular momentum in the e-P plane for circular orbital periods of 1, 10, 100, and 1 000 days. The background color map represents the normalized probability-density distribution of the full SB9 sample. The black lines envelop the regions with a density of at least 7 times the median probability density. The white vertical dashed line represents the 1034 d time base of _Gaia_ DR3.

Figure 12: Distribution of orbital eccentricities, separated by evolutionary stage and channel. The left and right panels give the number of systems per eccentricity and period bin, respectively. The color distinguishes between the evolutionary states of secondary clump (2RC), red clump (RC), and red-giant branch (RGB) in purple, teal, and orange, respectively. Because of their small number and nearby evolutionary state, main-sequence and subgiant primaries are shown as one group in yellow (MS+SG). The vertical dashed line represents the 1034-day time base of _Gaia_ DR3.

Verbunt and Phinney (1995) argue that, because of the large radius dependence, they expect all systems hosting a red-clump star or an AGB star to be circularized. Similar behavior is expected long before the onset of RLOF (e.g. see Vos et al., 2015). Yet, contrary to this prediction, Beck et al. (2018) pointed out the existence of red-clump stars in eccentric binary systems. This finding agrees with the eccentricity distributions found in the large sample study of APOGEE time-series spectroscopy by Badenes et al. (2018). Furthermore, wide hot-subdwarf binaries are almost all eccentric, even though the primary is a post-RGB star that underwent a mass-loss episode near the tip of the RGB (e.g. Vos et al., 2013, and references therein). Further examples are Ba stars and Tc-poor S stars (e.g. Van der Swaelmen et al., 2017), symbiotic stars (e.g. Merc et al., 2019), RV Tauri binaries (e.g. Escorza et al., 2020), or dusty post-AGB stars (e.g. Gorlova et al., 2014), all of which can occur in binary systems with significantly non-zero eccentricities, even though they should have circularized if their periods are shorter than \(\sim\)3000 days (Pols et al., 2003). To explain the eccentricities found in red-giant binary systems, Nie et al. (2017) suggested from modelling the binary evolution that the efficiency of the current formalism could be overestimated by a factor of 100.

Solar-like oscillators cover a wide range of evolutionary phases and channels. From such a sample, we can probe the distribution of the orbital eccentricities and periods as a function of the evolutionary states to learn more about the efficiency of the equilibrium tide. Figure 11 shows the position of systems with a solar-like oscillator with a known evolutionary state in the \(e\)-\(P\) plane.
The sample sizes of the various evolutionary stages, presented in Section 3.2, are sufficient to derive general conclusions. To better quantify and discuss the distributions of these four groups, we present them in the histograms shown in Fig. 12.

Stars on the main sequence are significantly less evolved than any star in the giant phase and consequently closer to the system's initial conditions. As discussed above, these stars have smaller radii, which in wider binary systems (\(P_{\rm orb}\gtrsim 10\) d) results in lower tidal interaction through the equilibrium tide. This is the reason why the systems with primaries in this evolutionary phase, which lasts two orders of magnitude longer than the red-giant phase, keep a relatively flat eccentricity distribution between \(0\lesssim e\lesssim 0.9\), originating from the birth distribution of \(e\).

The situation changes for the systems hosting a post-main-sequence star. Even considering that the eccentricities have rather large error bars compared to typical values from ground-based RV monitoring, a general trend is found that red-giant stars have lower eccentricities than main-sequence stars. RGB stars are found between \(0.1\lesssim e\lesssim 0.7\). The lack of circularized systems (\(e\lesssim 0.1\)) with RGB primaries is in agreement with previous studies (Beck et al., 2014, 2022; Gaulme et al., 2014; Benbakoura et al., 2021). Clear differences are found among the two stages of quiescent helium-core burning, which occupy different regions in the HRD. While the RC stars have their highest occurrence rate below \(e\lesssim 0.2\), the more massive 2RC stars show a flat distribution between \(0\lesssim e\lesssim 0.8\), similar to the main-sequence stars.

These differences in the distributions are likely the product of the accumulated tidal history along the stellar evolution. If the mass of a star is \(M_{\star}\lesssim 2\,M_{\odot}\), the inert core will degenerate before it reaches the ignition temperature of He. To obtain the energy to lift the core degeneracy again, the star needs to reach a high luminosity, which forces the star to keep expanding (see Hekker et al., 2020, and references therein) until the core ignites and the star's envelope readjusts. Due to the degeneracy, all cores of RGB stars of a given luminosity are similar, independent of their mass and metallicity (log(\(L/L_{\odot}\)) \(\simeq\) 3.4; Serenelli et al., 2017). At the tip of the RGB, a star has \(\sim\)175 \(R_{\odot}\). In stars with masses \(M_{\star}\gtrsim 2\,M_{\odot}\), the core reaches the ignition temperature before the central regions degenerate. Consequently, such a star will ignite He in its core much earlier and at smaller radii (\(\sim\)30 \(R_{\odot}\)), thus settling in the less luminous secondary clump. The important aspect between RC and 2RC in the context of the tidal analysis is the difference in their maximum radius on the RGB. Because the equilibrium tide depends strongly on the stellar radius, systems hosting 2RC stars are expected to be far less circularized than those hosting RC primaries. Another effect that could lead to lower eccentricities is mass transfer if the system had episodes of RLOF.

The range of higher eccentricities in the RGB sample results from an observational bias.
By selecting oscillating giants with resolved seismic parameters and evolutionary states, we limit ourselves to giants on the lower part of the RGB with radii \(4\lesssim R/R_{\odot}\lesssim 12\) (see Figs. 1 and 5). Therefore, the stars in the RGB sample are preferentially smaller and observed prior to maximal tidal interaction. While many systems are expected to be found between 1 000 and 2 000 d, the pronounced peak of periods at 1 000 days is likely to be an artifact of the _Gaia_ DR3 solutions, because periods longer than that are often underestimated (Figs. 2 and 5). However, Fig. 11 strongly indicates that the excess of giants in systems with periods around 1 000 days results from He-core burning stars. We interpret the reduced eccentricity scatter for RC primaries (\(0.1\lesssim e\lesssim 0.5\)) compared to RGB primaries as the result of tidal interactions.

The effect of the radius dependence of the tidal strength is also seen in the period distributions as a function of the four evolutionary stages. Because RC stars have already reached their maximum radius on the RGB, many systems with periods below \(\sim\)500 days could have undergone a common-envelope phase, potentially leading to the destruction of the giant primary. Therefore, we only see RC systems at longer periods, while RGB, 2RC, MS, and SG systems are also found at short periods. This is also seen in the typical tidal strength \(\varepsilon_{r}\) for both evolutionary stages. Figure 10 depicts that the remaining RC systems, because of their smaller radii and wider orbits, indeed have much lower tidal interaction than RGB systems.

### Surface rotation and stellar activity

The rotational behavior of the stellar components of a binary system is strongly connected to tides (Zahn, 2013; Ogilvie, 2014). The subsequent phases of pseudo-synchronization and full synchronization of the stellar rotation with the orbital motion of the binary are steps on the evolutionary path to the equilibrium state. Particularly for red-giant stars, with their slow rotation periods, the tidal spin-up of the envelope will give rise to strong magnetic fields through the triggered dynamo action. Lately, from photometric and chromospheric-emission measurements, Gaulme et al. (2020) and Gehan et al. (2022) showed that red giants belonging to binary systems in a configuration of spin-orbit resonance display an enhanced magnetic activity compared to single stars with the same rotation rate. Therefore, stellar rotation and activity are key observables of stars that provide information about tidal interaction.

The level of activity and the rotational period can be estimated from the rotationally modulated flux signal introduced by dark stellar spots being rotated in and out of the line of sight of the observer (Mathur et al., 2019, and references therein). For the solar-like main-sequence and red-giant stars in our binary-candidate sample, rotation periods were derived from the dominant period of the brightness modulation, determined through auto-correlation and wavelet analysis of the _Kepler_ photometry by Garcia et al. (2014), Ceillier et al. (2017), and Santos et al. (2021).

Table 2: Orbital, rotational, and seismic parameters of the binary candidates with measured surface rotation.

| KIC | P\({}_{\rm orb}\) [d] | \(e\) | P\({}_{\rm rot}\) [d] | P\({}_{\rm orb}\)/P\({}_{\rm rot}\) | Type | \(\nu_{\rm max}\) [\(\mu\)Hz] | \(\Delta\nu\) [\(\mu\)Hz] | S\({}_{\rm ph}\) [ppm] | Ref |
|---|---|---|---|---|---|---|---|---|---|
| … | | | | | | | | | |
| (g) 9163769 | 3.17\(\pm\)0.0 | 0.01\(\pm\)0.02 | 3.2\(\pm\)0.2 | 1.0 | SB1 | 1573\(\pm\)11 | 80.62 | … | … |
We assume that the spots originate from the more luminous, oscillating component. The values are reported in Table 2. The full new picture of the extended sample is shown in Fig. 13. As a reference, the range of the equatorial and polar rotation of the Sun is shown in this figure (27 and 35 days, respectively). Most of the main-sequence primaries with determined rotation periods spin significantly faster than the Sun or at least with solar-like rotation rates.

Depending on the orbital period of the systems, we find three distinct forms of appearance related to the interplay between rotation and orbital eccentricity. As expected from theoretical predictions, the surface rotation of stars in systems with orbital periods below a few days is synchronized, and the orbit is circularized (Ogilvie & Lin, 2007; Barker, 2022). Dwarf-hosting systems with periods up to \(\sim\)30 days are still pseudo-synchronized but have a wider range of eccentricities. For these groups, the rotational period is influenced by the tidal interaction. Therefore, the age of the primary cannot be determined from the relation between the surface rotation period and stellar age, known as gyrochronology (Barnes, 2007). We do not see any relation between the rotational and orbital periods for systems with orbits longer than the solar rotation rate. Given that these stars have rotation periods shorter than the solar rate, we can estimate that these systems are typically younger than the Sun.

The analysis depicted in Fig. 13 also reveals a fundamental difference between dwarfs and giants. As mentioned above, no oscillations are found in giants in circularized and synchronized systems (typically with \(P_{\rm orb}\lesssim\) 20 days; Gaulme et al., 2014; Beck et al., 2018). The eight dwarfs and subgiants in circularized and synchronized systems on much shorter periods (stars a-h), however, _do_ oscillate. A detailed analysis of this finding is beyond the scope of this paper.

For 28 of the main-sequence stars, the catalogs by Garcia et al. (2014) and Santos et al. (2021) provide an estimate of the average photospheric activity.

Figure 14: Photospheric activity of main-sequence stars in binary candidates of _Gaia_ DR3 as a function of the rotational period. Red and blue markers indicate rotation periods reported by García et al. (2014) and Santos et al. (2021), respectively. If measurements are given in both references, the Santos et al. (2021) value is shown. For comparison, the minimum and maximum \(S_{\rm ph}\) values of solar cycle 23 are shown as dashed horizontal lines. The red box frames the distinct group of stars with short rotation periods and mostly super-solar levels of stellar activity. The labels a-h provide cross-identification with the stars in Fig. 13 and Table 2.

Figure 13: Synchronization and circularization of binary systems hosting solar-like oscillators. The top and middle panels show the surface rotation periods of the primary and the orbital eccentricities for binary systems as a function of their orbital period, respectively. The bottom panel presents the ratio \(P_{\rm orb}/P_{\rm rot}\) as a function of the orbital eccentricity. Yellow, orange, and blue dots indicate dwarfs, subgiants, and red giants observed with _Kepler_, respectively. Grey pentagons indicate systems reported previously in the literature. Filled and open symbols denote oscillating and non-oscillating primaries, respectively. The inclined lines represent the resonance ratios \(P_{\rm orb}/P_{\rm rot}\) as indicated at the right of each line. The solid magenta line indicates the synchronization of the surface rotation with the orbit (\(P_{\rm orb}=P_{\rm rot}\)). The grey shaded area depicts the region in which the dynamical tide cannot be excited (\(2P_{\rm orb}\lesssim P_{\rm rot}\)). The yellow shaded area indicates the range of the solar sidereal surface rotation. The solid vertical red line indicates the limiting period for synchronization and circularization on the main sequence (\(P_{\rm rot}\sim\) 10 d). The vertical dashed and dotted lines indicate the length and half the length of the _Gaia_ DR3 mission time base.
The \(S_{\rm ph}\) value is the mean standard deviation of the photometric variation in a sliding boxcar with a time base of five times the rotation period (Mathur et al., 2014; Garcia et al., 2014). The top panel of Fig. 14 compares these values with the minimum and maximum values of \(S_{\rm ph}\) from solar cycle 23, determined by Salabert et al. (2017). As can be seen from this depiction, about 80% of the systems exhibit a solar-like activity level. These are typically longer-periodic systems. One separate group of eight highly active stars at short periods stands out. To better reference individual members of this group, we labeled them with letters from \(a\) to \(h\) in Fig. 14 (see red box).

To distinguish whether this activity is caused by tides or simply by a young, rapidly spinning star, we present these primaries in the context of their orbital parameters in the middle panel of Fig. 13. If tides are efficient and lead to a tidal spin-up, the excitation (and dissipation) of tidal inertial modes becomes more efficient, and thus synchronization and circularization proceed more efficiently. Six of the eight active stars are indeed found in short-periodic and circularized systems (\(e\approx 0\), \(P_{\rm orb}\lesssim\)11 days). For these stars, the reason for the enhanced activity is very likely rooted in the tidal interaction. Two stars from this active group are found in wide orbits. If the orbital period of \(\sim\)1 000 days listed in the TBO catalog for star \(c\) is correct, tides will not produce a lasting effect on the rotation for such a wide orbit, even at such a high eccentricity (\(e\approx 0.55\)). The same can be assumed for star \(f\), with a period of \(\sim\)100 days and an eccentricity of 0.3. Because no significant tidal interaction is present in these systems, their high activity is likely an effect of young and rapidly rotating stars (Skumanich, 1972). All other stars showing solar-like activity values are at orbital periods longer than 11 days.

For the red giants, the picture is a different one. For the new systems, the orbital periods range from 30 to 1 000 days, while their primaries rotate with periods between 50 and about 200 days. These rotation periods are typical for giants on the less-luminous part of the RGB. Typically, giants have low spot-filling factors. Only a few systems show the signature of spots in their light curves (Ceillier et al., 2017). The rapid rotators among them could be the product of stellar mergers (e.g. Tayar et al., 2015; Patton et al., 2023). In the bottom panel of Fig. 13, we show the ratio of the orbital period to the surface rotation period as a function of the orbital eccentricity. This representation shows that hardly any of the systems are synchronized. Only a few systems with shorter periods and stronger equilibrium tides are nearly pseudo-synchronized.
Also, their measured spot signature could originate from internal processes that trigger a dynamo (Charbonnel et al., 2017).

## 5 Orbital inclinations and eclipsing binaries

The mass of a star is the single most fundamental parameter for understanding its structure and evolution. Depending on the type of star, different techniques can be applied to determine the stellar mass (see Serenelli et al., 2021, for a review). While many mass-determination techniques report high precision, the question of how accurate the mass is can only be answered from the cross-calibration of independent techniques. The most basic technique is the derivation of the dynamical masses of the stellar components in a binary system (for details see Prša, 2018, and references therein). The precision of this technique relies on well-determined orbital parameters, radial-velocity amplitudes for both components (SB2), and the orbital inclination. The method's bottleneck is the knowledge of the inclination, which traditionally is determined through modeling eclipsing binary systems. Dynamical masses are available from the literature for only 17 eclipsing systems hosting an oscillating component (Gaulme et al., 2016; Benbakoura et al., 2021). From such an analysis, Gaulme et al. (2016) suggested that the seismic scaling relations overestimate mass and radius by 15% and 5%, respectively. New systems with known orbital inclinations are needed to solve this dichotomy between seismic and dynamical masses. To increase the sample size, we tested whether _Gaia_ DR3 contains oscillators in yet unknown eclipsing systems or systems with determined orbital inclinations.

### Gaia epoch and TESS time series photometry

The probability \(\theta_{\rm Ecl}\) of a randomly orientated binary system to show eclipses is a function of the sum of the radii of the components (\(R_{1}\) and \(R_{2}\)) and the average distance \(a\) (assuming \(e\)=0) between them (Deeg and Alonso, 2018). Using Kepler's third law, we can express this as a function of the orbital period and the sum of the masses of the components (\(M_{1}\) and \(M_{2}\)),

\[\theta_{\rm Ecl}=\frac{R_{1}+R_{2}}{a}=\frac{R_{1}+R_{2}}{\sqrt[3]{\left(M_{1}+M_{2}\right)P_{\rm orb}^{2}}}\,. \tag{8}\]

Figure 15: Phase-folded TESS light curves of the _Gaia_ DR3 eclipsing binary system candidates TIC 268157208 (top panel), TIC 235050452 (second panel), TIC 272357503 (third), TIC 33767523 (fourth), and TIC 293937699 (bottom). Relative _Gaia_ epoch photometry in the \(G\), \(G_{\rm BP}\), and \(G_{\rm RP}\) passbands is shown in green, blue, and red, respectively. For TIC 268157208, TIC 235050452, and TIC 272357503, the period from the _Gaia_ DR3 eclipsing binary catalog was used to fold the light curve. For TIC 33767523, the orbital period from the SB1 solution provided in the TBO catalog is used. For TIC 293937699, we used our orbital period, determined from the TESS data.

The maximum radius of a solar-like star is less than 1 AU. Due to the strong mass dependence of the evolutionary time scales, it is unlikely that both components reach their maximum expansion simultaneously. Typical binary systems are found with values of \(a\) up to several thousand AU (corresponding to orbital periods of tens of thousands of years), which provides very low probabilities for detecting eclipsing binary systems. Finding eclipsing binaries becomes increasingly challenging with longer orbital periods, as the projected surface that is eclipsed becomes smaller and requires a nearly perfect alignment at 90 degrees.
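A minimal sketch of the geometric eclipse probability of Eq. (8) is given below, with the unit conversions (radii in solar radii, the semi-major axis in AU via Kepler's third law) made explicit; the function name and example parameters are illustrative.

```python
import numpy as np

def eclipse_probability(r1_rsun, r2_rsun, m1_msun, m2_msun, p_orb_days):
    """Eq. (8): geometric eclipse probability for a circular orbit.
    Radii are converted to AU (1 AU ~ 215 Rsun); the semi-major axis
    follows from Kepler's third law with P in years and masses in Msun."""
    r_sum_au = (r1_rsun + r2_rsun) / 215.0
    a_au = ((m1_msun + m2_msun) * (p_orb_days / 365.25)**2)**(1.0 / 3.0)
    return r_sum_au / a_au

# Illustrative: a 10 Rsun giant with a solar-type companion at P = 500 d
print(eclipse_probability(10.0, 1.0, 1.3, 1.0, 500.0))   # ~3%
```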
In contrast, when observing a binary system through RV monitoring, a few well-spread spectroscopic measurements are sufficient to confirm the object's binary nature and even to fit the orbital parameters. An eclipsing binary, however, can only be detected during the eclipsing phases, through direct observations of the eclipses in the photometry or through the Rossiter-McLaughlin effect on the radial velocities. The successful photometric search for yet unknown eclipsing binary systems requires continuous monitoring. Consequently, almost all systems hosting an oscillating red giant detected in the _Kepler_ data have periods shorter than the mission duration of four years. As discussed by Beck et al. (2022), this introduces a bias whereby eclipsing systems identified by satellites tend to have short periods. Such relatively short-period binary systems are not wide enough for luminous red giants at the tip of the red-giant branch to remain in a detached configuration.

The quasi-random single-epoch observing strategy of _Gaia_, originating from the scanning law, is not optimal for detecting long-periodic eclipsing binaries and explains why hardly any such eclipsing binaries have been found by the mission (see Fig. 4 in Gaia Collaboration et al. 2023a). See appendix A of Eyer et al. (2017) for more details on the time sampling of _Gaia_. During a field-of-view transit of a star, three photometric measurements are obtained within less than a minute: one in the broad-band visual \(G\) band, and two in the narrower blue \(G_{\rm BP}\) and red \(G_{\rm RP}\) passbands (Riello et al., 2018, 2021). The latter two are derived from integrating low-resolution photo-spectroscopic measurements of the BP and RP instruments, respectively (for more details see Sect. 3.3.6 of Gaia Collaboration et al. 2016). These data are quasi-randomly sampled over the mission duration. Mowlavi et al. (2023) provided a catalog (vari_eclipsing_binary) of 2.2 million eclipsing-binary candidates in _Gaia_ DR3, of which a subset of 86 918 stars was fitted for astrophysical parameters and published in gaiadr3.nss_two_body_orbit with nss_solution_type set to 'EclipsingBinary' (EB).

The search of the catalog of eclipsing binaries for the full seismic sample returned five candidates, which are presented in Table 3 and Fig. 15. To validate these candidates, we extracted light curves from space photometry. Four of the five targets were observed in multiple sectors by the TESS mission. At the time of the analysis, data up to Sector 53 were available. The data were extracted from the _Full Frame Images_ (FFI) mostly using the point-spread-function fitting module of the Eleanor package (Feinstein et al., 2019). These observations provide a cadence of 30 or 10 minutes, depending on the sector in which they were taken. The _Gaia_ multi-color epoch photometry (Riello et al., 2021) and the monochromatic space photometry from the TESS mission for these candidates are shown in Fig. 15. For the target TIC 268157208 (= KIC 8646982), a sub-quarter of _Kepler_ data exists, which is \(\sim\)1.5 times the length of the proposed orbit. Because the target is located in a crowded field, we prefer the _Kepler_ data due to their smaller pixel plate scale. The _Kepler_ light curve was taken from the KEPSEISMIC database on the MAST archive3 (for details see Garcia et al. 2011, 2014b).
Footnote 3: [https://archive.stsci.edu/prepds/kepsesismic/](https://archive.stsci.edu/prepds/kepsesismic/)

Table 3: Candidates from the eclipsing-binary _Gaia_ DR3 catalog with at least one oscillating component.

| TIC | Frequency (geom.) [d\(^{-1}\)] | \(\nu_{\rm max}\) [\(\mu\)Hz] | \(\Delta\nu\) [\(\mu\)Hz] | Reference for seismic values | Data | Comment |
|---|---|---|---|---|---|---|
| 235050452 | 0.09336\(\pm\)0.00002 | 98\(\pm\)10 | 9.2\(\pm\)0.1 | Mackereth et al. (2021) | 6S | Surface or synchronized rotation? |
| 272357503 | 0.99314\(\pm\)0.00001 | 51\(\pm\)13 | 5.7\(\pm\)0.2 | Mackereth et al. (2021) | 23S | |
| 293937699 | 0.26058\(\pm\)0.00006 | 47\(\pm\)4 | 6\(\pm\)2.0 | Mackereth et al. (2021) | 16S | Eclips. bin., actual period: 65.112 d |
| 33767523 | 0.0467\(\pm\)0.00003 | \(-\) | \(-\) | \(-\) | 18S | \(P_{\rm orb}\) = 49.63\(\pm\)0.02 d, \(e\) = 0.33\(\pm\)0.01 |
| 268157208 | 0.02770\(\pm\)0.00008 | 47.49 | 4.804 | Yu et al. (2018) | 1Q, 3S | = KIC 8646982 |

TIC 268157208 (= Gaia DR3 2079109044266147328) is reported in _Gaia_ DR3 as an eclipsing binary system with a period of 36.1 d. The clear eclipse of about \(\sim\)10% in the _Gaia_ epoch photometry is not found in the _Kepler_ and TESS data and is clearly an artifact. For TIC 235050452 (= Gaia DR3 4797117284359411712), the _Gaia_ and TESS light curves show a good agreement in phase and amplitude of the long-periodic variations with a period of 10.7 days. The sinusoidal flux modulation with variable amplitude in the TESS data indicates rotationally modulated spots. For TIC 272357503 (= Gaia DR3 5214824569250240128), the _Gaia_ epoch photometry suggests a binary with a period of nearly 1 day and eclipses of about \(\sim\)2 to \(\sim\)4% in the red and the blue passband, respectively. The shape of the feature indicates a partial eclipse. The analysis of the TESS photometry does not exclude that the primary and secondary eclipses are very similar and that the system's period is \(\sim\)2 days. Such a system is too small to host a red giant. To determine more information about the components, radial velocities for this system are required.

From the _Gaia_ epoch photometry, the system TIC 293937699 (= Gaia DR3 5490280956749756928) shows a clear drop of the flux of about 5%. From the TESS data, we can confirm the presence of the 5% eclipse, which is actually the secondary eclipse. In contrast, the well-pronounced primary eclipse with an eclipse depth of \(\sim\)15% was missed by chance by the sparse sampling of the epoch photometry. However, the period of \(65.112\pm 0.05\) days, determined from a period analysis of the TESS data, does not agree with the geometrical period of \(\sim\)3.8 d reported in the _Gaia_ DR3 eclipsing-star catalog. This system is similar to the cases with large residuals found when comparing with the literature values for period and eccentricity from the SB9 catalog. TIC 293937699 is one of the few systems hosting an oscillating red giant in an eclipsing binary system. Because radial velocities have yet to be obtained, we postpone a deeper analysis of this system to a later paper. Similarly, TIC 33767523 (= Gaia DR3 4627969652492312320) is an eclipsing binary for which the TBO catalog also presents an orbital solution with \(P_{\rm orb}\) = 49.63\(\pm\)0.02 d and \(e\) = 0.33\(\pm\)0.01. Typically, such systems are wide enough to allow an RGB star to oscillate. However, no oscillations are detected. This is probably due to the faintness of the target (V \(\simeq\) 11 mag).
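The phase folding underlying Fig. 15 is a simple operation; below is a minimal sketch with plain numpy, using synthetic placeholder data (standing in for TESS FFI photometry, e.g. as extracted with Eleanor) and the 65.112 d period of TIC 293937699.

```python
import numpy as np

def phase_fold(time_days, flux, period_days, t0=0.0):
    """Return (phase in [0, 1), flux), sorted by orbital phase."""
    phase = ((time_days - t0) / period_days) % 1.0
    order = np.argsort(phase)
    return phase[order], flux[order]

# Synthetic example: a toy 15% eclipse recurring every 65.112 days
t = np.linspace(0.0, 650.0, 5000)                               # ~10 orbits
f = 1.0 - 0.15 * (np.abs(((t / 65.112) % 1.0) - 0.5) < 0.01)    # toy eclipse
phase, flux = phase_fold(t, f, 65.112)
```

Folding on a wrong trial period (e.g. the \(\sim\)3.8 d geometrical period reported in DR3) smears such an eclipse over all phases, which is how the period discrepancy discussed above manifests itself.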
This is probably due to the faintness of the target (V \(\simeq\) 11 mag).

### Inclinations from astrometric solutions

For systems that do not have edge-on orientations (\(i\simeq 90^{\circ}\)), the orbital inclination cannot be determined from the light curve. The precise and time-resolved astrometry of the _Gaia_ mission now allows the determination of the orbital inclination and provides photometric or spectroscopic constraints on the primary and secondary masses. For 146 systems hosting an oscillating component, astrometric solutions were found that provide an orbital inclination. The astrometric orbit solutions of Halbwachs et al. (2023) and Holl et al. (2023) listed in _Gaia_ DR3 provide the Thiele-Innes coefficients describing the orbital solution, which implicitly contain the inclination. The conversion from these orbital elements to the elements of the Campbell formalism, which explicitly contain the inclination, was performed with a Python tool4 provided by Gaia Collaboration et al. (2023a).

Footnote 4: We used the standard conversion formalism (e.g. Halbwachs et al. 2023). The NSS software tools have been developed by N. Leclerc and C. Babusiaux and are available at [https://www.cosmos.esa.int/web/gaia/dr3-nss-tools](https://www.cosmos.esa.int/web/gaia/dr3-nss-tools)

As shown in Fig. 16, inclinations are found for 9 main-sequence targets, as well as for 31 and 106 giants from the _Kepler_ and TESS samples, respectively. We note that the inclinations range from 0\(^{\circ}\) to 180\(^{\circ}\) (both plane-on orientations). This notation allows distinguishing prograde from retrograde motion with respect to the line of sight. Similar to the dynamical masses, these systems have periods starting at about 100 days and ranging beyond 1 000 days. While the analysis of the full sample by Gaia Collaboration et al. (2023a) shows maxima of the inclination distribution at 0 and 180 degrees, we find the maximum for stars in our sample around 90 degrees (Fig. 16). If an inclination for a system is given in _Gaia_ DR3, we list it in Table 6. Of the systems with an inclination from _Gaia_ DR3, only KIC 7103951 has been previously reported in the literature (Gaulme et al. 2020), but without measured radial velocities. Once accurate radial velocities from ground-based follow-up are available for these systems, the inclinations will be valuable information for extending the sample of calibrators for the scaling relations. The wide range of orbits and the sheer number of targets suggest that this sample contains a sufficient number of giants in the more advanced RC or 2RC stage.
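The underlying conversion is compact enough to sketch. The following minimal example implements the standard Thiele-Innes-to-Campbell relations (e.g. Halbwachs et al. 2023); the coefficient values are placeholders, and this is an illustration rather than the NSS tool itself.

```python
import numpy as np

def thiele_innes_to_campbell(A, B, F, G):
    """Campbell elements from Thiele-Innes coefficients (same units as A..G).
    Returns semi-major axis a, inclination i, omega, and Omega (degrees)."""
    wpO = np.arctan2(B - F, A + G)          # omega + Omega
    wmO = np.arctan2(-B - F, A - G)         # omega - Omega
    k1 = np.hypot(A + G, B - F)             # = a * (1 + cos i)
    k2 = np.hypot(A - G, B + F)             # = a * (1 - cos i)
    a = 0.5 * (k1 + k2)
    i = 2.0 * np.arctan(np.sqrt(k2 / k1))   # tan^2(i/2) = k2/k1; k1 -> 0 means i -> 180 deg
    return a, np.degrees(i), np.degrees(0.5 * (wpO + wmO)), np.degrees(0.5 * (wpO - wmO))

# Placeholder coefficients, e.g. in mas, as read from gaiadr3.nss_two_body_orbit:
a, inc, omega, Omega = thiele_innes_to_campbell(A=1.2, B=0.4, F=-0.3, G=0.9)
```

With this convention the inclination falls in the range 0–180 degrees, matching the notation used above to distinguish prograde from retrograde orbits.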
### Searching for eclipses in complementary data

From the inclinations reported in _Gaia_ DR3, we identified binaries with a quasi edge-on orientation in order to search for additional eclipsing systems that were originally missed in the _Gaia_ epoch photometry. Among the systems with known inclinations that host a solar-like oscillator, we found 6 that fall into a range of 90\(\pm\)3 degrees: KIC 10732098, TIC 379953111, TIC 38843858, TIC 308539721, TIC 142053145, and TIC 237973654. These systems have periods between \(\sim\)200 and \(\sim\)1420 days. To search for eclipses in these targets, we extracted light curves from TESS FFIs. Given that in most cases the orbital period exceeds the timebase of the TESS data, it is not surprising that no eclipses are found. Next, we searched the _All-Sky Automated Survey for Supernovae_ (ASAS-SN, Kochanek et al. 2017, and references therein5). ASAS-SN began surveying the entire sky in the V band in 2014 with a \(2-3\) day cadence and switched to nightly monitoring in the g band in 2018. The existing timebase therefore exceeds multiple orbits for all of the candidate systems. However, most of the six targets are brighter than the ASAS-SN saturation limit of Johnson V \(\approx 10-11\) mag. We started by cross-matching with the ASAS-SN V- and g-band variables catalogs (Jayasinghe et al. 2021; Christy et al. 2023; Rowan et al. 2023). We identified several matches, but their variability is consistent with rotational modulation rather than eclipses. We then extracted the light curves of the sample mentioned above. Neither the light curves nor the phase curves, folded with the orbital periods reported by _Gaia_ DR3, revealed any signature of eclipses.

Figure 16: Distributions of the inclinations and orbital periods of astrometric binaries hosting a solar-like oscillating primary. The vertical dashed and dotted blue lines mark the full and half length of the _Gaia_ DR3 time base, respectively. The horizontal dashed line marks the inclination for the edge-on orientation, while the dotted lines indicate the plane-on orientations of an orbit.

Given the wide orbits, the range of three degrees around the edge-on configuration might be too wide to allow for eclipses, given the decreasing angular size of the binary components. We conclude that these targets are most likely non-eclipsing.

## 6 Confirmation through radial-velocity monitoring

APOGEE (_Apache Point Observatory Galactic Evolution Experiment_, Majewski et al., 2017) is an all-sky survey consisting of two nearly identical multi-object fiber-fed spectrographs, mounted on the northern 2.5 m Sloan Foundation Telescope at Apache Point Observatory and on the southern 2.5 m Irenee du Pont Telescope of Las Campanas Observatory, which perform near-infrared spectroscopy in the H band with a resolution of R \(\sim\) 22 500. APOGEE typically visits a source multiple times over the course of the project. Several papers have utilized the millions of single, homogeneous spectra to successfully search for large numbers of binaries in the red-giant phase from RV variations (e.g. Badenes et al., 2018; Gaulme and Guzik, 2019; Daher et al., 2022). To test the binary-candidate detection from _Gaia_ DR3, we searched for significant radial-velocity variations in the spectra contained in APOGEE DR17 (Ahumada et al., 2020). For this, we adopt the simple significance criterion of Patton et al. (2023), which flags a source as a potential binary if the scatter around the average radial velocity (VSCATTER) is greater than three times the average uncertainty of its RV measurements (VERR_MED), for targets with at least two visits. Of the 382 giant oscillators in the _Kepler_ sample with orbit solutions in the TBO, 181 were visited multiple times, leading to 149 binary detections. DR17 is the first APOGEE data release to include a substantial set of observations of TESS targets in the Southern Continuous Viewing Zone. However, most of these targets have only been visited once so far. Therefore, we could only test 7 binary candidates for RV variations, of which 5 exceed the significance limit. For the 45 dwarfs and subgiants observed by the _Kepler_ mission with orbital parameters reported in _Gaia_ DR3, 27 sources had at least two spectra, of which 25 showed significant RV variations.
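Expressed in code, this criterion is a one-line cut on the APOGEE summary table. A minimal sketch using the DR17 allStar column names quoted above (the table loading is omitted, and only the column names are assumed):

```python
import pandas as pd

def flag_rv_binaries(allstar: pd.DataFrame) -> pd.Series:
    """Patton et al. (2023)-style flag: True for sources with at least two
    visits whose RV scatter exceeds 3x the median RV uncertainty."""
    multi_visit = allstar["NVISITS"] >= 2
    significant = allstar["VSCATTER"] > 3.0 * allstar["VERR_MED"]
    return multi_visit & significant

# Usage on a table exported from the allStar summary file:
# allstar = pd.read_csv("allStar_dr17_subset.csv")   # hypothetical subset
# n_binary = flag_rv_binaries(allstar).sum()
```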
All systems with at least two spectroscopic observations by the APOGEE project are listed in Table 3. In the last column, we indicate whether a candidate system exceeded our significance threshold for a binary candidate. Because of the limited number of spectra, typically only the binary nature can be confirmed. For systems that are particularly rich in RVs, we folded the APOGEE RVs with the period reported in _Gaia_ DR3, as illustrated by the selected systems in Fig. 17, for which good agreement was found.

Figure 17: Example radial-velocity curves from APOGEE spectroscopy of _Gaia_ DR3 binary candidates. The panels show the RV values phase-folded with the period from _Gaia_ DR3, which is also given in the annotated text.

A non-detection of significant RV variation from APOGEE data does not prove a proposed binary candidate from _Gaia_ DR3 wrong. In many cases, the multiple visits of a source in APOGEE occur in close temporal proximity, within a few nights. The timebase of the spectroscopic observations can therefore be short compared to orbital periods of several hundred to thousands of days, leading to insignificant results for the RV variations. In particular, for eccentric systems the RV curve can show hardly any variation over small ranges in phase. An additional source of RVs is provided by Beck et al. (2017), who reported six binary systems with an oscillating solar-analog primary from RV monitoring with the Hermes spectrograph (Raskin et al. 2011), mounted on the 1.2 m _Mercator_ telescope on La Palma. The two systems KIC 4914923 and KIC 9098294 were reported to have peak-to-peak RV amplitudes of 2.11 and 41.35 km/s, respectively. KIC 4914923 is also confirmed as a binary from APOGEE spectroscopy. The star KIC 3241581 is found among the non-linear solutions. Indeed, the RV time series of Beck et al. (2016) points to an orbital period of \(\sim\)1500 d.

## 7 Extended science cases for this dataset

This dataset can now be used to test science cases related to red-giant stars in binary systems.

### Symbiotic binaries

Symbiotic stars are interacting binary systems composed of a cool red-giant star and a hot white dwarf or, in some cases, even a neutron star. Such systems move on orbits with periods typically between hundreds and a few thousand days, embedded in an environment of circumstellar gas. The spectra of these systems often show strong emission lines due to the photoionization of the nebula by the radiation of the hot component (Munari 2019, and references therein). Because these systems are highly photometrically variable (Merc et al. 2023), it is interesting to test how known or suspected symbiotic binaries perform in _Gaia_ DR3. We conducted a crossmatch, as described in Sec. 2, between the _Gaia_ DR3 database and the catalog of symbiotic binaries published by Merc et al. (2019a,b). Out of 141 confirmed galactic symbiotic systems within the range of the magnitude-limited sample (4 \(\leq\) G [mag] \(\leq\) 13), seven targets (\(\sim\)5%) were found in _Gaia_ DR3 with orbital solutions (Table 4 and Fig. 18). For the six systems with literature values for the orbital parameters, good agreement is found. Only for AG Dra, a K-type giant and a hot white dwarf on a well-established circular orbit with an orbital period of \(\sim\)550 days (Fekel et al. 2000), does _Gaia_ DR3 underestimate the orbital period, by \(\sim\)9%.
During the time covered by _Gaia_ DR3, AG Dra was in its active stage, showing at least two outbursts while the _Gaia_ DR3 data were being collected (see Merc et al. 2019; Galis et al. 2019). Although the majority of symbiotic systems are found on orbits of 200 to 500 days (Merc et al. 2019a,b), which are well suited for _Gaia_, the detection rate for this class of interacting binary systems is nearly an order of magnitude lower than for non-interacting, giant-hosting systems (Sec. 2). Because of the large variations that influence the photocenter as well as the stellar spectrum, these targets are difficult to characterize in an automated way. In addition to the confirmed symbiotic stars, the crossmatch revealed that another eight out of 744 galactic symbiotic candidates have orbital solutions listed in _Gaia_ DR3.

\begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline Star & 2MASS & _Gaia_ DR3 & \(e\) & \(P\) & Type & \(e_{\rm lit}\) & \(P_{\rm lit}\) & Ref \\ identifier & & & & [d] & & & [d] & \\ \hline StHA 32 & J04374563-0119118 & 3229441606998725888 & 0.12 \(\pm\) 0.03 & 618.47 \(\pm\) 10.60 & SB1 & – & 612 & [1] \\ IV Vir & J14163429-2145500 & 6276714894852124032 & 0.03 \(\pm\) 0.02 & 279.98 \(\pm\) 1.13 & SB1 & 0 & 281.6 \(\pm\) 1.2 & [2] \\ AG Dra & J16014101+6648101 & 164295552278445414 & 0.28 \(\pm\) 0.03 & 502.77 \(\pm\) 6.01 & SB1 & 0 & 548.65 \(\pm\) 0.97 & [3] \\ Hen 3-1213 & J16351508-5142724 & 5943206543151802752 & 0.17 \(\pm\) 0.06 & 530.11 \(\pm\) 3.76 & SB1 & 0.183 \(\pm\) 0.034 & 533 \(\pm\) 2 & [3,4] \\ YY Her & J18143419+20592913 & 452806307817919848 & 0.15 \(\pm\) 0.03 & 607.73 \(\pm\) 8.37 & SB1 & – & 589.5 \(\pm\) 0.3\({}^{a}\) & [5] \\ StHA 176 & J20224225-2107546 & 685925924984951521664 & 0.07 \(\pm\) 0.07 & 246.64 \(\pm\) 4.13 & SB1 & – & – & – \\ LT Del & J20355722+2011275 & 1817300516637652352 & 0.40 \(\pm\) 0.10 & 462.81 \(\pm\) 6.88 & SB1 & – & 465.6 & [6] \\ \hline TYC 1371-69-1 & J07573112+2017347 & 670455944074475008 & 0.01 \(\pm\) 0.01 & 119.13 \(\pm\) 0.06 & SB1 & 0.024 \(\pm\) 0.015 & 119.18 \(\pm\) 0.07 & [7] \\ GaSS 1-4 & J11121548-3207193 & 5403474822973970816 & 0.06 \(\pm\) 0.04 & 458.92 \(\pm\) 4.25 & SB1 & – & 235\({}^{a}\) & [8] \\ SkySyc 1-3 & J15265734-7003104 & 5796098502440628864 & 0.08 \(\pm\) 0.02 & 701.13 \(\pm\) 3.81 & SB1 & – & 482.78\({}^{a,b}\) & [9] \\ IGR J15293-5609 & J15292939-5612133 & 5883707000513657216 & 0.09 \(\pm\) 0.04 & 31.50 \(\pm\) 0.03 & SB1 & – & – & – \\ GaSS 1-20 & J16005485-1628325 & 62503660951296892 & 0.09 \(\pm\) 0.14 & 10.51 \(\pm\) 0.00 & SB1 & – & – & – \\ SS 295 & J17073816-044485 & 4360702354583742080 & 0.17 \(\pm\) 0.10 & 471.11 \(\pm\) 1.34 & SB1 & – & 471.00\({}^{a,b}\) & [10] \\ Gaia DR3 217... & J2181096+5721342 & 217898819945779456 & 0.28 \(\pm\) 0.10 & 753.92 \(\pm\) 26.14 & SB1 & – & – & – \\ Gaia DR3 533... & J11240425-6013342 & 5339026227414066432 & 0.07 \(\pm\) 0.12 & 17.15 \(\pm\) 0.02 & SB1 & – & – & – \\ \hline CGCS 5926 & J23454464+6252511 & 2016034975622911360 & – & – & – & – & – & – \\ Gaia DR3 553... & J08070625-4308520 & 553325378183484672 & – & – & – & – & – & – \\ \hline \end{tabular} \end{table} Table 4: Reported orbital parameters for confirmed and candidate symbiotic binary stars in _Gaia_ DR3.

Half of the eight candidates have periods reported in the literature.
For those systems for which orbital periods are known, we again find good consistency between the literature values and the periods reported by _Gaia_ DR3. For one system, the period of the spectroscopic solution in the TBO catalog is twice the photometric period reported in the literature. Such a difference between the photometric and spectroscopic orbital period is often seen in cases where the giant fills or nearly fills its Roche radius and is ellipsoidally distorted. As a consequence, two minima per orbital period are observable in the light curve, and the period search might return half the value of the true period. Three sources in the sample of candidates have rather short _Gaia_ orbital periods (10 to 32 days) that are substantially shorter than the minimum periods of \(\sim\)200 d found for symbiotic stars (Merc et al. 2019a,b). If these are the true orbital periods of these systems, this would rule out the symbiotic classification of these targets. Among the candidates from the New Online Database of Symbiotic Variables, there are also 337 targets newly identified as possible symbiotic stars by the supervised machine-learning classification of Rimoldini et al. (2023), based on the colors and variability of 12.4 million sources in _Gaia_ DR3. Only three of these _Gaia_ symbiotic candidates are reported in _Gaia_ DR3 as binary candidates, one of which has a period of \(\sim\)17 d. These numbers also show a very low detection rate and demonstrate that it is very challenging to identify symbiotic binaries from the existing observational data in _Gaia_ DR3. Comparing these 15 systems with the distribution of the orbital periods shows that they mostly follow the distribution of other red-giant stars. As for the other red-giant binaries (see Fig. 7), the systems are found with periods of less than 1 000 days. Searching also the non-linear and acceleration solutions, we find two additional systems, listed in the bottom panel of Table 4.

### Testing giants with anomalous peaks in the PSD

Colman et al. (2017) published a collection of 168 oscillating red-giant stars whose power-spectral density (PSD) reveals anomalous peaks. These peaks occur at frequencies very different from, and outside of, the classical power excess. Furthermore, the shape of these peaks in the PSD does not resemble a Lorentzian profile but rather a delta function. This suggests that these peculiar peaks are not stochastically excited but correspond to a periodic variation. For about half of the cases, contamination by background stars was found to be the most likely explanation. However, in 81 cases the source of the peculiar frequencies appears to coincide with the giant star. The authors suggested that such frequencies could be produced by the presence of close stellar components within the convective envelope of the red giant or by a close binary in a hierarchical triple system. We searched the _Gaia_ DR3 TBO catalog for these systems to test whether they are actually binary systems whose orbital period coincides with the anomalous peak. In total, we found seven objects with peculiar peaks to be binary candidates in _Gaia_, which are listed in Table 5 and shown in Fig. 18. In two of these systems, the peaks were identified as contamination, and in two additional ones contamination could not be excluded. Three systems were found to be possible physical associations.
For all seven objects, the period of the anomalous peak is below 4.5 days, while all orbital periods are reported to be between 137 and 685 days on moderately eccentric orbits (0.03 \(\lesssim e\lesssim\) 0.39). These orbital periods are also too long for the peaks to be tidally excited. We therefore suggest that these anomalous frequencies are unlikely to be excited by binary interaction.

## 8 Discussion and conclusions

In this work, we presented the successful search for solar-like oscillating stars in binary systems, revealed through photometric, spectroscopic, and astrometric solutions in the _Gaia_ DR3 catalog of Two-Body-Orbit solutions (TBO), and tested it for completeness and purity. To test the TBO, we used the SB9 catalog of orbital solutions. We introduced a magnitude-limited sample to account for observational biases due to partial saturation (4 \(\leq\) G [mag] \(\leq\) 13). Because the sample contains systems with periods of several tens of thousands of days, which are too long to be resolved by the _Gaia_ DR3 baseline of 1034 days, we limited our sample further in period (P\({}_{\rm orb}\) \(\leq\) 1100 days). We found an overall completeness factor of 28.3% for the complete SB9 catalog. The _Renormalised Unit Weight Error_, or _ruwe_, measures the astrometric likelihood of a source being a single star. About 40% of the detected binaries from the SB9 sample had _ruwe_ values below 1.4, a conservative limit for astrometric binary detection, and were detected by other means.

Figure 18: Orbital parameters of systems hosting symbiotic stars and red giants that exhibit anomalous peaks in the PSD (cyan pentagrams). For the symbiotic systems, red dots and magenta squares depict the confirmed and candidate symbiotic stars, respectively. The background density map depicts the distribution and marks the most common combinations of orbital solutions in the SB9.

\begin{table} \begin{tabular}{c|c c c} \hline \hline KIC & \(P_{\rm peak}\) & \(P_{\rm orb}\) & \(e\) \\ & [d] & [d] & \\ \hline \multicolumn{4}{c}{Possible physical associations} \\ \hline 2449020 & 0.83 & 310 \(\pm\)2 & 0.29 \(\pm\)0.05 \\ 10936814 & 4.45 & 665 \(\pm\)6 & 0.02 \(\pm\)0.03 \\ 7596350 & 0.26 & 647 \(\pm\)11 & 0.04 \(\pm\)0.04 \\ \hline \multicolumn{4}{c}{Presumed chance alignments} \\ \hline 5556726 & 0.48 & 172 \(\pm\)1 & 0.24 \(\pm\)0.02 \\ 12117138 & 4.40 & 685 \(\pm\)18 & 0.39 \(\pm\)0.03 \\ \hline \multicolumn{4}{c}{Confirmed chance alignments} \\ \hline 2167774 & 0.35 & 137.0 \(\pm\)0.2 & 0.21 \(\pm\)0.06 \\ 1872210 & 0.67 & 540 \(\pm\)6 & 0.20 \(\pm\)0.07 \\ \hline \end{tabular} \end{table} Table 5: Red giants with anomalous peaks in the PSD (Colman et al. 2017) that are binary candidates in _Gaia_ DR3.

Performing the same searches of the TBO catalog for the lists of identified solar-like oscillators from the NASA _Kepler_ mission and from the Southern Continuous Viewing Zone of the NASA TESS mission, we identified 970 binary-system candidates that host solar-like oscillating stars, of which 954 are newly detected systems. The sample presented in this work increases the number of known binary systems with oscillating components by an order of magnitude. The full wealth of asteroseismic information allows for a comprehensive study of these systems and their oscillating components. From the search results, we obtain a magnitude-limited completeness factor of about 4% for the full red-giant sample. Taking into account the unresolved binaries and the completeness factor determined from our comparison with the SB9, we arrive at a binary yield similar to the expected literature value of 30-40% (Moe & Di Stefano, 2017; Offner et al., 2022).
We assessed the mass and stellar radius ranges using the asteroseismic scaling relations. Our analysis showed that the TESS data suffer from noise and the limited length of the photometric time series, leading to an underestimated large-frequency separation. As a result, the masses of stars with radii larger than the typical radius of the red clump (\(\nu_{\rm max}\la 30\,\mu\)Hz, see Fig. 5, top panel) can be overestimated, leading to an excess of stars with \(M_{\star}\ga 2\,M_{\odot}\). To test whether the orbits reported in the TBO catalog are physically plausible, we compared the orbital periods and the seismically inferred radii with the radius limit for Roche-lobe overflow (Fig. 5, bottom panel). Except for a few reported systems with periods \(P_{\rm orb}\la 10\,\)days, for which the orbit would be smaller than the radius of the primary, all reported values were found to be physically possible. However, these few systems could be actual binaries with a significantly underestimated period. An additional argument for the orbital periods being realistic comes from the location of the data points in the parameter plane, which are all well separated from the Roche-lobe limit. This gap is expected because systems were selected based on the criterion that the primary is oscillating: as a system approaches the limit for RLOF, the increased strength of tidal interactions starts to suppress the oscillations. Because of the robust residuals centered at zero found in the comparison of the TBO catalog with the SB9, the large fraction of physically reasonable orbital periods, and the approximate agreement with the expected binary fractions for stars of about 1 \(M_{\odot}\), we consider most of the binary candidates reported in the _Gaia_ DR3 TBO catalog to be reliable new binary systems. The large number of binary systems opens the door to studying binary-star interaction and related activity. Using the seismically determined evolutionary stages, we could view the distributions of the orbital eccentricity and period as a function of stellar evolution. We showed that red-clump stars have lower eccentricities and are biased towards longer periods than systems hosting the less evolved RGB stars. We attribute the lower eccentricities to the increased strength of the tidal interaction due to the larger radii at the tip of the RGB. The lack of periods below 500 d originates from phases of intense star-star interaction, such as the RLOF or common-envelope phase. For the oscillating dwarfs, we showed the correlation between high photospheric activity and tidal circularization and synchronization. We used the asteroseismic inferences for the oscillating giants to analyze the distributions of the orbital period and eccentricity as a function of the evolutionary state. Indeed, we could show differences that agree with the predictions for the tidally driven evolution of binaries as they converge to the equilibrium state of circularized systems. For 146 systems, inclination angles are reported in _Gaia_ DR3. We converted the notation of these values and associated them with the oscillating primaries. If RVs for both components are reported from ground-based observations, these systems will provide additional valuable benchmarks for calibrating the scaling relations. If rotational splitting of non-radial modes is measured in those systems, the inclination of the rotation axis can be determined, allowing a test of spin-orbit alignment. A first work in this direction, based on _Gaia_ DR3, was presented by Ball et al. (2023).
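The plausibility test described above combines two standard ingredients, the asteroseismic scaling relations and the Roche-lobe radius. A minimal sketch of both follows; the solar reference values and example inputs are illustrative and not the exact calibration used in this work, and the Roche-lobe radius uses the Eggleton (1983) approximation.

```python
import numpy as np

NUMAX_SUN, DNU_SUN, TEFF_SUN = 3090.0, 135.1, 5777.0   # muHz, muHz, K

def seismic_mass_radius(numax, dnu, teff):
    """Mass and radius in solar units from numax [muHz], dnu [muHz], Teff [K]."""
    m = (numax / NUMAX_SUN)**3 * (dnu / DNU_SUN)**-4 * (teff / TEFF_SUN)**1.5
    r = (numax / NUMAX_SUN) * (dnu / DNU_SUN)**-2 * (teff / TEFF_SUN)**0.5
    return m, r

def roche_radius(a, q):
    """Eggleton (1983) Roche-lobe radius of the primary; q = M1/M2,
    returned in the same units as the semi-major axis a."""
    q23 = q**(2.0 / 3.0)
    return a * 0.49 * q23 / (0.6 * q23 + np.log(1.0 + q**(1.0 / 3.0)))

# Example: a luminous giant with numax = 30 muHz, dnu = 4 muHz, Teff = 4800 K.
m, r = seismic_mass_radius(30.0, 4.0, 4800.0)    # ~0.9 Msun, ~10 Rsun
# Compare r with roche_radius(a, q), with a (in solar radii) obtained from
# Kepler's third law for the Gaia DR3 period, to test for a detached orbit.
```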
With increasing orbital period, the probability of detecting eclipses in a binary system decreases for geometrical reasons, which explains the small number of eclipsing binary systems found in the vast datasets of space photometry. From our search of the _Gaia_ variability catalog, we found one previously unknown eclipsing binary system hosting an oscillating red-giant primary. Analyzing the radial velocities derived from APOGEE and Hermes spectroscopy, we could independently confirm 149 of the 181 systems proposed by _Gaia_ DR3 that have multiple APOGEE spectra. This low number of systems covered by APOGEE originates from the limited sampling of the APOGEE observations and is expected to improve with forthcoming data releases. For most of the systems with RV measurements, binarity could be confirmed. Therefore, we regard the majority of the binary candidates reported in _Gaia_ DR3 as bona fide candidates. Given these numbers, this work is a first encouraging step into binary ensemble seismology. The work presented here was based only on a subsample of well-characterized stars. Forthcoming data from the TESS mission will soon provide new detections of solar-like oscillators. With the launch of the ESA PLATO mission (Rauer et al., 2014, and Rauer et al. 2023, subm.), scheduled for 2026, we will further increase the sample of binary systems with a component characterized by seismology. Given the mission goal of observing bright stars (Nascimbeni et al., 2022), the expected yield of seismic detections for the _Gaia_ binaries should be similar to that of the K2 mission described in this work, i.e., an increased number of potential binary systems with seismic characterization. The forthcoming data releases of the _Gaia_ mission6 will allow a more complete census of the binary population and therefore a closer estimate of the actual binary occurrence rate. The 66 months, or \(\sim\)2000 days, of DR4, which is projected to be made public before 2026, will double the current time base, allowing the detection of wider systems and substantially improving the residuals of the orbital parameters beyond 500 days. DR5 is planned to cover all data collected during the entire mission. Such an extended baseline will increase the number and reliability of orbital solutions around 1 000 days. Because most systems with a He-core-burning giant (RC) have orbital periods in this regime, this extension of the binary census will help lift the current selection bias that produces an abundance of H-shell-burning (RGB) primaries. The data sets of the ESA _Gaia_ mission are truly a Rosetta stone for studying the evolution of binary systems and the tidal interaction with evolved stellar components.

Footnote 6: [https://www.cosmos.esa.int/web/gaia/release](https://www.cosmos.esa.int/web/gaia/release)

###### Acknowledgements.

The authors thank the people behind the ESA _Gaia_, NASA _Kepler_, and NASA TESS missions. PGB acknowledges support by the Spanish Ministry of Science and Innovation with the _Ramón y Cajal_ fellowship number RYC-2021-033137-1 and the number MRR4032204. Substantial research work for this paper was performed during a summer research stay of PGB at the _Ohio State University_ (OSU), Columbus, Ohio, generously supported by the _NAWI Graz: Mobility Grant 2022_. PGB thanks OSU for the hospitality and scientific exchange during his stay.
PGB, DG, LS, and NM acknowledge the financial support by _NAWI Graz_. JM acknowledges support from the Instituto de Astrofisica de Canarias (IAC) received through the IAC early-career visitor program. SM acknowledges support by the Spanish Ministry of Science and Innovation with the _Ramón y Cajal_ fellowship number RYC-2015-17697. SM and DGR acknowledge support from the Spanish Ministry of Science and Innovation with the grant no. PID2019-170187GB-I00. RAG and SM acknowledge support from the PLATO CNES grant. PG was supported by the German space agency (Deutsches Zentrum für Luft- und Raumfahrt) under PLATO data grant SOD05101. PG and J acknowledge NASA grant NNX1747G for partial support. KH acknowledges support through NASA ADAP grants (80NSSC19K0594). LS acknowledges financial support through the _Marshall Plan Foundation Austria_ with contract number 20561394292820227. MV acknowledges support from NASA grant 80NSSC18K1582. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. This paper includes data collected by the Kepler & TESS missions, obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for these missions is provided by the NASA Science Mission Directorate and by the NASA Explorer Program, respectively. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. Software: Python (Van Rossum & Drake, 2009), numpy (Oliphant, 2006; Harris et al., 2020), matplotlib (Hunter, 2007), scipy (Virtanen et al., 2020), Astroquery (Ginsburg et al., 2019). This research made use of Astropy (Astropy Collaboration et al., 2013, 2018), a community-developed core Python package for Astronomy.
2305.06809
Collection Space Navigator: An Interactive Visualization Interface for Multidimensional Datasets
We introduce the Collection Space Navigator (CSN), a browser-based visualization tool to explore, research, and curate large collections of visual digital artifacts that are associated with multidimensional data, such as vector embeddings or tables of metadata. Media objects such as images are often encoded as numerical vectors, e.g., based on metadata or using machine learning to embed image information. Yet, while such procedures are widespread for a range of applications, it remains a challenge to explore, analyze, and understand the resulting multidimensional spaces in a more comprehensive manner. Dimensionality reduction techniques such as t-SNE or UMAP often serve to project high-dimensional data into low-dimensional visualizations, yet require interpretation themselves as the remaining dimensions are typically abstract. Here, the Collection Space Navigator provides a customizable interface that combines two-dimensional projections with a set of configurable multidimensional filters. As a result, the user is able to view and investigate collections, by zooming and scaling, by transforming between projections, by filtering dimensions via range sliders, and by applying advanced text filters. Insights that are gained during the interaction can be fed back into the original data via ad hoc exports of filtered metadata and projections. This paper comes with a functional showcase demo using a large digitized collection of classical Western art. The Collection Space Navigator is open source. Users can reconfigure the interface to fit their own data and research needs, including projections and filter controls. The CSN is ready to serve a broad community.
Tillmann Ohm, Mar Canet Solà, Andres Karjus, Maximilian Schich
2023-05-11T14:03:26Z
http://arxiv.org/abs/2305.06809v1
# Collection Space Navigator: An Interactive Visualization Interface for Multidimensional Datasets

###### Abstract

We introduce the Collection Space Navigator (CSN), a browser-based visualization tool to explore, research, and curate large collections of visual digital artifacts that are associated with multidimensional data, such as vector embeddings or tables of metadata. Media objects such as images are often encoded as numerical vectors, e.g., based on metadata or using machine learning to embed image information. Yet, while such procedures are widespread for a range of applications, it remains a challenge to explore, analyze, and understand the resulting multidimensional spaces in a more comprehensive manner. Dimensionality reduction techniques such as t-SNE or UMAP often serve to project high-dimensional data into low-dimensional visualizations, yet require interpretation themselves as the remaining dimensions are typically abstract. Here, the Collection Space Navigator provides a customizable interface that combines two-dimensional projections with a set of configurable multidimensional filters. As a result, the user is able to view and investigate collections, by zooming and scaling, by transforming between projections, by filtering dimensions via range sliders, and by applying advanced text filters. Insights that are gained during the interaction can be fed back into the original data via _ad hoc_ exports of filtered metadata and projections. This paper comes with a functional showcase demo using a large digitized collection of classical Western art. The Collection Space Navigator is open source. Users can reconfigure the interface to fit their own data and research needs, including projections and filter controls. The CSN is ready to serve a broad community.

## 1 Introduction

Large collections of digital artifacts with multidimensional associated metadata can be studied using browsable interactive visualizations that reflect or at least resonate with the intrinsic shape of their data (Manovich, 2012). Mapping the multidimensional topology of a collection into a multidimensional space can help to better understand the overall structure of a dataset and can uncover patterns hinting at underlying trends and dynamics. For example, a researcher or curator may visually explore the space, as constituted by some measure of artifact similarity, looking at different groups of similar objects to identify regions of interest for further quantitative and qualitative investigation. Multidimensional feature vectors further describe artifact properties in a variety of ways. This can include both categorical and numerical information. Numerical properties can be derived directly from metadata, such as in the case of artifact creation dates, or constructed through various feature extraction techniques. Neural network methods, for example, can encode measures of complex text semantics (Devlin et al., 2018), of visual image properties (Krizhevsky et al., 2017; Simonyan and Zisserman, 2014; He et al., 2016; Kolesnikov et al., 2020), of joint image-text pair embeddings (Sohl-Dickstein et al., 2015; Radford et al., 2021; Rombach et al., 2022), or of spectral audio features (Ren et al., 2018). Hand-crafted feature engineering approaches (Zhang et al., 2020) and more straightforward algorithmic approaches such as compression ensembles (Karjus et al., 2022) further promise to offer more interpretable vector representations.
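As an illustration of the embedding step just described, the following minimal sketch encodes each image as a 2048-dimensional vector with a pretrained ResNet50 via torchvision. This is one possible embedding model among the many cited, not a prescribed choice of the CSN.

```python
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights)
model.fc = torch.nn.Identity()          # drop the classifier -> 2048-d features
model.eval()

preprocess = weights.transforms()       # resize, crop, normalize as in training

@torch.no_grad()
def embed(path: str) -> torch.Tensor:
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return model(x).squeeze(0)          # one feature vector per image

# vectors = torch.stack([embed(p) for p in image_paths])   # collection matrix
```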
Dimensionality reduction techniques can be used to reduce high-dimensional data to a more manageable number of dimensions by remapping or projecting the multidimensional topology into a lower-dimensional coordinate space (Pearson, 1901; Van der Maaten and Hinton, 2008; McInnes et al., 2018; Amid and Warmuth, 2019; Tang et al., 2016). For visual interpretation of multidimensional embedding spaces, such projection methods are used to present the data in two or three dimensions, which essentially can function as a more or less distorted reference topography of the original multidimensional topology. The challenge for dimensionality reduction techniques is to preserve complex relationships while removing information. For example, objects close to each other in the high-dimensional topological space should ideally also be close to each other in the low-dimensional topographic projection space. Multiple projection views can help to better understand multidimensional data (Amid and Warmuth, 2019). Two-dimensional static projections are visually comprehensible, but can only provide a limited view into multidimensional data. Alternatively, to gain intuition over a high-dimensional vector space, it can be helpful to interpret and compare many different projections. Interactive components in graphical user interfaces (GUIs) can further help to gain intuition over the complex interactions between many dimensions. Graphical user interface elements such as range sliders are particularly useful to navigate through multiple features and dimensions, and to query and filter the data (Williamson and Shneiderman, 1992; Ahlberg and Shneiderman, 1994).
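A minimal sketch of producing such alternative 2D projections from an embedding matrix, assuming scikit-learn and the umap-learn package (hyperparameters and file names are illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
import umap

X = np.load("embeddings.npy")                       # hypothetical (n_objects, n_dims)

xy_pca = PCA(n_components=2).fit_transform(X)       # linear baseline
xy_tsne = TSNE(n_components=2, perplexity=30.0).fit_transform(X)
xy_umap = umap.UMAP(n_neighbors=15, min_dist=0.1).fit_transform(X)

# Each xy_* array holds one (x, y) point per object; switching between such
# coordinate sets is what an interactive interface can animate.
np.save("projection_umap.npy", xy_umap)
```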
## 2 Related Work

Most closely related to the Collection Space Navigator are multiple interactive visualization experiments and prototypes which have emerged in recent years, and which mediate high-dimensional embeddings through low-dimensional projections. They either try to provide an intuitive understanding of multidimensionality and embedding methods by visualizing datasets commonly used for Machine Learning tasks (Smilkov et al., 2016; Labs, 2019; Custer, 2020), or offer explorative interfaces to similarity spaces of cultural collections (Diagne et al., 2018; Duhaime, 2017; Oygard, 2017; Glinka et al., 2017). These interactive visualization projects aim to provide overview and deeper insight into their collections, tailored to specific datasets. The VIKUS Viewer (Pietsch, 2017; Pietsch et al., 2020) offers a more general framework for exploring cultural collections. It allows not only viewing a collection as a similarity map of its image embeddings, but also dynamically filtering metadata such as time and categories. The Selfiexploratory of the Selfiecity project (Manovich et al., 2015) similarly uses a number of range sliders to filter a multidimensional dataset of images. The CSN user interface aligns with classic conventions of cultural cartography and modern examples of interactive data visualization and scholarly figure design. Following long-established conventions, the CSN combines a cartesian projection with an auxiliary index and call-out details (Nolli, 1748). Following a modern paradigm of interactive figure design (Bertin, 2010; Victor, 2011), the CSN further allows for a deeper functional user experience (UX) and eventually understanding of multidimensional data. The navigation paradigm of the CSN range sliders, which function as Dimension Filters, further resonates with the recent state of the art in understanding mathematical multidimensionality via interactive animation (Sanderson, 2017). The CSN combines these foundations with the paradigm of a scatter plot of images as made popular in Cultural Analytics since 2012 (Manovich 2012a). The CSN is also functionally similar to network visualization applications, such as Cytoscape (Shannon et al. 2003) or Gephi (Bastian et al. 2009), which focus on depicting another (yet related) form of multidimensionality in node-link diagrams of complex networks (Bohm et al. 2022). Even more broadly, the authors of the TensorFlow Embedding Projector (Smilkov et al. 2016) suggest including multipanel projections, i.e., more than one simultaneous projection panel. This would also make sense as a possible extension of the CSN and would be in line with the prevalence of "multi-chart" figure panels in multidisciplinary science journals (Lee et al. 2017; Berghaus 1845).

## 3 The Collection Space Navigator

### Motivation

We developed the Collection Space Navigator as a flexible browser-based research tool applicable across various use cases and research domains:

1. Researching large collections of digital objects (e.g. images, videos, audio, text, 3D models) with the ability to identify patterns and similar groups based on metadata and vector embeddings;
2. Understanding multidimensionality and projection methods by comparing different embedding spaces and dimensionality reduction techniques through intuitive navigation;
3. Presenting entire media collections online and communicating research findings with diverse audiences.

Figure 1: The Collection Space Navigator (CSN). The central _Projection Area_ displays an x-y scatter plot of images based on the selected projection (e.g. UMAP, t-SNE), with filtered-out images greyed out and a mouse-over highlight. The _Object Panel_ (left) shows a larger _Object Preview_ of the highlighted image, together with _Object Info_ based on selected metadata; _Object Appearance_ visualizes clusters (optional) and sets the projection thumbnail size (zoomed-out and zoomed-in). The _Control Panel_ (right) allows for selection of _Data & Projections_; custom interactive _Dimension Filters_ and _Advanced Filters_ facilitate dataset exploration, analysis, and understanding (see text); the filtered object metadata and current projection view can be downloaded via _Export Filtered Data_.

While prototypes and use cases exist for each of these aspects, as discussed above, we are not aware of a tool that meets all of these requirements. Our contribution therefore lies in the combination and extension of existing interfaces to work towards a more universal and open modular research and curation system.

### Design principles

To make complex interactions in multifaceted datasets comprehensible while meeting the diverse needs of researchers, we formulated three design principles for the tool:

1. Providing an open modular system that adapts to different research needs, domains, and datasets while preventing information overload;
2. Providing a complete overview of the collection while encouraging immersive exploration of the objects;
3. Providing a multitude of interaction mechanisms and modalities to foster intuition, such as zooming, panning, hovering, and sliding through feature dimensions.
Figure 2: Examples of various 2D projections and visualization features in the CSN tool. a) UMAP projection with large thumbnails, providing a comprehensive view of the image content; b) UMAP projection with medium-size thumbnails and cluster colors of categorical data selected in _Object Appearance_; c) UMAP projection with medium-size thumbnails and filtered-out objects in grey; d) UMAP projection with small thumbnails and cluster colors, providing a more compact representation; e) t-SNE projection, showing an alternative dimensionality reduction technique; f) Simple x/y plot, here showing PC1 over time for temporal analysis. The flexibility of the CSN provides the ability to effectively explore and compare data across different visualization methods by displaying multiple different 2D projections, configured via the data import and selectable in _Data & Projections_, combined with thumbnail sizes and cluster highlights adjustable via the _Object View Settings_.

### Components

#### 3.3.1 Projection Area

The central part of the interface is the _Projection Area_ (Figure 1, center). It maps the given input collection in its entirety as miniature images in an interactive 2D scatter plot, with coordinates defined by the chosen projection method. Basic navigation operations such as zooming or "drag and move" allow free exploration of the projection space. While the user sees only two dimensions in the central projection area, the CSN technically includes a third axis for depth. Moving along the depth axis (by zooming) effectively reveals overlapping objects. The appearance of the thumbnails can be adjusted in the _Object Panel_.

#### 3.3.2 Object Panel

The _Object Panel_ (Figure 1, left) has three collapsible sub-menus: _Object Preview_, _Object Info_, and _Object Appearance_. The _Object Preview_ section displays a larger version of the miniature thumbnail currently hovered over in the _Projection Area_. By default it simply shows a larger version of the same thumbnail, but it can be set to display a higher-resolution version of the hovered image, stored either locally or remotely. The _Object Info_ section provides detailed information on the currently selected object. This aspect of the CSN is highly flexible, as the metadata fields that provide this information can be easily defined in the separate configuration file. Minimally it can display just the file name, but it can equally well include extensive metadata -- for example, in the case of art collections, the author, production year, location, genre, style, and other details. The _Object Appearance_ section contains options to control the visual appearance of the objects in the _Projection Area_. A predefined group (from categorical metadata) can be selected from a drop-down list to show clusters. Objects of the same category are depicted with the same color border around their thumbnails. The size and scale of the miniature images are adjustable with convenient sliders. Size determines how large the thumbnails should be when fully zoomed out, while Scale affects their size when zoomed in.

#### 3.3.3 Control Panel

The _Control Panel_ (Figure 1, right) has four collapsible sub-menus: _Data & Projections_, _Dimension Filters_, _Advanced Filters_, and _Export Filtered Data_. The _Data & Projections_ section contains a _Dataset_ drop-down list of selectable datasets.
For very large collections, we recommend providing a smaller subset by default and offering the entire set on demand (such subsets can be conveniently produced using the CSN configuration Python notebook). The section also contains a _Projection_ drop-down list of selectable projections and mappings. Switching between different projections, e.g. different embedding or reduction methods, smoothly rearranges the positions of the objects in the _Projection Area_. These intuitive animations can provide new insights into the intermediate state between two projections and expose their differences. The _Dimension Filters_ are optional interactive elements that filter the objects in the _Projection Area_ (Figure 3). They control the range of the assigned variables, which could be dimensions of the embedding, metadata such as the date or year of creation of an artwork, or inferred properties of the image such as colorfulness or contrast. Histograms above the range sliders provide additional statistical information and feedback on how changes affect the distribution of the mapping. They are constantly updated to reflect the distribution of the entire dataset as well as the distributions of filtered and unfiltered objects. Clicking on a histogram activates the _Bin Mode_: moving the cursor over the bars of the histogram temporarily displays only objects within the narrow range of the bar. A second click terminates this function and sets the filters back to the previous state. Additionally, hovering over the thumbnails in the _Projection Area_ highlights the corresponding vertical bar in each histogram. The _Advanced Filters_ section is a text field to construct and apply search and filter queries. By default, it handles basic query operators such as AND, OR, equals (==), does not equal (!=), as well as custom operators. Nested and complex search queries are also supported using round brackets. In the setup, using the configuration Python notebook, each metadata field can be defined as a Free Text Entry (enabling queries) or as a Categorical Selection (generating drop-down lists, i.e. GUI elements that allow simple search and selection). The _Export Filtered Data_ section in the _Control Panel_ allows downloading the metadata of the currently filtered objects as a CSV file, and the current projection view as a PNG file.

Figure 3: Interactive Dimension Filters. Left: Unfiltered Dimension Filters, consisting of range sliders with interactive histograms above them, showing the distribution of all objects along the slider's dimension, with the bin of the currently selected object highlighted in red; Center: Reducing the range of one slider affects the distribution of all dimensions, reflected by the histograms; Right: The _Bin Mode_ functionality is activated by clicking on a histogram, allowing the user to activate one bin at a time, with the Projection Area displaying the corresponding objects within the active bin. These interactive Dimension Filter features allow in-depth exploration and visualization of multidimensional data distributions, allowing users to gain a deeper understanding of the relationships between data points.
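The range-slider and query semantics just described map naturally onto tabular operations. A minimal sketch with pandas (column names, file names, and the example query are illustrative; the CSN's own query parser is not shown):

```python
import pandas as pd

meta = pd.read_csv("metadata.csv")                  # hypothetical metadata table

# Dimension Filters: one numeric range constraint per slider
mask = meta["year"].between(1850, 1920) & meta["colorfulness"].between(0.2, 0.8)

# Advanced Filter: a textual query with AND / OR / == / != semantics
query = "(style == 'Impressionism' or style == 'Realism') and artist != 'Unknown'"
filtered = meta[mask].query(query)

filtered.to_csv("filtered_export.csv", index=False)  # cf. Export Filtered Data
```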
## 4 Applications

An important aspect of the CSN is the flexibility to combine different data types and different methods for feature extraction and dimensionality reduction. As an initial test of the CSN towards its many potential applications in other disciplines, we utilized the application in several use cases within our multidisciplinary research lab. All these cases vary in terms of research questions, approaches, and datasets. Here we briefly describe three examples, which (i) analyze embedding spaces, (ii) study multidimensional metadata, and (iii) explore and curate large sets of generated images, respectively. Mixed applications are of course possible. A growing diversity of application demos will be added to the project website1.

### Analyzing aesthetic complexity in art collections

Karjus et al. (2022) propose quantifying visual aesthetic complexity using "compression ensembles", an information-theory-driven method based on transforming images using visual filters and repeatedly compressing them. This approach produces embeddings analogous to those from deep-learning-based image embeddings, but with explainable and arguably more interpretable results in the aesthetic domain. This paper uses the same dataset as our examples in figures 1 to 3 and our associated initial demo application, i.e. a corpus of digitized images of visual art such as paintings and drawings, a subset of 74,028 works of the art500k project corpus (Mao et al. 2017), in turn primarily sourced from Wikiart, covering mainly Western art of the last 400 years. Here and in similar projects focusing on visual art, the CSN can serve both as a research and a presentation tool in several capacities. The visual and interactive overviews of the embedded artworks allow for a closer examination of art-historical trends over time and in relation to their aesthetic complexity. Using multiple representations and filters can further help to build an intuitive understanding of the given embedding method. In the case of compression ensembles, the variables or dimensions are interpretable (Karjus et al. 2022), and can be decorrelated and grouped, for example using Principal Component Analysis. A concrete usage of the CSN in this circumstance is, for example, to display the dataset in the projection area using two principal components or specific compression dimensions as x and y coordinates, while filtering another compression dimension with the range sliders to visualize the respective interaction.

### Studying represented worldviews in historical newsreels

Oiva et al. (forthcoming) propose a multidisciplinary framework for the study of historical newsreels, which can be extended to news broadcasts and audiovisual collections more broadly. The initially analyzed dataset consists of 1747 newsreels produced weekly over 50 years in the Soviet Union. Within the project, the digitized video clips, about 8-10 minutes each, are split into individual frames, i.e. static images, which form the unit of analysis either in their full granularity or coarse-grained, for example as a single frame per shot. The material consists of predominantly black-and-white footage of variable quality, transitioning to color towards the end of the period. Rich metadata accompanies the visual information. Taking all this into account, the authors aim to analyze and understand the material, including in particular the variation of intrinsic worldviews.

Figure 4: Two distinct use cases of the Collection Space Navigator. Left: Historical newsreels with rich metadata; Right: Investigating text-to-image generation through Stable Diffusion with linguistic basic concepts as prompts. By using domain-specific metadata filters and projections, the CSN tool allows for deeper exploration, providing insight into visual and thematic patterns.
In this project, the CSN is used both as a presentation tool to communicate with the audience and as a research tool for the authors (Figure 4), allowing for exploration of the film footage, identification of recurring patterns and shots, and the generation of hypotheses based on a combination of the visual representation of movie frames via image embedding vectors and the accompanying metadata. In this case, a stronger focus indeed lies on the latter explicit metadata dimensions, which can be mapped to range sliders in a similar way as the numerical vectors in example 4.1.

### Exploration and curation of text-to-image models

The advent of easily accessible text-to-image models such as DALL-E, Midjourney, and Stable Diffusion (Rombach et al. 2022) has brought artistic and photo-realistic image generation to the mainstream. This is very likely to transform creative industries in the near future. Yet the effectively near-infinite content capacity will require an understanding of the models to be made use of, and efficient curation to work with already generated images. The CSN is well suited for such tasks, allowing for insight into large image sets using various visual-similarity and metadata representations and filtering options. For the demo, we generated images with Stable Diffusion 1.5 using the basic word list of the Automated Similarity Judgment Program (ASJP) (Brown et al. 2008) as the input text prompts, and embedded them using the pretrained ResNet50 image model. This approach, querying simple terms such as colors, body parts, and animals, allows for probing the "defaults" of the text2image models, while generating multiple versions of each prompt yields insight into the variation in its capacity. The same approach could be used to compare different models, as well as to investigate the effects of prompt complexity. Naturally, more complex prompts such as those common in artistic text2image creation could be explored as well.
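A minimal sketch of such a generation loop, assuming the diffusers library and a GPU; the model id, word list, and file names are illustrative rather than the exact demo configuration:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompts = ["water", "stone", "hand", "dog", "red"]  # ASJP-style basic concepts
n_versions = 8                                      # variation per prompt

for word in prompts:
    for k in range(n_versions):
        image = pipe(word).images[0]                # one generated PIL image
        image.save(f"{word}_{k:02d}.png")
# The saved images can then be embedded (e.g. with the ResNet50 sketch in
# Sect. 1) and loaded into the CSN for exploration and curation.
```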
## 5 Conclusion

The Collection Space Navigator (CSN) as introduced in this paper is a powerful and flexible data visualization tool for working with large sets of images associated with multidimensional data, including embedding vectors and tabular metadata. The CSN is publicly available on GitHub, including a functioning demo as well as a how-to configuration and usage guide in the form of an interactive Python notebook. As described above, the interface can be conveniently and readily tailored to the needs of a particular research project and dataset. An important aspect of the CSN in this regard is its flexibility to combine different vector and metadata types, a variety of projection methods, and a diversity of representations. This includes the bespoke configuration of different representation and filter controls, resulting in an elegant user interface that allows the user to focus on the material under investigation, without unnecessary distractions in the form of "default" controls. We have exemplified the CSN here using visual data. Yet, while input data is by default represented by small images or thumbnails in the Projection Area, the input does not need to be vectors derived from images. For example, embeddings or metadata of audio or text could be visualized equally well, represented by suitable thumbnails or labels. We imagine such "unorthodox" usage could facilitate the exploration of, and further research on, such non-visual spaces of meaning too. By making the code fully open source, we encourage further development in this and other directions, for example adding playback functionality to images associated with audio or video files. In sum, the Collection Space Navigator (CSN) is a new research tool that facilitates the work of artists, curators, scholars, and scientists who study large collections of digital visual artifacts associated with metadata and/or vector embeddings. We release the CSN as open source under the MIT License, to enable broad reuse and further development without restriction. We hope it will invigorate a diverse ecology of multidisciplinary research and curatorial practice, and help to deepen our collective understanding of multidimensional meaning spaces.

## Code availability and demo

The CSN is released as open source (MIT license), with code and documentation available on GitHub at [https://github.com/Collection-Space-Navigator/CSN](https://github.com/Collection-Space-Navigator/CSN). A live demo, using the same example data as in figures 1 to 4 in this paper, is available at [https://collection-space-navigator.github.io/CSN](https://collection-space-navigator.github.io/CSN).

## Author contributions, acknowledgments and funding

T.O. and M.C. designed, co-authored, and developed the Collection Space Navigator. T.O., M.C., A.K., and M.S. contributed to the research design and co-wrote the manuscript. T.O., M.C., and A.K. collected data. T.O. and M.C. contributed equally to this work as first authors. The authors thank Sebastian Ahnert, Mila Oiva, and the entire CUDAN team for useful conversations and input. All authors are supported by the CUDAN ERA Chair project, funded through the European Union's Horizon 2020 research and innovation program (Grant No. 810961).
2304.09813
Pulse shape and voltage-dependent synchronization in spiking neuron networks
Pulse-coupled spiking neural networks are a powerful tool to gain mechanistic insights into how neurons self-organize to produce coherent collective behavior. These networks use simple spiking neuron models, such as the $\theta$-neuron or the quadratic integrate-and-fire (QIF) neuron, that replicate the essential features of real neural dynamics. Interactions between neurons are modeled with infinitely narrow pulses, or spikes, rather than the more complex dynamics of real synapses. To make these networks biologically more plausible, it has been proposed that they must also account for the finite width of the pulses, which can have a significant impact on the network dynamics. However, the derivation and interpretation of these pulses is contradictory and the impact of the pulse shape on the network dynamics is largely unexplored. Here, I take a comprehensive approach to pulse-coupling in networks of QIF and $\theta$-neurons. I argue that narrow pulses activate voltage-dependent synaptic conductances and show how to implement them in QIF neurons such that their effect can last through the phase after the spike. Using an exact low-dimensional description for networks of globally coupled spiking neurons, I prove for instantaneous interactions that collective oscillations emerge due to an effective coupling through the mean voltage. I analyze the impact of the pulse shape by means of a family of smooth pulse functions with arbitrary finite width and symmetric or asymmetric shapes. For symmetric pulses, the resulting voltage-coupling is not very effective in synchronizing neurons, but pulses that are slightly skewed to the phase after the spike readily generate collective oscillations. The results unveil a voltage-dependent spike synchronization mechanism in neural networks, which is facilitated by pulses of finite width and complementary to traditional synaptic transmission.
Bastian Pietras
2023-04-19T16:58:53Z
http://arxiv.org/abs/2304.09813v2
# Pulse shape and voltage-dependent synchronization in spiking neuron networks

###### Abstract

Pulse-coupled spiking neural networks are a powerful tool to gain mechanistic insights into how neurons self-organize to produce coherent collective behavior. These networks use simple spiking neuron models, such as the \(\theta\)-neuron or the quadratic integrate-and-fire (QIF) neuron, that replicate the essential features of real neural dynamics. Interactions between neurons are modeled with infinitely narrow pulses, or spikes, rather than the more complex dynamics of real synapses. To make these networks biologically more plausible, it has been proposed that they must also account for the finite width of the pulses, which can have a significant impact on the network dynamics. However, the derivation and interpretation of these pulses is contradictory and the impact of the pulse shape on the network dynamics is largely unexplored. Here, I take a comprehensive approach to pulse-coupling in networks of QIF and \(\theta\)-neurons. I argue that narrow pulses activate voltage-dependent synaptic conductances and show how to implement them in QIF neurons such that their effect can last through the phase after the spike. Using an exact low-dimensional description for networks of globally coupled spiking neurons, I prove for instantaneous interactions that collective oscillations emerge due to an effective coupling through the mean voltage. I analyze the impact of the pulse shape by means of a family of smooth pulse functions with arbitrary finite width and symmetric or asymmetric shapes. For symmetric pulses, the resulting voltage-coupling is not very effective in synchronizing neurons, but pulses that are slightly skewed to the phase after the spike readily generate collective oscillations. The results unveil a voltage-dependent spike synchronization mechanism at the heart of emergent collective behavior, which is facilitated by pulses of finite width and complementary to traditional synaptic transmission in spiking neuron networks.

## I Introduction

Self-organization in large neural networks crucially relies on rapid and precise, in short, highly effective neuronal communication. Brisk synaptic interactions allow for emergent collective behavior and can orchestrate neural synchronization, which is believed to be fundamental to cognitive functions and consciousness. A key player in the synaptic transmission process is the spike, as the action potential of a neuron has been nicknamed. As a central information unit of the brain, it has risen to fame and fortune [1, 2, 3]. Spikes are believed to be critical for information processing and coding, and even more so as the basis of communication between neurons. Once the membrane potential of a neuron exceeds some threshold, the soma quickly depolarizes and the neuron "spikes". Straight off, a fast electrical impulse travels along the neuron's axon to the presynaptic knobs, where it triggers various biochemical processes to release neurotransmitters and eventually induce a postsynaptic current in the connected cell [4, 5, 6, 7]. In developing a mechanistic understanding of the collective behavior of large neural networks, incorporating a high degree of biological detail is challenging. For computational and mathematical convenience, chemical synaptic transmission is therefore often reduced to the causal effect of a presynaptic infinitely narrow Dirac \(\delta\)-pulse ("spike") that initiates a postsynaptic response.
In the same spirit, spiking neuron models have been devised to focus upon the subthreshold membrane properties and exclude the mechanisms responsible for generating action potentials (i.e., the voltage-dependent sodium and potassium channels). This tactic has proven useful to understand the information processing capabilities of neurons and to efficiently simulate neural networks with synaptic interactions modeled by \(\delta\)-spikes. On an individual level, the \(\delta\)-spike assumption is probably most successfully caricatured in integrate-and-fire neuron models [8]. Also in biophysically realistic Hodgkin-Huxley-like conductance-based neuron models, the spike generation can be very rapid compared to relatively slow subthreshold integration. The separation of time scales becomes extreme--converging to a \(\delta\)-spike--if neurons are Class 1 excitable and near the onset of firing. Ermentrout and Kopell [9, 10] proved that any such Class 1 excitable neuron can be transformed into a canonical, one-dimensional phase model--the \(\theta\)-neuron. It is one of the simplest spiking neuron models and closely related to the quadratic integrate-and-fire (QIF) neuron. In network models of these spiking neurons, the default description of synaptic interactions is with \(\delta\)-spikes, but a growing number of studies have proposed synaptic transmission via pulses of finite width. This raises the question of how pulses of finite width should be interpreted. They may represent the release of neurotransmitters at the presynaptic site, the conversion of neurotransmitter signals into postsynaptic currents at the postsynaptic site, or a combination of both. Despite widespread use, the nature of these pulses remains largely unclear. Meanwhile, the shape of the pulse is critical for neuronal synchronization and can either facilitate or impede collective oscillations. Currently, however, there is no definitive understanding of how the pulse shape affects collective dynamics. In this paper I revisit the use of pulse-coupling in networks of QIF and \(\theta\)-neurons to provide, first, a clear biological interpretation for pulses of finite width and, second, a comprehensive view on the impact of the pulse shape on the collective dynamics of globally coupled spiking neurons. A better understanding of these fundamental aspects of synaptic transmission will greatly contribute to our understanding of neural network behavior.

### The "pulse-problems" of \(\theta\)- and QIF neurons

QIF and \(\theta\)-neurons are two paradigmatic spiking neuron models. Each one is a canonical model for biologically detailed conductance-based neurons and can be derived from realistic, high-dimensional neuronal dynamics [11]. On the individual level, QIF and \(\theta\)-neurons are closely related; they become equivalent if threshold and reset values of the QIF neuron are taken at infinity (Fig. 1). In network models, however, pulsatile synaptic interactions between QIF neurons are often modeled differently from those between \(\theta\)-neurons. This dichotomy stems in part from the difficulty of deriving canonical network models from networks of synaptically coupled conductance-based neurons. Such a network reduction is more challenging, if at all possible, than a single-neuron reduction [11; 12]. Therefore, pulse-coupled spiking neural networks often lack mathematical rigor, especially when refraining from \(\delta\)-spike interactions (but see Section II.1).
Given the opposing approaches to pulse-coupling in networks of QIF or of \(\theta\)-neurons, it is inevitable to question the biological plausibility of pulses of finite width. Shall those pulses replicate the biochemical processes at the presynaptic site (converting the presynaptic action potential into the release of neurotransmitters), at the postsynaptic site (converting the neurotransmitter signal into a postsynaptic current), or at both sites? Moreover, and to retain the computational advantages of spiking networks, pulses are usually defined as functions of the state variable of the presynaptic neuron. This can be limiting because chemical synaptic transmission is a dynamic process that, in principle, involves both the presynaptic and the postsynaptic neuron. It would thus be consistent to restrict the interpretation of those pulses to the presynaptic site of a synapse, where the pulse reflects the conversion from an action potential into the release of neurotransmitter. The state variables of the QIF and of the \(\theta\)-neuron, however, do not describe realistic voltage traces in the course of an action potential--in contrast to conductance-based neuron models (from which they may have been reduced). It is therefore not clear if the pulse function can actually relate the state variables to voltage-dependent mechanisms that eventually trigger neurotransmitter release. In Sections II.1 and II.2, I will revisit pulsatile interactions between \(\theta\)-neurons and between QIF neurons, and propose biologically plausible interpretations for pulses of finite width with symmetric and asymmetric shapes. Next to a clear and biologically plausible interpretation, the other "pulse-problem" is the largely unexplored effect of the pulse shape on the collective dynamics of pulse-coupled QIF or \(\theta\)-neurons. Pulses of finite width have frequently been used in the literature to describe synaptic interactions especially between \(\theta\)-neurons, but also between QIF neurons. Most of the time, the pulses are assumed symmetric about the neuron's spike time without varying the shape much. The mechanisms by which the pulse shape can affect the network dynamics, promote synchrony, and induce collective oscillations even for instantaneous coupling, remain largely unknown. A few insights, though, can be borrowed from the vast results on coupled phase oscillators--the phase-representation of the \(\theta\)-model literally invites one to invoke phase oscillator theory; nonetheless, direct analogies should be regarded with care (see Section V.1). Traditionally, the Winfree model [13; 14] has paved the way to study synchronization of periodically firing neurons. It pinpoints a pulse-response coupling, where the pulse can be shown to result from the interplay between presynaptic action potential and a synaptic activation function ("voltage-dependent conductance") [15]. Furthermore, the Winfree model has allowed for exploring the effect of the response and pulse functions on synchronization properties of the network [16; 17; 18].

Figure 1: Reduction from (a) a high-dimensional, conductance-based neuron model close to a SNIC bifurcation to (b) the one-dimensional canonical \(\theta\)-neuron model with \(\eta<0\). The trajectory describing the action potential along the limit cycle is shaded in red and compressed around \(\pi\) in the \(\theta\)-neuron. Via the transform \(v=\tan(\theta/2)\), the \(\theta\)-neuron is equivalent to (c) the QIF neuron when reset and peak values are taken at infinity.
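As a minimal numerical illustration of this equivalence, the following Python sketch (all parameter values are arbitrary illustrative choices) integrates the \(\theta\)-neuron dynamics of Eq. (1) below in its tonically spiking regime, maps the trajectory onto the QIF voltage via \(v=\tan(\theta/2)\), and compares the measured firing period with the value \(\pi/\sqrt{\eta}\) obtained analytically from the equivalent QIF equation \(\dot{v}=v^{2}+\eta\):

```python
import numpy as np

# Theta-neuron, Eq. (1):  d(theta)/dt = (1 - cos(theta)) + (1 + cos(theta)) * eta.
# For eta > 0 the phase rotates monotonically; a spike is the crossing of theta = pi.
eta = 0.25                     # excitability (tonic regime); illustrative value
dt, T = 1e-4, 60.0             # Euler step and total time; illustrative values
n = int(T / dt)

theta = np.empty(n)
theta[0] = 0.0
for i in range(n - 1):
    th = theta[i]
    theta[i + 1] = th + dt * ((1.0 - np.cos(th)) + (1.0 + np.cos(th)) * eta)

# Spike times: theta (growing monotonically here) crosses pi + 2*pi*k
spike_idx = np.flatnonzero(np.diff(np.floor((theta - np.pi) / (2.0 * np.pi))) > 0)
period = np.mean(np.diff(spike_idx)) * dt
print(f"measured period {period:.4f} vs pi/sqrt(eta) = {np.pi/np.sqrt(eta):.4f}")

# The same trajectory expressed as a QIF voltage, v = tan(theta/2):
# v sweeps from -infinity (reset) to +infinity (peak) once per period.
v = np.tan(theta / 2.0)
```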
Similarly, coupled active rotators [19; 20] served as a basis to study how the width of (symmetric) pulses affects collective dynamics [21]. In line with the results on the Winfree model, broad pulses were reported to entail collective dynamics that can be different from those generated by narrow pulses [17; 18; 21]; it remains unclear, however, how these results carry over to networks of \(\theta\)-neurons. A crucial tool for distilling the effect of the pulse shape on the collective dynamics of Winfree oscillators and active rotators has been an exact dimensionality reduction first proposed by Ott and Antonsen [22]. Although immensely powerful, the "Ott-Antonsen ansatz" requires the pulse function to be analytically tractable, which may come at the cost of biological realism. The introduction of the Ott-Antonsen ansatz in networks of \(\theta\)-neurons [23] has inspired a plethora of \(\theta\)-neuron network studies using symmetric and broad pulses ever since [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44]. Despite the prevailing uncertainty of their biological interpretation, these pulses nowadays seem well established in the community. Nonetheless, a comprehensive picture of how their width influences collective dynamics is still missing. At the same time, they are not versatile enough, either, to study the effect of pulse asymmetry. In a nutshell, a systematic investigation of how the pulse shape--be it symmetric or asymmetric--affects the collective dynamics of \(\theta\)- or QIF neurons has remained elusive.

### Synopsis & outline

I strive for resolving the "pulse-problems" of \(\theta\)- and QIF neurons: in the first part, I will provide a clear biological interpretation for pulses of finite width; in the second part, I will analyze the impact of the pulse shape on the collective dynamics. As to their biological interpretation, I first show how pulses of small but finite width can be rigorously introduced in networks of \(\theta\)-neurons as the canonical pulse-coupled model for weakly connected Class 1 excitable neurons (Section II.1). These narrow pulses can represent detailed, though only instantaneous, synaptic transmission from pre- to postsynaptic neurons through, e.g., conductance-based synapses. Due to the close connection between \(\theta\)- and QIF neurons, the interpretation of narrow pulses also carries over to pulse-coupling in QIF neurons. Regardless of this interpretation, voltage-dependent pulses \(p(v)\) can directly be introduced in QIF neurons, where they are meant to replicate the transmission process at the presynaptic site. Crucially, the synaptic activation function \(p\) needs to be modified to account for the artificial shape of the QIF's action potential (Section II.2). By capitalizing on the correspondence between the QIF and the \(\theta\)-model, the voltage-dependent pulses \(p(v)\) are conveniently approximated by pulses \(p_{r,\varphi,\psi}(\theta)\) with parameters \(r,\varphi,\psi\) that allow for interpolating between discontinuous \(\delta\)-spikes and continuous pulses of finite width with symmetric or asymmetric shapes (Section II.3).
The family of smooth pulse functions \(p_{r,\varphi,\psi}(\theta)\) is admissible to an exact reduction of globally coupled spiking neurons in the thermodynamic limit [45], which builds on recent advances in coupled oscillator theory [46; 47; 48], and allows for a comprehensive analysis of the impact of the pulse shape on the collective dynamics. The resulting mean-field model exactly describes the collective dynamics in terms of the firing rate \(R\) and the mean voltage \(V\) (Section III). Taking the population average yields an expression of the mean pulse activity \(P_{r,\varphi,\psi}=\langle p_{r,\varphi,\psi}\rangle\) that is fully determined by \(R\) and \(V\). For instantaneous synaptic transmission, the exact mean-field dynamics converges towards an invariant two-dimensional manifold [45], on which the ordinary differential equations for the firing rate and voltage ("RV dynamics") are closed in \(R\) and \(V\) [49] and amenable to mathematical analysis. In Section IV, I study the resulting two-dimensional RV dynamics and analyze how the different pulse parameters--width \(r\), asymmetry \(\varphi\), and shift \(\psi\)--affect the region of collective oscillations. I prove that collective oscillations emerge due to an effective coupling through the mean voltage \(V\) (Section IV.1). This voltage-dependence readily arises for global pulse-coupling as \(P_{r,\varphi,\psi}=P_{r,\varphi,\psi}(R,V)\) explicitly depends on \(V\), except for the limit of \(\delta\)-spikes, \((r,\varphi,\psi)\to(1,0,\pi)\). In other words, pulse-coupling generally facilitates collective oscillations through an effective voltage-coupling, but if neurons interact via \(\delta\)-spikes, the recurrent input no longer depends on the mean voltage and collective oscillations become impossible. Moreover, the pulse shape determines the effectiveness of the pulse-mediated voltage-coupling and, thus, plays a crucial role for the emergence of collective oscillations. For symmetric pulses, collective oscillations are confined to a small parameter region and require unrealistically strong inhibition the narrower the pulse (Section IV.2). Additionally, broad symmetric pulses have a nongeneric effect on the collective dynamics that is not present in narrow pulses, see also [17; 18; 21]. In contrast to symmetric pulses, narrow pulses that are slightly skewed to the phase after the spike readily generate collective oscillations in networks of inhibitory neurons (Section IV.3), whereas pulses that are slightly skewed to the phase before the actual spike generate collective oscillations among excitatory neurons (Section IV.4). Together, the results shed new light on the voltage-dependent spike synchronization mechanism that is typically not captured in traditional mean-field, or firing rate, models [50], but which crucially underlies collective oscillations in neural networks. In the Discussion in Section V, I first review previous approaches to pulse-coupling in networks of spiking neurons, which may have led to misconceptions about the interpretation and the effect of pulses of finite width in networks of \(\theta\)- and QIF neurons (Section V.1). I then revisit the three pulse parameters in more detail and draw connections to delayed synaptic interactions and electrical coupling via gap junctions (Section V.2).
Finally, I return to the question whether instantaneous pulses of finite width can replace more complex synaptic transmission in spiking neuron networks including synaptic kinetics and conductance-based synapses--I argue in the negative (Section V.3). Conclusions, final remarks and an outlook will be given in Section VI. Mathematical details can be found in Appendices A to G.

## II Pulses in spiking neuron networks

### Pulses in the canonical model of weakly connected Class 1 excitable neurons

Despite more than 20 years since their formal derivation from biologically plausible and biophysically detailed neural models, pulse-coupled neural networks are still often regarded as toy models. Yet, a mathematical theorem by Izhikevich [51] states that, under a handful of rather general conditions, a large class of realistic conductance-based neural network models can be reduced to a canonical model of pulse-coupled \(\theta\)-neurons, which exhibits qualitatively the same dynamical properties as all weakly connected networks of Class 1 excitable neurons. Here I show how Izhikevich's result can be refined such that synaptic interactions between \(\theta\)-neurons are not mediated by \(\delta\)-spikes, but by (smooth) localized pulses of finite width. These narrow pulses describe the rapid effect of a presynaptic action potential, triggering a postsynaptic response in the connected neuron, and are a direct consequence of Theorem 1 below. The conditions for Theorem 1 to hold read as follows (see [51] also for a detailed discussion on their biological plausibility):

1. Neurons are Class 1 excitable1, i.e. action potentials can be generated with arbitrarily low frequency, depending on the strength of the applied current. This can be realized in a conductance-based neuron model close to a saddle-node bifurcation on an invariant circle (SNIC bifurcation).
2. Neurons are weakly connected, i.e. the amplitudes of postsynaptic potentials (PSP) are much smaller than the amplitude of an action potential or than the mean excitatory PSP size necessary to discharge a quiescent neuron.
3. Synaptic transmission has an intermediate rate, which is slower than the duration of an action potential, but faster than the interspike period.
4. Synaptic connections between neurons are of conventional type, i.e. axo-dendritic or axo-somatic.
5. Synaptic transmission is negligible when presynaptic neurons are at rest, i.e. spontaneous release of neurotransmitters does not significantly affect spiking of postsynaptic neurons.

Footnote 1: Class 1 (or I) excitable neurons are also known as “Type I membranes” whose neuronal dynamics exhibit “Type I excitability”.

Assumption (i) assures that an individual Class 1 neuron close to a SNIC bifurcation can be reduced to a "theta"-neuron with a phase variable \(\theta\) on the unit circle \(\mathbb{S}^{1}\), whose dynamics is governed by the Ermentrout-Kopell canonical model [9; 10; 52] \[\dot{\theta}=\tfrac{d}{dt}\theta(t)=(1-\cos\theta)+(1+\cos\theta)\eta \tag{1}\] with excitability parameter \(\eta\). The SNIC bifurcation occurs for \(\eta=0\). For \(\eta<0\), the neuron is in an excitable regime as the dynamics (1) exhibits a pair of stable and unstable equilibria. The stable equilibrium represents the neuron's resting state. The unstable equilibrium represents a threshold: when an external input drives the neuron across this threshold, \(\theta\) will move around the circle in the course of the neuron's action potential and approaches the resting state from below. The neuron is said to fire a spike when \(\theta\) crosses \(\pi\). For \(\eta>0\), the two equilibria have disappeared in a saddle-node bifurcation, leaving a limit cycle, and the neuron is in a tonically (periodically) spiking regime.
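For concreteness, the two equilibria can be written in closed form: setting \(\dot{\theta}=0\) in Eq. (1) gives \(\cos\theta^{*}=(1+\eta)/(1-\eta)\), i.e. \(\theta^{*}_{\pm}=\pm\arccos[(1+\eta)/(1-\eta)]\) for \(\eta<0\); under \(v=\tan(\theta/2)\) these map onto the fixed points \(v^{*}_{\pm}=\pm\sqrt{-\eta}\) of the QIF equation \(\dot{v}=v^{2}+\eta\). A few lines of Python (the value of \(\eta\) is an arbitrary illustration) confirm this correspondence:

```python
import numpy as np

eta = -0.5                                   # excitable regime, eta < 0; illustrative value

# Equilibria of Eq. (1): (1 - cos t) + (1 + cos t)*eta = 0  =>  cos t = (1+eta)/(1-eta)
theta_star = np.arccos((1.0 + eta) / (1.0 - eta))
rhs = lambda t: (1.0 - np.cos(t)) + (1.0 + np.cos(t)) * eta
assert abs(rhs(+theta_star)) < 1e-12         # unstable equilibrium (firing threshold)
assert abs(rhs(-theta_star)) < 1e-12         # stable equilibrium (resting state)

# Correspondence with the QIF fixed points v* = +-sqrt(-eta) of dv/dt = v^2 + eta:
assert np.isclose(np.tan(+theta_star / 2.0), +np.sqrt(-eta))
assert np.isclose(np.tan(-theta_star / 2.0), -np.sqrt(-eta))
```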
The reduction from a general conductance-based Class 1 excitable neuron close to a SNIC bifurcation to the \(\theta\)-neuron (1) is sketched in Fig. 1 for \(\eta<0\): close to the saddle-node bifurcation, a small neighborhood of the resting potential is blown up and the trajectory describing the neuron's action potential along the limit cycle is compressed to an open set around \(\pi\) (shaded in red). When such a Class 1 neuron is embedded in a network of interacting neurons, the reduction to a canonical network model becomes considerably more involved because any transformation that aims at simplifying the dynamics of an individual neuron immediately affects the form of the other neurons through the coupling terms [11; 12]; the reduction of a neural network has thus to occur for all neurons simultaneously. Assumptions (ii) to (v) guarantee that one can derive a canonical network model for pulse-coupled Class 1 neurons as follows:

**Theorem 1**.: [51]_. Consider an arbitrary weakly connected neural network of the form_ \[\dot{X}_{j}=F_{j}(X_{j},\lambda)+\varepsilon G_{j}(X_{1},\ldots,X_{N};\lambda,\varepsilon) \tag{2}\] _satisfying that each (uncoupled) equation \(\dot{X}_{j}=F_{j}(X_{j},\lambda)\) undergoes a SNIC bifurcation for some \(\lambda=\lambda_{0}\), that each function \(G_{j}\) has the pair-wise connected form_ \[G_{j}(X_{1},\ldots,X_{N};\lambda_{0},0)=\sum_{k=1}^{N}G_{jk}(X_{j},X_{k})\] _and each \(G_{jk}(X_{j},X_{k})=0\) for \(X_{k}\) from some open neighborhood of the saddle-node bifurcation point. Then, there is \(\varepsilon_{0}>0\) such that for all \(\varepsilon<\varepsilon_{0}\) and all \(\lambda=\lambda_{0}+\mathcal{O}(\varepsilon^{2})\) there is a piece-wise continuous transformation that maps solutions of (2) to those of the canonical network model_ \[\begin{split}\theta^{\prime}_{j}=&(1-\cos\theta_{j})+(1+\cos\theta_{j})\eta_{j}\\ &+\sum_{k=1}^{N}w_{jk}(\theta_{j})\delta(\theta_{k}-\pi)+\mathcal{O}(\sqrt{\varepsilon}\ln\varepsilon)\end{split} \tag{3}\] _where \({}^{\prime}=d/d\tau\), \(\tau=\varepsilon t\) is slow time, \(\delta\) is the Dirac delta function, and each function \(w_{jk}\) has the form_ \[w_{jk}(\theta_{j})=2\arctan\left(\tan\frac{\theta_{j}}{2}+s_{jk}\right)-\theta_{j} \tag{4}\] _with constants \(s_{jk}\) proportional to \(|G_{jk}|\)._

In the canonical pulse-coupled network Eq. (3), individual \(\theta\)-neurons are characterized by the parameter \(\eta_{j}\) that is determined through \(F_{j}\). The function \(F_{j}\) describes the isolated dynamics of the underlying neuronal model, which is given, e.g., in the form of a conductance-based Hodgkin-Huxley-like neuron. For two synaptically connected neurons \(k\) and \(j\), interactions arise according to Assumption (v) only when the presynaptic neuron, say neuron \(k\), fires a spike. Just then, its phase variable \(\theta_{k}\) crosses \(\pi\). At the same instant, the phase \(\theta_{j}\) of postsynaptic neuron \(j\) is incremented by an amount \(w_{jk}(\theta_{j})\) which depends on neuron \(j\)'s own state; the function \(w_{jk}(\theta)\) can thus be understood as the neuron's phase response curve (PRC). If the \(|s_{jk}|\) are small, one can linearize Eq.
(4) and distill the infinitesimal PRC of the \(\theta\)-neuron from \(w_{jk}(\theta_{j})=(1+\cos\theta_{j})s_{jk}+\mathcal{O}(s_{jk}^{2})\), so that Eq. (3) reduces to \[\theta^{\prime}_{j}=(1-\cos\theta_{j})+(1+\cos\theta_{j})\big{[}\eta_{j}+\sum_{k=1}^{N}s_{jk}\delta(\theta_{k}-\pi)\big{]}+\mathcal{O}(\sqrt{\varepsilon}\ln\varepsilon)+\mathcal{O}(s^{2})\;. \tag{3\({}^{\prime}\)}\] In Eq. (3\({}^{\prime}\)), the second remainder \(\mathcal{O}(s^{2})\) represents higher-order coupling terms and the first remainder \(\mathcal{O}(\sqrt{\varepsilon}\ln\varepsilon)\) smooths the pulses \(\delta(\theta_{k}-\pi)\) emitted by the presynaptic neurons \(k\neq j\). In [11] it was shown that the event of a presynaptic action potential has a small but finite duration lasting \(4\sqrt{\varepsilon}\) units of slow time (when \(\theta_{k}\) crosses a \(2\sqrt{\varepsilon}\)-neighborhood of \(\pi\)). Assumption (ii) requires that \(\varepsilon\ll 1\) is small, so that the approximation of an emitted pulse by the Dirac \(\delta\)-function seems reasonable, cf. [Proposition 8.12, 11]. By the same token, however, it can equally be justified to introduce a pulse function \(p_{jk}(\theta_{k})\) of small yet finite width and rewrite Eq. (3\({}^{\prime}\)) as \[\theta^{\prime}_{j}=(1-\cos\theta_{j})+(1+\cos\theta_{j})\big{[}\eta_{j}+\sum_{k=1}^{N}p_{jk}(\theta_{k})\big{]}, \tag{3\({}^{*}\)}\] where I neglected terms of order \(\mathcal{O}(s^{2})\). The pulse function \(p_{jk}(\theta_{k})\) in Eq. (3\({}^{*}\)) is a smoothed \(\delta\)-pulse of (weak) strength \(s_{jk}\) that satisfies \(p_{jk}(\theta_{k})=0\) if \(|\theta_{k}-\pi|>2\sqrt{\varepsilon}\), see Fig. 2. Specifying \(p_{jk}\) any further is difficult and may become ambiguous because the pulse depends nonlinearly on the shape of the presynaptic action potential (governed by \(F_{j}\)) in interplay with the coupling function \(G_{jk}\). For example, the pulse \(p_{jk}\) that postsynaptic neuron \(j\) receives from a presynaptic neuron \(k\neq j\) may describe a conductance-based synapse in the underlying model Eq. (2) of the form \[G_{jk}(X_{j},X_{k})=-g_{\text{syn}}(X_{k},t)\big{[}X_{j,v}(t)-E_{\text{syn}}\big{]}; \tag{5}\] here, \(X_{j,v}\) denotes the voltage-component of vector \(X_{j}\), \(E_{\text{syn}}\) is a reversal potential, and \(g_{\text{syn}}(X_{k},t)\geq 0\) denotes the synaptic conductance that is activated by the action potential of neuron \(k\). As long as the temporal dynamics of \(g_{\text{syn}}(t)\) is short and does not violate Assumption (iii), Eq. (3\({}^{*}\)) is a valid description for the smooth pulsatile synaptic transmission resulting from Eq. (5). Predicting the actual shape and exact timing of the pulse \(p_{jk}(\theta_{k})\), however, is laborious. This uncertainty about the pulse shape \(p_{jk}\) leaves room, under the given constraints, to define a variety of pulse functions \(p\) in Eq. (3\({}^{*}\)) that can, but need not, be symmetric about the spiking event when \(\theta_{k}=\pi\); they can also be asymmetric, skewed and/or shifted. When modeling a conductance-based synapse (5) proportional to \((X_{j,v}-E_{\text{syn}})\), the resulting pulse function may even change signs (depending on the value of \(E_{\text{syn}}\)). In this manuscript, however, I will only focus on non-negative pulses \(p_{jk}(\theta)=s_{jk}p(\theta)\) with \(p(\theta)\geq 0\). Likewise, I will include no habituation, no delay nor synaptic kinetics, nor any synaptic fatigue. The point here is to illustrate some of the consequences of the pulse function \(p(\theta)\) on the network's collective behavior.
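To make the canonical network Eq. (3\({}^{*}\)) concrete, the sketch below simulates a small population with uniform weights \(s_{jk}=J/N\) and a generic smooth, narrow, non-negative bump centered at the spike phase \(\theta=\pi\), normalized so that \(\int_{0}^{2\pi}p\,d\theta=2\pi\); the von Mises pulse shape and all parameter values are illustrative stand-ins, not the specific family introduced in Section II.3:

```python
import numpy as np
from scipy.special import i0   # modified Bessel function, normalizes the von Mises bump

rng = np.random.default_rng(0)
N, J = 200, 1.5                          # network size and coupling strength; illustrative
dt, T = 1e-3, 50.0
eta = 0.2 + 0.05 * rng.standard_cauchy(N)   # heterogeneous excitabilities; illustrative

def pulse(theta, kappa=50.0):
    """Smooth narrow bump at theta = pi with integral 2*pi over one cycle."""
    return np.exp(kappa * np.cos(theta - np.pi)) / i0(kappa)

theta = rng.uniform(-np.pi, np.pi, N)
for _ in range(int(T / dt)):
    drive = eta + (J / N) * pulse(theta).sum()           # recurrent input of Eq. (3*)
    theta = theta + dt * ((1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * drive)
    theta = np.mod(theta + np.pi, 2.0 * np.pi) - np.pi   # keep phases on the circle
```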
Figure 2: Sketch of (a) the coupling function \(G_{jk}\) of the conductance-based neuron model satisfying Assumption (v) and of (b) a smooth pulse function \(p_{jk}\) in the pulse-coupled network Eq. (3\({}^{*}\)) of \(\theta\)-neurons that can be derived via Theorem 1 from weakly coupled Class 1 neurons with state variables \(X_{j}\in\mathbb{R}^{n}\) and resting potentials \(|X_{j}|=0\).

_Remark 1_.: The reasoning above allows for a rigorous interpretation of smooth localized pulses in networks of \(\theta\)-neurons, which convert presynaptic action potentials into postsynaptic responses between weakly interacting Class 1 neurons. Already with narrow pulse-coupling, the \(\theta\)-neuron network can exhibit rich collective behavior (Section IV) with important consequences for the neurocomputational dynamics of more complex, conductance-based neuron network models. Towards a thorough understanding, it seems reasonable to also study pulse-coupled networks of \(\theta\)-neurons with arbitrary pulses beyond the limits imposed by the derivation of the canonical model Eq. (3\({}^{*}\)), see also Section V.3.

### Voltage-dependent pulses of QIF neurons

Equation (3\({}^{*}\)) describes the dynamics of a network of \(\theta\)-neurons with instantaneous pulse-coupling, which has been rigorously derived from a universal class of weakly connected Class 1 excitable neurons. Assuming weak interactions, \(|p_{jk}|=\mathcal{O}(s_{jk})\), and homogeneous stereotypical action potential shapes, I now set \(s_{jk}=J/N\) for all \(j,k=1,\ldots,N\), so that \(p_{jk}(\theta_{k})=(J/N)p(\theta_{k})\) with a general pulse function \(p\) and coupling strength \(J\in\mathbb{R}\) re-scaled with respect to the network size \(N\). By identifying \(v_{j}=\tan(\theta_{j}/2)\), the \(\theta\)-neuron becomes equivalent to the QIF neuron (Fig. 1b and c) and Eq. (3\({}^{*}\)) can be transformed into a network model of QIF neurons, whose voltage variables \(v_{j}\) follow the subthreshold dynamics \[\frac{d}{d\tau}v_{j}=v_{j}^{2}+\eta_{j}+\frac{J}{N}\sum_{k=1}^{N}p_{k}(\tau). \tag{6}\] The pulses \(p_{k}\) received by postsynaptic neuron \(j\), \[p_{k}(\tau)=p\big{(}\theta_{k}(\tau)\big{)}=p\big{(}2\arctan v_{k}(\tau)\big{)}, \tag{7}\] are then implicitly defined in terms of the presynaptic voltage \(v_{k}\) through their corresponding \(\theta\)-phase. Because of the connection with the \(\theta\)-model (3\({}^{*}\)), the pulses \(p_{k}\) may already represent complex synaptic transmission through, e.g., conductance-based synapses of the form \(I_{\text{syn}}=g_{\text{syn}}(v_{k},t)[E_{\text{syn}}-v_{j}(t)]\), cf. Eq. (5). This pulse-interpretation becomes rigorous due to the exact correspondence between QIF and \(\theta\)-neurons when the QIF dynamics (6) is equipped with a fire-and-reset rule that takes peak and reset potentials at infinity (Fig. 1). There are, however, alternative biologically plausible interpretations for pulses of finite width between interacting QIF neurons because the equivalence with the \(\theta\)-model is not the only raison d'être of the QIF model. For example, the QIF dynamics can be obtained from conductance-based neuron models with a parabolic-like voltage nullcline, which is a general feature of Hodgkin-Huxley-like neurons with positive feedback (so-called amplifying) ionic currents, through a quadratization procedure [53; 54].
Moreover, the subthreshold QIF dynamics (6) is the topological normal form of a saddle-node (SN, or fold) bifurcation [55]; the QIF model thus describes any neuronal model close to a SN bifurcation [56; 57]. Equipped with finite threshold and reset values, the QIF model is also the simplest spiking neuron model with a spike generation mechanism (i.e., a regenerative upstroke of the membrane potential), with a soft (dynamic) threshold and a spike latency [58]. The approximation with infinite threshold and reset values (as considered throughout this paper) then allows for valuable insights into the collective dynamics of QIF neurons as typical Class 1 excitable systems, though not necessarily claiming an explicit biophysical interpretation of the underlying neuronal dynamics. In any case, the QIF model definition (6) in terms of voltage variables \(v_{j}\) calls for an alternative interpretation of the pulses (7) that avoids the derivation from the canonical pulse-coupled \(\theta\)-model (3\({}^{*}\)).

_Remark 2_.: The QIF dynamics (6) were obtained through the forward transformation \(v_{j}=\tan(\theta_{j}/2)\) from Eq. (3\({}^{*}\)), where the \(\theta\)-neuron represented the canonical model for a Class 1 excitable neuron close to a SNIC bifurcation. As a matter of course, one can also start with interacting QIF neurons, which do not necessarily represent the canonical model for Class 1 neurons, and obtain a network model of \(\theta\)-neurons as in Eq. (3\({}^{*}\)) through the inverse transformation \(\theta_{j}=2\arctan(v_{j})\), as has been proposed in [59; 60].

How can one introduce biologically plausible pulses of finite width between QIF neurons solely taking the QIF dynamics (6) into account? Natural candidates are voltage-dependent pulses \(p_{k}=p(v_{k})\) that can be identified with the synaptic conductance \(g_{\text{syn}}(v_{k},t)\), where the presynaptic voltage acts instantaneously on \(g_{\text{syn}}\), i.e. \(g_{\text{syn}}(v_{k},t)=g_{\text{syn}}(v_{k}(t))\). Such an instantaneous relation has, e.g., been manifested through the voltage-gated calcium influx at the presynaptic axon during an action potential [61] or through the sigmoidal relationship between neurotransmitter concentration and presynaptic voltage [4]. One may additionally consider first- or higher-order synaptic kinetics of \(g_{\text{syn}}(t)\) in response to a presynaptic pulse \(p(v_{k})\); for simplicity, however, I will focus on instantaneous interactions (but see Section V.3). In conductance-based neuron models, instantaneous voltage-dependent pulses during synaptic transmission have frequently been described by sigmoidal functions of logistic2 form [4; 62; 63; 64; 65; 66; 67] \[s_{\infty}(v)=1/\{1+\exp[-(v-v_{s})/k_{s}]\}, \tag{8}\] with presynaptic voltage \(v\) and steepness and threshold parameters \(k_{s}\) and \(v_{s}\).

Footnote 2: An alternative formulation of Eq. (8) uses the hyperbolic tangent, \(t_{\infty}(v)=\{1+\tanh[(v-v_{t})/k_{t}]\}/2\), which coincides with \(s_{\infty}(v)\) when choosing \(v_{s}=v_{t}\) and \(k_{s}=k_{t}/2\).

Exemplary pulse profiles for conductance-based Hodgkin-Huxley-like neurons are shown in Fig. 3; see Appendix A for details on the employed Wang-Buzsaki model. The pulse shape depends on the width of the action potential and on the slope \(k_{s}\) of the sigmoidal pulse function \(s_{\infty}(v)\). Narrow action potentials and hard thresholds generate pulses that are almost symmetric about the spike time \(t=0\), see the green pulse in Fig. 3(c1).
Typically, however, the pulses exhibit a fast rise shortly before and a slower decay after the spike (violet pulse in Fig. 3c2). By comparison, and as mentioned in Section I.1, QIF neurons have stereotyped and rather artificial action potentials (Fig. 4a), so that emitted pulses using the sigmoidal pulse function \(s_{\infty}(v)\) appear degenerate as they terminate sharply at the time of the spike (Fig. 4c1). To account for the hardwired artificial shape of the QIF's action potential, one can extend the pulses ad hoc and replace \(v\) by its absolute value \(|v|\), see the violet pulse function in Fig. 4(b2). This tactic generates pulses \(p_{\parallel}(v):=s_{\infty}(|v|)\) that are symmetric about the spike time \(t=0\) (violet pulse in Fig. 4c2). Such a symmetric pulse may be a reasonable approximation of pulses in conductance-based neuron models with narrow action potentials and a sigmoidal function with a hard threshold \(k_{s}\to 0\) (green pulse in Fig. 3c1). For general asymmetric pulses as in Fig. 3(c), however, one has to come up with a different solution that breaks the symmetry of \(p_{\parallel}(v)\) with respect to \(v=0\). For instance, one can combine two sigmoidal functions as \[p_{\tilde{g}}(v)=\frac{1}{1+e^{(v-v_{s-})/k_{s-}}}+\frac{1}{1+e^{-(v-v_{s+})/k_{s+}}} \tag{9}\] with thresholds \(v_{s+}>0,v_{s-}<0\) and possibly different steepness parameters \(k_{s\pm}>0\). For \(k_{s+}<k_{s-}\), the QIF neurons emit asymmetric pulses with a steep upstroke and a moderate downstroke (cf. green pulse function in Fig. 4b2 and the resulting pulse \(p_{\tilde{g}}(v(t))\) in Fig. 4c2).

Figure 3: Voltage-dependent pulses of the conductance-based Wang-Buzsaki (WB) neuron. (a) Voltage traces of a periodically spiking WB neuron with narrow (black) or broad (red) action potentials (APs), both have frequency \(f\approx 1/(24\text{ms})=41.7\)Hz. (b) Sigmoidal pulse function (8) with hard (green) and soft (blue) thresholds. The APs in (a) are transformed via (b) into pulses: (c1) a narrow AP and hard threshold result in a square-like pulse (green) that is symmetric about the spike time \(t=0\), whereas a soft threshold smooths and skews the pulse (blue); (c2) similar to (c1) but for the wide AP. Parameters of pulses in (b): \(v_{s}=2\)mV, \(k_{s}=5\) (green) as in [4]; \(v_{s}=2\)mV, \(k_{s}=14\) (blue).

Figure 4: Voltage-dependent pulses of the QIF neuron. (a) Voltage trace of a periodically spiking QIF neuron with frequency \(f\approx 41.7\)Hz. (b) Pulse functions consisting of (b1) one sigmoidal function as in Fig. 3(b) or of (b2) a combination of two sigmoidals as in Eq. (9). (c1/c2) The action potential of the QIF neuron is transformed into the pulses using the pulse functions in (b1/b2). Pulse parameters: (b1) Eq. (8) with \(v_{s}=80\)mV, \(k_{s}=10\) (blue) and \(k_{s}=20\) (red). (b2) \(p_{\parallel}(v)\) with \(v_{s}=80\)mV, \(k_{s}=20\) (violet); \(p_{\tilde{g}}(v)\) with \(v_{s-}=-50\)mV, \(k_{s-}=12.5\), \(v_{s+}=80\)mV, \(k_{s+}=10\) (green).

Figure 5: Smooth voltage-dependent pulses of (a) a periodically spiking QIF neuron with frequency \(f\approx 41.7\)Hz. (b) The pulse functions (9) shown in Fig. 4(b2) can be approximated around the resting potential by the phase-dependent pulse function (10) (red and blue traces; rescaled for convenience). The resulting smooth pulses in (c) capture the nature of symmetric (c1) and asymmetric (c2) pulses quite accurately. Parameters for \(p_{r,\varphi,\psi}(2\arctan v)/c\) are: (b1) \(r=0.985,\varphi=0,\psi=\pi,c=115\); (b2) \(r=0.985,\varphi=\pi/12,\psi=\pi,c=35\).
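The pulse-function variants compared in Figs. 3 and 4 are easy to reproduce; the sketch below evaluates Eq. (8), its symmetrized version \(p_{\parallel}(v)=s_{\infty}(|v|)\), and the double sigmoid \(p_{\tilde{g}}(v)\) of Eq. (9) along an exact QIF trajectory between two resets (parameter values follow the Fig. 4 caption; the closed-form trajectory \(v(t)=\sqrt{\eta}\tan(\sqrt{\eta}\,t)\) assumes a constant input \(\eta>0\) and is an illustrative choice):

```python
import numpy as np

def s_inf(v, v_s=80.0, k_s=20.0):
    """Sigmoidal synaptic activation, Eq. (8)."""
    return 1.0 / (1.0 + np.exp(-(v - v_s) / k_s))

def p_par(v, v_s=80.0, k_s=20.0):
    """Symmetrized pulse p_par(v) = s_inf(|v|), cf. Fig. 4(b2), violet."""
    return s_inf(np.abs(v), v_s, k_s)

def p_g(v, v_sm=-50.0, k_sm=12.5, v_sp=80.0, k_sp=10.0):
    """Double sigmoid of Eq. (9), cf. Fig. 4(b2), green: steep upstroke, slower decay."""
    return 1.0 / (1.0 + np.exp((v - v_sm) / k_sm)) + 1.0 / (1.0 + np.exp(-(v - v_sp) / k_sp))

# Exact QIF trajectory dv/dt = v^2 + eta between reset (-inf) and peak (+inf)
eta = 4.0
t = np.linspace(-0.499, 0.499, 4001) * np.pi / np.sqrt(eta)   # one interspike interval
v = np.sqrt(eta) * np.tan(np.sqrt(eta) * t)

one_sided, symmetric, skewed = s_inf(v), p_par(v), p_g(v)     # pulse time courses
```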
### An accessible family of pulse functions

In anticipation of an exact low-dimensional description for the collective dynamics of QIF neurons, it is advantageous to refrain from the explicit voltage-dependence of the pulses (9) and to formulate them in the corresponding \(\theta\)-phase description [45]. This allows not only for a directly comparable treatment of networks of \(\theta\)- and QIF neurons, but it also overarches the opposing approaches to pulse-coupling as introduced in Sections II.1 and II.2 according to Remarks 1 and 2, see also Section V.3. By substituting \(v=\tan(\theta/2)\) in Eq. (9), the voltage-dependent pulse \(p_{\tilde{g}}(v)\) becomes phase-dependent. However, the pulse \(p_{\tilde{g}}(\tan(\theta/2))\) is only implicitly defined in \(\theta\), which hampers analytic tractability. As an alternative, I propose to use pulses \(p(\theta)\) of the form (7) that start from the \(\theta\)-phase description and a priori assume an implicit voltage-dependence; in special cases the implicit voltage-dependence can actually become explicit again, see, e.g., Eq. (11) below. Importantly, the explicit pulses (9) can be approximated quite accurately around the resting potential by pulses \(p(\theta)=p(2\arctan v)\) that employ the \(\theta\)-phase transformation of the QIF neuron (Fig. 5b). Specifically, I propose the following family of accessible, analytically favorable, smooth pulse functions \[p_{r,\varphi,\psi}(\theta)=1+\frac{1-r^{2}}{1-r\cos\varphi}\frac{\cos(\theta-\psi-\varphi)-r\cos(\varphi)}{1-2r\cos(\theta-\psi)+r^{2}}\, \tag{10}\] which provide adequate approximations of the QIF pulses that can be symmetric about the spike time (Fig. 5c1) or asymmetric with a steep upstroke and a more moderate downstroke after the spike (Fig. 5c2). The shape of the pulses (10) is determined by three parameters \(r\in[-1,1],\varphi\in[-\pi,\pi)\) and \(\psi\in[0,2\pi)\): \(r<1\) tunes the width of the pulse, \(\varphi\neq 0\) tunes its asymmetry, and \(\psi>\pi\) (\(\psi<\pi\)) shifts the pulse to the right (left) of the spike (at \(\theta=\pi\)). Crucially, these pulses admit an exact, low-dimensional description for the collective dynamics of globally coupled spiking neurons in terms of their firing rate and mean voltage (Section III), and thus allow for a comprehensive mathematical analysis of how the pulse shape affects the synchronization properties of the network (Section IV). To provide more details on the pulse functions (10), first note that for \(\varphi=0\) and \(\psi=\pi\), Eq. (10) reduces to a "Rectified-Poisson" (RP) pulse \(p_{\mathrm{RP},r}(\theta):=p_{r,0,\pi}(\theta)\) with sharpness parameter \(r\in(-1,1)\) [18]. Via the transform \(\theta=2\arctan(v)\), one obtains the explicit voltage-dependence of the RP pulse \[p_{\mathrm{RP},r}(v)=\frac{2(1-r)v^{2}}{(1+r)^{2}+(1-r)^{2}v^{2}}\, \tag{11}\] see Fig. 5(b1,c1) for an example. The formulation in terms of the voltage \(v\) readily allows for using the pulses interchangeably in networks of \(\theta\)- and QIF neurons. In the same manner, one can also obtain the general voltage description of Eq. (10); as it is more convoluted, I refrain from presenting it here. The RP pulse (11) is a smooth generalization of the discontinuous \(\delta\)-spike: when a neuron spikes and its voltage \(v\) diverges, then \(\lim_{v\to\infty}p_{\mathrm{RP},r}(v)=2/(1-r)\).
Thus, in the limit \(r\to 1\), \(p_{\mathrm{RP},1}(v)\) becomes a Dirac \(\delta\)-pulse and is zero except when \(v\) diverges. On the other hand, for \(r\to-1\), \(p_{\mathrm{RP},-1}(v)=1\) is flat and the pulse is always "on", independent of the neuron's actual state. In the case \(r=0\), the RP pulse reads \(p_{\mathrm{RP},0}(v)=2v^{2}/(1+v^{2})\), which corresponds to a cosine-pulse in the \(\theta\)-phase description, \(p_{\mathrm{RP},0}(\theta)=(1-\cos\theta)\). Traditionally, cosine-pulses have been generalized as Ariaratnam-Strogatz (AS) pulses [16] \[p_{\mathrm{AS},n}(\theta)=a_{n}(1-\cos\theta)^{n}, \tag{12}\] where \(a_{n}=2^{n}(n!)^{2}/(2n)!\) is a normalization constant and \(n\in\mathbb{N}\) a sharpness parameter: for \(n=1\) one finds \(p_{\mathrm{AS},1}(\theta)=p_{\mathrm{RP},0}(\theta)\); for \(n\to\infty\), the AS pulse (12) converges to a Dirac \(\delta\)-pulse. AS pulses have widely been used in network models of \(\theta\)-neurons [23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69], but suffer from numerical and also analytical difficulties when the pulses become more and more localized, \(n\gg 1\). While RP pulses (11) share the properties of AS pulses--they are symmetric about the pulse peak at \(\theta=\pi\) and vanish at \(\theta=v=0\)--RP pulses overcome the numerical and analytical shortcomings of the AS pulse (12), as will become clear below. In addition, the RP pulses can be readily extended to pulses that are skewed and/or whose peak is shifted away from \(\theta=\pi\) (where \(v=\tan(\theta/2)\) diverges) by introducing the asymmetry and shift parameters \(\varphi\neq 0\) and \(\psi\neq\pi\) in Eq. (10). Pulses of the form (10) satisfy some convenient analytic properties as they correspond to a rescaled Kato-Jones distribution [70]. The Kato-Jones distribution is a four-parameter family of unimodal distributions on the circle, whose general form reads \[p_{KJ}(\theta)=1+2\mathrm{Re}\Big{\{}C\sum_{n=1}^{\infty}(\xi e^{-i\theta})^{n}\Big{\}} \tag{13}\] with the complex-valued parameters \(C=ae^{i\varphi}\) and \(\xi=re^{i\psi}\in\mathbb{C}\). The Kato-Jones distribution (13) generalizes the wrapped Cauchy distribution, which is obtained for \(C\equiv 1\). One obtains Eq. (10) when setting \(a=(1-r^{2})/(2r)/(1-r\cos\varphi)\geq 0\). This choice guarantees that the pulse is always non-negative, \(p_{r,\varphi,\psi}(\theta)\geq 0\), and vanishes at least once. Alternatively, one could reduce the number of parameters by enforcing Eq. (13) to vanish always at \(\theta=0\), which is in line with Theorem 1 and may entail pulses that change signs. In this manuscript, however, I focus on pulses \(p(v)\geq 0\) so that the distinction whether the pulse-coupling is excitatory or inhibitory is uniquely determined by (the sign of) the coupling strength \(J\). Taken together, the proposed family of pulses (10) satisfies the following properties: \(p_{r,\varphi,\psi}(\theta)\) is \(2\pi\)-periodic, non-negative, unimodal, vanishes at least at one point, and has normalized area in the course of an action potential, \(\int_{0}^{2\pi}p_{r,\varphi,\psi}(\theta)d\theta=2\pi\). Moreover, the pulses \(p_{r,\varphi,\psi}(\theta)\) are smooth, except for the limit \(r\to 1\), in which they converge to a Dirac \(\delta\)-pulse, \[p_{\delta,\psi}(\theta):=\lim_{r\to 1}p_{r,\varphi,\psi}(\theta)=2\pi\delta(\theta-\psi). \tag{14}\]
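These properties are straightforward to verify numerically; the following sketch checks the normalization \(\int_{0}^{2\pi}p_{r,\varphi,\psi}\,d\theta=2\pi\), the reduction of Eq. (10) to the RP pulse (11) for \(\varphi=0,\psi=\pi\), and the divergence of the peak value as \(2/(1-r)\) for \(r\to 1\) (the parameter triples below are arbitrary test values):

```python
import numpy as np

def p_family(theta, r, phi, psi):
    """Pulse family of Eq. (10)."""
    num = np.cos(theta - psi - phi) - r * np.cos(phi)
    den = 1.0 - 2.0 * r * np.cos(theta - psi) + r**2
    return 1.0 + (1.0 - r**2) / (1.0 - r * np.cos(phi)) * num / den

theta = np.linspace(0.0, 2.0 * np.pi, 200_000, endpoint=False)
for r, phi, psi in [(0.5, 0.0, np.pi), (0.9, np.pi / 12, np.pi), (0.99, -0.3, 2.5)]:
    area = p_family(theta, r, phi, psi).mean() * 2.0 * np.pi   # periodic quadrature
    assert np.isclose(area, 2.0 * np.pi, rtol=1e-6)

# phi = 0, psi = pi: identical to the RP pulse (11) under theta = 2*arctan(v)
r, v = 0.7, np.linspace(-30.0, 30.0, 1001)
p_rp = 2.0 * (1.0 - r) * v**2 / ((1.0 + r) ** 2 + (1.0 - r) ** 2 * v**2)
assert np.allclose(p_family(2.0 * np.arctan(v), r, 0.0, np.pi), p_rp)

# r -> 1: the pulse concentrates at theta = psi with peak value 2/(1 - r)
print(p_family(np.pi, 0.999, 0.0, np.pi), 2.0 / (1.0 - 0.999))   # both 2000
```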
The phase \(\theta=\psi\), at which the \(\delta\)-pulse is emitted, can be linked to a "threshold" voltage \(v_{\rm thr}=\tan(\psi/2)\) that, according to Ermentrout, "accounts for the possibility that the synaptic conductance begins before the presynaptic voltage reaches its maximal value" [10], cf. also [71]. If \(\psi>\pi\), then \(v_{\rm thr}<0\) and the \(\delta\)-pulse is emitted after the presynaptic voltage has reached its maximal value \(v_{p}\) and the neuron is recovering from the reset after its spike.

## III Exact Collective Dynamics of Globally Pulse-Coupled QIF Neurons

To determine the effect of the pulse shape on the collective network dynamics, I will make use of a recently proposed exact reduction for globally coupled spiking neurons. I present the theory for networks of QIF neurons and refer to Appendix B for the corresponding \(\theta\)-neuron dynamics. Building on Eq. (6), I now consider the membrane potentials \(v_{j}\) of QIF neurons \(j=1,\ldots,N\), that follow the subthreshold dynamics [10, 58, 72] \[\tau_{m}\dot{v}_{j}=v_{j}^{2}+I_{0}+I_{\rm syn}+I_{j}\;. \tag{15a}\] The membrane time constant \(\tau_{m}\) is typically of the order \(\tau_{m}=10\,\text{ms}\), \(I_{0}\) is a global input common to all neurons, \(I_{\rm syn}\) a global recurrent synaptic input to be defined below, and \(I_{j}\) describes neuron-specific, independent inputs \[I_{j}(t)=\gamma[c\eta_{j}+(1-c)\xi_{j}(t)] \tag{15b}\] comprising heterogeneous and noisy inputs; \(c\gamma\) determines the degree of heterogeneity and \((1-c)\gamma\) is the noise intensity. Quenched heterogeneity \(\eta_{j}\) (as in Eq. (6)) is sampled from a normalized Cauchy-Lorentz distribution. \(\xi_{j}(t)\) describes independent Cauchy white noise with \(\langle\xi_{j}(t)\rangle_{t}=0\) and \(\langle\xi_{j}(t)\xi_{k}(s)\rangle_{t}=\delta_{j,k}\delta(t-s)\). The parameter \(c\in[0,1]\) in Eq. (15b) weights the relative deterministic and stochastic contributions to \(I_{j}\) on the microscopic level, but does not have an impact on the macroscopic dynamics [45, 73]. The subthreshold dynamics (15) are complemented by a fire-and-reset mechanism: upon reaching a threshold \(v_{p}\), the voltage \(v_{j}\) is reset to the potential \(v_{r}\) and neuron \(j\) is said to elicit a spike. Spike times \(T_{j}\) of neuron \(j\) are defined implicitly by \(v_{j}(T_{j})=v_{p}\). As the quadratic term in Eq. (15) causes the voltage to diverge in finite time, I consider \(v_{p}=-v_{r}=\infty\), so that the QIF neuron is equivalent to the \(\theta\)-model [10] when identifying the voltage \(v_{j}\) with a phase \(\theta_{j}\) via the transformation \(v_{j}=\tan(\theta_{j}/2)\), see Fig. 1(b,c); neuron \(j\) spikes when \(\theta_{j}\) crosses \(\pi\) with positive speed. The spike times \(T_{j}^{k=1,2,\ldots}\) of neurons \(j=1,\ldots,N\) allow for linking the theoretical model, Eq. (15), with experimental observations via the population activity, commonly defined in terms of the firing rate \[R_{N}(t)=\lim_{\tau_{r}\to 0}\frac{1}{N}\sum_{j=1}^{N}\sum_{k}\frac{1}{\tau_{r}}\int_{t-\tau_{r}}^{t}\delta(t-T_{j}^{k})dt, \tag{16}\] where the subscript \(N\) indicates the network size and \(\tau_{r}\) is a small time window over which to sum spikes. The firing rate \(R\) is at the heart of mean-field models in computational neuroscience, whose ultimate goal is to provide a self-consistent, at best exact (i.e.
matching an underlying microscopic network model), dynamic description of the population activity that is closed in a few macroscopic variables. A singular example for such an exact mean-field model was proposed by Montbrió, Pazó and Roxin in [49] for QIF neurons; see [23, 26] for previous work on the related \(\theta\)-neurons, yet without providing an explicit differential equation for the firing rate \(R(t)\). Montbrió et al.'s approach yielded ordinary differential equations that describe the dynamics of the firing rate \(R\) and the mean voltage \(V=\langle v_{j}\rangle=\frac{1}{N}\sum_{j=1}^{N}v_{j}\). A major success of Montbrió et al.'s firing rate equations was to pinpoint a spike synchronization mechanism through the cooperative interplay between the neurons' voltage and firing dynamics that is central for collective network oscillations, inherent in almost all neuron network models, but not captured by traditional firing rate models by default. In the following, I will employ a reduction strategy that builds on the ideas in [49] and which has been developed further in [45].

### Firing rate and voltage (RV) dynamics

A remarkable feature of the globally coupled QIF neurons described by Eq. (15) is that their collective dynamics is captured by a low-dimensional system of differential equations [45, 49]. The low-dimensional description becomes exact in the thermodynamic limit of infinitely many neurons, \(N\to\infty\), which I adopt from now on. The macroscopic state of the QIF network is then given by the probability density function \(\mathcal{W}(v,t)\), so that \(\mathcal{W}(v,t)dv\) indicates the fraction of neurons with membrane potential in the interval \([v,v+dv)\) at time \(t\). Correspondingly, in the \(\theta\)-phase description one can formulate the probability density \(\mathcal{P}(\theta,t)\) with variable \(\theta=2\arctan(v)\). The latter distribution density can be expanded in Fourier space as \(\mathcal{P}(\theta,t)=(2\pi)^{-1}\{1+\sum_{n\geq 1}Z_{n}(t)e^{-in\theta}+c.c.\}\), where the modes \(Z_{n}(t)\) are the Kuramoto-Daido order parameters [74, 75] and their dynamics are given in Appendix C. The two characteristic observables of neural networks--the firing rate \(R\) and mean voltage \(V\)--can conveniently be expressed as \[\pi\tau_{m}R-iV=1+2\sum_{n=1}^{\infty}(-1)^{n}Z_{n}=\Phi+\lambda\frac{\mathcal{M}(-\sigma)}{\sigma}, \tag{17}\] where \(\Phi(t),\lambda(t),\sigma(t)\) are complex-valued variables and \(\mathcal{M}(k)\) is a constant function that depends on the initial distribution \(\mathcal{W}(v,t=0)\) of the QIF neurons. As detailed in [45], see also Appendix C, the dynamics of the three complex variables \(\Phi,\lambda,\sigma\in\mathbb{C}\) are governed by \[\tau_{m}\dot{\Phi}=i\Phi^{2}-iI(t)+\gamma,\;\tau_{m}\dot{\lambda}=2i\Phi\lambda,\;\tau_{m}\dot{\sigma}=i\lambda, \tag{18}\] where \(I(t)=I_{0}+I_{\rm syn}(t)\); note that the macroscopic dynamics (18) are independent of the parameter \(c\) in Eq. (15b). In the presence of Cauchy white noise and/or Cauchy-Lorentz heterogeneity, \(\gamma>0\), \(\lambda\to 0\) asymptotically in time [45], and the collective dynamics becomes uniquely determined by \(\Phi\to\pi\tau_{m}R-iV\) or, equivalently, by the firing rate \(R\) and the mean voltage \(V\).
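A quick Monte-Carlo sanity check of Eq. (17) in the \(\lambda\to 0\) limit: sampling voltages from the Cauchy-Lorentz density that characterizes this limit (Eq. (19) below), mapping them to phases, and truncating the alternating sum over \(Z_{n}=\langle e^{in\theta}\rangle\) recovers \(\pi\tau_{m}R-iV\) up to sampling error (all numerical values in this sketch are arbitrary test choices):

```python
import numpy as np

rng = np.random.default_rng(1)
tau_m, R, V = 1.0, 0.3, -0.4                # macroscopic state; illustrative values
w = np.pi * tau_m * R                        # half-width of the voltage density

v = V + w * rng.standard_cauchy(1_000_000)   # Lorentzian voltage samples, cf. Eq. (19)
theta = 2.0 * np.arctan(v)

Z = [np.exp(1j * n * theta).mean() for n in range(1, 40)]   # Kuramoto-Daido order parameters
S = 1.0 + 2.0 * sum((-1) ** n * Zn for n, Zn in enumerate(Z, start=1))
print(S, np.pi * tau_m * R - 1j * V)         # should agree to roughly 1e-3
```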
More precisely, the collective dynamics converges to an invariant two-dimensional manifold [46; 49; 23], also called the Lorentzian manifold \(\{\lambda=0\}\), which contains all possible attractors of the full six-dimensional dynamics (18) and is given by the time-dependent total voltage density of the QIF neurons in form of a Cauchy-Lorentz distribution with mean \(V(t)\) and half-width \(\pi\tau_{m}R(t)\) [49], \[\mathcal{W}(v,t)=\frac{1}{\pi}\frac{\pi\tau_{m}R(t)}{[v-V(t)]^{2}+[\pi\tau_{m}R(t)]^{2}}\;. \tag{19}\] For the analysis of asymptotic regimes, it thus suffices to restrict the focus to the firing rate and voltage (RV) dynamics on the invariant Lorentzian manifold, which can be guaranteed by initializing the voltages \(v_{j}(0)\) according to a Cauchy-Lorentz distribution3. Setting \(\lambda\equiv 0\) in Eqs. (17) and (18) leads to the RV dynamics on the Lorentzian manifold [45; 49; 73] \[\tau_{m}\dot{R}=\frac{\gamma}{\pi\tau_{m}}+2RV\;, \tag{20a}\] \[\tau_{m}\dot{V}=V^{2}-(\pi\tau_{m}R)^{2}+I_{0}+I_{\rm syn}\;, \tag{20b}\] that exactly describes the collective dynamics of large pulse-coupled networks of QIF neurons, Eq. (15).

Footnote 3: For different choices of initial conditions \(\{v_{j}(0)\}_{j}\), one has to resort to the full six-dimensional dynamics (18) and define \(\mathcal{M}(k)\) appropriately [45].

In the next step, I will incorporate smooth pulsatile synaptic transmission mediated by the pulses (10) and show how the global recurrent current \(I_{\rm syn}\) can be expressed in terms of the macroscopic variables \(R\) and \(V\), which leads to an exact mean-field model that is closed in the population firing rate \(R\) and mean voltage \(V\).

### Recurrent synaptic input in the RV dynamics

In [49], QIF neurons interact with each other in a global ("all-to-all") and instantaneous fashion by emitting Dirac \(\delta\)-pulses at their spiking times \(T_{j}^{k}\) ("\(\delta\)-spikes"). Then, \(I_{\rm syn}\) in Eqs. (15) and (20) becomes proportional to \(\tau_{m}R(t)\) as given by Eq. (16). The \(\delta\)-spike assumption for recurrent, instantaneous coupling is, however, particularly limiting [76], not only for biological but also for numerical reasons4 [77; 78; 79], and also because the limit of infinitely narrow pulses does not commute with the thermodynamic limit of infinitely large networks [49; 80], nor does it allow for collective oscillations [49; 81; 42]. Here, I avoid those difficulties by modeling interneuronal communication with smooth pulses of finite width that unfold in the course of an action potential of presynaptic neuron \(j\) in a similar manner as neurotransmitters are released in response to the depolarization of the voltage \(v_{j}\) (Section II.2).

Footnote 4: Dirac \(\delta\)-pulse interactions \(\dot{x}(t)=f(t,x(t))+g(t,x(t))\delta(t-t_{0})\) for some functions \(f\) and \(g\) should be understood throughout this manuscript as \(\dot{x}(t)=f(t,x(t))\) for \(t\neq t_{0}\) and \(x(t_{0}^{+})=x(t_{0}^{-})+g(t_{0},x(t_{0}^{-}))\) at \(t=t_{0}\) [77].

For globally coupled QIF neurons, the recurrent input \(I_{\rm syn}(t)=\langle s_{j}(t)\rangle\) is the population mean over all individual postsynaptic responses \(s_{j}(t)\) to a pulse \(p_{j}(t)\) emitted by presynaptic neuron \(j\). As discussed above, I consider voltage-dependent, smooth pulses \(p_{j}(t)=p\big{(}\theta_{j}(t)\big{)}=p\big{(}2\arctan v_{j}(t)\big{)}\) with the pulse function \(p(\theta)=p_{r,\varphi,\psi}(\theta)\) given by Eq. (10).
The \(\theta\)-phase formulation of the pulses guarantees analytic tractability thanks to favorable properties of the corresponding probability density \(\mathcal{P}(\theta,t)\) of the ensemble of \(\theta\)-neurons [45], but does not limit the generality of results for general voltage-dependent pulses. Moreover, the pulses (10) allow for a smooth approximation of \(\delta\)-spikes, Eq. (14) with \(\psi=\pi\). Summing over all neurons \(j=1,\ldots,N\) yields the connection to the firing rate \(R\) in Eq. (16) as: \[P_{N,\delta}:=\frac{1}{N}\sum_{j=1}^{N}p_{\delta,\pi}(\theta_{j})=\frac{2\pi}{N}\sum_{j=1}^{N}\frac{\delta(t-T_{j}^{k})}{2/\tau_{m}}=\pi\tau_{m}R_{N}, \tag{21}\] where the first equality follows from the change of variables formula for Dirac \(\delta\)-functions: \(\delta(t-T_{j}^{k})=\delta\big(g(t)\big)\big|\dot{g}(T_{j}^{k})\big|\) with \(g(t)=\theta_{j}(t)-\pi\) and \(\dot{g}(T_{j}^{k})=\dot{\theta}_{j}(T_{j}^{k})=2/\tau_{m}\). The sum over the spike times \(T_{j}^{k}\) is taken within a short time window of length \(\tau_{r}\) as in Eq. (16). Equation (21) explicitly links the mean presynaptic pulse activity \(P_{\delta,\pi}=\langle p_{\delta,\pi}\rangle\) to the firing rate \(R\), see also Eq. (74). Remarkably, the \(\delta\)-pulse assumption conceals an even deeper connection: in the thermodynamic limit \(N\to\infty\), the mean pulse activity \(P_{r,\varphi,\psi}\) is fully determined by the firing rate \(R(t)\) _and_ the mean voltage \(V(t)\). The description of \(P_{r,\varphi,\psi}\) in terms of \(R\) and \(V\) becomes explicit on the Lorentzian manifold, where \[P_{r,\varphi,\psi}(t)=\big\langle p_{r,\varphi,\psi}\big(\theta_{j}(t)\big)\big\rangle=\lim_{N\to\infty}\frac{1}{N}\sum_{j=1}^{N}p_{r,\varphi,\psi}\big(\theta_{j}(t)\big)\] \[=\int_{0}^{2\pi}p_{r,\varphi,\psi}(\theta)\mathcal{P}(\theta,t)d\theta=P_{r,\varphi,\psi}\big(R(t),V(t)\big)\;.\] In general, \(P_{r,\varphi,\psi}\) depends on \(R\) and \(V\) only through the three complex variables \(\Phi,\lambda,\sigma\), see Eq. (15) in Appendix E, but on the Lorentzian manifold \(\{\lambda=0\}\) it reduces to \[P_{r,\varphi,\psi}(R,V)=\mathrm{Re}\left\{\frac{(1-r^{2})(1+\pi\tau_{m}R-iV)e^{-i\varphi}+(r-\cos\varphi)\big[1-re^{-i\psi}+(\pi\tau_{m}R-iV)(1+re^{-i\psi})\big]}{r(1-r\cos\varphi)\big[1-re^{-i\psi}+(\pi\tau_{m}R-iV)(1+re^{-i\psi})\big]}\right\}; \tag{22}\] see Appendices D to F for a rigorous derivation. Albeit complex in its general form in terms of \(R\) and \(V\) as well as of the parameters \(r,\varphi\) and \(\psi\), Eq. (22) dramatically simplifies for certain pulse shapes. For RP pulses \(p_{\mathrm{RP},r}=p_{r,0,\pi}\), see Eq. (11), that are symmetric (\(\varphi=0\)) about the peak phase when the neuron spikes (\(\psi=\pi\)), the mean presynaptic pulse activity \(P_{\mathrm{RP},r}=\langle p_{\mathrm{RP},r}\rangle=P_{r,0,\pi}\) reads: \[P_{\mathrm{RP},r}(R,V)=\\ \frac{2\pi\tau_{m}R[1+r+(1-r)\pi\tau_{m}R]+2(1-r)V^{2}}{[1+r+(1-r)\pi\tau_{m}R]^{2}+(1-r)^{2}V^{2}}. \tag{23}\] Taking the limit \(r\to 1\) yields the mean activity of \(\delta\)-spikes, \(\lim_{r\to 1}P_{r,0,\pi}=\pi\tau_{m}R\), which coincides with the population firing rate as already shown in Eq. (21).
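The reduction from Eq. (22) to Eq. (23) and the \(\delta\)-spike limit can be verified numerically; the following sketch does exactly that, with arbitrary test values for \(R\), \(V\) and \(r\).

```python
import numpy as np

# Sketch: numerical consistency check of the mean pulse activity.  The general
# expression, Eq. (22), should reduce to the RP form, Eq. (23), for phi = 0 and
# psi = pi, and approach pi*tau_m*R in the delta-spike limit r -> 1.
tau_m = 10.0

def P_general(R, V, r, phi, psi):       # Eq. (22)
    w = np.pi * tau_m * R - 1j * V
    e_phi, e_psi = np.exp(-1j * phi), np.exp(-1j * psi)
    bracket = 1 - r * e_psi + w * (1 + r * e_psi)
    num = (1 - r**2) * (1 + w) * e_phi + (r - np.cos(phi)) * bracket
    return (num / (r * (1 - r * np.cos(phi)) * bracket)).real

def P_RP(R, V, r):                      # Eq. (23)
    a = np.pi * tau_m * R
    f = 1 + r + (1 - r) * a
    return (2 * a * f + 2 * (1 - r) * V**2) / (f**2 + (1 - r)**2 * V**2)

R, V, r = 0.03, -1.2, 0.95              # arbitrary test values
print(P_general(R, V, r, 0.0, np.pi), P_RP(R, V, r))             # should agree
print(P_general(R, V, 1 - 1e-6, 0.0, np.pi), np.pi * tau_m * R)  # delta-spike limit
```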
Dirac \(\delta\)-pulses do not necessarily need to be emitted at the instant a neuron spikes (when \(v\to\infty\)), but can be emitted when \(v\) crosses a virtual threshold voltage \(v_{\mathrm{thr}}<\infty\). The corresponding peak phase is \(\psi=2\arctan(v_{\mathrm{thr}})\) and the mean presynaptic activity \(P_{\delta,v_{\mathrm{thr}}}:=\langle p_{\delta,2\arctan(v_{\mathrm{thr}})}\rangle\) for pulses of the form Eq. (14) becomes \[P_{\delta,v_{\mathrm{thr}}}(R,V)=\frac{\pi\tau_{m}R(1+v_{\mathrm{thr}}^{2})}{(\pi\tau_{m}R)^{2}+(V-v_{\mathrm{thr}})^{2}}\;. \tag{24}\] Importantly, for the general family (10) of pulse functions \(p_{r,\varphi,\psi}\), the mean presynaptic pulse activity (22) explicitly depends on the mean voltage \(V\), i.e. \(\partial_{V}P_{r,\varphi,\psi}\neq 0\), except for the limit \((r,\varphi,\psi)\to(1,0,\pi)\). As I will show in Section IV.1, it is exactly this voltage-dependence that is crucial for collective oscillations in case of instantaneous pulse-coupling. The emergent macroscopic oscillations, however, are sensitive to the pulse shape: skewing the pulses slightly to the phase after the spike yields robust collective oscillations of inhibitory neurons, which are not present for symmetric pulse-coupling (Fig. 6). A thorough analysis of how the pulse shape affects network synchronization is the focus of Section IV. To anticipate, for RP pulses that are symmetric about the peak phase \(\theta=\pi\), collective oscillations are restricted to a narrow and rather unrealistic region in parameter space. By contrast, for (asymmetric) pulses with their peak phase after the actual spike, \(\theta>\pi\), collective oscillations emerge almost naturally in inhibitory networks (\(J<0\)) with excitatory input currents (\(I_{0}>0\)). This is strongly reminiscent of interneuronal network gamma (ING) oscillations in inhibitory networks including synaptic kinetics with finite rise and decay times [82, 64, 83]. To summarize, I have presented an exact low-dimensional description for globally pulse-coupled QIF neurons (15) in the thermodynamic limit, which equally holds for networks of \(\theta\)-neurons, see Eq. (16). For pulses \(p_{r,\varphi,\psi}\) of the general form (10), the mean pulse activity \(P_{r,\varphi,\psi}=\langle p_{r,\varphi,\psi}\rangle\) can conveniently be expressed in terms of the macroscopic variables. The time-asymptotic macroscopic dynamics of the QIF neurons is restricted to the invariant Lorentzian manifold, on which the collective behavior is exactly described by the RV dynamics (20). For instantaneous pulse-coupling, one has to substitute \[I_{\mathrm{syn}}=JP_{r,\varphi,\psi}(R,V) \tag{25}\] in Eq. (15), where \(J\) indicates the pulse-coupling strength and the mean pulse activity \(P_{r,\varphi,\psi}\) is given by Eq. (22). The RV dynamics (20) is then two-dimensional and closed in the two macroscopic variables--the firing rate \(R\) and the mean voltage \(V\). The reduction of the RV dynamics greatly facilitates the investigation of how the pulse shape affects the collective dynamics of pulse-coupled spiking neurons, because the mean-field equations (20) accurately reproduce the microscopic network dynamics (15) of globally coupled QIF neurons with instantaneous pulses \(p_{r,\varphi,\psi}\), see Fig. 6. Initial voltages \(v_{j}(0)\) of the microscopic network were chosen to follow a Cauchy-Lorentz distribution of half-width \(\pi\tau_{m}R(0)\) and centered at \(V(0)\) to obtain an immediate match between network and RV simulations. For arbitrary initial conditions, one has to resort to the six-dimensional dynamics (17) and (18) with a properly chosen constant function \(\mathcal{M}(k)\) for such a perfect agreement, see also [45].
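Putting the pieces together, the following sketch integrates the closed mean-field model, Eqs. (20) and (25) with \(P_{\mathrm{RP},r}\) from Eq. (23), using the parameter values of Fig. 6(a); it is a minimal illustration, not a substitute for the full network simulations.

```python
import numpy as np

# Sketch: Euler integration of the closed mean-field model, Eqs. (20) and (25),
# with RP-pulse coupling P_RP from Eq. (23).  Parameters follow Fig. 6(a)
# (gamma = 1, I0 = 20, J = -12, r = 0.95, tau_m = 10 ms), for which the
# network settles into an asynchronous state.
tau_m, gamma, I0, J, r = 10.0, 1.0, 20.0, -12.0, 0.95
dt, T = 1e-3, 300.0                     # ms
R, V = 0.1, -1.0

def P_RP(R, V):                         # Eq. (23)
    a = np.pi * tau_m * R
    f = 1 + r + (1 - r) * a
    return (2 * a * f + 2 * (1 - r) * V**2) / (f**2 + (1 - r)**2 * V**2)

for _ in range(int(T / dt)):
    I_syn = J * P_RP(R, V)              # Eq. (25)
    dR = (gamma / (np.pi * tau_m) + 2 * R * V) / tau_m
    dV = (V**2 - (np.pi * tau_m * R)**2 + I0 + I_syn) / tau_m
    R, V = R + dt * dR, V + dt * dV

print(f"R* = {R:.4f} / ms ({1e3 * R:.0f} Hz), V* = {V:.4f}")
```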
In the following, I will capitalize on the fact that the reduced RV dynamics (20) perfectly capture network simulations of pulse-coupled spiking neurons, and analyze them with respect to emergent collective behavior for various choices of the pulse function (10). In doing so, I also show that the collective dynamics in Fig. 6 can be predicted by linear stability analysis, see + in Fig. 7(c4) and \(\times\) in Fig. 8(d) below. The main focus of Section IV lies on voltage-mediated synchronization, in general, and on collective oscillations in inhibitory networks, in particular, with instantaneous pulses whose peak occurs at the time of the presynaptic spike or shortly thereafter.

## IV Collective oscillations with instantaneous pulses

Networks of inhibitory interneurons provide a mechanism for coherent brain oscillations, particularly in the gamma band, as reciprocal inhibition turned out to be effective for neuronal synchrony [84]. For Class 1 excitable neurons, such as \(\theta\)-neurons or the QIF model (15), collective oscillations--as a hallmark of synchrony--can be realized even with relatively fast inhibitory synapses [10]. Synaptic latency (including axonal delay) as well as rise and decay times contribute to determining synchronous firing patterns [83, 84, 85, 86, 64, 87, 68, 88]. Exact RV dynamics corresponding to Eq. (20) have been successfully employed to describe collective oscillations of inhibitory QIF neurons interacting via \(\delta\)-spikes with first-order [89, 90] and second-order synaptic kinetics [91, 92, 93] or with delay [94, 95, 96]; see also [97, 98, 99, 100, 101, 102, 103] for extensions. For instantaneous synapses, however, \(\delta\)-spike-interactions do not allow for macroscopic oscillations [42, 49], whereas instantaneous pulses of finite width have been reported to induce collective oscillations in inhibitory networks of the equivalent \(\theta\)-neurons [24, 27, 36, 23, 42] interacting via symmetric AS pulses (12). For globally coupled QIF neurons with (non-smooth) pulses of finite width, collective oscillations have so far only been found in excitatory networks, cf. Eq. (29) in [81]. In Section IV.1, I will provide a mathematical proof that the RV dynamics (20) with instantaneous synaptic transmission can generate collective oscillations if the recurrent synaptic input depends explicitly on the mean voltage \(V\). Such an effective voltage-coupling occurs naturally for pulses of finite width, Eq. (23), but also for Dirac \(\delta\)-pulses emitted before or after, but not at, the presynaptic spike, see Eq. (24). In Sections IV.2 and IV.3, I will present detailed linear stability analyses for RP and more general pulses and substantiate the notion of Fig. 6(b) that asymmetric right-skewed pulses are a promising candidate for generating ING oscillations in inhibitory networks of instantaneously pulse-coupled neurons. In Section IV.4, I will complement the findings by investigating left-skewed pulses whose mean precedes the actual spike.

### Collective oscillations require voltage-coupling

For networks of globally coupled spiking neurons with instantaneous synaptic transmission, the collective dynamics is exactly described by the two-dimensional RV dynamics (20) on the Lorentzian manifold that is closed in the firing rate \(R\) and mean voltage \(V\) and with recurrent synaptic input \(I_{\rm syn}(t)\) that depends instantaneously, and smoothly, on \(R\) and \(V\), i.e. \(I_{\rm syn}(t)=I_{\rm syn}(R(t),V(t))\).
Figure 6: Collective dynamics of inhibitory QIF neurons globally coupled via instantaneous pulses \(p_{r,\varphi,\psi}\) with \(r=0.95\) and \(\psi=\pi\). (a) Symmetric RP pulses (\(\varphi=0\), red curve in Fig. 7a) drive the network into an asynchronous state. (b) Asymmetric pulses slightly skewed to the phase after the spike (\(\varphi=\pi/12\), red curve in Fig. 8a) yield robust ING oscillations. Top: raster plots show spike times of \(1^{\prime}000\) QIF neurons obtained from network simulations according to Eq. (15) with \(N=10^{\prime}000\) neurons. Bottom: excellent agreement between firing rates \(R_{N}\) (black) and \(R\) (orange) obtained from integrating the QIF network and the RV dynamics (20) with Eqs. (23) and (28) for (a) and (b), respectively. Network parameters are \(\gamma=1,I_{0}=20,J=-12\); see \(+\) in Fig. 7(c4) and \(\times\) in Fig. 8(d). The membrane time constant is \(\tau_{m}=10\) ms. Network simulations were performed using Euler-Maruyama integration with \(dt=5\cdot 10^{-3}\)ms and the firing rate \(R_{N}\) was computed using Eq. (16) with \(\tau_{r}=10^{-1}\)ms.

The onset of collective oscillations via a Hopf bifurcation can be determined using the eigenvalues \[\lambda_{1,2}=\frac{1}{2}\left[\mathrm{tr}(\mathrm{Jac})\pm\sqrt{\mathrm{tr}(\mathrm{Jac})^{2}-4\,\mathrm{det}(\mathrm{Jac})}\right]\] of the Jacobian of the RV dynamics (20), \[\text{Jac}=\begin{pmatrix}2V^{*}&2R^{*}\\ -2\pi^{2}\tau_{m}^{2}R^{*}+\partial_{R}I_{\text{syn}}^{*}&2V^{*}+\partial_{V}I_{\text{syn}}^{*}\end{pmatrix}\, \tag{26}\] evaluated at the fixed-point solution \((R^{*},V^{*})\) with \[V^{*}=-\gamma/(2\pi\tau_{m}R^{*})\quad\text{and}\quad I_{\text{syn}}^{*}=I_{\text{syn}}(R^{*},V^{*})\;; \tag{27}\] the Jacobian (26) is obtained in rescaled time \(\tilde{t}=t/\tau_{m}\), and \(\partial_{x}=\partial/\partial x\) with \(x\in\{R,V\}\). Since \(R^{*}\geq 0\) because of the definition of the firing rate, Eq. (16), the fixed-point solution \((R^{*},V^{*})\) always satisfies \(V^{*}\leq 0\) because of Eq. (27), where the inequality becomes strict for \(\gamma>0\). A Hopf bifurcation of the fixed point \((R^{*},V^{*})\) occurs if \(\text{tr}(\text{Jac})=0\) and \(\text{det}(\text{Jac})>0\). From \(\text{tr}(\text{Jac})=0\), one has \(\partial_{V}I_{\text{syn}}^{*}=-4V^{*}\). Since \(V^{*}<0\) for \(\gamma>0\), the necessary condition for collective oscillations in Eq. (20) emerging through a Hopf bifurcation of the fixed point \((R^{*},V^{*})\) reads: \[\partial_{V}I_{\text{syn}}^{*}:=\partial_{V}I_{\text{syn}}(R^{*},V^{*})>0. \tag{28}\] That is, collective oscillations require recurrent coupling through the mean voltage \(V\). Put differently, if the recurrent synaptic input \(I_{\text{syn}}\) is independent of \(V\), i.e. \(\partial_{V}I_{\text{syn}}^{*}=0\), then Eq. (28) cannot be satisfied for any fixed-point solution \((R^{*},V^{*})\in\mathbb{R}^{+}\times\mathbb{R}^{-}\), and the network of globally coupled QIF neurons will not synchronize. To show that collective oscillations are indeed possible if the first condition \(\text{tr}(\text{Jac})=0\) is satisfied, it is necessary to prove that the second condition \(\text{det}(\text{Jac})>0\) can also be satisfied. Assuming that \(\text{tr}(\text{Jac})=0\) holds and using the fixed-point equation in Eq.
(27), one obtains \[\text{det}(\text{Jac})=(2\pi\tau_{m}R^{*})^{2}-\frac{\gamma^{2}}{(\pi\tau_{m}R^{*})^{2}}-\frac{4\gamma}{\pi\tau_{m}}\frac{\partial_{R}I_{\text{syn}}^{*}}{\partial_{V}I_{\text{syn}}^{*}}\;.\] By assumption, \(I_{\text{syn}}(R,V)\) is smooth, so for \(\partial_{V}I_{\text{syn}}^{*}>0\) the right-hand side is well-behaved and can be expressed as a smooth function of \(R^{*}\). While \(\text{det}(\text{Jac})<0\) for small firing rates \(R^{*}\to 0\), e.g. due to strong inhibitory input \(I_{0}\ll 0\), the determinant \(\text{det}(\text{Jac})\) will become positive for large firing rates \(R^{*}\gg 1\) (and/or for \(\partial_{R}I_{\text{syn}}^{*}\ll 0\)). As \(R^{*}=R^{*}(I_{0})\) depends smoothly on the external current \(I_{0}\)--the functional dependence of \(R^{*}\) on \(I_{0}\), i.e. the transfer function or \(fI\)-curve, can be obtained by setting the right-hand side of Eq. (20b) to zero and inserting (27), see also [89]--one can expect that for strong excitatory drive, \(I_{0}\gg 1\), \(R^{*}\) increases sufficiently such that both conditions, \(\text{tr}(\text{Jac})=0\) and \(\text{det}(\text{Jac})>0\), are simultaneously satisfied and give rise to collective oscillations. Hence, only if the recurrent synaptic input \(I_{\text{syn}}\) explicitly depends on the mean voltage \(V\) can collective oscillations in the RV dynamics (20) emerge through a Hopf bifurcation from a fixed-point solution \((R^{*},V^{*})\) with sufficiently large firing rate \(R^{*}\gg 0\). In other words, an asynchronous, typically high-activity state becomes unstable through an effective voltage-coupling and the spiking neurons start firing synchronously (as in Fig. 6b). Now, when does the recurrent input \(I_{\text{syn}}\) explicitly depend on the mean voltage \(V\)? This occurs naturally for electrical coupling through gap junctions, see [104] and the Discussion Section V.2. In the absence of gap junctions, and as shown before in Section II.2, an effective voltage-coupling can also be achieved with instantaneous pulses that are different from Dirac \(\delta\)-pulses emitted at the presynaptic spike, see also [81]. For instantaneous pulses \(p_{r,\varphi,\psi}\) given by Eq. (10), the synaptic current \(I_{\text{syn}}(t)=JP_{r,\varphi,\psi}(R,V)\) depends explicitly on the voltage \(V\), see Eq. (22), and thus generally allows for collective oscillations of globally coupled QIF neurons; one can ensure that Eq. (28) holds by tuning the coupling strength \(J\) accordingly. Whether collective oscillations occur in realistic parameter regimes, e.g., with global inhibition \(J<0\) and excitatory drive \(I_{0}>0\), crucially depends on the pulse-shape parameters \(r,\varphi\) and \(\psi\), as I will demonstrate in the following Sections IV.2 to IV.4. To anticipate whether particular pulse shapes \(p_{r,\varphi,\psi}\) require excitatory or inhibitory coupling for neural synchronization, one can compute \(\partial_{V}P_{r,\varphi,\psi}(R,V)\) and recall that any fixed-point solution \((R^{*},V^{*})\) satisfies \(R^{*}>0\) and \(V^{*}<0\) for \(\gamma>0\). Then, for RP pulses (11), one has \[\partial_{V}P_{\text{RP},r}(R^{*},V^{*})=\\ \frac{4(1-r^{2})[1+r+(1-r)\pi\tau_{m}R^{*}]V^{*}}{\left\{[1+r+(1-r)\pi\tau_{m}R^{*}]^{2}+(1-r)^{2}{V^{*}}^{2}\right\}^{2}}<0. \tag{29}\] Thus, to satisfy condition (28), \(J\partial_{V}P_{\text{RP},r}(R^{*},V^{*})>0\), RP pulses require inhibitory coupling (\(J<0\)) to generate collective oscillations.
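The fixed-point and Hopf conditions above translate directly into a numerical recipe; the following sketch locates \((R^{*},V^{*})\) for RP-pulse coupling and evaluates the Jacobian (26), assuming the Fig. 6(a) parameter values and a bracketing interval for the bisection.

```python
import numpy as np

# Sketch: locate the fixed point (R*, V*) of the RV dynamics (20) with RP-pulse
# coupling, I_syn = J * P_RP, and classify it via the Jacobian, Eq. (26).
# Parameters as in Fig. 6(a); the bisection bracket is an assumption chosen
# to contain the relevant root.
tau_m, gamma, I0, J, r = 10.0, 1.0, 20.0, -12.0, 0.95

def P_RP(R, V):                                    # Eq. (23)
    a = np.pi * tau_m * R
    f = 1 + r + (1 - r) * a
    return (2 * a * f + 2 * (1 - r) * V**2) / (f**2 + (1 - r)**2 * V**2)

def rhs_V(R):                                      # Eq. (20b) on the R-nullcline (27)
    V = -gamma / (2 * np.pi * tau_m * R)
    return V**2 - (np.pi * tau_m * R)**2 + I0 + J * P_RP(R, V)

lo, hi = 1e-4, 1.0                                 # rhs_V(lo) > 0 > rhs_V(hi)
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if rhs_V(mid) > 0 else (lo, mid)
Rs = 0.5 * (lo + hi)
Vs = -gamma / (2 * np.pi * tau_m * Rs)

h = 1e-7                                           # finite-difference derivatives of I_syn
dIdR = J * (P_RP(Rs + h, Vs) - P_RP(Rs - h, Vs)) / (2 * h)
dIdV = J * (P_RP(Rs, Vs + h) - P_RP(Rs, Vs - h)) / (2 * h)
Jac = np.array([[2 * Vs, 2 * Rs],
                [-2 * np.pi**2 * tau_m**2 * Rs + dIdR, 2 * Vs + dIdV]])
print("R* =", round(Rs, 4), " V* =", round(Vs, 4))
print("eigenvalues:", np.linalg.eigvals(Jac))      # negative real parts: stable (asynchronous)
```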
For shifted Dirac \(\delta\)-pulses (14), the sign of \[\partial_{V}P_{\delta,v_{\text{thr}}}(R^{*},V^{*})=\frac{2\pi\tau_{m}R^{*}(v_{\text{thr}}-V^{*})(1+v_{\text{thr}}^{2})}{[(\pi\tau_{m}R^{*})^{2}+(V^{*}-v_{\text{thr}})^{2}]^{2}} \tag{30}\] depends on the virtual threshold value \(v_{\text{thr}}\). If a neuron emits the Dirac \(\delta\)-pulse shortly after its spike (when it is recovering from its reset \(v\leftarrow-\infty\)), then \(v_{\text{thr}}<0\) and \(\partial_{V}P_{\delta,v_{\text{thr}}}(R^{*},V^{*})<0\), so collective oscillations occur for inhibitory coupling (\(J<0\)). However, if the pulse is emitted before the neuron spikes, then \(v_{\text{thr}}\gg 0\), \(\partial_{V}P_{\delta,v_{\text{thr}}}(R^{*},V^{*})\) can become positive, and collective oscillations require excitatory coupling (\(J>0\)). For other pulse shapes, it becomes more laborious to draw similar conclusions from \(\partial_{V}P_{r,\varphi,\psi}(R^{*},V^{*})\) alone. The general trend, however, which will become clear in the following, is that collective oscillations occur with inhibitory coupling when the mean of the pulse \(p_{r,\varphi,\psi}(\theta)\) coincides with the spike or is shifted to its right, whereas pulses whose mean is shifted to the left of the spike rather require excitatory coupling to synchronize the network.

### Rectified-Poisson (RP) pulses \((\varphi=0,\psi=\pi)\)

For symmetric pulses (\(\varphi=0\)) of finite width (\(r<1\)) centered around the spike threshold (\(\psi=\pi\)), the mean pulse activity \(P_{\mathrm{RP},r}\) is given by Eq. (23). In Fig. 7(a), it can be appreciated that the smaller \(r<1\), the wider the RP pulse (11) about the peak phase \(\pi\) at which the presynaptic voltage diverges (bottom panel). The shape of the RP pulses (color coded for different \(r\)) is similar to AS pulses (12) (grey). The major advantage of RP pulses is that the mean \(P_{\mathrm{RP},r}\) is a simple function of \(R\) and \(V\), see Eq. (23), whereas no such closed form exists for arbitrary AS pulses with shape parameter \(n\in\mathbb{N}\), see Appendices D and E. It is thus straightforward to perform a linear stability analysis of the RV dynamics (20) with RP pulses, but not with arbitrary AS pulses. All bifurcation boundaries can be obtained analytically, except for that of the homoclinic, which is a global bifurcation and has to be determined numerically [105; 106].

Figure 7: Phase diagrams of the RV dynamics (20) for instantaneous RP pulses. (a) Comparison of pulse profiles: RP pulses of various width \(r\leq 1\) (color coded) are symmetric about the peak phase \(\theta=\pi\) corresponding to the instant where the voltage \(v\) diverges; see bottom panels for the link between \(v\) and the \(\theta\)-phase \(\theta=2\arctan(v)\). The larger \(r\nearrow 1\), the narrower the pulse (note the different \(x\)-axis-scale in the right panel). AS pulses (12) with different width parameters \(n\in\mathbb{N}\) are shown in grey for comparison. The RP pulse with \(r=0\) coincides with \(p_{\mathrm{AS},1}\). (b) Bistability between low- and high-activity, asynchronous states (L/HAS) in excitatory networks (\(J>0\)). Saddle-node bifurcation boundaries are color-coded according to the pulse width \(r\) of the RP pulses shown in (a). (c1–4) Collective oscillations in narrow parameter regions (red, SYNC) for inhibitory networks (\(J<0\)) and decreasing pulse width \(r=0,0.5,0.9,0.95\). Insets in (c1–2) zoom in on the general bifurcation structure near the Bogdanov-Takens (BT) point, where saddle-node (blue), (supercritical) Hopf (red) and homoclinic (green) bifurcation curves meet. The \(+\) in (c4) denotes the parameters used for numerical simulations in Fig. 6(a).
In the following, I set \(\gamma=1\) and refrain from rescaling the dynamics with respect to \(\gamma\), as this would affect the mean presynaptic activity \(P_{r,\varphi,\psi}\) in a nontrivial way; for different choices of \(\gamma>0\), I have not noticed any qualitative differences. For excitatory coupling (\(J>0\)), RP pulses are not capable of generating collective oscillations for reasonable parameter choices, as predicted by Eq. (29). Instead, the cusp-shaped region in Fig. 7(b) features bistability between two asynchronous states: a low-activity state (LAS) and a high-activity state (HAS). In line with the results for \(\delta\)-spikes in [49], the narrower the RP pulse (\(r\nearrow 1\)), the larger the cusp-shaped region of bistability until it finally coincides with the one found for \(r=1\) (black); the boundaries of the bistability regions are saddle-node (SN) bifurcations and meet at a codimension-2 bifurcation point (Cusp). For inhibitory networks (\(J<0\)) with positive input currents (\(I_{0}>0\)), see Fig. 7(c), the asynchronous steady state loses stability at a supercritical Hopf bifurcation (red curve) and gives rise to collective oscillations (red shaded region labelled as SYNC), which are destroyed in a homoclinic bifurcation (green curve) shortly after. Similar results were obtained in [42] for the case \(r=0\) and in [23; 36] for wide AS pulses (\(n=2,9\)). There is also a region of bistability between a LAS and a very low-activity state (VLAS), bounded by the (blue) saddle-node curve, that meets the Hopf and homoclinic curves at the Bogdanov-Takens (BT) point, which is another codimension-2 bifurcation. For larger \(r\), i.e. for narrower pulses, this BT point moves further down so that collective oscillations require strong inhibitory coupling \(J\ll 0\). On top of that, collective oscillations remain confined to a narrow region in parameter space, which becomes almost negligible for RP pulses that are reasonably localized with \(r>0.8\), see Fig. 7(c3–4).

### Right-skewed pulses (\(\varphi>0\) and/or \(\psi>\pi\))

Shifting (\(\psi\neq\pi\)) and/or skewing (\(\varphi\neq 0\)) the symmetric RP pulses critically affects the collective dynamics of globally coupled QIF neurons. By introducing a virtual threshold \(v_{\rm thr}=\tan(\psi/2)\) at which the emitted pulse is strongest, one can shift the pulse to the right (\(\psi>\pi\)) or to the left (\(\psi<\pi\)) of the QIF threshold \(v_{p}\) at infinity (Fig. 8a, blue curve); note that \(v_{\rm thr}\to v_{p}=\infty\) for \(\psi\to\pi\). By introducing an asymmetry parameter \(\varphi\neq 0\), one can skew the pulse (Fig. 8a, red and green curves). Asymmetric pulses \(p_{r,\varphi,\psi}(\theta)\) no longer take on their maximum at \(\theta=\psi\) (i.e. when \(v\) crosses \(v_{\rm thr}\)), but their peak will be shifted to the right (\(\varphi>0\)) or to the left (\(\varphi<0\)). While this peak shift is in general rather small\({}^{5}\), the main effect of the asymmetry parameter \(\varphi\) manifests in placing the mean clearly away from the threshold phase \(\psi=2\arctan(v_{\rm thr})\), see red dashed line in Fig. 8(a).
Footnote 5: An asymmetric pulse \(p_{r,\varphi,\psi}\) with \(\varphi\neq 0\) and \(\psi=2\arctan(v_{\rm thr})\) has its peak when the presynaptic neuron’s voltage \(v\) reaches \(v_{max}=\frac{(1+r)v_{\rm thr}\cos(\varphi/2)+(1-r)\sin(\varphi/2)}{(1+r)\cos(\varphi/2)-(1-r)v_{\rm thr}\sin(\varphi/2)}\), and the pulse vanishes at \(v_{min}=\frac{(1+r)v_{\rm thr}\sin(\varphi/2)-(1-r)\cos(\varphi/2)}{(1+r)\sin(\varphi/2)+(1-r)v_{\rm thr}\cos(\varphi/2)}\). The corresponding peak and trough phases \(\theta\) can be found via \(\theta_{max/min}=2\arctan(v_{max/min})\).

As before, one can perform a linear stability analysis of the RV dynamics (20), now complemented with the mean presynaptic input \(P_{r,\varphi,\psi}\) given by Eq. (22) for general pulses (10). As motivated in Section II.2, I will first consider pulses \(p_{r,\varphi,\psi}\) whose mean is shifted to the right of the spike, that is, \(\varphi>0\) and/or \(\psi>\pi\). In Fig. 8(b), I shift the narrow RP pulse (\(r=0.95\), red in Fig. 7a) such that it reaches its peak when the presynaptic voltage crosses the virtual threshold \(v_{\rm thr}=-20\) after recovering from the reset at its spike. This shift has an immediate effect on the SYNC region of collective oscillations: while the SYNC region literally falls out of view for narrow RP pulses (with \(v_{\rm thr}=\infty\)), shifting the peak brings the SYNC region again closer to \(J=0\) and enlarges it by pushing the homoclinic bifurcation curve (green) away from the Hopf curve (red). In short, shifting the pulse facilitates neuronal synchrony. The actual bifurcation structure, however, becomes more involved. The cusp-shaped region of bistability between the LAS and the VLAS grows into the (\(I_{0}<0\))-region--albeit with negligible effects on the overall dynamics because the VLAS has a diminutive basin of attraction (not shown)--and the upper saddle-node curve (blue) coincides with the homoclinic curve beyond the Saddle-Node-Separatrix-Loop (SNL) bifurcation point. The violet curve to the right of the SNL point denotes a Saddle-Node on Invariant Circle (SNIC) bifurcation that terminates the collective oscillations. Furthermore, the supercritical Hopf bifurcation (red solid) becomes subcritical (red dashed) at a Generalized Hopf (GH) bifurcation point, from which a saddle-node of cycles (SN of cycles) bifurcation curve (orange) emanates, see inset in Fig. 8(b). In the red region bounded by the supercritical Hopf/SN of cycles curve and the homoclinic curve, there is bistability between the VLAS and SYNC; that is, the LAS loses stability and gives rise to collective oscillations at the Hopf bifurcation. For excitatory coupling (\(J>0\)), on the other hand, shifting the RP pulse hardly affects the cusp-shaped region of bistability between the LAS and the HAS. Introducing small right-skewed asymmetry (\(\varphi>0\)) simplifies the bifurcation scenario to a great extent, while allowing for an even broader parameter region of collective oscillations. In Fig. 8(c), I consider pulses that are both shifted (\(v_{\rm thr}=-20\)) and skewed (\(\varphi=\pi/12\)), see the green pulse in Fig. 8(a). The Hopf bifurcation is now always supercritical and the bistability region between LAS and VLAS shrinks drastically, whereas the bistability region between the VLAS and SYNC is only slightly increased. Compared to the case \(\varphi=0\), also the bistability region between LAS and HAS shrinks for \(J>0\). To disentangle the effect of the asymmetry parameter, in Fig.
8(d) I consider only skewed, but not shifted, pulses \(p_{0.95,\pi/12,\pi}\). As shown in Fig. 8(a), for large \(r=0.95\) and small \(|\varphi|\ll 1\), an individual pulse (red curve) reaches its maximum for \(v_{max}=(r+1)/(r-1)/\tan(\varphi/2)\) almost immediately after the presynaptic spike at \(\theta=\pi\). However, the asymmetry parameter \(\varphi\) shifts the bulk of the pulse, and hence its mean, further away from \(\pi\), which has a non-trivial effect on the collective dynamics: one may have expected an overall picture of the collective dynamics somewhere in between that of non-shifted and shifted RP pulses; yet, the skewed pulse dramatically enhances the SYNC regime of collective oscillations, which now dominates the parameter region for \(J<0\) and \(I_{0}>0\). Moreover, the bistability regions within the triangle of Cusp, SNL and BT points become almost negligible. This scenario strongly resembles the case of synaptic dynamics with a dominant SYNC region, which is always bounded by a supercritical Hopf bifurcation [89; 93]. In fact, if instead of instantaneous pulse-coupling as in Eq. (25) one now considers first-order synaptic dynamics \(\tau_{d}\dot{S}=-S+P_{r,\varphi,\psi}\) with synaptic decay time constant \(\tau_{d}>0\), collective oscillations emerge already for \(\delta\)-spike-interactions, \((r,\varphi,\psi)\rightarrow(1,0,\pi)\). Consistent with the literature [89], for realistic membrane and synaptic time constants \(\tau_{m}=10\)ms and \(\tau_{d}=5\)ms, collective oscillations always emerge via a supercritical Hopf bifurcation (Fig. 8d, dashed grey line). Notably, the parameter regions of collective oscillations almost coincide for instantaneous asymmetric pulses and for synaptic kinetics triggered by \(\delta\)-spikes.

Figure 8: Phase diagrams of the RV dynamics (20) for right-skewed pulses. (a) Comparison of pulse profiles that are shifted (blue; \(\varphi=0,\psi=-2\arctan 20\)), asymmetric (red; \(\varphi=\pi/12,\psi=\pi\)) or both (green; \(\varphi=\pi/12,\psi=-2\arctan 20\)); pulse width is \(r=0.95\). The mean of the shifted pulse coincides with its peak phase \(\theta=\psi\) (blue dashed), whereas the mean of the asymmetric pulse (red dotted) is shifted to the right of its peak. (b–d) Phase diagrams of the collective dynamics for pulses shown in (a). Collective oscillations are found in the red shaded regions (“SYNC”) for inhibitory coupling (\(J<0\)) and excitatory drive (\(I_{0}>0\)). Asymmetric pulses yield larger SYNC regions, while bistability regions (between low-activity states (LAS) or collective oscillations and very low-activity states (VLAS)) are drastically reduced. Boundaries of codimension-1 bifurcations are colored lines: saddle-node (blue), supercritical Hopf (red solid), subcritical Hopf (red dashed), homoclinic (green), SNIC (violet), saddle-node of cycles (orange). Codimension-2 bifurcation points are marked by symbols: Cusp (circle), Bogdanov-Takens (BT, diamond), Saddle-Node Separatrix Loop (SNL, square). In (d) the \(\times\)-symbol denotes the parameter values for numerical simulations in Fig. 6(b) and the gray dashed curve is the supercritical Hopf boundary for the RV dynamics (42) with first-order synaptic kinetics, \(\tau_{m}=2\tau_{d}=10\)ms, \(\tau_{r}=0\)ms, in the limit of \(\delta\)-spikes \((r,\varphi,\psi)\rightarrow(1,0,\pi)\).
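For comparison with the dashed grey Hopf boundary in Fig. 8(d), the following sketch integrates the RV dynamics with first-order synaptic kinetics in the \(\delta\)-spike limit; the assumption that \((I_{0},J)=(20,-12)\) lies inside the SYNC region is inferred from the near-coincidence of the oscillation regions noted above.

```python
import numpy as np

# Sketch: RV dynamics (20) augmented with first-order synaptic kinetics,
# tau_d * dS/dt = -S + P, in the delta-spike limit where P = pi*tau_m*R
# and I_syn = J*S.  Time constants follow the gray dashed case of Fig. 8(d),
# tau_m = 2*tau_d = 10 ms; (I0, J) = (20, -12) is assumed to lie inside SYNC.
tau_m, tau_d, gamma, I0, J = 10.0, 5.0, 1.0, 20.0, -12.0
dt, T = 1e-3, 400.0                     # ms
R, V, S = 0.05, 0.0, 0.0
trace = []

for _ in range(int(T / dt)):
    dR = (gamma / (np.pi * tau_m) + 2 * R * V) / tau_m
    dV = (V**2 - (np.pi * tau_m * R)**2 + I0 + J * S) / tau_m
    dS = (-S + np.pi * tau_m * R) / tau_d
    R, V, S = R + dt * dR, V + dt * dV, S + dt * dS
    trace.append(R)

late = np.array(trace[len(trace) // 2:])   # discard the transient
# A clearly nonzero range of R indicates sustained collective oscillations.
print(f"late-time R range: [{late.min():.4f}, {late.max():.4f}] / ms")
```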
### Left-skewed pulses (\(\varphi<0\) and/or \(\psi<\pi\))

For biological plausibility, I previously considered pulses whose mean coincided with the QIF threshold at infinity or was shifted to the neuron's refractory phase after the spike (i.e. the mean of the pulse had a phase \(\theta\geq\pi\)). For the sake of completeness, here I briefly report on pulses whose mean is shifted to the phase before the spike. In Fig. 9(a), the red curve describes a left-skewed pulse that I obtained from mirroring the asymmetric red pulse in Fig. 8(a) by setting \(\varphi=\pi/12\mapsto-\pi/12\) while keeping \(r=0.95\) and \(\psi=\pi\), i.e. \(v_{\rm thr}=\infty\), as before. The mean is now shifted to the left (the dashed line indicates the spike at \(\theta=\psi\)), which changes the phase diagram drastically, cf. Fig. 9(b) vs. Fig. 8(d). First, the synchronous state of collective oscillations (SYNC) is no longer possible for inhibitory coupling, but requires excitation (\(J>0\)). Second, the cusp-shaped bifurcation region that dominated the upper left part (\(J>0,I_{0}<0\)) in Figs. 7 and 8 has shrunk to a small triangular region that is bounded by the three codimension-2 bifurcation points (BT, SNL, and Cusp), see also the inset in Fig. 9(b). The Hopf bifurcation line emanating from the BT point separates the triangular bistability region into a lower part, where two asynchronous states (a low and a high activity state) coexist, and an upper part, where the high-activity asynchronous state (HAS) has lost stability and given rise to collective oscillations. Overall, the phase diagram strongly resembles the case of gap junction-coupling in interplay with \(\delta\)-spikes (Fig. 7 in [104]). What is more, the shape of left-skewed pulses has hardly any impact on the phase diagram. For advanced Dirac \(\delta\)-pulses \(p_{\delta,\psi}(\theta)\), Eq. (14) with \(\psi=2\arctan(v_{\rm thr})<\pi\), see the dark gray curve in Fig. 9(a) with virtual threshold \(v_{\rm thr}=20\), the phase diagram is only slightly shifted downwards and the triangular bistability region has become a bit smaller (dark gray lines in Fig. 9b). For comparison, I also consider square pulses \(p_{\rm sq}(v)\) similar to those in [81]. To comply with the normalization condition of \(p_{r,\varphi,\psi}\), I consider \[p_{\rm sq}(v)=\begin{cases}\pi/\vartheta,&\quad\text{if $v\geq v_{\rm on}$ or $v\leq v_{\rm off}$},\\ 0,&\quad\text{otherwise},\end{cases} \tag{31}\] where \(\vartheta:=(\theta_{\rm off}-\theta_{\rm on})/2\) with onset and offset phases \(\theta_{\rm off,on}=2\arctan(v_{\rm off,on})\) is chosen such that the square pulse \(p_{\rm sq}\) in the corresponding \(\theta\)-phase description is normalized to \(2\pi\), cf. Section II.3. For \(v_{\rm off}\to-\infty\) and \(v_{\rm on}>0\), Eq. (31) coincides with the pulses used in [81], but rescaled by \(\pi/\vartheta\); see the blue pulse in Fig. 9(a) with \(v_{\rm on}=20\) for an example. For \(v_{\rm on}>0\) and \(v_{\rm off}<0\), the square pulse \(p_{\rm sq}\) does not terminate at the spike, but actually encloses it and extends its impact to the neuron's relative refractory period. To obtain the mean pulse activity \(P_{\rm sq}=\langle p_{\rm sq}\rangle\), it is no longer possible to follow the approach of Section III.2 because the pulses \(p_{\rm sq}(v)\) do not permit an analytic continuation in the complex plane. Nonetheless, it is possible to obtain the RV dynamics for global square-pulse coupling on the Lorentzian manifold.
To this end, one can average the square pulses (31) with respect to the Cauchy-Lorentz distribution density \(\mathcal{W}(v,t)\), Eq. (19), which results in the mean presynaptic pulse activity \[\begin{split}& P_{\rm sq}(R,V)=\int_{-\infty}^{\infty}p_{\rm sq}(v)\mathcal{W}(v,t)dv\\ &=\frac{1}{\vartheta}\left[\arctan\!\left(\!\frac{V-v_{\rm on}}{\pi\tau_{m}R}\!\right)-\!\arctan\!\left(\!\frac{V-v_{\rm off}}{\pi\tau_{m}R}\!\right)\right],\end{split} \tag{32}\] which can readily be used for the recurrent input \(I_{\rm syn}=JP_{\rm sq}\) in the RV dynamics (20). For \(v_{\rm off}\to-\infty\), Eq. (32) reduces to the global synaptic variable used in [81], rescaled by \(\pi/\vartheta\).

Figure 9: Left-skewed pulses induce collective oscillations for excitatory, but not for inhibitory neurons. (a) Asymmetric (red), advanced (dark-gray) and square pulses (gray-blue), whose mean lies to the left of the QIF threshold at \(\theta=\pi\), where the voltage \(v\) diverges (bottom). (b) The phase diagram using the asymmetric pulse \(p_{r,\varphi,\pi}\) exhibits a dominant region of collective oscillations (SYNC, red shaded), which is bounded by a supercritical Hopf bifurcation curve (red) from below and by a SNIC (violet) and homoclinic (green) bifurcation from the left. In the triangular region between the Bogdanov-Takens (BT), Saddle-Node Separatrix Loop (SNL), and Cusp points, there is bistability between a high activity (HAS) and a low activity asynchronous state (LAS) below, and between the LAS and SYNC in the red shaded region above the Hopf curve, see the inset for a zoom. The dark-gray and gray-blue lines correspond to the phase diagram structure of the advanced and the square pulses depicted in (a).

Analyzing the RV dynamics (20) with the mean pulse activity (32) for the blue square pulse in Fig. 9(a) with \(v_{\text{on}}=20\) and \(v_{\text{off}}=\infty\) yields a very similar phase diagram compared to the asymmetric and advanced pulses, see the gray-blue lines in Fig. 9(b) and cf. [81]. Again, a large parameter region of collective oscillations occurs in the upper right quadrant for excitatory coupling \(J>0\), which could have been anticipated along the same lines as in Section IV.1 because \(\partial_{V}P_{\text{sq}}(R^{*},V^{*})=\pi\tau_{m}R^{*}/\{\vartheta[(\pi\tau_{m}R^{*})^{2}+(V^{*}-v_{\text{on}})^{2}]\}>0\) for all fixed-point solutions \(R^{*}>0\). In sum, left-skewed pulses--which may be biologically less plausible because their mean occurs before the neuron actually spikes--first, do not allow for collective oscillations among inhibitory QIF neurons, but only among excitatory neurons, and, second, the resulting phase diagram is largely insensitive to the shape of the pulse, which is in stark contrast to symmetric RP pulses (Fig. 7) and to right-skewed pulses (Fig. 8).

## V Discussion

The motivation for the work at hand has been to include and justify another degree of biological realism--namely, smooth pulsatile synaptic transmission--within the framework of an exact and low-dimensional mean-field theory for networks of globally coupled spiking neurons [45; 49], while guaranteeing that the RV dynamics (20) remain amenable to a mathematical analysis that allows for comprehensive insights into how the pulse shape affects the collective dynamics.
Unless considering synaptic interaction via \(\delta\)-spikes, previous studies on pulse-coupled spiking \(\theta\)-neuron networks typically resorted to broad, symmetric, and rather inflexible Ariaratnam-Strogatz (AS) pulses (12) more for mathematical convenience than for biological realism--which I tried to make clear in Section II.1. The main reason for using AS pulses seems, however, to be a historic one. To put this into context, I will review previous approaches to pulse-coupling between \(\theta\)-neurons in Section V.1. I also revisit an alternative approach to pulse-coupling between QIF neurons, which has led to biologically contradictory results on the single neuron as well as on the network level, and which is restricted to the Lorentzian manifold. To overcome previous limitations and to analyze the effect of biologically plausible pulse shapes--both symmetric and asymmetric--on the collective dynamics of globally pulse-coupled QIF and \(\theta\)-neurons (even beyond the Lorentzian manifold), I have proposed a rather general family of pulse functions \(p_{r,\varphi,\psi}(\theta)\) that depend on the presynaptic voltage \(v\) via their \(\theta\)-phase description \(\theta(t)=2\arctan(v(t))\). In the limit \((r,\varphi,\psi)\to(1,0,\pi)\), the pulse reduces to the \(\delta\)-spike commonly used in network models of spiking neurons. Taking the mean over those \(\delta\)-spikes yields a direct connection with the population firing rate, \(\langle p_{1,0,\pi}\rangle=\pi\tau_{m}R\), see Eq. (21). Tuning either of the three parameters--pulse width \(r\), asymmetry \(\varphi\) and shift \(\psi\)--allows for diverse pulse shapes \(p_{r,\varphi,\psi}\) that may mimic realistic synaptic transmission between pre- and postsynaptic neurons by adjusting the timing of the synapse, and that crucially influence the collective dynamics of the network. In Section V.2, I discuss the three pulse parameters in more detail and also draw connections to delay and to electrical coupling via gap junctions. In Section V.3, I discuss whether instantaneous pulses of finite width can replace more complex synaptic transmission, such as synaptic kinetics and conductance-based synapses, beyond the interpretation inherent to Theorem 1 in Section II.1. Recall that Theorem 1 imposed certain constraints on the pulse width in order to establish the \(\theta\)-neuron network Eq. (3\({}^{*}\))--and its QIF-equivalent Eq. (6)--as the canonical pulse-coupled network model for weakly connected Class 1 excitable neurons. For the discussion in Section V.3, I will discard any restrictions on the pulse function \(p\), see also Remark 1. Instead, I will regard the \(\theta\)-model as the continuous analogue of the (discontinuous) QIF model, cf. Remark 2. In this way, the pulses \(p_{r,\varphi,\psi}(\theta)\) in the \(\theta\)-neuron network Eq. (3\({}^{*}\)) can be probed against the hypothesis that they may, or may not, correspond directly to complex synaptic interactions in the QIF neuron network Eq. (15).

### Previous approaches to pulse-coupling

The phase-representation of the \(\theta\)-neuron suggests to invoke the theory of coupled phase oscillators to study synchronization and emergent collective behavior. Yet, insights from the literature on pulse-coupling in related phase models may not always carry over to \(\theta\)-neurons and one ought to be careful when drawing analogies.
Revisiting the arguments that justified pulses of finite width between coupled phase oscillators will lead to the root of the "pulse-problem" in \(\theta\)-neurons. To this end, I consider a network of \(N\) globally coupled neurons described by variables \(\theta_{j}\in\mathbb{S}^{1}\), \(j=1,\ldots,N\), with the dynamics \[\dot{\theta}_{j}=\omega_{j}-b_{j}\cos(\theta_{j})+\frac{1}{N}\sum_{k=1}^{N}q(\theta_{j})p(\theta_{k}). \tag{33}\] The network dynamics (33) reduces to the network model (3\({}^{*}\)) of \(\theta\)-neurons for \(\omega_{j}=1+\eta_{j}\), \(b_{j}=1-\eta_{j}\) and \(q(\theta)=1+\cos(\theta)\) with excitability parameter \(\eta_{j}\). More generally, Eq. (33) describes the dynamics of active rotators [19, 20] with pulse-response coupling; \(p(\theta_{k})\) is a pulse-like function and \(q(\theta_{j})\) plays a role analogous to that of a "phase response curve" [14]. The parameter \(b_{j}\) determines whether neuron \(j\) is periodically spiking (\(|\omega_{j}|>b_{j}\)) or in an excitable regime (\(|\omega_{j}|\leq b_{j}\)). Setting \(b_{j}=0\) for all \(j\) leads to the seminal Winfree model [13, 14]. In the context of neural oscillators [15], the Winfree model--Eq. (33) with \(b_{j}=0\)--describes the dynamics of periodically firing neurons (with frequency \(\omega_{j}\)) on their limit cycle, parameterized by the phase \(\theta_{j}\). In the course of an action potential, the presynaptic neuron \(k\) emits a pulse \(p(\theta_{k})\) that is modulated by the response function \(q(\theta_{j})\) of postsynaptic neuron \(j\). For Hodgkin-Huxley-like neurons, the pulses \(p(\theta)\) can be interpreted as instantaneous nonlinear conductances [15, 62]. By using graded potentials rather than fast spikes, synaptic interaction in these kinetics-based models is represented as a function of the presynaptic voltage. Analogously, in kinetic models of synaptic transmission, the neurotransmitter concentration in the synaptic cleft can be reduced to a sigmoidal function of the presynaptic voltage [4], see also [107, 108, 109]. According to these theories, a voltage-dependent pulse can comprise the biochemical processes--at the presynaptic site--from the initiation of an action potential until the release of neurotransmitters to the synaptic cleft. The resulting pulse can become arbitrarily broad and typically exhibits a steep increase until the neuron spikes and a moderate decrease afterwards [15]. Its asymmetric shape results from the interplay between the action potential shape and the synaptic activation, or neurotransmitter release, function. Asymmetric pulses were shown to be critical to synchronization in neural networks consisting of two neurons [110, 62, 111]; a comprehensive picture of how asymmetric pulses affect the collective behavior of large neural networks, however, has been lacking. The argument that pulses \(p(\theta)\) in the Winfree model result from the interplay between action potential shape and synaptic activation function may justify the use of broad pulses also in networks of \(\theta\)-neurons. This reasoning ignores, however, that the trajectory of an action potential in conductance-based neurons is radically compressed in the canonical \(\theta\)-model (Fig. 1), so that the resulting pulse cannot become arbitrarily broad (Fig. 2).
Besides, the Winfree model describes the phase dynamics of periodically firing conductance-based neurons obtained through a proper phase reduction [15], whereas the \(\theta\)-neuron (albeit a phase model) does not result from phase reduction. Instead, it canonically describes a particular dynamical regime of a neuron that does not need to be periodically spiking, but can also be excitable. Hence, adopting the Winfree model, Eq. (33) with \(b_{j}=0\), with broad and possibly asymmetric pulses \(p(\theta)\) for networks of \(\theta\)-neurons has to be regarded with care. Still, as long as \(\theta\)-neurons are in the periodically firing regime, the Winfree model may be useful for studying their collective dynamics, at least to some extent [11, 112, 51], and previous results about the effect of the response and pulse functions \(q(\theta_{j})\) and \(p(\theta_{k})\) on synchronization properties in the Winfree model can become applicable [16, 17, 18]. To account for excitable neuronal dynamics, the Winfree model has to be augmented by a term proportional to \(\cos(\theta_{j})\), yielding the network model (33) of coupled active rotators. A thorough analysis of Eq. (33) with general pulse and response functions \(p\) and \(q\) is lacking, though. So far, building on [20], O'Keeffe and Strogatz studied how the width of (symmetric) pulses affects the collective dynamics, while considering a flat response function \(q\equiv\mathit{const}\) and identical \(b_{j}=b\) [21]. They found that broad pulses may entail collective dynamics different from those generated by narrow pulses, which is consistent with results on the Winfree model by Pazo, Montbrio and Gallego [17, 18]. It remains unclear, however, how these results carry over to \(\theta\)-neuron networks including sinusoidal response functions and excitable dynamics. A crucial tool for pinpointing the effect of the pulse width on the synchronization properties of large populations of Winfree oscillators and active rotators has been an exact dimensionality reduction first proposed by Ott and Antonsen [46]. Since then, a plethora of studies have adopted the "Ott-Antonsen ansatz" to facilitate mean-field analyses. But to do so, they had to rely on analytically tractable pulse and response functions, e.g., resorting to symmetric and broad pulses as suggested by Ariaratnam and Strogatz [16], and thus often at the cost of biological realism. By that time, the "Ariaratnam-Strogatz" (AS) pulse had already found its way into networks of \(\theta\)-neurons [68, 69] as a mathematically tractable alternative to similar, mainly symmetric, pulses of finite width (see, e.g., [113, 114, 115, 59, 116]), which were introduced either for numerical convenience (to smooth the discontinuous \(\delta\)-spikes) or on the premise that instantaneous pulse-coupling reflected realistic synaptic potentials. Typically, however, the authors did not specify which synaptic processes they meant to replace by broad pulses. In Section V.3, I reexamine their premise and argue that pulses irrespective of their (finite) width cannot reflect real synaptic dynamics--at the postsynaptic site--and ought not to be used to replace synaptic transmission with synaptic kinetics or conductance-based synapses in networks of spiking neurons.
Regardless, the combination of the Ott-Antonsen ansatz with the analytically tractable, though biologically debatable, AS pulses turned out to be pivotal to studying the collective dynamics of pulse-coupled \(\theta\)-neurons [23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44]. As of today, however, pulse-coupling in \(\theta\)-neurons has widely eluded a biological justification and the role of the pulse shape for the collective dynamics has remained unclear. I hope that the ideas put forward in Section II contribute to the discussion in a productive way. In particular, I offered two alternative interpretations of pulses: as long as pulses of arbitrary shape are sufficiently narrow, they can reflect an instantaneous synaptic transmission process including both the presynaptic and the postsynaptic site (Section II.1). Alternatively, pulses between \(\theta\)-neurons can describe voltage-gated conductances, or neurotransmitter release, at the presynaptic site (Section II.2). This second interpretation builds on the change of coordinates (through the inverse transformation) \(\theta_{j}=2\arctan(v_{j})\) from the QIF model (Remark 2), which implies, however, that the network of pulse-coupled \(\theta\)-neurons may no longer represent the canonical model for the universal class of weakly connected Class 1 excitable neurons. Pulse-coupling approaches in networks of (quadratic) integrate-and-fire neurons are by default simpler, as synaptic transmission is generally modeled with \(\delta\)-spikes. Other forms of pulse-coupling are rarely found in the literature, although the model formulation in terms of voltages \(v\)--as opposed to the phase description of the \(\theta\)-neuron--is in principle favorable for modeling voltage-dependent synaptic activation, which may entail pulses \(p(v)\) with variable shapes. The shape of the emitted pulse, however, depends not only on the activation function \(p\), but also on the action potential. This can become critical for integrate-and-fire neurons because their fire-and-reset mechanism entails "action potentials" with a rather artificial shape: at the moment of the spike, the neuron's voltage is instantaneously reset and then starts integrating again (Fig. 1c). A sigmoidal activation function of the presynaptic voltage, similar to those used in conductance-based neurons, then lops off the pulse of an integrate-and-fire neuron right after its spike. The resulting pulse exhibits a moderate increase and a radical decrease, much in contrast to those pulses generated by conductance-based neurons (cf. Figs. 3 and 4). Biological concerns aside, networks of heterogeneous QIF neurons with synapses exhibiting such a sigmoidal voltage-dependence were studied in [81]. The synaptic pulses of finite width turned out to be a necessary ingredient to synchronize QIF neurons and led to collective oscillations, whereas global coupling via instantaneous \(\delta\)-spikes is known to rule out collective oscillations of QIF neurons [42; 49]. Yet, the voltage-dependent pulses in [81] synchronized only excitatory, but not inhibitory neurons, which is at odds with the theoretical finding that Class 1 neurons (including the QIF model) tend to be much more easily synchronized by mutual inhibition than excitation [84]. A mechanistic explanation for this paradox has been elusive, but may trace back to the artificial shape of the QIF's action potential.
The conventional sigmoidal description of voltage-dependent pulses had thus to be revised to compensate for the abrupt fire-and-reset mechanism of QIF neurons and, eventually, to reveal general principles of emergent collective behavior in spiking neuron networks. In Section II.2, I proposed such a compensation scheme by taking the QIF's action potential into account, which led to a variety of (a)symmetric pulses that are, unfortunately, analytically intractable, precluding a more detailed study of the network dynamics. Therefore, I approximated those pulses by analytically more favorable pulses \(p_{r,\varphi,\psi}\) in Section II.3, which allowed for exact low-dimensional collective dynamics of globally pulse-coupled QIF neurons (Section III). As their attractors lie on the invariant "Lorentzian" manifold, it sufficed to study the RV dynamics (20) with the mean pulse activity \(P_{r,\varphi,\psi}=\langle p_{r,\varphi,\psi}\rangle\), which are two-dimensional and amenable to a comprehensive bifurcation analysis for instantaneous pulse-coupling through \(p_{r,\varphi,\psi}\). I would like to remark that in the framework of the RV dynamics (20) on the Lorentzian manifold, pulsatile coupling is not restricted to the (smooth) pulses \(p_{r,\varphi,\psi}\) proposed above. It is possible to design different voltage-dependent pulses \(p(v)\) that allow for an accessible mean field \(P(R,V)=\langle p\rangle=\int_{\mathds{R}}p(v)\mathcal{W}(v,t)dv\) closed in \(R\) and \(V\) thanks to the averaging with respect to the invariant Cauchy-Lorentz form (19) of the total voltage density \(\mathcal{W}(v,t)\) of globally coupled QIF neurons. This approach was pursued in [81] using uniform (square) pulses \(p_{\text{sq}}(v)\) that are activated when a neuron's voltage exceeds the value \(v_{\text{on}}\leq\infty\) and terminated right at the spike time, see Eq. (31) with \(v_{\text{off}}=\infty\). By averaging the pulses \(p_{\text{sq}}(v)\) with respect to \(\mathcal{W}(v,t)\), one obtains the mean presynaptic pulse activity \(P_{\text{sq}}=\langle p_{\text{sq}}\rangle\), Eq. (32), which can readily be used as the recurrent input \(I_{\text{syn}}\) in the RV dynamics (20). Importantly, this approach only applies to the dynamics on the Lorentzian manifold, where the voltages of globally coupled QIF neurons are known to be distributed according to the Cauchy-Lorentz density (19). To capture transient dynamics beyond the Lorentzian manifold, one has to resort to the exact low-dimensional system (18) and the mean pulse activity \(\langle p\rangle\) can no longer be found by averaging with respect to \(\mathcal{W}(v,t)\), but has to be determined in terms of the macroscopic variables \(\Phi,\lambda\) and \(\sigma\). This can be achieved, e.g., by following the strategy outlined in Appendix F. In general, this strategy requires certain assumptions on the pulse functions \(p(v)\)--which the smooth pulses \(p_{r,\varphi,\psi}\) given by Eq. (10) satisfy but \(p_{\text{sq}}(v)\) does not (see, e.g., Eq. (4) in [117] and compare with Appendix F)--but eventually allows for a rigorous and exact mean-field reduction, as well as for the mathematical analysis of the collective dynamics, of pulse-coupled networks of spiking neurons.
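The averaging step leading to Eq. (32) can be checked directly by Monte-Carlo sampling from the Lorentzian density (19); the following sketch does so for the case \(v_{\rm off}=\infty\), with assumed values for \(R\), \(V\) and \(v_{\rm on}\).

```python
import numpy as np

# Monte-Carlo sketch: check the closed form P_sq, Eq. (32), against a direct
# average of the square pulse over the Cauchy-Lorentz voltage density (19),
# for the case v_off = infinity (pulse ON for v >= v_on, as for the pulses
# of [81]).  The values of (R, V, v_on) are assumptions for illustration.
rng = np.random.default_rng(0)
tau_m, R, V, v_on = 10.0, 0.05, -0.5, 20.0
a = np.pi * tau_m * R                        # half-width of the density (19)

theta_on = 2 * np.arctan(v_on)               # onset phase
vartheta = (np.pi - theta_on) / 2            # (theta_off - theta_on)/2, theta_off = pi

# closed form, Eq. (32), in the limit v_off -> infinity
P_closed = (np.arctan((V - v_on) / a) + np.pi / 2) / vartheta

# direct average over voltages sampled from the Lorentzian density (19)
v = V + a * rng.standard_cauchy(10**7)
P_mc = (np.pi / vartheta) * np.mean(v >= v_on)
print(P_closed, P_mc)                        # the two estimates should agree closely
```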
### Pulse parameters, delays and gap junctions

In this section, I will discuss in more detail the three parameters--pulse width \(r\), asymmetry \(\varphi\), and shift \(\psi\)--that shape the smooth pulses \(p_{r,\varphi,\psi}\) and, in turn, also the collective dynamics of pulse-coupled neuron networks. The first parameter \(r\in[-1,1]\) tunes the width of the pulse and interpolates between continuous, flat (\(r=-1\)) and discrete, event-triggered (\(r=1\)) synaptic transmission. Even for large \(0\ll r<1\), the pulsatile transmission is smooth and the presynaptic neuron almost always sends out a signal. Still, the larger \(r\), the more localized is this signal around a particular (threshold) voltage \(v\approx v_{\text{thr}}=\tan(\psi/2)\) or, equivalently, a particular central phase \(\theta\approx\psi\). The effect of the emitted pulse (with large \(r\)) on the postsynaptic neuron becomes negligible for voltages farther away from \(v_{\text{thr}}\) (Fig. 7a, right panel). If \(v_{\text{thr}}\) coincides with the QIF spiking threshold \(v_{p}\) at infinity (\(v_{\text{thr}}=v_{p}\to\infty\)), and in the absence of asymmetry (\(\varphi=0\)), the "Rectified-Poisson" (RP) pulses \(p_{r,0,\pi}\) resemble the "Ariaratnam-Strogatz" (AS) pulses \(p_{\text{AS},n}\) that have been frequently employed in the context of \(\theta\)-neuron networks, see, e.g., [23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 68; 69]. The usability of AS pulses, however, is hampered by the series representation of the mean pulse activity \(\langle p_{\text{AS},n}\rangle\). The larger \(n\), the narrower the AS pulse, and the more convoluted \(\langle p_{\text{AS},n}\rangle\), see Appendix D. In fact, \(\langle p_{\text{AS},n}\rangle\) does not allow for a closed-form description in the exact low-dimensional system (18). By contrast, the mean RP pulse activity \(P_{\text{RP},r}=\langle p_{r,0,\pi}\rangle\) can be expressed in terms of the three macroscopic variables \(\Phi,\lambda,\sigma\) that completely determine the collective dynamics. On the Lorentzian manifold, \(P_{\text{RP},r}\) simplifies to a concise function of the firing rate \(R\) and the mean voltage \(V\), see Eq. (23), and allows for a straightforward bifurcation analysis even for arbitrarily narrow pulses (Section IV.2). The second parameter, \(\psi\in[0,2\pi)\), shifts the pulse to the right (\(\psi>\pi\)) or to the left (\(\psi<\pi\)), but does not change its actual shape. For symmetric pulses (\(\varphi=0\)), the pulse is strongest at the phase \(\theta=\psi\), or when the presynaptic voltage \(v\) crosses the "virtual threshold" \(v_{\text{thr}}=\tan(\psi/2)\). The nomenclature of a virtual threshold becomes rigorous for Dirac \(\delta\)-pulses \(p_{1,0,2\arctan(v_{\text{thr}})}\), which are emitted at the moment when \(v=v_{\text{thr}}\). Depending on the sign of \(v_{\text{thr}}\), the pulse is shifted to the right or to the left of the QIF spiking threshold \(v_{p}=\infty\), which can be interpreted as an effective delay or advance of the postsynaptic response, cf. [10, 71]. Indeed, if the pulse reaches its peak when \(v=v_{\text{thr}}\ll 0\), then the effect on the postsynaptic response is strongest after the actual spiking of the presynaptic neuron. On the other hand, for \(v_{\text{thr}}\gg 0\), the emitted pulse is strongest already before the neuron has spiked. For threshold values after the spike, i.e.
\(v_{\text{thr}}<0\), and as expected for delayed synaptic transmission [118, 119], collective oscillations can be found for inhibition (\(J<0\)) and (strong) excitatory drive \(I_{0}>0\) (Fig. 8b and c). The results are qualitatively identical for sufficiently narrow pulses with width \(0\ll r\leq 1\) and threshold value \(v_{\text{thr}}\ll 0\), so that it suffices to report the phase diagram for one set of parameters only (here, \(r=0.95,v_{\text{thr}}=-20\)). As a word of caution I stress that shifted pulses mimic truly delayed pulses only on the single neuron level; on the macroscopic level, they may lead to very distinct collective dynamics. Reminiscent of that is the cusp-shaped region of bistability between two (very) low-activity states for \(J<0\) in Fig. 8(b and c). Note also that in contrast to "real" time delays, instantaneous recurrent coupling with shifted pulses cannot store the neurons' history and transmit it unaltered after the delay. That is why the RV dynamics (20) remain low-dimensional, whereas real delays yield infinite-dimensional dynamics that can entail more complex collective behavior [94, 96]. For advanced (or left-shifted) pulses with threshold values before the spike (\(0\ll v_{\text{thr}}<\infty\)), collective oscillations occur for excitatory coupling (\(J>0\)) and cease via a supercritical Hopf bifurcation when decreasing the coupling strength (Fig. 9), cf. Eq. (30). This insight also explains why the square pulses (31) used in [81], whose mean is shifted to the phase before the actual spike, only induce collective oscillations for excitatory neurons but not in the biologically more plausible case of inhibition (Sections IV.4 and V.1). Moreover, the observed bifurcation scenario for left-shifted pulses resembles the case of gap junction-coupling, cf. Fig. 7 in [104]. Gap junctions are, in general, known to promote neural synchrony and facilitate collective oscillations [27, 104]. QIF neurons that are globally coupled via gap junctions of strength \(J\) (but not via pulses) have the subthreshold dynamics \[\tau_{m}\dot{v}_{j}=v_{j}^{2}+i_{0}+\frac{J}{N}\sum_{k=1}^{N}(v_{k}-v_{j})+I_ {j}. \tag{34}\] On the Lorentzian manifold, the exact RV dynamics corresponding to the microscopic dynamics (34) in the limit \(N\to\infty\) takes on the same form as Eq. (20) when identifying \(V=\langle v_{j}\rangle-J/2\) as a shifted mean voltage, \(I_{0}=i_{0}+J^{2}/4\) and recurrent synaptic input \(I_{\text{syn}}=JV\). Thus, in line with the results of Section IV.1, the onset of collective oscillations in the gap junction-coupled network (34) is neatly explained by an effective voltage-coupling. The similarity between the collective dynamics for advanced pulses on one hand and for gap junctions on the other hand, supports the notion that pulse-interactions indeed induce an effective voltage component in the recurrent coupling. Again a word of caution is due about the correct interpretation of temporally vs. effectively advanced pulses: With instantaneous pulse-coupling, the mean field \(\langle p_{r,0,2\arctan(v_{\text{thr}})}\rangle\) has an immediate effect on individual neurons. That is, emitting an inhibitory pulse before the actual spike may possibly hinder the same neuron to actually spike. By allowing for synaptic kinetics, as in Eqs. (36) or (37) below, one can alleviate this intricacy and indeed interpret the time at which a neuron crosses the virtual threshold \(v_{\text{thr}}\) as the activation time \(t_{a}\) of the postsynaptic response, cf. [10]. 
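Returning to the gap junction-coupled network (34), the reduction described above can be illustrated with a minimal numerical sketch. It assumes that Eq. (20) takes the form \(\tau_{m}\dot{R}=\gamma/(\pi\tau_{m})+2RV\), \(\tau_{m}\dot{V}=V^{2}-(\pi\tau_{m}R)^{2}+I_{0}+I_{\text{syn}}\) (cf. Eq. (40) with \(\hat{g}_{\text{syn}}=0\)), with \(\gamma\) the half-width of the Cauchy-Lorentz input heterogeneity; all parameter values are illustrative and not taken from the paper's figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

# RV dynamics for gap junction-coupled QIF neurons, Eq. (34), after the
# substitutions from the text: V = <v_j> - J/2, I0 = i0 + J^2/4, I_syn = J*V.
tau_m, gamma = 1.0, 0.5
J, i0 = 1.0, 1.0                      # illustrative gap-junction strength
I0 = i0 + J**2 / 4.0

def rhs(t, y):
    R, V = y
    dR = (gamma / (np.pi * tau_m) + 2.0 * R * V) / tau_m
    dV = (V**2 - (np.pi * tau_m * R)**2 + I0 + J * V) / tau_m
    return [dR, dV]

sol = solve_ivp(rhs, (0.0, 200.0), [0.3, 0.0], max_step=0.01, rtol=1e-8)
R_late = sol.y[0][sol.t > 150.0]
# a spread between min and max of the late-time firing rate signals the
# onset of collective oscillations driven by the voltage coupling J*V
print(R_late.min(), R_late.max())
```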
Finally, the asymmetry parameter \(\varphi\in[-\pi,\pi)\) skews the pulse \(p_{r,\varphi,\psi}(\theta)\) and shifts its bulk to the right (\(\varphi>0\)) or to the left (\(\varphi<0\)) of the central phase \(\theta=\psi\); note that \(\varphi\neq 0\) is only effective for pulses of finite width \(r<1\). Already a small value of \(0<|\varphi|\ll 1\) can have a large effect on the network dynamics and easily induce collective oscillations (Figs. 6b and 8d). If the pulses are slightly skewed to the phase after the spike (\(\varphi>0\)), collective oscillations emerge almost naturally for inhibition (\(J<0\)) with sufficient excitatory drive (\(I_{0}>0\)). The effect is similar to that of synaptic kinetics and, indeed, for these right-skewed pulses, one retrieves a fast rise and slower decay of the postsynaptic response (Fig. 4c2) as observed for first- and second-order synapses. For left-skewed pulses (\(\varphi<0\)), by contrast, the mean of the pulse is advanced and collective oscillations emerge only for excitation (\(J>0\); Fig. 9), similar to left-shifted pulses (\(\psi<\pi\), \(v_{\text{thr}}>0\)) or gap junctions.

### Can instantaneous pulses replace complex synaptic transmission?

In contrast to the mathematical abstraction of \(\delta\)-spikes, smooth pulses of finite width have often been assumed to be biologically more realistic and to better approximate postsynaptic responses like those of conductance-based, Hodgkin-Huxley-like neuron models [69; 41; 23]. It seems daunting to include various levels of biochemical realism at a chemical synapse, so in general one skips the explicit modeling of (1) how an increase (depolarisation) in the voltage \(v_{\rm pre}\) of a presynaptic neuron activates voltage-gated Ca\({}^{2+}\) channels, (2) how the Ca\({}^{2+}\)-influx induces the release of neurotransmitters that diffuse to the postsynaptic neuron and bind to specific receptors with different possible mechanisms [120], and (3) how the binding of neurotransmitters triggers the opening of ionic channels and eventually generates a postsynaptic current. Instead, the synaptic process is described phenomenologically--but not derived from first principles [107]--by the synaptic input \(I_{\rm syn}\) to the postsynaptic neuron, \[I_{\rm syn}=-g_{\rm syn}\big{(}v_{\rm pre}(t),t\big{)}\big{[}v_{\rm post}(t)-E_{\rm syn}\big{]}\, \tag{35}\] with reversal potential \(E_{\rm syn}\) and a synaptic conductance that is often represented as \(g_{\rm syn}(t)=\hat{g}_{\rm syn}s(t)\) with maximal synaptic conductance \(\hat{g}_{\rm syn}\geq 0\) and a gating variable \(s(t)\) that may be interpreted as the fraction of open channels releasing neurotransmitters. If the synaptic conductance is activated by a (sigmoidal-like) function \(f(v_{\rm pre})\) of the presynaptic membrane potential, as discussed in Section II.2, and follows first-order synaptic kinetics, the dynamics of the gating variable \(s(t)\) is given by \[\dot{s}=a_{r}f\big{(}v_{\rm pre}(t-\tau_{l})\big{)}(1-s)-a_{d}s. \tag{36}\] The constants \(a_{r}\) and \(a_{d}\) determine the rise and decay times of the postsynaptic response [62; 64; 108; 4] and a possible latency time \(\tau_{l}\) can account for finite axonal propagation times [108]. Depending on the shape of the presynaptic action potential, \(s(t)\) can actually begin to rise before the presynaptic voltage reaches its peak (corresponding to the neuron's spike time \(T_{\rm pre}\)), especially when the action potential is broad [10]. Alternatively to Eq.
(36), the time course of \(s(t)\) can be described by the difference of two exponential functions, \[s(t)=A\big{[}e^{-(t-t_{a})/\tau_{r}}-e^{-(t-t_{a})/\tau_{d}}\big{]},\ t\geq t_{a}, \tag{37}\] with amplitude \(A\) and activation time \(t_{a}\), which is typically around the spike time \(T_{\rm pre}\) of the presynaptic neuron plus some latency \(\tau_{l}\) (possibly due to finite axonal propagation speed and taking into account that the postsynaptic response may start before \(T_{\rm pre}\)). Biexponential synapses (37) with characteristic latency, rise, and decay time constants \(\tau_{l,r,d}\) are the gold standard in computational models of spiking neurons--they allow for mean-field approaches [82; 83; 88; 121] and are typically reported in the experimental neuroscience literature (although there are no coherent definitions for \(\tau_{l,r,d}\)). In network models of spiking QIF or \(\theta\)-neurons, pulses \(p_{r,\varphi,\psi}\) of finite width (\(r<1\)), as described by Eq. (10), are an ideal candidate for the presynaptic voltage-dependent activation, or release, function \(f(v_{\rm pre})\) in Eq. (36). The pulse is activated already shortly before the QIF neuron reaches the peak of its action potential (Fig. 4) and can last even through its recovery period, see Section II.2. The versatility of the pulses \(p_{r,\varphi,\psi}\) further makes it possible to accentuate the synaptic activation, or the release of neurotransmitters, on either phase of the action potential through the asymmetry parameter \(\varphi\neq 0\) or the shift parameter \(\psi\neq\pi\). Thereby, it is possible to account for physiological conditions under which the opening of voltage-gated Ca\({}^{2+}\) channels, and consequently also neurotransmitter release, is advanced, e.g., at increased temperature [123; 124; 125; 126; 127]. When including synaptic kinetics of the form (36) that are activated by voltage-dependent pulses, one has to be careful how to incorporate the corresponding postsynaptic responses in mean-field models. Indeed, when summing over a large number of postsynaptic responses \(s_{j}\), the product \(f(v_{\rm pre,j})(1-s_{j})\) with the presynaptic pulses \(f(v_{\rm pre,j})\) presents a nonlinear problem that can only be resolved approximately6, even for the analytically tractable pulses \(f(v_{\rm pre,j})=p_{r,\varphi,\psi}(\theta_{j})\). Biexponential synapses (37) do not suffer from this shortcoming, which may explain their success in (mean-field models in) computational neuroscience. As the postsynaptic response \(s_{j}\) is typically activated at the presynaptic spike time \(T_{j}\), it is advantageous to rewrite (37) as \[\tau_{d}\dot{s}_{j}=-s_{j}+u_{j},\ \tau_{r}\dot{u}_{j}=-u_{j}+\tau_{0}p_{1,0,\pi}\big{(}\theta_{j}(t-\tau_{l})\big{)}, \tag{38}\] with normalization factor \(\tau_{0}\); one retrieves (37) for \(\tau_{0}=A(\tau_{d}-\tau_{r})/(2\pi\tau_{r}\tau_{d})\).

Footnote 6: When summing the responses \(s_{j}\) over all presynaptic neurons \(j\) with individual responses given by Eq. (36), the mean response reads \(\dot{S}=\langle\dot{s}_{j}\rangle=a_{r}\langle p_{r,\varphi,\psi}\rangle-a_{d}S-a_{r}\langle p_{r,\varphi,\psi}(\theta_{j})s_{j}\rangle\). The last term \(\langle p_{r,\varphi,\psi}(\theta_{j})s_{j}\rangle\) represents an average of the product of the presynaptic pulse with the current state of the postsynaptic response. In general, \(p_{r,\varphi,\psi}(\theta_{j})\) and \(s_{j}\) are not independent but correlated, so that taking the mean of their product does not yield a closed equation in terms of \(S\) and \(\langle p_{r,\varphi,\psi}\rangle\). For \(\delta\)-spikes (in the limit \((r,\varphi,\psi)\to(1,0,\pi)\)) and under the Poissonian assumption that presynaptic spike trains have a coefficient of variation (CV) close to 1, one can approximate \(\langle p_{1,0,\pi}(\theta_{j})s_{j}\rangle\approx(\pi\tau_{m}R)S\), cf. [128], and possibly loosen the Dirac \(\delta\)-pulse assumption to obtain \(\langle p_{r,\varphi,\psi}(\theta_{j})s_{j}\rangle\approx\langle p_{r,\varphi,\psi}\rangle S\) for \(r\) close to 1. Yet, the Poissonian assumption, and hence the foregoing approximation, is difficult to justify in strongly correlated collective states, e.g., of regular synchrony [73].

To avoid infinite-dimensional time-delayed neuronal dynamics when \(\tau_{l}>0\), one can follow [10] and use shifted Dirac \(\delta\)-pulses \(p_{1,0,2\arctan(v_{\rm thr})}\) with a negative virtual threshold \(v_{\rm thr}<0\) that effectively delay the postsynaptic response. Likewise, one can use positive virtual thresholds \(v_{\rm thr}>0\) to account for activation times of the synapse before the actual spike time, \(t_{a}<T_{j}\). Since there are no nonlinear terms in (38), one can readily average the postsynaptic responses over the population and obtain a concise mean-field model. While I have shown how to interpret, and incorporate, the novel pulse function \(p_{r,\varphi,\psi}(\theta)\) in the traditional framework of synaptic transmission, one question remains: are instantaneous pulses of finite width adequate for replacing the complex processes involved in generating a postsynaptic current \(I_{\text{syn}}(t)\) of the form Eq. (35)? In other words, can instantaneous pulses \(p_{r,\varphi,\psi}\) approximate (the effect of) \(I_{\text{syn}}(t)\) sufficiently well as previously hypothesized? To answer this question, I will decompose the problem into two parts because \(I_{\text{syn}}\) comprises two distinct mechanisms: _synaptic kinetics_ (encoded in the dynamics of \(s\)) and _conductance-based synapses_ (due to the explicit voltage-dependence in \([E_{\text{syn}}-v_{\text{post}}(t)]\)). As I will argue below, instantaneous pulses of finite width are not suited to replace either synaptic kinetics (Section V.3.1) or conductance-based synapses (Section V.3.2) in networks of QIF or \(\theta\)-neurons, but should rather be used as a complement to traditional synaptic transmission (Section V.3.3).

#### v.3.1 Pulses of finite width do not replace synaptic kinetics

To focus on the effect of synaptic kinetics, it is convenient to approximate the term \(E_{\text{syn}}-v_{\text{post}}(t)\approx v_{\text{eff}}\) in Eq. (35) by an effective potential \(v_{\text{eff}}\). This approximation is valid if \(v(t)\) spends most of the time near its rest state, and the recurrent synaptic input \(I_{\text{syn}}\) in the QIF dynamics (15) is given by \(I_{\text{syn}}(t)=JS(t)\) with \(J=\hat{g}_{\text{syn}}v_{\text{eff}}\) and \(S(t)=\langle s_{j}(t)\rangle\). Depending on the sign of \(v_{\text{eff}}\), the coupling \(J\) is excitatory (\(v_{\text{eff}}>0\)) or inhibitory (\(v_{\text{eff}}<0\)).
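For orientation, here is a minimal numerical sketch of the second-order kinetics (38) driven by a single presynaptic spike. The Dirac delta is approximated by a short rectangular pulse of area \(\tau_{0}\), and all parameter values are illustrative rather than taken from the paper.

```python
import numpy as np

tau_r, tau_d, tau_0 = 0.5, 5.0, 1.0   # ms; illustrative values
dt, T = 1e-4, 30.0                    # ms
n = int(T / dt)

s, u = 0.0, 0.0
spike_width = 10 * dt                 # width of the delta approximation
trace = np.empty(n)
for i in range(n):
    t = i * dt
    # rectangular approximation of tau_0 * delta(t) at the spike time t = 0
    p = (tau_0 / spike_width) if t < spike_width else 0.0
    u += dt * (-u + p) / tau_r        # fast rise variable of Eq. (38)
    s += dt * (-s + u) / tau_d        # slow decay variable of Eq. (38)
    trace[i] = s

print(trace.max())  # peak of the biexponential-shaped response s(t)
```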
The question is now whether the time course of \(s_{k}(t)\) as a postsynaptic response to the spiking of presynaptic neuron \(k\) can be equally well explained with an instantaneous pulse \(s_{k}(t)=p_{r,\varphi,\psi}(\theta_{k}(t))\) or with a \(\delta\)-spike-triggered biexponential synapse (37) with realistic latency, rise, and decay time constants. For simplicity, I consider two inhibitory QIF neurons coupled via shifted pulses \(p_{r,0,\psi}\) of finite width \(r=0.95\) (Fig. 8a, blue curve). The membrane time constant is fixed at \(\tau_{m}=10\)ms. Neuron 1 receives a constant input \(I_{1}=0.5\) and spikes at frequency \(100/(\sqrt{2}\pi)\approx 22.5\) Hz, whereas neuron 2 receives \(I_{2}=-1\) and remains quiescent at its resting potential \(v_{2,\text{rest}}=-1\). The shift parameter \(\psi\) is chosen such that the pulse is strongest when the voltage \(v\) of the presynaptic neuron crosses the virtual threshold \(v_{\text{thr}}=-20=\tan(\psi/2)\) after recovering from its reset. In this example, a pulse emitted by neuron 1 is strongest \(\sim 0.5\)ms after its spike and the postsynaptic current of neuron 2 is proportional to \(s_{1}(t)=p_{0.95,0,-2\arctan 20}(\theta_{1}(t))\). The postsynaptic potential (PSP) \(v_{2}(t)\) evolves according to \(\tau_{m}\dot{v}_{2}=v_{2}^{2}-1+Js_{1}(t)\) with \(J=-\pi\) and exhibits a rapid rise and more moderate decay with its peak at \(\sim\)1.08ms after the presynaptic spike. One can fit the voltage response \(v_{2}(t)\) of neuron 2 to that produced by a biexponential synapse (37) activated at the time of the presynaptic spike at \(t=0\) with \(\tau_{l}=0\)ms. The agreement between the voltage responses (PSPs) to the instantaneous pulse and to the biexponential synapse is remarkable, even though the perceived PSCs are quite different (Fig. 10a). Nonetheless, the fitted rise and decay times, \(\tau_{r}=0.025\)ms and \(\tau_{d}=0.55\)ms, are by far shorter than those reported in the literature (e.g., \(\tau_{r}=0.5\)ms and \(\tau_{d}=5\)ms in [64, 83]). Furthermore, the durations from the presynaptic spike until the peak of the PSP differ by 0.2ms (peak pulse response at 1.08ms vs. peak synaptic response at 1.28ms). This calls for introducing a latency time constant of \(\tau_{l}=0.2\)ms, which seems a reasonable value. For wide pulses (\(r=0.75\)), however, the fitted latency increases and the postsynaptic response will be prompted at very low presynaptic voltage thresholds \(v_{\text{thr}}\approx 0\) long before the actual spiking threshold. Besides unrealistic synaptic time constants, another shortcoming of instantaneous pulses is the impossibility of integrating rapidly incoming inputs. While this is not a problem for very fast synaptic kinetics (with short rise and decay time constants \(\tau_{r,d}\)), synaptic integration becomes important for realistic synaptic decay of around \(\tau_{d}=5\)ms. In the context of the two neurons from before, increasing the input of neuron 1 to \(I_{1}=50\) induces spiking at a faster frequency of \(100\sqrt{50}/\pi\approx 225\) Hz. In the case of synaptic kinetics, the postsynaptic neuron 2 integrates the subsequent spikes from neuron 1 and both PSC and PSP exhibit increased baseline levels. The inhibitory effect on the postsynaptic voltage \(v_{2}\) is thus significantly larger compared to instantaneously transmitted pulses of finite width (Fig. 10b).
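The two-neuron experiment can be reproduced qualitatively with the sketch below, integrating the \(\theta\)-neuron form of the QIF dynamics (\(v=\tan(\theta/2)\)). Since Eq. (10) defining \(p_{r,\varphi,\psi}\) is not restated here, an assumed Poisson-kernel profile with unit circular mean serves as a stand-in for the symmetric pulse \(p_{r,0,\psi}\); the resulting PSP therefore agrees only qualitatively with Fig. 10.

```python
import numpy as np

tau_m = 10.0                              # ms, as in the example above
r, psi = 0.95, 2.0 * np.arctan(-20.0)     # width and shift, v_thr = tan(psi/2) = -20
J, I1, I2 = -np.pi, 0.5, -1.0

def pulse(theta):
    # ASSUMED stand-in for p_{r,0,psi}: a Poisson kernel with unit mean on
    # the circle, peaked at theta = psi (not the paper's exact Eq. (10))
    return (1 - r**2) / (1 + r**2 - 2 * r * np.cos(theta - psi))

dt, T = 1e-3, 100.0                       # ms
theta1, theta2 = -np.pi / 2, -np.pi / 2   # neuron 2 starts at rest, v = -1
v2_min = np.tan(theta2 / 2)
for _ in range(int(T / dt)):
    s1 = pulse(theta1)                    # instantaneous pulse from neuron 1
    dth1 = ((1 - np.cos(theta1)) + (1 + np.cos(theta1)) * I1) / tau_m
    dth2 = ((1 - np.cos(theta2)) + (1 + np.cos(theta2)) * (I2 + J * s1)) / tau_m
    theta1 += dt * dth1
    theta2 += dt * dth2
    v2_min = min(v2_min, np.tan(theta2 / 2))  # track the PSP deflection

print(v2_min)   # strongest (inhibitory) deflection of neuron 2's voltage
```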
In sum, instantaneous pulses of finite width can have a similar effect on individual postsynaptic voltage responses as biexponential synapses, but the comparison is misleading. First, the fitted time constants of biexponential synapses are far from realistic and, second, instantaneous pulses cannot integrate inputs, which is an important hallmark of synaptic kinetics. Hence, instantaneous pulses \(p_{r,\varphi,\psi}\) do not replace synaptic kinetics.

#### v.3.2 Pulses do not describe conductance-based synapses

Can instantaneous pulses of finite width substitute for conductance-based synapses? To study the additional voltage-dependence in Eq. (35), I will focus on instantaneous conductance-based synapses, that is, \(g_{\text{syn}}(t)=(\hat{g}_{\text{syn}}/N)\sum_{k,\ell}\delta(t-T_{\ell}^{k})\) is proportional to the spike trains of the presynaptic neurons, where \(T_{\ell}^{k}\) denotes the \(\ell\)-th spike time of neuron \(k\). The voltage dynamics (15) of globally coupled QIF neurons then becomes \[\tau_{m}\dot{v}_{j}=v_{j}^{2}+I_{0}-\hat{g}_{\text{syn}}\tau_{m}R(t)[v_{j}-E_{\text{syn}}]+I_{j}(t) \tag{39}\] with \(I_{j}\) as in Eq. (15b) and the same fire-and-reset rule at infinity as before. For positive maximal synaptic conductance \(\hat{g}_{\text{syn}}>0\), the reversal potential \(E_{\text{syn}}\) determines whether the (net effect of the) recurrent coupling is excitatory (\(E_{\text{syn}}>0\)) or inhibitory (\(E_{\text{syn}}<0\)). As in Section V.3.1 about synaptic kinetics, one can compare the postsynaptic voltage response to a presynaptic periodically spiking neuron when the neurons interact via (\(\delta\)-spike-activated) conductance-based synapses or via (instantaneous) pulses of finite width. With \(I_{1,2}\) as before and setting \(\hat{g}_{\text{syn}}(E_{\text{syn}}-v_{\text{rest}})=J\) with \(v_{\rm rest}=-\sqrt{-I_{2}}=-1\), the PSC for the conductance-based synapse (39) is identical to a \(\delta\)-spike of strength \(J\), and the voltage responses can only be approximated by sufficiently narrow pulses \(p_{r,\varphi,\pi}\) with \(r\to 1\). When augmenting the conductance-based synapse (39) by second-order synaptic kinetics, that is, when \(R(t)\) in Eq. (39) is replaced by \(s_{j}(t)\) according to Eq. (38), the resulting postsynaptic response almost coincides with that of the biexponential synapse (Fig. 10). Therefore, the pulse-approximation of the second-order conductance-based synapse suffers from the same shortcomings--unrealistic synaptic time constants and impossibility of synaptic integration--as is the case for the biexponential synapse. On top of that, the pulse-approximation of conductance-based synapses becomes problematic in networks of (heterogeneous) QIF neurons even though the PSP amplitude of an individual neuron receiving conductance-based synaptic input matches the PSP for narrow pulse-coupling (Fig. 10). But when a group of neurons receives heterogeneous inputs, they have variable resting potentials and, consequently, the resulting PSP amplitudes differ across the network. The PSPs may even vary in time as they depend on the current state of each neuron. This feature of neural networks with conductance-based synapses can hardly be implemented with pulses of a predefined shape that is, moreover, common to all neurons. In addition, there are significant differences between instantaneous pulse-coupling and conductance-based synapses with respect to the collective dynamics.
As shown in Section IV, instantaneous pulses of finite width generate, in general, collective oscillations in large networks of QIF neurons (albeit the parameter regions for symmetric pulses may seem degenerate). By contrast, globally coupled QIF neurons with instantaneous conductance-based synapses (39) do not support collective oscillations that emerge via a Hopf bifurcation from an asynchronous state. The proof follows the same lines as in Section IV.1: On the Lorentzian manifold, the RV dynamics (20) for the microscopic dynamics (39) are \[\tau_{m}\dot{R} =\frac{\gamma}{\pi\tau_{m}}+2RV-\hat{g}_{\rm syn}\tau_{m}R^{2}\;, \tag{40a}\] \[\tau_{m}\dot{V} =V^{2}-(\pi\tau_{m}R)^{2}+\hat{g}_{\rm syn}\tau_{m}R[E_{\rm syn}-V]+I_{0}\;. \tag{40b}\] The fixed-point solutions \((R^{*},V^{*})\) of Eq. (40) satisfy \(V^{*}=\hat{g}_{\rm syn}\tau_{m}R^{*}/2-\gamma/(2\pi\tau_{m}R^{*})\). A necessary condition for the oscillatory instability of \((R^{*},V^{*})\) via a Hopf bifurcation is that the trace of the Jacobian \(\mathrm{Jac}_{\mathrm{CoBa}}\) of (40) vanishes. Here, however, \[\mathrm{tr}(\mathrm{Jac}_{\mathrm{CoBa}})=-\frac{2\gamma}{\pi\tau_{m}R^{*}}-\hat{g}_{\mathrm{syn}}\tau_{m}R^{*}<0\] is always negative because \(\gamma,\hat{g}_{\mathrm{syn}},\tau_{m}R^{*}>0\) by definition. Hence, collective oscillations never occur through a Hopf bifurcation in networks of globally coupled QIF neurons with instantaneous conductance-based synapses that are triggered by presynaptic spikes. In conclusion, pulses of finite width do not account for conductance-based synapses, either.

Figure 10: Pulse-coupling (red) does not replace traditional synaptic transmission via current-based (CuBa, blue) or conductance-based (CoBa, green) synapses with synaptic kinetics. Top: presynaptic voltage traces \(v_{j}\) of periodically spiking neuron \(j=1\) (blue) and excitable neuron \(j=2\) with constant input \(I_{2}=-1\) and resting potential \(v_{2,\rm rest}=-1\) (orange). Middle: Postsynaptic currents elicited by presynaptic neuron 1 according to a pulse \(Jp_{0.95,0,-2\arctan(20)}(\theta_{1})\), CuBa synapse \(Js_{1}(t)\) or CoBa synapse \(\hat{g}_{\rm syn}s_{1}(t)[E_{\rm syn}-v_{2}(t)]\), where \(s_{1}(t)\) follows Eq. (38) with \(\tau_{0}=1,\tau_{l}=0\). For the inhibitory synapse, \(J=-\pi\) and \(\hat{g}_{\rm syn}=1.12\pi/4\) are chosen such that the PSPs (bottom) almost coincide; \(E_{\rm syn}=-5\) and \(\hat{g}_{\rm syn}v_{\rm eff}\approx J\) with \(v_{\rm eff}=(E_{\rm syn}-v_{2,\rm rest})=-4\). Bottom: Postsynaptic potential/voltage response \(v_{2}(t)\) according to the PSCs. (a) The voltage trace \(v_{2}(t)\) in response to a single spike (\(I_{1}=0.5\)) can be sufficiently well described by pulses and by more complex synaptic dynamics with short synaptic time constants \(\tau_{r}=0.025\)ms and \(\tau_{d}=0.55\)ms. (b) Synaptic integration of rapidly incoming spikes (\(I_{1}(t)=50\) for \(t\leq 30\)ms and 0 afterwards) is characteristic for CuBa and CoBa synapses with realistic decay \(\tau_{d}=5\)ms, but not feasible with instantaneous pulses \(p_{r,\varphi,\psi}\).

#### v.3.3 Pulse-triggered synaptic kinetics

Synaptic kinetics need not be triggered by \(\delta\)-spikes, but can also be initiated by general pulses \(p_{r,\varphi,\psi}\), which can be thought of as a combination of Eqs. (36) and (38): Replacing the \(\delta\)-spikes in Eq.
(38) by general pulses \(p_{r,\varphi,\psi}\), leads to the microscopic synaptic dynamics \[\tau_{d}\dot{s}_{j}=-s_{j}+u_{j},\quad\tau_{r}\dot{u}_{j}=-u_{j}+p_{r,\varphi, \psi}\big{(}\theta_{j}(t)\big{)}. \tag{41}\] The response dynamics (41) triggered by a narrow and possibly asymmetric pulse is more general than the conventional \(\delta\)-spike-triggered biexponential synapse (37). First, it connects the response with the presynaptic action potential in a continuous manner, and thereby avoids the open discussion at which instant the synaptic response is actually triggered: Does the activation time \(t_{a}\) in Eq. (37) denote the peak voltage of the action potential or a seemingly arbitrary threshold value? And second, the pulse-triggered second-order dynamics (41) can be used to fit more complex, experimentally verified impulse responses, e.g., for hippocampal neurons with an impulse response given by a multi-exponential function [129] \[s(t)=s_{\mathrm{max}}\{1-\exp[-(t-t_{a})/\tau_{r}]\}^{x}\exp[-(t-t_{a})/\tau_{ d}],\] or, in the realm of biomechanics, for motor-unit twitches upon motoneuron discharge, whose impulse response is given by a generalized alpha-function [130; 131; 132] \[s(t)=s_{\mathrm{max}}(t-t_{a})^{x}\exp[-(t-t_{a})/\tau_{d}].\] In both cases, \(x\) is a real-valued parameter (and not an integer), so that the dynamics of \(s(t)\) cannot be described with (analytically tractable) differential equations. On the network level, pulse-triggered synaptic kinetics (41) further permit a concise mean-field reduction of the collective dynamics as before. For global coupling of strength \(J\), the recurrent synaptic input is given by \(I_{\mathrm{syn}}=JS(t)\) with \(S(t)=\langle s_{j}(t)\rangle\). Setting \(\tau_{0}=1\) and \(\tau_{l}=0\) in Eq. (41) ensures that the macroscopic fixed points satisfy \(S^{*}=P_{r,\varphi,\psi}(R^{*},V^{*})\), which allows for a direct comparison with instantaneous synaptic transmission in the limit \(\tau_{r}=\tau_{d}\to 0\), see Fig. 8(d) for an example; a comprehensive comparison, however, is beyond the scope of this paper. The exact, augmented RV dynamics of globally coupled QIF neurons with second-order synaptic kinetics (41) read on the Lorentzian manifold \[\tau_{m}\dot{R} =\frac{\gamma}{\pi\tau_{m}}+2RV\;, \tag{42a}\] \[\tau_{m}\dot{V} =V^{2}-(\pi\tau_{m}R)^{2}+I_{0}+JS(t)\;,\] (42b) \[\tau_{d}\dot{S} =-S+U,\] (42c) \[\tau_{r}\dot{U} =-U+P_{r,\varphi,\psi}(R,V)\;, \tag{42d}\] with the mean presynaptic activity \(P_{r,\varphi,\psi}\) fully determined in terms of \(R\) and \(V\) according to Eq. (22). For instantaneous rise time \(\tau_{r}\to 0\), Eq. (42) describes the RV dynamics with first-order synaptic kinetics (with exponential decay \(\tau_{d}\)). For \(\tau_{r}=\tau_{d}\), the second-order synaptic kinetics of the RV dynamics (42) reduces to that of the so-called alpha-synapse. For the sake of completeness, I also present the pulse-triggered conductance-based RV dynamics with synaptic kinetics by combining Eqs. (40) to (42): \[\tau_{m}\dot{R} =\frac{\gamma}{\pi\tau_{m}}+2RV-\hat{g}_{\mathrm{syn}}RS\;, \tag{43a}\] \[\tau_{m}\dot{V} =V^{2}-(\pi\tau_{m}R)^{2}+\hat{g}_{\mathrm{syn}}S\big{[}E_{ \mathrm{syn}}-V\big{]}+I_{0}\;,\] (43b) \[\tau_{d}\dot{S} =-S+U,\] (43c) \[\tau_{r}\dot{U} =-U+P_{r,\varphi,\psi}(R,V)\;; \tag{43d}\] similar conductance-based RV dynamics were reported in [81], but with a synaptic variable \(S(t)\) for non-smooth pulses and first-order synaptic kinetics (\(\tau_{r}=0\)); see also [97; 98; 99; 91; 92]. 
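As a quick numerical companion to Sections V.3.2 and V.3.3, the sketch below (with purely illustrative parameter values) first verifies the negative Jacobian trace of Eq. (40) at a fixed point, and then integrates the augmented RV dynamics (42) with first-order kinetics (\(\tau_{r}\to 0\)) in the \(\delta\)-spike limit, for which the mean pulse activity is approximated as \(P=\pi\tau_{m}R\), consistent with the approximation of \(\langle p_{1,0,\pi}\rangle\) used above.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import solve_ivp

# (i) Instantaneous conductance-based synapses, Eq. (40): at a fixed point,
#     tr(Jac_CoBa) = -2*gamma/(pi*tau_m*R*) - g_syn*tau_m*R* is negative,
#     so no Hopf bifurcation can occur.
tau_m, gamma, g_syn, E_syn, I0 = 1.0, 0.5, 1.0, -5.0, 2.0

def fp(R):  # fixed-point condition from Eq. (40b), with V* from Eq. (40a)
    V = g_syn * tau_m * R / 2 - gamma / (2 * np.pi * tau_m * R)
    return V**2 - (np.pi * tau_m * R)**2 + g_syn * tau_m * R * (E_syn - V) + I0

R_star = brentq(fp, 1e-6, 10.0)
print(-2 * gamma / (np.pi * tau_m * R_star) - g_syn * tau_m * R_star)  # < 0

# (ii) Augmented RV dynamics (42) with first-order kinetics in the
#      delta-spike limit, inhibitory coupling with excitatory drive
tau_m, tau_d, gamma, J, I0 = 10.0, 5.0, 0.5, -8.0, 5.0  # cf. Fig. 11(a)

def rhs(t, y):
    R, V, S = y
    return [
        (gamma / (np.pi * tau_m) + 2 * R * V) / tau_m,
        (V**2 - (np.pi * tau_m * R)**2 + I0 + J * S) / tau_m,
        (-S + np.pi * tau_m * R) / tau_d,       # P = pi*tau_m*R
    ]

sol = solve_ivp(rhs, (0.0, 500.0), [0.05, 0.0, 0.0], max_step=0.1)
R_late = sol.y[0][sol.t > 400.0]
print(R_late.min(), R_late.max())  # a spread signals collective oscillations
```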
Preliminary results suggest that the additional synaptic kinetics in the augmented RV dynamics (42) or (43) blur the effect of the pulse shape \(p_{r,\varphi,\psi}\) on the collective dynamics, especially if the pulse is narrow (Fig. 11). It may hence suffice to resort to the conventional \(\delta\)-spike-interactions, setting \((r,\varphi,\psi)\to(1,0,\pi)\), when also studying synaptic kinetics. A comprehensive analysis of the augmented RV dynamics (42) and (43) shall clarify this hypothesis, which I leave for future work.

## VI Conclusions & Outlook

Spiking neural networks are well-established in the neurosciences and a powerful tool in understanding cortical information processing, which originates from the exchange of action potentials between neurons. For computational advantages, mathematical tractability, and rigorous analysis, these networks use simple spiking neuron models that replicate the essential features of real neural dynamics, while interactions between neurons are modeled with infinitely narrow pulses ("spikes") that do not capture the more complex dynamics of real synapses. In support of the spike-assumption, recent studies have led to the belief that the shape of the action potential is indeed dispensable both on an individual level--as (generalized leaky) integrate-and-fire (GLIF) models, which do not provide access to (realistic) action potentials, were shown to preserve intrinsic properties of single neurons and characteristic features of their spike generation observed in experiments [133, 134, 135]--as well as on a network level--as experimentally observed population activity was faithfully captured not only by multi-compartment Hodgkin-Huxley-like models, but equally well by GLIF point neurons [136, 137, 138]. It may be true that the shape of the action potential hardly affects the network dynamics, at least for uncoupled neurons. But as soon as neurons are synaptically connected, the spotlight turns on the action potential. The shape of pulsatile synaptic transmission between neurons directly depends on the action potential and can, ultimately, critically affect the network dynamics. By taking the shape of these interactions explicitly into account, the modeling framework proposed here enables a new perspective on pulse shape and voltage-dependent synchronization of spiking neuron networks that has remained concealed for \(\delta\)-spike-interactions. In the first part of this paper, I have proposed a rigorous and biologically plausible interpretation of smooth pulsatile synaptic transmission in networks of spiking QIF and \(\theta\)-neurons. Pulses represent the interplay between a presynaptic action potential and a synaptic activation function \(p\) that needs to account for the simplified spiking behavior of the QIF and \(\theta\)-neuron models. Eventually, pulses should be interpreted as a continuous generalization of the conventionally used \(\delta\)-spikes to install neurotransmitter-based chemical synapses. When pulse-coupled networks of \(\theta\)-neurons are meant to replicate (weakly) connected Class 1 excitable neurons, the pulses must be sufficiently narrow as they reflect an instantaneous synaptic transmission process that can include both the presynaptic and the postsynaptic site; this interpretation carries over to QIF neurons (with infinite reset and threshold values) through the forward transformation \(v=\tan(\theta/2)\).
Alternatively, and not necessarily on the premise that the neurons are Class 1 and weakly connected, QIF neurons emit pulses of arbitrary shape that can describe voltage-gated conductances, or the release of neurotransmitters, at the presynaptic site; this interpretation carries over to \(\theta\)-neurons via the inverse transformation \(\theta=2\arctan v\). In the second part, I have put forward an exact low-dimensional macroscopic description for large networks of globally coupled QIF or \(\theta\)-neurons interacting via smooth pulses of various shapes that approximate the previously justified pulses. The modeling framework allows for incorporating the recurrent synaptic input, mediated by a general family of pulse functions \(p_{r,\varphi,\psi}\), in terms of a few macroscopic variables. Thereby, one obtains system (18) of three complex-valued ordinary differential equations that exactly describes the collective dynamics in the thermodynamic limit. In the presence of (independent Cauchy white) noise or (Cauchy-Lorentz distributed) heterogeneity, the collective dynamics converges to an invariant manifold [45], the so-called Lorentzian manifold [49], on which the recurrent synaptic input is fully determined by the population firing rate \(R\) and the mean voltage \(V\). On this manifold, the firing rate and voltage dynamics--the "RV dynamics" Eq. (20)--are closed in \(R\) and \(V\), remain two-dimensional for instantaneous pulse-coupling, and can readily be analyzed with respect to emergent collective behavior. For instantaneous synaptic interactions, I have proved that collective oscillations can only emerge when the recurrent input includes a voltage component. This is the case, e.g., for electrical coupling via gap junctions. In the absence of gap junctions, the recurrent synaptic input can incorporate a voltage component--and hence allows for collective oscillations--if pulses transmitted via chemical synapses have a finite width or if the pulse peaks at a moment different from the neuron's spike time. This insight strongly supports the voltage-dependent spike synchronization mechanism [89], i.e. a resonance in the neurons' membrane and spiking dynamics [139], that is crucial for collective oscillations and typically not captured by traditional firing rate models.

Figure 11: Collective oscillations among inhibitory QIF neurons due to first-order synaptic kinetics of (a) current-based and (b) conductance-based synapses are affected little by the pulse shape \(p_{r,\varphi,\psi}\). The Hopf bifurcation boundaries of RP pulses (\(r=0.95,\varphi=0,\psi=\pi\); blue) and of delayed \(\delta\)-pulses (\(r=1,\varphi=0,\psi=2\arctan(-20)\); violet) coincide almost perfectly with the one for \(\delta\)-spikes (\(r=1,\varphi=0,\psi=\pi\); black dashed); for asymmetric pulses (\(r=0.95,\varphi=\pi/12,\psi=\pi\); red) the Hopf curve is slightly shifted to the right. Collective oscillations are found in the “SYNC” region to the right of the Hopf boundaries (the shaded region indicates collective oscillations for the red asymmetric pulses). The Hopf boundaries are supercritical and were detected from Eqs. (42) and (43) with \(\tau_{m}=2\tau_{d}=10\)ms and \(\tau_{r}=0\). Note that the \(y\)-axis in (b) denotes the reversal potential \(E_{\rm syn}\) of conductance-based synapses with maximal synaptic conductance \(\hat{g}_{\rm syn}=1\); \(E_{\rm syn}\) plays a remarkably similar role as the coupling strength \(J\) for current-based synapses in (a).
Symmetric pulses centered about the spike time support collective oscillations in principle, but the parameter region where collective oscillations can be found appears somewhat degenerate due to the close vicinity of a Hopf bifurcation curve--where oscillations emerge supercritically--and a homoclinic bifurcation curve--at which oscillations are destroyed. Relaxing the symmetry condition of the pulse, or shifting the pulse peak away from the spike time, generates a wide region in parameter space where collective oscillations arise naturally and as the unique attractor of the network dynamics. I have shown in networks of inhibitory QIF neurons with excitatory drive (\(J<0,I_{0}>0\)) that shifting the bulk of the pulse only slightly to the phase after the actual spike yielded robust ING oscillations. Moreover, the collective dynamics generated by instantaneous asymmetric pulses resembled those generated by first-order synapses with \(\delta\)-spikes [89]. In general, however, the correspondence between pulses of finite width and more detailed models of synaptic transmission is elusive. Put differently, (narrow) pulse-coupling complements, but does not replace, synaptic kinetics and conductance-based synapses. As an outlook and to bring the presented formalism even closer to experimental data, I leave for future work the analysis of pulse-triggered synaptic kinetics or conductance-based synapses, see the augmented RV dynamics (42) and (43), as well as the incorporation of finite threshold and reset values and asymmetric spikes with \(v_{p}\neq-v_{r}\) as considered in [140; 141], see also Appendix G. I mention in passing that one can also transform the QIF dynamics around realistic resting potentials \(\approx\!-70\)mV, see Appendix B, without changing the overall collective dynamics qualitatively. In the current work, I considered neither habituation nor activity-dependent modulation of synaptic transmission, which allowed me to disentangle the effect of the pulse shape on the collective dynamics. To do so, I introduced voltage-dependent pulses in Section II.2 that arise through the interplay of a synaptic activation function with the action potential and, thus, directly account for its shape. Now, the shape of the action potential, i.e. its waveform, is not only cell type- and temperature-dependent [142; 143; 5], but may also undergo dynamic changes through various plasticity mechanisms [143; 5; 144]. These can lead to action potential broadening or amplitude reduction, which subsequently affects neurotransmitter release and synaptic transmission and may hence directly, or indirectly, contribute to learning and memory storage [145]. The proposed pulse-coupling is versatile enough to incorporate dynamical changes of the action potential, which will open up new avenues for investigating network effects of (pre-)synaptic, intrinsic, and homeostatic plasticity complementary to previously proposed mean-field approaches [144; 145; 146; 147; 148; 149; 150; 151; 152; 153; 154; 155], and to leverage more detailed network simulations as in [6] that combine the computational advantages of spiking neural networks with biological realism at the microscopic level, including a more realistic synaptic transmission process.

## Acknowledgements

I want to thank A. Pikovsky, E. Montbrio, R. Cestnik and A. Daffertshofer for fruitful discussions. The project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101032806.
2310.04270
A Comprehensive Evaluation of Large Language Models on Benchmark Biomedical Text Processing Tasks
Recently, Large Language Models (LLM) have demonstrated impressive capability to solve a wide range of tasks. However, despite their success across various tasks, no prior work has investigated their capability in the biomedical domain yet. To this end, this paper aims to evaluate the performance of LLMs on benchmark biomedical tasks. For this purpose, we conduct a comprehensive evaluation of 4 popular LLMs in 6 diverse biomedical tasks across 26 datasets. To the best of our knowledge, this is the first work that conducts an extensive evaluation and comparison of various LLMs in the biomedical domain. Interestingly, we find based on our evaluation that in biomedical datasets that have smaller training sets, zero-shot LLMs even outperform the current state-of-the-art fine-tuned biomedical models. This suggests that pretraining on large text corpora makes LLMs quite specialized even in the biomedical domain. We also find that not a single LLM can outperform other LLMs in all tasks, with the performance of different LLMs varying depending on the task. While their performance is still quite poor in comparison to the biomedical models that were fine-tuned on large training sets, our findings demonstrate that LLMs have the potential to be a valuable tool for various biomedical tasks that lack large annotated data.
Israt Jahan, Md Tahmid Rahman Laskar, Chun Peng, Jimmy Huang
2023-10-06T14:16:28Z
http://arxiv.org/abs/2310.04270v3
# A Comprehensive Evaluation of Large Language Models on Benchmark Biomedical Text Processing Tasks

###### Abstract

Recently, Large Language Models (LLM) have demonstrated impressive capability to solve a wide range of tasks. However, despite their success across various tasks, no prior work has investigated their capability in the biomedical domain yet. To this end, this paper aims to evaluate the performance of LLMs on benchmark biomedical tasks. For this purpose, we conduct a comprehensive evaluation of 4 popular LLMs in 6 diverse biomedical tasks across 26 datasets. To the best of our knowledge, this is the first work that conducts an extensive evaluation and comparison of various LLMs in the biomedical domain. Interestingly, we find based on our evaluation that in biomedical datasets that have smaller training sets, zero-shot LLMs even outperform the current state-of-the-art fine-tuned biomedical models. This suggests that pre-training on large text corpora makes LLMs quite specialized even in the biomedical domain. We also find that not a single LLM can outperform other LLMs in all tasks, with the performance of different LLMs varying depending on the task. While their performance is still quite poor in comparison to the biomedical models that were fine-tuned on large training sets, our findings demonstrate that LLMs have the potential to be a valuable tool for various biomedical tasks that lack large annotated data.

## 1 Introduction

The rapid growth of language models Rogers et al. (2021) in the field of Natural Language Processing (NLP) in recent years has led to significant advancements in various domains, including the biomedical domain Kalyan et al. (2022). Although specialized models like BioBERT (**B**idirectional **E**ncoder **R**epresentations from **T**ransformers for **B**iomedical **T**ext Mining) Lee et al. (2020), BioBART (**B**idirectional and **A**uto-**R**egressive **T**ransformers for the **B**iomedical **D**omain) Yuan et al. (2022), and BioGPT (**G**enerative **P**re-trained **T**ransformer for **B**iomedical **T**ext Generation and Mining) Luo et al. (2022) have shown promising results in the biomedical domain, they require fine-tuning1 using domain-specific datasets. This fine-tuning process can be time-consuming due to the requirement of task-specific large annotated datasets. In contrast, zero-shot2 learning Wang et al. (2019) enables models to perform tasks without the need for fine-tuning on task-specific datasets.

Footnote 1: Fine-tuning means providing a good amount (e.g., thousands of samples) of training examples to re-train a pre-trained language model on a specific task.

Footnote 2: Zero-shot learning means asking a trained model to complete a task without providing any explicit examples of that particular task.

Large Language Models (LLM) Zhao et al. (2023) are a class of natural language processing models that have been trained on vast amounts of textual data, making it possible to understand and generate human-like language. In recent years, LLMs such as ChatGPT3 have demonstrated impressive performance on a range of language tasks, including text classification, question answering, and text summarization. One area where LLMs have not yet been deeply investigated is the biomedical text processing and information retrieval domain. While there is a vast amount of textual data available in the field of biomedicine, there still remains a scarcity of annotated datasets in this domain. Thus, it is difficult to build suitable models for biomedical tasks that lack large annotated datasets.
In this regard, due to the strong zero-shot capabilities of LLMs across various tasks, LLM-powered automated tools can be useful for researchers and practitioners in the biomedical domain to find relevant information and extract insights from this vast corpus of unannotated data. However, despite being evaluated on various traditional NLP tasks, there is a lack of comprehensive studies that evaluate LLMs in the biomedical domain. To this end, this paper aims to evaluate LLMs across benchmark biomedical tasks.

Footnote 3: [https://chat.openai.com/](https://chat.openai.com/)

However, the evaluation of LLMs in the biomedical domain would require a proper understanding of the complex linguistic characteristics of biomedical texts. In addition, LLMs are sensitive to prompts Liu et al. (2023); Jahan et al. (2023). Thus, for biomedical tasks, the effective construction of prompts is important to best utilize these LLMs in biomedical applications. Under these circumstances, domain-specific knowledge in the biomedical domain could play a pivotal role in improving the performance of LLMs in biomedical tasks. In this regard, we study how we can effectively build prompts for LLMs to simulate common tasks in biomedical research, such as text classification, named entity recognition, relation extraction, text summarization, question answering, etc. Overall, this paper will contribute to our understanding of the capabilities and limitations of LLMs in biomedical text processing and information retrieval. Moreover, with a comprehensive evaluation of various powerful LLMs, this paper would lead to the development of new tools and techniques for researchers in this field, which could pave the way to build new applications in healthcare and biomedicine via leveraging LLMs. The major contributions from this study are summarized below:

* A comprehensive evaluation of various LLMs in the biomedical domain, providing insights into their capabilities and limitations for various tasks. We particularly study the zero-shot capabilities of these LLMs in the biomedical domain to address the lack of large annotated datasets in this domain.
* Construction of task-specific prompts by understanding the complex linguistic structure of biomedical texts. Our findings based on the extensive performance analysis of the constructed prompts across various biomedical tasks will help researchers and practitioners when building LLM-based applications for the biomedical domain.
* As a secondary contribution, we will release the data: (i) our constructed prompts for LLMs, and (ii) LLM-generated responses, to pave the way for future research on LLMs in the biomedical domain.

## 2 Related Work

There are a large number of studies on various biomedical tasks, such as biomedical image analysis Liu et al. (2023); Hu et al. (2023); Zhou et al. (2023); Goyal et al. (2020); Morid et al. (2021); Rahman et al. (2021); Salvi et al. (2021), biomedical text processing Wang et al. (2021); Hossain et al. (2023), genomic sequence analysis O'Brien et al. (2018); Ji et al. (2021), disease diagnosis Kang et al. (2021); Ebrahimi et al. (2021); Ali et al. (2021); Amyar et al. (2020); Pahar et al. (2021); Ibrahim et al. (2021); Tsiknakis et al. (2021), drug discovery Shaker et al. (2021); Martinelli (2022); Pandiyan and Wang (2022), cancer research Fang et al. (2018); Nguyen et al. (2019); Pandiyan and Wang (2022); Goyal et al. (2020), vaccine development Soleymani et al. (2022); Khan et al. (2021); Alamoodi et al. (2021), etc.
Biomedical text processing is closely related to these tasks as it serves as a critical component and enabler by providing automated methods for extracting information from the vast amount of textual data in the biomedical domain. In this section, we mainly review the existing state-of-the-art approaches for processing large amounts of biomedical textual data that are most closely related to our research. In the following, we first briefly review various language models used in recent years in the biomedical domain. Then we briefly review the LLMs that we study in this paper.

### Language Models for the Biomedical Domain

In recent years, the effective utilization of transformer-based Vaswani et al. (2017) NLP models like BERT Devlin et al. (2019) and GPT Radford et al. (2019) has led to significant progress in the biomedical domain Alsentzer et al. (2019); Beltagy et al. (2019); Gu et al. (2020); Lee et al. (2020); Peng et al. (2019); raj Kanakarajan et al. (2021). BERT leverages the encoder of the transformer architecture, while GPT leverages the decoder of the transformer. In addition to these models, sequence-to-sequence models like BART Lewis et al. (2019) that leverage both the encoder and the decoder of the transformer have also emerged as a powerful approach in various text generation tasks in the biomedical domain Yuan et al. (2022). It has been observed that domain-specific pre-training of these models on biomedical text corpora, followed by fine-tuning on task-specific biomedical datasets, has helped these models to achieve state-of-the-art performance in a variety of Biomedical NLP (BioNLP) tasks Gu et al. (2021). This led to the development of various language models for the biomedical domain, such as BioBERT (Lee et al., 2020), ClinicalBERT (Alsentzer et al., 2019), BioBART (Yuan et al., 2022), BioElectra (raj Kanakarajan et al., 2021), BioGPT (Luo et al., 2022), etc. However, one major limitation of using such fine-tuned models is that they require task-specific large annotated datasets, which are significantly less available in the BioNLP domain in comparison to the general NLP domain. In this regard, having a strong zero-shot model could potentially alleviate the need for large annotated datasets, as it could enable the model to perform well on tasks that it was not exclusively trained on.

### Large Language Models

In recent years, large autoregressive decoder-based language models like GPT-3 (Brown et al., 2020) have demonstrated impressive few-shot learning capability. With the success of GPT-3 in few-shot scenarios, a new variant of GPT-3 called the InstructGPT model (Ouyang et al., 2022) has been proposed that leverages the reinforcement learning from human feedback (RLHF) mechanism (Kaelbling et al., 1996). The resulting InstructGPT models (in other words, GPT-3.5) are much better at following instructions than the original GPT-3 model, resulting in an impressive zero-shot performance across various tasks. ChatGPT4 is the latest addition in the GPT-3.5 series models that additionally uses dialog-based instructional data during its training phase. Recently, more decoder-based large language models (LLMs) such as PaLM5(Chowdhery et al., 2022; Anil et al., 2023; Singhal et al., 2023), Claude6, LLaMA7(Touvron et al., 2023, 2023) etc. have been proposed that also achieve impressive performance in a wide range of tasks.
All these LLMs including ChatGPT are first pre-trained on a large amount of textual data to predict the next token and then fine-tuned using a process called reinforcement learning from human feedback (RLHF) that leveraged both supervised learning and reinforcement learning techniques. The goal of RLHF was to improve the model's performance and ensure that it provided high-quality responses to user queries. The supervised learning phase of the RLHF process involved training the model on conversations in which human trainers played both sides: the user and the AI assistant. These conversations were collected from a variety of sources, including chat logs from customer service interactions, social media messages, and chatbots. The supervised learning phase aimed to train the model to produce high-quality responses that were contextually relevant to the user's query. Meanwhile, the reinforcement learning phase of the RLHF process aimed to further improve the model's performance by using human trainers to provide feedback on its responses. In this phase, human trainers ranked the responses that the model had created in a previous conversation. These rankings were used to create "reward models" that were used to fine-tune the model further by using several iterations of Proximal Policy Optimization (PPO) (Schulman et al., 2017).

Footnote 4: [https://openai.com/blog/chatgpt](https://openai.com/blog/chatgpt)

Footnote 5: [https://ai.google/discover/palm2/](https://ai.google/discover/palm2/)

Footnote 6: [https://www.claudeai.ai/](https://www.claudeai.ai/)

Footnote 7: [https://ai.meta.com/blog/large-language-model-llama-meta-ai/](https://ai.meta.com/blog/large-language-model-llama-meta-ai/)

While these models have demonstrated strong performance in various NLP tasks (Qin et al., 2023; Bang et al., 2023; Yang et al., 2023), they have not been investigated in the biomedical domain yet. To this end, this paper aims to evaluate these powerful LLMs in the biomedical domain.

## 3 Biomedical Tasks Description

In this section, we present the benchmark biomedical text processing tasks that we study in this paper. Below, we describe these tasks along with some examples.

* **Biomedical Named Entity Recognition (NER):** This task aims to extract the biomedical named entities, such as genes, proteins, diseases, chemicals, etc., from the literature to improve biomedical research.

_Example:_ _The patient has been diagnosed with a rare form of cancer and is undergoing chemotherapy treatment with the drug Taxol._

_Expected NER classifications:_

* _NER (Disease): "rare form of cancer"._
* _NER (Treatment): "chemotherapy"._
* _NER (Drug): "Taxol"._

* **Biomedical Entity Linking:** This task involves recognizing and linking biomedical named entities in unstructured text to their correct definitions, e.g., to the corresponding entries in structured knowledge bases or ontologies.

_Example:_ _The patient has been diagnosed with a rare form of cancer and is undergoing chemotherapy treatment with the drug Taxol._

_Expected Entity Linking:_ _A biomedical entity linking system may link the drug Taxol to the following link:_ [https://chemocare.com/druginfo/taxl](https://chemocare.com/druginfo/taxl).

* **Biomedical Relation Extraction:** The biomedical relation extraction task aims to analyze textual data by identifying which genes/variants are responsible for which diseases, which treatment/drug is effective for which disease, as well as identifying drug-drug interactions, etc.
_Example:_ _The patient has been diagnosed with a rare form of cancer and is undergoing chemotherapy treatment with the drug Taxol._

_Expected Relation Extractions:_

* _Relation (Treatment of a Disease): "chemotherapy" is a treatment for "rare form of cancer"._
* _Relation (Drug used in Treatment): "Taxol" is a drug used in "chemotherapy"._

* **Biomedical Text Classification:** Given a text \(S\), the goal is to classify the text into a specific category. An example, classifying a given sentence into one of the 10 hallmarks of cancer taxonomy, is demonstrated below:

_Example:_ "Heterogeneity in DNA damage within the cell population was observed as a function of radiation dose."

_Expected Result:_ Genomic Instability and Mutation.

* **Biomedical Question Answering:** The biomedical question-answering task involves retrieving the relevant answer for the given question related to the biomedical literature, such as scientific articles, medical records, and clinical trials. This task is of great importance as it can help healthcare professionals, researchers, and patients access relevant information quickly and efficiently, which can have a significant impact on patient care, drug development, and medical research.

_Example Question:_ _What is recommended for thalassemia patients?_

* _Candidate Answer 1:_ _Chemotherapy may be used to: cure the cancer, shrink the cancer, and prevent the cancer from spreading._
* _Candidate Answer 2:_ _Regular blood transfusions can help provide the body with normal red blood cells containing normal hemoglobin._

_Expected Answer:_ Candidate answer 2 should be retrieved as a relevant answer [1, 2].

* **Biomedical Text Summarization:** By generating quick summaries, biomedical summarization would help reduce the time spent reviewing lengthy electronic health records, patient queries in healthcare forums, or doctor-patient conversations, thereby improving the efficiency of the healthcare system.

_Example Biomedical Text:_ _Patient is a 62-year-old female with a medical history of hyperlipidemia, osteoarthritis, and previous cerebrovascular accident. She presented with sudden onset of dizziness and palpitations that began a day ago. An electrocardiogram was immediately conducted, which indicated the presence of atrial fibrillation. She was promptly hospitalized for monitoring and commenced on anticoagulation therapy with warfarin and rate-controlling medications like beta-blockers._

_Expected Summary:_ _A 62-year-old female with a history of hyperlipidemia, osteoarthritis, and a previous cerebrovascular accident experienced sudden dizziness and palpitations. An ECG confirmed atrial fibrillation, leading to her hospitalization and treatment with warfarin and beta-blockers._

## 4 Our Methodology

In this section, we first present our methodology on how we design the prompts for different tasks (see Figure 1), followed by describing the LLMs that we study in this paper. Afterward, we describe our evaluation methodology.

### Prompt Design

For a given test sample \(X\), we prepare a task instruction \(T\) and concatenate the text in the test sample with the task instruction to construct the prompt \(P\). Then the prompt \(P\) is given as input to the LLM to generate the response \(R\). Below, we demonstrate the prompt \(P\) that we construct for different tasks depending on the respective dataset; a minimal sketch of this general scheme is shown right after this paragraph.
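The following is a minimal sketch of the prompt-construction scheme, querying the _gpt-3.5-turbo-0613_ model through the legacy `openai` Python bindings (version < 1.0). The instruction string and function names are illustrative stand-ins, not the exact prompts used in this paper.

```python
import os
import openai  # legacy bindings (openai < 1.0)

openai.api_key = os.getenv("OPENAI_API_KEY")

def build_prompt(task_instruction, test_sample):
    # P = task instruction T concatenated with the text of the test sample X
    return task_instruction + "\n\n" + test_sample

def zero_shot_response(task_instruction, test_sample):
    prompt = build_prompt(task_instruction, test_sample)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic decoding for reproducible evaluation
    )
    return response["choices"][0]["message"]["content"]

# Illustrative NER-style instruction (paraphrasing Table 1, not verbatim):
instruction = (
    "Identify the disease type named entities in the text below by tagging "
    "each token in the BIO format (B, I, or O). Return one 'token: tag' "
    "pair per line."
)
sample = "Ketamine\ninduced\ncatalepsy\n."
# print(zero_shot_response(instruction, sample))
```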
In our prompts, the description of the BIO format is also added along with the task instructions. For NER, we use the BC2GM [14] and JNLPBA [15] datasets for gene/protein entity recognition, BC4CHEMD [13] and BC5CDR-Chem [12] for drug/chemical entity recognition, BC5CDR-Disease [12] and NCBI-Disease [15] for disease entity recognition, and LINNAEUS [16] and s800 [11] for species entity recognition. We show the sample prompts for this task in Table 1.

(ii) Entity Linking: To identify whether LLMs can link named entities to their correct definitions [10] based on their pre-training knowledge, we follow the work of Yuan et al. (2020) on the generative entity linking task by asking LLMs to identify the correct concept names for the named entities. For evaluation, we use the BC5CDR [12] dataset for the entity linking of disease/chemical type named entities, the NCBI [15] dataset to link diseases, and the COMETA [1] dataset to link clinical terms. Our sample prompts for this task are shown in Table 3.

(iii) Relation Extraction: We construct prompts to identify the possible relation between entities mentioned in a given text depending on the dataset. For this purpose, we construct prompts for chemical-disease relations in the BC5CDR dataset [12], drug-target interactions in the KD-DTI dataset [15], and drug-drug interactions in the DDI dataset [1]. Our prompts for these datasets are demonstrated in Table 2.

(iv) Text Classification: The goal of this task is to classify the type of the given text. In this paper, we use two datasets: (i) the HoC (the Hallmarks of Cancer corpus) dataset [1], and (ii) the LitCovid dataset. The HoC dataset consists of 1580 PubMed abstracts where the goal is to annotate each sentence in the given abstract with one of the 10 currently known hallmarks of cancer. In the LitCovid dataset, each article is required to be classified into one (or more) of the following 8 categories: Prevention, Treatment, Diagnosis, Mechanism, Case Report, Transmission, Forecasting, and General. Our prompts for these text classification datasets are shown in Table 4.

(v) Question Answering: For the question-answering task, we also evaluate the performance of ChatGPT on multiple datasets: (i) the PubMedQA dataset [10], and (ii) the MEDIQA-2019 dataset. In the PubMedQA dataset, we give the question, the reference context, and the answer as input to the LLMs to determine whether the answer to the given question can be inferred from the provided reference context, with LLMs being prompted to reply either _yes_, _no_, or _maybe_, as required by the task. In the MEDIQA-2019 dataset, we ask the LLMs to determine whether the retrieved answer for the given question is relevant or not [10]. Our prompts for this task are shown in Table 5.

(vi) Text Summarization: The biomedical text summarization task requires the generation of a concise summary of the given biomedical text. To this end, we evaluate LLMs across a wide range of diverse biomedical summarization tasks, such as healthcare question summarization (the _MeQSum_ (Abacha and Demner-Fushman, 2019) and _MEDIQA-QS_ (Abacha et al., 2021) datasets), medical answer summarization (the _MEDIQA-ANS_ (Savery et al., 2020) and _MEDIQA-MAS_ (Abacha et al., 2021) datasets), and doctor-patient dialogue summarization (the _iCliniq_ and _HealthCareMagic_ datasets (Zeng et al., 2020; Mrini et al., 2021)) to generate short queries for healthcare forums describing a patient's medical conditions.

Figure 1: An overview of various biomedical datasets and tasks we study in this paper.
In addition, we use various datasets for biomedical literature summarization (Luo et al., 2022; Goldsack et al., 2022), such as the Biomedical Text Lay Summarization shared task 2023 (BioLaySumm-2023) datasets (Goldsack et al., 2023). For BioLaySumm-2023, since the gold reference summaries of the test sets are not publicly available as of the writing of this paper, we use the validation data for evaluation. Our sample prompts for these summarization datasets are shown in Table 6.

### Models

In this paper, we evaluate the performance of 4 popular LLMs on benchmark biomedical datasets and tasks. Below, we describe these LLMs.

(i) GPT-3.5: GPT-3.5 is an auto-regressive language model based on the transformer (Vaswani et al., 2017) architecture that was pre-trained on a vast amount of textual data via supervised learning alongside reinforcement learning with human feedback. The backbone model behind the first version of ChatGPT was also GPT-3.5, and it is currently one of the base models behind OpenAI's ChatGPT, alongside GPT-4. The initial training data for GPT-3.5 was obtained from a large corpus of text data that was crawled from the internet. This corpus included a wide range of publicly available text, including articles, books, and websites. Additionally, OpenAI collected data from GPT-3 users to train and fine-tune the model further (Qin et al., 2023; OpenAI, 2023). In this work, we used the OpenAI API for the _gpt-3.5-turbo-0613_ model for GPT-3.5.

Footnote 8: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)

(ii) PaLM-2: PaLM-2 (Anil et al., 2023) is also a transformer-based language model that exhibits enhanced multilingual and reasoning capabilities, along with improved computing efficiency.

\begin{table}
\begin{tabular}{l l l}
\hline \hline
**Dataset** & **Type** & **Data Split (Train / Valid / Test)** \\
\hline
BC2GM & NER (Gene/Protein) & 12574 / 2519 / 5038 \\
BC4CHEMD & NER (Drug/Chemical) & 30682 / 1069 / 26364 \\
BC5CDR-Chem & NER (Drug/Chemical) & 4560 / 4581 / 4797 \\
JNLPBA & NER (Gene/Protein) & 14690 / 3856 / 3856 \\
LINNAEUS & NER (Species) & 11935 / 4078 / 7142 \\
s800 & NER (Species) & 5733 / 830 / 1603 \\
\hline \hline
\end{tabular}

All NER datasets share the following prompt template: "Below, we provide a biomedical text: [TEXT] You need to identify the [ENTITY] type named entities in the above text. To identify the named entities, please tag each token of the given text in the “BIO” format as either “B”, “I”, or “O”. Note that the BIO format provides a way to label individual tokens in a given text to indicate whether they are part of a named entity. In the BIO format, each token in a text is labeled with a tag that represents its role in a named entity. For our case, there are three possible tags. B: it indicates that the token is the beginning of the [ENTITY] type named entity (i.e., the first token of a [ENTITY] type named entity). I: it indicates that the token is inside a [ENTITY] type named entity (i.e., any token other than the first token of a [ENTITY] type named entity). O: it indicates that the token is outside any named entity; in other words, it is not part of any named entity. Below, each token of the biomedical text is provided (separated by a new line). Now please assign the correct tag to each token. Return your result for each token in a new line in the following format -> token: tagged_type [LIST OF LINE SEPARATED TOKENS]"
\end{table}
Table 1: Our Prompts in Different Named Entity Recognition (NER) Datasets.
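To make the prompt construction of Section 4.1 concrete, the following minimal Python sketch assembles a BIO-format NER prompt in the spirit of Table 1. This is an illustrative sketch, not our actual pipeline: the function name and the commented `query_llm` call are hypothetical stand-ins for whichever LLM API (GPT-3.5, PaLM-2, Claude-2, or LLaMA-2) is used.

```python
# Sketch of the prompt construction described in Section 4.1:
# the prompt P is the task instruction T concatenated with the test sample X.
def build_ner_prompt(text: str, tokens: list[str], entity_type: str) -> str:
    instruction = (
        f"Below, we provide a biomedical text:\n{text}\n\n"
        f"You need to identify the {entity_type} type named entities in the "
        "above text. Tag each token of the given text in the BIO format as "
        "B, I, or O. Return your result for each token in a new line in the "
        "following format -> token: tag\n"
    )
    return instruction + "\n".join(tokens)  # P = T + X

prompt = build_ner_prompt(
    "The patient is undergoing chemotherapy treatment with the drug Taxol.",
    ["The", "patient", "is", "undergoing", "chemotherapy", "treatment",
     "with", "the", "drug", "Taxol", "."],
    "DRUG/CHEMICAL",
)
# response = query_llm(prompt)  # hypothetical API call to the chosen LLM
```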
PaLM-2 is the base model behind Google's BARD9, which is a competitor to OpenAI's ChatGPT. The computational efficiency in PaLM-2 is achieved by scaling the model size and the training dataset size in proportion to each other. This new technique makes PaLM-2 smaller than its predecessor, PaLM-1, while achieving better performance, including faster inference, fewer parameters to serve, and a lower serving cost. It is trained using a mixture of objectives, allowing it to learn various aspects of language and reasoning across a diverse set of tasks and capabilities, making it a powerful tool for various applications. In this work, we used the _text-bison@001_ model in Google's Vertex AI10 API for PaLM-2.

Footnote 9: [https://bard.google.com/](https://bard.google.com/)

Footnote 10: [https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text](https://cloud.google.com/vertex-ai/docs/generative-ai/model-reference/text)

(iii) Claude-2: Claude-2 is also a general-purpose LLM based on the transformer architecture. It was developed by Anthropic11 and is a successor of Claude-1. Similar to other large models, it is trained via unsupervised pre-training, supervised fine-tuning, and reinforcement learning with human feedback. Internal red-teaming evaluation by Anthropic shows that Claude is more harmless and less likely to produce offensive or dangerous output. Experimental evaluation of Claude-1 and Claude-2 demonstrates that Claude-2 achieves much better performance than Claude-1 across various tasks. Thus, we also utilize Claude-2 in this work by leveraging Anthropic's _claude-2_ API.

Footnote 11: [https://www.anthropic.com/index/claude-2](https://www.anthropic.com/index/claude-2)

(iv) LLaMA-2: LLaMA-2 (Touvron et al., 2023b) is a recently proposed LLM by Meta12. One major advantage of LLaMA-2 over the previously mentioned LLMs is that it is open-sourced. While another open-sourced version of LLaMA, the LLaMA-1 (Touvron et al., 2023a) model, was released prior to the release of LLaMA-2, the LLaMA-1 model was only allowed for non-commercial usage. On the contrary, the recently proposed LLaMA-2 not only allows commercial usage but also outperforms its earlier open-sourced version LLaMA-1 across a wide range of tasks. This makes LLaMA-2 a breakthrough model in both academia and industry. Similar to other LLMs, LLaMA-2 is also trained via unsupervised pre-training, supervised fine-tuning, and reinforcement learning with human feedback. Note that the LLaMA-2 model has been released in various sizes: 7B, 13B, and 70B. While the 70B model has achieved the best performance across various benchmarks, it requires very high computational resources. On the other hand, although the 7B model requires fewer computational resources, it achieves poorer performance in comparison to the 13B and 70B models. Considering the performance and cost trade-off, we used the LLaMA-2-13B13 model in our work.

Footnote 13: We used the following version of LLaMA-2-13B: [https://huggingface.co/meta-llama/llama-2-13b-chat-hf](https://huggingface.co/meta-llama/llama-2-13b-chat-hf), which achieves improved factual correctness over its base version. As we are benchmarking LLMs in the biomedical domain, we prioritize faithfulness for model selection.

\begin{table}
\begin{tabular}{l l l l}
\hline \hline
**Dataset** & **Type** & **Data Split (Train / Valid / Test)** & **Prompt** \\
\hline
BC5CDR & Chemical-Disease Relation Extraction & 500 / 500 / 500 & Identify each pair of drugs and the drug-induced side-effects (e.g., diseases) in the following passage: [PASSAGE] \\
KD-DTI & Drug-Target Interaction Extraction & 12K / 1K / 1.3K & Identify the drug-target interactions in the following passage (along with the interaction type among the following: ‘inhibitor’, ‘agonist’, ‘modulator’, ‘activator’, ‘blocker’, ‘inducer’, ‘antagonist’, ‘cleavage’, ‘disruption’, ‘intercalation’, ‘inactivator’, ‘binder’, ‘binding’, ‘partial agonist’, ‘cofactor’, ‘substrate’, ‘ligand’, ‘chelator’, ‘downregulator’, ‘other’, ‘antibody’, ‘other/unknown’): [PASSAGE] \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Our Prompts in Different Relation Extraction Datasets.

Table 3: Our Prompts in Different Entity Linking Datasets.

\begin{table}
\begin{tabular}{l l l l}
\hline \hline
**Dataset** & **Type** & **Data Split (Train / Valid / Test)** & **Prompt** \\
\hline
HoC & Text Classification & 9972 / 9437 / 9487 & The 10 hallmarks of cancer taxonomy with their definitions are given below: (i) Sustaining proliferative signaling: Cancer cells can initiate and maintain continuous cell division by producing their own growth factors or by altering the sensitivity of receptors to growth factors. (ii) Evading growth suppressors: Cancer cells can bypass the normal cellular mechanisms that limit cell division and growth, such as the inactivation of tumor suppressor genes and their insensitivity to antigrowth signals. (iii) Resisting cell death: Cancer cells develop resistance to apoptosis, the programmed cell death process, which allows them to survive and continue dividing. (iv) Enabling replicative immortality: Cancer cells can extend their ability to divide indefinitely by maintaining the length of telomeres, the protective end caps on chromosomes. (v) Inducing angiogenesis: Cancer cells stimulate the growth of new blood vessels, providing the necessary nutrients and oxygen to support their rapid growth. (vi) Activating invasion and metastasis: Cancer cells can invade surrounding tissues and migrate to distant sites in the body, forming secondary tumors called metastases. (vii) Deregulating cellular energetics: Cancer cells rewire their metabolism to support rapid cell division and growth, often relying more on glycolysis even in the presence of oxygen (a phenomenon known as the Warburg effect). (viii) Avoiding immune destruction: Cancer cells can avoid detection and elimination by the immune system through various mechanisms, such as downregulating cell surface markers or producing immunosuppressive signals. (ix) Tumor promoting inflammation: Chronic inflammation can promote the development and progression of cancer by supplying growth factors, survival signals, and other molecules that facilitate cancer cell proliferation and survival. (x) Genome instability and mutation: Cancer cells exhibit increased genomic instability, leading to a higher mutation rate, which in turn drives the initiation and progression of cancer. Classify the sentence given below in one of the above 10 hallmarks of cancer taxonomy (if relevant). If it cannot be classified, answer as “empty”: [SENTENCE] \\
LitCovid & Text Classification & 16126 / 2305 / 4607 & Choose the most appropriate topic(s) for the biomedical article on covid-19 given below from the following options: (i) Prevention, (ii) Treatment, (iii) Diagnosis, (iv) Mechanism, (v) Case Report, (vi) Transmission, (vii) Forecasting, and (viii) General. [ARTICLE] \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Our Prompts in Different Text Classification Datasets.

\begin{table}
\begin{tabular}{l l l l}
\hline \hline
**Dataset** & **Type** & **Data Split (Train / Valid / Test)** & **Prompt** \\
\hline
PubMedQA & Question Answering & 450 / 50 / 500 & For the question, the reference context, and the answer given below, is it possible to infer the answer for that question from the reference context? Only reply as either Yes or No or Maybe. Question: [QUESTION] Reference context: [REFERENCE CONTEXT] Answer: [ANSWER] \\
MEDIQA-2019 & Question Answering & 208 / 25 / 150 & A retrieved answer for the following question is given below. Identify whether the retrieved answer is relevant to the question or not. Answer as 1 if relevant, otherwise answer as 0. Question: [QUESTION] Retrieved Answer: [TEXT] \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Our Prompts in Different Question Answering Datasets.

\begin{table}
\begin{tabular}{l l l l}
\hline \hline
**Dataset** & **Type** & **Data Split (Train / Valid / Test)** & **Prompt** \\
\hline
iCliniq & Dialogue Summarization & 24851 / 3105 / 3108 & Write a very short and concise one line summary of the following dialogue as an informal question in a healthcare forum: [DIALOGUE] \\
HealthCareMagic & Dialogue Summarization & 181122 / 22641 / 25642 & Write a very short and concise one line summary of the following dialogue as a question in a healthcare forum: [DIALOGUE] \\
MeQSum & Question Summarization & 500 / – / 500 & Rewrite the following question in a short and concise form: [QUESTION] \\
MEDIQA-QS & Question Summarization & – / 50 / 100 & Rewrite the following question in a short and concise form: [QUESTION] \\
MEDIQA-MAS & Answer Summarization & – / 50 / 80 & For the following question, some relevant answers are given below. Please write down a short concise answer by summarizing the given answers. [QUESTION] Answers: [ANSWERS] \\
MEDIQA-ANS & Answer Summarization & – / – / 552 & Write a very short and concise summary of the following article based on the question given below: [QUESTION] [ARTICLE] \\
BioLaySumm-2023 (PLOS) & Lay Summarization & 24773 / 1376 / 142 & Write down a readable summary of the following biomedical article using less technical terminology (e.g., lay summary) such that it can be understandable for non-expert audiences: [ARTICLE] \\
BioLaySumm-2023 (eLife) & Lay Summarization & 4346 / 241 / 142 & Write down a readable summary of the following biomedical article using less technical terminology (e.g., lay summary) such that it can be understandable for non-expert audiences: [ARTICLE] \\
\hline \hline
\end{tabular}
\end{table}
Table 6: Our Sample Prompts in Different Text Summarization Tasks.

### Evaluation Pipeline

Since LLMs generate human-like responses, which can be lengthy, may contain unnecessary information, and may not follow a specific format, some tasks are very challenging to evaluate without any human intervention. For instance, in tasks like Relation Extraction, there can be multiple answers. Thus, it would be very difficult to automatically evaluate the performance of LLMs by comparing their responses with the gold labels using just an evaluation script. Thus, in this paper, to ensure high-quality evaluation, we follow the work of Laskar et al. (2023), where they design different settings for the evaluation of LLMs for different tasks:

1. **Automatic Evaluation:** Where they evaluate some tasks, such as text summarization, via leveraging automatic evaluation scripts.
2. **Human Evaluation:** Where they evaluate solely by humans some discriminative tasks which cannot be evaluated directly based on automatic evaluation scripts.
3. **Hybrid (Human + Automatic) Evaluation:** Where they evaluate some tasks via leveraging both human intervention alongside evaluation scripts. More specifically, this is done by first applying evaluation scripts on the dataset to parse the results from the LLM-generated response, followed by utilizing human intervention if solely depending on the evaluation script cannot properly parse the results.

_For discriminative tasks_, where parsing of the results from the generated response is required for evaluation, we follow the work of Laskar et al. (2023) and design an evaluation script for the respective dataset to first parse the results and then compare the parsed results with the gold labels. Subsequently, any samples where the script could not parse the result properly were manually reviewed by human annotators. For NER, Entity Linking, Text Classification, and Question Answering, we evaluate the performance by leveraging this technique. However, we find that for relation extraction, human intervention is necessary, since parsing scripts cannot properly identify the relations found in the generated responses. Thus, for relation extraction, all LLM-generated responses were manually evaluated by humans. This technique of solely utilizing humans to evaluate LLM-generated responses when parsing is not possible was also used in recent literature (Laskar et al., 2023; Jahan et al., 2023). In our human evaluation, at least two annotators compared the LLM-generated response against the gold labels. Any disagreements were resolved based on discussions between the annotators.

_For generative tasks_, such as summarization, where the full response generated by LLMs can be used for evaluation instead of parsing the response, we evaluate using automatic evaluation metrics (e.g., ROUGE or BERTScore).

## 5 Experiments

### Evaluation Metrics

We use different evaluation metrics for different tasks to ensure a fair comparison of different LLMs with prior state-of-the-art results. For this purpose, we select the standard evaluation metrics that are used in the literature for benchmarking the performance of different models. Thus, for the relation extraction and named entity recognition tasks, we utilize the Precision, Recall, and F1 metrics, while for entity linking, we evaluate based on Recall@1. For summarization, we utilize the ROUGE (Lin, 2004) and BERTScore (Zhang et al., 2019) metrics. For question answering and text classification, we utilize metrics like Accuracy and F1.

### Results

Below, we discuss the results of LLMs on various tasks.

Relation Extraction: We compare the performance of LLMs with the current state-of-the-art fine-tuned BioGPT (Luo et al., 2022) model across 3 datasets for the relation extraction task. Based on the results for the relation extraction task presented in Table 7, we find that in the BC5CDR dataset, while LLaMA-2 achieves the highest recall, PaLM-2 performs the best in terms of Precision and F1. Meanwhile, in terms of F1, the zero-shot PaLM-2, Claude-2, and LLaMA-2 models even outperform the prior state-of-the-art fine-tuned BioGPT in this dataset, with an improvement of 17.61% by the best-performing PaLM-2. In the KD-DTI dataset, though GPT-3.5 and Claude-2 achieve high recall, their overall F1 scores are considerably lower than those of BioGPT and PaLM-2. Meanwhile, zero-shot PaLM-2 achieves almost similar performance in comparison to the fine-tuned BioGPT in terms of the F1 score.
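As a concrete illustration of the metrics reported for relation extraction, the following sketch computes set-based Precision, Recall, and F1 over extracted relation triples. Exact string matching is an assumption made here for simplicity; in our actual evaluation, the matching of LLM outputs to gold relations is performed by human annotators, as described in Section 4.3.

```python
# Set-based precision/recall/F1 over (head, relation, tail) triples,
# the form of the metrics reported in Table 7.
def prf1(gold: set[tuple[str, str, str]], pred: set[tuple[str, str, str]]):
    tp = len(gold & pred)  # true positives: predicted triples that are gold
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

gold = {("famotidine", "induces", "delirium")}
pred = {("famotidine", "induces", "delirium"), ("taxol", "treats", "cancer")}
print(prf1(gold, pred))  # (0.5, 1.0, 0.666...)
```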
In the DDI dataset, GPT-3.5 achieves state-of-the-performance across all three metrics (Precision, Recall, and F1), followed by Claude-2. Since in the DDI dataset, there can be only 4 types, we used more descriptive prompts in this dataset (e.g., providing the definition of different interaction types), which helped GPT-3.5 and Claude-2 to achieve better performance. However, more descriptive prompts were not helpful for PaLM-2 in this dataset. Nonetheless, the impressive results achieved by LLMs in comparison to the prior state-of-the-art results in BC5CDR and DDI datasets demonstrate that in datasets having smaller training sets (both datasets have less than 1000 training samples), LLMs are more effective than even fine-tuned models. Meanwhile, in the KD-DTI dataset that has about 12K training samples, most zero-shot LLMs still achieve comparable performance, with PaLM-2 slightly outperforming the state-of-the-art result. More interestingly, while other LLMs achieve quite poor precision scores in the KD-DTI dataset, PaLM-2 even outperforms the current state-of-the-art result in terms of precision. Text Classification:In terms of Text Classification (see Table 8), we again compare the performance with the current state-of-the-art BioGPT model. We find that all of the zero-shot LLMs perform significantly poorer than the state-of-the-art fine-tuned BioGPT model in both HoC and Lit-Covid datasets. In particular, the performance of Claude-2 was very poor, which was also significantly below than other LLMs. Among LLMs, we find that GPT-3.5 and PaLM-2 are generally better, with PaLM-2 being the best performing LLM in both the HoC dataset and the LitCovid dataset. Nonetheless, their performance is still much lower than the current state-of-the-art results. We also investigate the effect of prompt tuning by evaluating two new prompts that are less descriptive, i.e., without giving definitions of the HoC classes, or without naming the HoC classes. Below we demonstrate our findings for GPT-3.5 based on prompt variations: _(i) Prompting with only the name of each HoC class is given without any definitions, drops the F1 score to 46.93._ _(ii) Prompting without explicitly mentioning the name of 10 HoC classes, drops F1 to 38.20._ This indicates that for classification tasks, descriptive prompts are very helpful in improving the performance of LLMs (see Appendix A.1 for more details). Question Answering:For question answering, we evaluate the performance in two datasets (see Table 8). In terms of the question-answering task in the PubMedQA dataset, we find that the performance of all LLMs is much lower than the current state-of-the-art BioGPT model. While the closed-source LLMs (GPT-3.5, PaLM-2, Claude-2) perform almost similarly, none of them could achieve more than 60% accuracy. More interestingly, none of these closed-source LLMs could outperform the LLaMA-2 model that achieves the best performance among LLMs in this dataset. This is an interesting finding since the LLaMA-2 only has 13B parameters, which is much smaller than the closed-source LLMs. To further investigate how LLaMA-2 achieves superior performance in this dataset, we present the confusion matrix using a heatmap based on the prediction made by different LLMs in Figure 2. 
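The confusion matrix visualized in Figure 2 can be tallied along the following lines. The label-normalization rule shown is a simplified assumption for illustration; in our actual pipeline, responses the script cannot parse are routed to human review (Section 4.3).

```python
# Sketch: normalize PubMedQA-style responses to {yes, no, maybe} and count
# (gold, predicted) pairs, i.e., the cells of a confusion matrix.
from collections import Counter

LABELS = ("yes", "no", "maybe")

def normalize(response: str) -> str:
    first = response.strip().lower()
    for label in LABELS:
        if first.startswith(label):
            return label
    return "unparsed"  # would fall back to human review in our pipeline

def confusion_matrix(gold: list[str], responses: list[str]) -> Counter:
    return Counter((g, normalize(r)) for g, r in zip(gold, responses))

counts = confusion_matrix(["yes", "no"], ["Yes, the context supports it.", "Yes."])
# Counter({('yes', 'yes'): 1, ('no', 'yes'): 1})
```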
From the heatmap, we find that all LLMs except LLaMA-2 make mistakes while predicting the "no" type label, as in most cases these LLMs (GPT-3.5, PaLM-2, Claude-2) ended up predicting the "yes" type label instead, leading to an overall poor accuracy. In terms of the question-answering task in the MEDIQA-2019 dataset, we find that, relative to the PubMedQA dataset, the accuracy is increased for GPT-3.5 and Claude-2, while being decreased for LLaMA-2 and PaLM-2; with the zero-shot GPT-3.5 achieving the best accuracy (73.26), which is comparable to the current state-of-the-art accuracy of 79.49 (He et al., 2020) obtained by the ALBERT model (Lan et al., 2019) fine-tuned on this dataset.

\begin{table}
\begin{tabular}{c|ccc|ccc|ccc}
\hline \hline
 & \multicolumn{9}{c}{**Dataset**} \\
 & \multicolumn{3}{c|}{**BC5CDR**} & \multicolumn{3}{c|}{**KD-DTI**} & \multicolumn{3}{c}{**DDI**} \\
**Model** & **Precision** & **Recall** & **F1** & **Precision** & **Recall** & **F1** & **Precision** & **Recall** & **F1** \\
\hline
**GPT-3.5** & 30.62 & 73.85 & 43.29 & 19.19 & 66.02 & 29.74 & **47.11** & **45.77** & **46.43** \\
**PaLM-2** & **51.61** & 57.30 & **54.30** & **40.21** & 36.52 & **38.44** & 35.47 & 16.48 & 22.50 \\
**Claude-2** & 44.04 & 67.73 & 53.37 & 17.99 & **72.73** & 25.84 & 39.27 & 46.00 & 42.62 \\
**LLaMA-2-13b** & 95.54 & **81.66** & 53.28 & 15.14 & 60.48 & 24.21 & 22.58 & 25.67 & 24.03 \\
\hline
**State-of-the-Art (SOTA)** & 49.52 & 43.25 & 46.17 & 40.00 & 39.72 & 38.42 & 41.70 & 44.75 & 40.76 \\
\hline \hline
\end{tabular}
\end{table}
Table 7: Performance on Relation Extraction datasets. All SOTA results are taken from the BioGPT (Luo et al., 2022) model.

\begin{table}
\begin{tabular}{c|cc|cc|ccc}
\hline \hline
 & \multicolumn{2}{c|}{**Text Classification**} & \multicolumn{2}{c|}{**Question Answering**} & \multicolumn{3}{c}{**Entity Linking**} \\
**Model** & **HoC** & **LitCovid** & **PubMedQA** & **MEDIQA-2019** & **BC5CDR** & **Cometa** & **NCBI** \\
 & **F1** & **F1** & **Accuracy** & **Accuracy** & **Recall@1** & **Recall@1** & **Recall@1** \\
\hline
**GPT-3.5** & 59.26 & 29.63 & 54.40 & **73.26** & 54.90 & 43.45 & 52.19 \\
**PaLM-2** & **64.03** & **37.90** & 59.60 & 52.12 & 52.14 & 48.76 & 38.44 \\
**Claude-2** & 34.93 & 7.60 & 57.20 & 65.13 & **76.01** & **53.29** & **70.21** \\
**LLaMA-2-13b** & 41.82 & 11.34 & **64.00** & 56.01 & 66.52 & 40.67 & 59.17 \\
\hline
**State-of-the-Art (SOTA)** & 85.12 & 86.20 & 78.20 & 79.49 & 93.26 & 81.77 & 89.90 \\
\hline \hline
\end{tabular}
\end{table}
Table 8: Performance on Text Classification, Question Answering (QA), and Entity Linking datasets. The SOTA results for HoC and PubMedQA are taken from the BioGPT (Luo et al., 2022) model, while we take the SOTA results from Gutierrez et al. (2020) and He et al. (2020) for LitCovid and MEDIQA-2019, respectively. Note that all SOTA results for Entity Linking are taken from the BioBART (Yuan et al., 2022) model.

Entity Linking: From Table 8, we find that Claude-2 outperforms all other LLMs in all three entity linking datasets: BC5CDR, Cometa, and NCBI. In BC5CDR and NCBI, we find that LLaMA-2 is the second-best performing model, while in Cometa, PaLM-2 is the second-best performing model. Nonetheless, the performance of the second-best performing models is still quite far below that of the Claude-2 model.
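For reference, the Recall@1 metric used for entity linking in Table 8 reduces to the fraction of mentions whose top-ranked concept matches the gold concept. The lowercased exact-match comparison in this sketch is an illustrative assumption about the normalization applied.

```python
# Sketch of Recall@1 for entity linking: top-1 predicted concept vs. gold.
def recall_at_1(gold: list[str], top1: list[str]) -> float:
    hits = sum(g.strip().lower() == p.strip().lower()
               for g, p in zip(gold, top1))
    return hits / len(gold)

print(recall_at_1(["atrial fibrillation"], ["Atrial Fibrillation"]))  # 1.0
```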
This finding suggests that Claude-2 is more useful than the other models in biomedical entity linking tasks, effectively retrieving the correct definition from its pre-training knowledge, although its performance is still much below that of the current fine-tuned SOTA models.

Named Entity Recognition (NER): From Table 9, we find that, similar to Entity Linking, Claude-2 again outperforms the other LLMs across all NER datasets in terms of all evaluation metrics: _Precision_, _Recall_, and _F1_. However, the performance of LLMs is still much lower than the current SOTA results, with the performance of LLaMA-2 being significantly poorer than that of the other LLMs. Such limitations of zero-shot LLMs in NER have also been observed in datasets from the general NLP domain (Laskar et al., 2023). These findings give a strong indication that generative LLMs need further improvement on sequence labeling tasks like NER using the traditional BIO formatting.

Summarization: We present our results on the following summarization datasets: _Dialogue Summarization_, _Question Summarization_, and _Answer Summarization_ in Table 10 and compare with BioBART (Yuan et al., 2022). For evaluation (Laskar et al., 2022), we use the widely used ROUGE (Lin, 2004) metric and the BERTScore (Zhang et al., 2019) metric. For BERTScore, we use the RoBERTa-Large (Liu et al., 2019) model for implementation. For all LLMs, we consider an input context length of 2000 words. We observe that in terms of the ROUGE metric, all LLMs perform much worse than BioBART in datasets that have dedicated training sets, such as iCliniq, HealthCareMagic, and MeQSum. Meanwhile, they perform on par with BioBART in the MEDIQA-QS dataset. Among LLMs, we find that GPT-3.5 is generally the best performer in these datasets. More importantly, we find that it outperforms BioBART in both the MEDIQA-ANS and MEDIQA-MAS datasets. Note that the MEDIQA-ANS, MEDIQA-MAS, and MEDIQA-QS datasets do not have any dedicated training data, and GPT-3.5 and other LLMs usually achieve comparable or even better performance in these datasets compared to the BioBART model fine-tuned on other related datasets (Yuan et al., 2022). This further confirms that zero-shot LLMs are more useful than domain-specific fine-tuned models in biomedical datasets that lack large training data.

We also present our findings on the biomedical lay summarization task in Table 11 and the readability-controlled summarization task in Table 12. For the biomedical lay summarization task, we concatenate the abstract and the article together and give them as input to the models until the concatenated text reaches the maximum context length. For this task, we compare the performance of the LLMs on the eLife and PLOS datasets. We find that among the LLMs, based on the ROUGE scores, the Claude-2 model is the best performing one in the eLife dataset, while GPT-3.5 is the best performing one in the PLOS dataset. However, none of the LLMs could outperform the current state-of-the-art in these datasets. While the performance of the LLMs is quite low in terms of ROUGE, they achieve much higher scores in terms of BERTScore, which are comparable to the state-of-the-art results. This shows a great discrepancy between the lexical-matching-based traditional ROUGE scoring and the contextual-similarity-based BERTScore metric.

The readability-controlled summarization task contains two sub-tasks: (i) abstract writing, and (ii) lay summary writing.
Contrary to the previous task (i.e., the biomedical lay summarization task), this time we only give the article as input, without the abstract, as required by the task. We find that in writing the abstract of the given article, the Claude-2 model performs the best in terms of all ROUGE scores. However, in terms of BERTScore, GPT-3.5 performs slightly better than Claude-2.

Table 10: Performance on various summarization datasets. Here, ‘R-1’, ‘R-2’, ‘R-L’ and ‘B-S’ denote ‘ROUGE-1’, ‘ROUGE-2’, ‘ROUGE-L’, and ‘BERTScore’, respectively. State-of-the-art (SOTA) results are taken from the BioBART (Yuan et al., 2022) model. Also, LLaMA-2 refers to its 13b version, similar to other tasks.
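A minimal sketch of the automatic summarization evaluation used above is given below, based on the publicly available rouge-score and bert-score packages (assuming `pip install rouge-score bert-score`). For English inputs, bert-score defaults to the RoBERTa-Large model, matching our BERTScore setup; the example strings are illustrative only.

```python
# Sketch: ROUGE-1/2/L and BERTScore for a single (hypothesis, reference) pair.
from rouge_score import rouge_scorer
from bert_score import score as bert_score

hyp = "A 62-year-old woman with atrial fibrillation was started on warfarin."
ref = "An ECG confirmed atrial fibrillation; she was treated with warfarin."

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"],
                                  use_stemmer=True)
rouge = scorer.score(ref, hyp)  # dict of Score(precision, recall, fmeasure)

P, R, F = bert_score([hyp], [ref], lang="en")  # RoBERTa-Large by default
print(rouge["rougeL"].fmeasure, F.mean().item())
```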
Interestingly, we find that in terms of the BERTScore, the GPT-3.5 model even outperforms the ROUGE-based SOTA models in both datasets. This further establishes the limitation of using ROUGE as a metric to evaluate LLMs for summarization [10]. We also investigate the performance using various input context lengths (since we cannot input the whole document at once to these LLMs, except Claude-2). We use the following context lengths (in terms of the number of words): PaLM-2: 2000 and 5000; GPT-3.5: 2000, 5000, and 10000; and Claude-2: 2000, 5000, and the full input document. Since LLaMA-2 has a maximum context length of 4000 tokens (approximately 3000 words14), we exclude LLaMA-2 from this study. Our results for both tasks, biomedical lay summarization and readability-controlled summarization, can be found in Table 13 and Table 14, respectively.
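The word-level truncation used in this context-length study can be sketched as follows; splitting on whitespace is an assumption about how words are counted.

```python
# Sketch: keep only the first max_words whitespace-delimited words of a
# document before it is placed into the prompt.
def truncate_words(document: str, max_words: int) -> str:
    return " ".join(document.split()[:max_words])

article = "..."  # full article text
for budget in (2000, 5000, 10000):
    context = truncate_words(article, budget)
    # prompt = instruction + context  # then sent to the LLM
```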
Based on our experiments, we find that increasing the context length decreases the performance of PaLM-2 in both tasks across all datasets. Moreover, increasing the context length also does not help GPT-3.5 or Claude-2 gain any substantial performance improvement. This can be explained based on the work of Liu et al. (2023), where they find that LLMs tend to lose contextual information as the sequence length increases, and that they perform especially poorly in scenarios where they are required to generate responses based on information that appears in the middle of the context.

Footnote 14: [https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them](https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them)

Based on our findings in these article summarization datasets, we find that using a context length of 2000 words is good enough in terms of the ROUGE and BERTScore metrics for both abstract and lay summarization. This context length should also be very helpful in terms of usage cost as well as time efficiency in comparison to using longer contexts.

## 6 Conclusions and Future Work

In this paper, we evaluate LLMs on six benchmark biomedical tasks across 26 datasets. We observe that in datasets that have large training data, zero-shot LLMs usually fail to outperform the fine-tuned state-of-the-art models (e.g., BioBERT, BioGPT, BioBART, etc.). However, they consistently outperform fine-tuned models on datasets where the training data size is small. These findings suggest that LLMs can be useful in low-resource biomedical tasks. Moreover, our findings demonstrate that the performance of these LLMs may vary across different datasets and tasks, as we did not observe a single LLM outperforming others across all datasets and tasks. Thus, our findings will give a good direction for building task-specific biomedical systems using LLMs for real-world usage.
We also acknowledge Compute Canada for providing us with the computing resources to conduct experiments, as well as Anthropic for providing early access to the Claude-2 model for evaluation. ## Appendix A Appendix ### Effects of Prompt Variation We investigate the effects of prompt tuning in the HoC dataset by evaluating the performance of GPT-3.5 based on the following prompt variations: * Prompting with explicitly defining the 10 HoC classes achieves an F1 score of 59.26 (see Row 1 in Table 15). * Prompting without mentioning the name of any HoC classes, drops F1 to 38.20 (see Row 2 in Table 15). * Prompting with the name of each HoC class is given without providing the definition of each class, drops the F1 score to 46.93 (see Row 3 in Table 15). ### Average Performance Per Task To provide further insights, we demonstrate the task-specific average performance of each LLM: - See Figure 3 for Relation Extraction. - See Figure 4 for NER. - See Figure 5 for Entity Linking. - See Figure 6 for Text Classification. - See Figure 7 for Question Answering. - See Figure 8 for Summarization. Our findings demonstrate that more descriptive prompts yield better results. ### Sample Response Generated by LLMs Some sample prompts with the responses generated by LLMs are given as follows: - See Table 16 for Relation Extraction, Document Classification, and Question Answering tasks. - See Table 17 and Table 18 for Text Summarization. \begin{table} \begin{tabular}{p{28.5pt} p{113.8pt} p{113.8pt}} \hline \hline **\#** & **Prompt** & **F1** \\ \hline 1. & The 10 hallmarks of cancer taxonomy with their definitions are given below: (i) Sentiating proliferative signaling: Cancer cells can initiate and maintain continuous cell division by producing their own grown factors or by altering the sensitivity of receptors to growth factors. (ii) Examples growth suppressors: Cancer cells can bypass the normal cellular mechanisms that limit cell division and growth, such as the inactivation of tumor aggregates genes and their instability to injurgation signals. (iii) Resisting cell death: Cancer cells develop resistance to apoptosis, the programmed cell death process, which allows them to survive and continue drinking. (iv) Enabling replicative immutability. Cancer cells can extend their ability to divide incidentally by maintaining the length of telomeres, the protective end caps on chromosomes. (v) Indicating angiogenesis: Cancer cells stimulate the growth of new blood vessels, providing the necessary matrices and oxygen to support their rapid growth. (vi) Activating invasion and metastasis: Cancer cells can invade surrounding tissues and migrate to distinct sites in the body, forming secondary tumors called metastasis. (vi) Desergulating cellular energetic metabolism: Cancer cells require their metabolism to support rapid cell division and growth, often relying more on glycolysis even in the presence of oxygen (a phenomenon known as the Watcher effect). (vii) Avoiding immune destruction: Cancer cells can avoid detection and elimination by the immune system through various mechanisms, such as downregulation and self-attractive or producing immunoprecases signals. (ix) Tumor promoting inflammation: Chronic inflammation can promote the development and progression of cancer by applying growth factors, survival signals, and other molecules that facilitate cancer cell proliferation and survival. 
(v) Genome instability and mutation: Cancer cells exhibit increased genomic instability, leading to a higher mutation rate, which in turn drives the infinite and progression of cancer. (Classify the following sentence in one of the above 10 hallmarks of cancer taxonomy. If cancer be classified, answer as “empty”. [SENTENCE] & 38.20 \\ \hline 2. & Is it possible to classify the following sentence in one of the 10 categories in the hallmarks of Cancer taxonomy? If possible, write down the class. [SENTENCE] & 46.93 \\ \hline 3. & Classify the sentence given below in one of the 10 categories (i. activating invasion and metastasis, it, tumor promoting inflammation, ill, inducing angiogenesis, it, evating growth suppressors, v, resisting cell death; v) cellular emergencies, v, genomic instability and mutation, vii, sustaining proliferative signaling, it, avoiding immune destruction, x, enabling replicative immutability in the hallmarks of Cancer taxonomy? If contact the classified, answer as “empty”. [SENTENCE] & 46.93 \\ \hline \hline \end{tabular} \end{table} Table 15: Effects of Prompt Variation in the HoC dataset for the Document Classification Task. Figure 4: Average score by different models across NER datasets. Figure 5: Average score by different models across entity linking datasets. Figure 3: Average score by different models across relation extraction datasets. Figure 8: Average score by different models across summarization datasets. Figure 6: Average score by different models across different text classification datasets. Figure 7: Average score by different models across question answering datasets. \begin{table} \begin{tabular}{l l l l} \hline \hline **Dataset** & **Type** & **Prompt** & **GPL3.5 Response** & **Gold Label** \\ \hline BCSCDR & Chemical-Dissense Relation Extraction & Identify each pair of daps and the daps-induced side-effects (e.g., diseases) in the following passage: & Drug-Famodine & chemical: famodine-Dissense \\ & & Fumodine-associated delitism. A series of six cases. Famodine is a histamine I2D-term antagonist used in inpatient testing for prevention of stress ulcers and is showing its proximity because of its low cost. Although all of the currently available & Site effect: Increased \\ & & HE acceptor antagonists have shown the propensity to cause delitism, only two previously reported cases have been associated with fumodine. The authors report on six cases of fumodine-associated delitism in hospitalized patients when the declared company openedly protocol & \\ & & IT control of fumodine. The pharmacokinetics of fumodine are reviewed, with no change in its metabolism in the elderly population seen. The implications of using fumodine in elderly persons are discussed. & \\ \hline KD-DTI & Dang-Target & Identify the daps-target interactions in the following passage (along with the interaction: "helbistor", "agonist", "modulator", "activate", "bocker", "in-discrim", "demayer", "disruption", "interactive", "hard", "bocker", "Uberai", "potential", "solvent", "batter", \begin{table} \begin{tabular}{l l l l l} \hline \hline **Dataset** & **Type** & **Prompt** & & & \\ \hline iCiiiap & Dialog & Write a very short and concise one line summary of the following dialogue as an informal & What are some ways & I am depressed, and my paragements & are not supporting me \\ & Summarization & question in a healthcare forum: & & & \\ & Patient: Heille doctor, I am depressed. What can I do? 
My parents are not supporting for a & a without & & \\ & & surgery which is important for my self-confidence and self-esteem. My father is not lasting care & what details should I \\ & & of my health level-being. Please help Doctor: Heille. Any related incident or care for fixing & provide to my doctor? \\ & & depression is unclear in this query. Details of surgery and symptoms which are not mentioned & are appreciated to know the complete picture. Depression feelings are generally linked with & \\ & & some less, self-related issues, stress, or environmental or biological factors. I can be managed & \\ & & by: 1. Medicines. I. Paychotherapy to identify the concise agents and try resolving the issue. & \\ & & 3. Utilizing thousands related to concerns. 4. Physical exercises like shaking, logging, and, wodout. 5. Diet changes & Reduce the intake of too much salt and junk food and reduce sugar intake. Consumer healthy foods and fruits having serotonin and omega-3 can reduce depression. & \\ \hline HealthCare & Dialog & Write a very short and concise one line summary of the following dialogue as a question in a & What is the diameter of What should be the diameter of the hepatic portal vein & of hepatic portal vein of \\ Magic & Summarization & healthcare forum: & & \\ & Patient: Deor Docard. They have a 15 years old skelughter, I am looking for the written file to in a 1-2 year old child, i.e. years old? & in a 1-2 year old child, i.e. years old? & and what is the free- \\ & & understand one clinical purposes of a normal young child-1-2 years old? it what is the frequency of the hepatic portal flow? & query of the hepatic portal flow? & query of the hepatic portal flow? \\ & & you could help me with the information Kelvin in Doctor. 4.8 ms from i’s free diameter for & portal flow? \\ & & hepatic portal vein for 1.5 year of children, there is no thing like frequency of hepatic portal flow. please make it clear & \\ \hline MeQSum & Question & Rewrite the following question in a short and concise form. & Where can I get my & Where can I get genetic \\ & Summarization & Question: William’s syndrome. I would like to have any daughter tested for William’s syndrome. & daughter tested for testing for william’s syndewment in docome? \\ & & Could you please tell me where I would go or who does it in my area? That book! & William’s syndrome in docome? \\ \hline MEDIQA- & Question & Rewrite the following question in a short and concise form: & Any care for a growing & How can i get rid of a lower \\ QS & Summarization & Question: 23 surgeries and counting, lower lip birthmark, have tried all options on the theme & lower lip birthmark after lip birthmark permanently? \\ & & and genes what will have it, continues to grow back... any suggestion? Is there a cure coming & 23 surgeries? \\ \hline MEDIQA- & Answer & Write a very short and concise summary of the following article based on the question given & The article discusses & Bad breath could be caused \\ ANS & Summarization & below: & & \\ & & & be: \\ & & Question: Bad breath have very bad breath and at times it can make myself and others sick. It & which is usually \(r\)- diseases, infections, smoked some make as to what need to do. & related to poor dental by: ing. alcoholism, and cerf. \\ & & Article: Bad breath Halios Summary There are many reasons why you might have bad breath. & does and diseases can \\ & & You can get it if you don’t brush and foss regularly. 
Bacteria that build up in your mouth and & also produce distinct \\ & & between your teeth produce the bad color. Other problems in your mouth, such as gm disease, breath odors, such as dry mouth or cavities, may also cause. It. Signatures or problems with your nose may be to be blame. & truly breath being a \\ & & You can also have bad breath if you eat some foods, like raw unions, garlic or cabbage. And of sign of broadcasts in \\ & & course smoking causes in you had aroma. Some diseases and medicines are associated with a diabetes. Proper consumption specific breath odor. Having good dental habits, helping and fossing rapidly, help fight it hygiene, avoiding \\ & & had breath. Mouthawables, units or chewing gum may make your breath fresher. If you have an smoking, and follousing \\ & & underlying disorder, treating it may help eliminate the breath odor. & healthcare worker’s instructions are recommended for testing bal \\ & & & method for treating bal \\ & & breath. If bad breath \\ & & & persists or it is accompanied \\ & & & \\ & & & \\ & & & \\ & & medical attention may \\ \hline MEDIQA- & Answer & For the following question, some relevant answers are given below. Please write down a short & It is important to ask Most machines are safe your provider about step: 10 we with a pacemaker. \\ & & you monitor about step: 10 we with a pacemaker. \\ & & Question: Can I use a Power Plate machine if I have a pacemaker? & coffee drives that may change devices should be \\ & & & \\ & & & maker, but not? ap. \\ & & Answer 1: Most machines and devices will not interfere with your pacemaker. But some with pimimeness in the home very forward confles touch \\ & & strong magnetic fields may. Always say your provider about any specific driver that one might be an safe & If it is room (such as screwdrivers and to avoid), Do NOT put a magnet near your pacemaker. Most appliances is your home are safe & 11. It is room (such as screwdrivers and to avoid), Do NOT put a magnet near your pacemaker. Most appliances in your home are safe & 11. It is common (such as cliffs and table) to be around. This includes your refrigerator, weather, doy, counter, bed, motor, computers and from large motors, gpus (such as dills and table fix machines, that player, store, cup, change controls, and motorness. You should keep), extract, and equipment, sensors & Electric Insurance-several devices at least 12 inches (80 centimeters) away from this where the pacemaker is products that the mag, and \(r\)-s and four hours Sifted Oliver Street (has include: finger) operated cells (such as screwdrivers and theirchers, and may be machines Sereo speakable). \\ & & Only: In-gap your tooth can still and thus have a specific luxiteness and leaf blowers (can surround and real women medical equip). \\ & & Slot machines Store speakers Tell all provided you had two appearance before any tests are detectors and security may interfere with dome zone zone model equipment into strictty with your pacemaker.Say way from large motors, watch. Additionally, adj. your pacemaker.Say way may (or be called), and equipment. Do NOT lean over the open head of a car that is running. Also, for getting a pacemaker, from large motors, generalway may favor. Radio transmitters and high-clubage usage time Products that are magnetic, some activity criteria, and equipment. Rather therapy, such as some matreress, pillowness, and managers Large Legeratic or specific-position of some may apply, such as chairs and highlights all appliances for you have an cell phone. 
Do NOT put a piece in the same side of your body and, but if things heavy objects & voltage power lines Prod. \\ & & as your pacemaker. When you still come, hold it to your car on the opposite side of another and main main move: & this that magnetic your body. be careful around metal defects and security hands. Handheld security sounds & ments. It is important therapy, such as some may interfere with your pacemaker. Show your conflict and ask to be named second. Most to carry a viable end tamers, pillows, and security gates at an airports and stores an OK. But DO NOT stand near these devices for long with presumeal details & managers Large electrical-periodids. Your pacemaker may set of alarms. After any operation, have your provider check your and emergency contract or gasoline-powered equipment. \\ & & & information. & ames. Do not yet your \\ & & & & cell phone in a pocket or \\ & & & & \\ & & & & \\ \hline \hline \end{tabular} \end{table} Table 17: Sample prompts with GPT-3.5 generated responses for different Abstractive Summarization tasks (Dialog Summarization, Question Summarization, and Answer Summarization). **GPT-3.5 Response** **Policy** **Luy Sammer** **Wiez down a remarkable summary of the following biomedical article using lens** **This article discusses the rule of genetic differ. Many REAUs can be the institution access, technical terminology (e.g., by summary) and that it can be unrecurfied for more than two of us may be unrecurfied on the work for the non-target discovery.** [MISSING_PAGE_POST] **Sureurely** [MISSING_PAGE_POST] **Sure** **Surely** ** **Sure** **Sure** **Surely** ** [MISSING_PAGE_POST]
2308.01142
Incompressible Limit of Compressible Ideal MHD Flows inside a Perfectly Conducting Wall
We prove the incompressible limit of compressible ideal magnetohydrodynamic(MHD) flows in a reference domain where the magnetic field is tangential to the boundary. Unlike the case of transversal magnetic fields, the linearized problem of our case is not well-posed in standard Sobolev space $H^m~(m\geq 2)$, while the incompressible problem is still well-posed in $H^m$. The key observation to overcome the difficulty is a hidden structure contributed by Lorentz force in the vorticity analysis, which reveals that one should trade one normal derivative for two tangential derivatives together with a gain of Mach number weight $\varepsilon^2$. Thus, the energy functional should be defined by using suitable anisotropic Sobolev spaces. The weights of Mach number should be carefully chosen according to the number of tangential derivatives, such that the energy estimates are uniform in Mach number. Besides, part of the proof is similar to the study of compressible water waves, so our result opens the possibility to study the incompressible limit of free-boundary problems in ideal MHD.
Jiawei Wang, Junyan Zhang
2023-08-02T13:32:35Z
http://arxiv.org/abs/2308.01142v5
# Incompressible Limit of Compressible Ideal MHD Flows inside a Perfectly Conducting Wall

###### Abstract

We prove the incompressible limit of compressible ideal magnetohydrodynamic (MHD) flows in a reference domain where the magnetic field is tangential to the boundary. Unlike the case of transversal magnetic fields, the linearized problem of our case is not well-posed in standard Sobolev space \(H^{m}\) (\(m\geq 2\)), while the incompressible problem is still well-posed in \(H^{m}\). The key observation to overcome the difficulty is a hidden structure contributed by Lorentz force in the vorticity analysis, which reveals that one should trade one normal derivative for two tangential derivatives together with a gain of Mach number weight \(\varepsilon^{2}\). Thus, the energy functional should be defined by using suitable anisotropic Sobolev spaces. The weights of Mach number should be carefully chosen according to the number of tangential derivatives, such that the energy estimates are uniform in Mach number. Besides, part of the proof is similar to the study of compressible water waves, so our result opens the possibility to study the incompressible limit of free-boundary problems in ideal MHD.

**Keywords**: Compressible ideal MHD, Incompressible limit, Perfectly conducting wall.

**MSC(2020) codes**: 35L60, 35Q35, 76M45, 76W05.

###### Contents

* 1 Introduction
* 2 Difficulties and strategies
* 3 Uniform energy estimates
* 4 Incompressible limit

## 1 Introduction

In this paper, we consider the compressible ideal magnetohydrodynamics (MHD) equations

\[\left\{\begin{aligned} & D_{t}\rho+\rho(\nabla\cdot u)=0&& \text{in }[0,T]\times\Omega,\\ &\rho D_{t}u=B\cdot\nabla B-\nabla P,\;\;P:=p+\frac{1}{2}|B|^{2}&& \text{in }[0,T]\times\Omega,\\ & D_{t}B=B\cdot\nabla u-B(\nabla\cdot u)&&\text{in }[0,T]\times\Omega,\\ &\nabla\cdot B=0&&\text{in }[0,T]\times\Omega,\\ & D_{t}S=0&&\text{in }[0,T]\times\Omega,\end{aligned}\right. \tag{1.1}\]

describing the motion of a compressible conducting fluid in an electro-magnetic field. Here \(\Omega:=\mathbb{T}^{d-1}\times(-1,1)\) is the reference domain in \(\mathbb{R}^{d}\) \((d=2,3)\) with boundary \(\Sigma:=\Sigma_{+}\cup\Sigma_{-}\) and \(\Sigma_{\pm}:=\{x_{d}=\pm 1\}\). \(\nabla:=(\partial_{x_{1}},\cdots,\partial_{x_{d}})\) is the standard spatial derivative. \(D_{t}:=\partial_{t}+u\cdot\nabla\) is the material derivative. The fluid velocity, the magnetic field, the fluid density, the fluid pressure and the entropy are denoted by \(u=(u_{1},\cdots,u_{d})\), \(B=(B_{1},\cdots,B_{d})\), \(\rho\), \(p\) and \(S\) respectively. Note that the last equation of (1.1) is derived from the equation of total energy and Gibbs relation. See [6, Ch. 4.3] for more details. We assume the fluid pressure \(p=p(\rho,S)\) to be a given smooth function of \(\rho,S\) which satisfies

\[\rho>0,\ \ \ \frac{\partial p}{\partial\rho}>0,\ \ \ \ \text{in}\ \bar{\Omega}. \tag{1.2}\]

These two conditions also guarantee the hyperbolicity of system (1.1). The initial and boundary conditions of system (1.1) are

\[(u,B,\rho,S)|_{t=0}=(u_{0},B_{0},\rho_{0},S_{0})\ \ \ \text{in}\ \Omega, \tag{1.3}\]
\[u_{d}=B_{d}=0\ \ \ \text{on}\ [0,T]\times\Sigma, \tag{1.4}\]

where the boundary condition for \(u_{d}\) is the slip boundary condition, and the boundary condition for \(B_{d}\) shows that \(\Sigma_{\pm}\) are perfectly conducting walls.
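For later reference, we record the computation behind the constraint-propagation claim in Remark 1.1 below (a standard sketch, included for the reader's convenience). Taking the divergence of the evolution equation of \(B\) in (1.1), the cross terms \(\partial_{i}u_{j}\partial_{j}B_{i}\) and \(\partial_{i}B_{j}\partial_{j}u_{i}\) cancel, and the two terms \(B\cdot\nabla(\nabla\cdot u)\) cancel as well, which leaves

\[D_{t}(\nabla\cdot B)=-(\nabla\cdot u)(\nabla\cdot B).\]

Hence \(\nabla\cdot B|_{t=0}=0\) implies \(\nabla\cdot B=0\) within the lifespan of the solution by Grönwall's inequality.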
**Remark 1.1**.: The conditions \(\nabla\cdot B=0\) in \(\Omega\) and \(B_{d}=0\) on \(\Sigma\) are both constraints for initial data so that the MHD system is not over-determined. One can show that they propagate within the lifespan of the solution. Using the theory of hyperbolic systems with characteristic boundary conditions [22], one can show that the correct number of boundary conditions when assuming \(u_{d}|_{\Sigma}=0\) is \(1\) (see [31]). So, \(B_{d}|_{\Sigma}=0\) has to be an initial constraint.

The initial-boundary value problem (1.1)-(1.4) is used to characterize the motion of a plasma confined in a rigid perfectly conducting wall, which is an important model in the study of laboratory plasma confinement problems. See [6, Ch. 4.6] for more details. To make the initial-boundary value problem (1.1)-(1.4) solvable, we need to require the initial data to satisfy the compatibility conditions up to a certain order. For \(m\in\mathbb{N}\), we define the \(m\)-th order compatibility conditions to be

\[\partial_{t}^{j}u_{d}|_{t=0}=0,\ \ 0\leq j\leq m. \tag{1.5}\]

Let \(\mathcal{F}:=\log\rho\). Since \(\frac{\partial p}{\partial\rho}>0\) indicates \(\frac{\partial\mathcal{F}}{\partial p}>0\), using \(D_{t}S=0\), the first equation of (1.1) is equivalent to

\[\frac{\partial\mathcal{F}}{\partial p}D_{t}p+\nabla\cdot u=0. \tag{1.6}\]

Thus the compressible MHD system is now reformulated as follows

\[\begin{cases}\frac{\partial\mathcal{F}}{\partial p}D_{t}p+\nabla\cdot u=0&\text{in}\ [0,T]\times\Omega,\\ \rho D_{t}u=B\cdot\nabla B-\nabla P,\ \ P:=p+\frac{1}{2}|B|^{2}&\text{in}\ [0,T]\times\Omega,\\ D_{t}B=B\cdot\nabla u-B(\nabla\cdot u)&\text{in}\ [0,T]\times\Omega,\\ \nabla\cdot B=0&\text{in}\ [0,T]\times\Omega,\\ D_{t}S=0&\text{in}\ [0,T]\times\Omega,\\ p=p(\rho,S),\ \frac{\partial p}{\partial\rho}>0&\text{in}\ [0,T]\times\bar{\Omega},\\ u_{d}=B_{d}=0&\text{on}\ [0,T]\times\Sigma,\\ (u,B,\rho,S)|_{t=0}=(u_{0},B_{0},\rho_{0},S_{0})&\text{on}\ \{t=0\}\times\Omega.\end{cases} \tag{1.7}\]

### 1.1 The equation of state and sound speed

This paper is devoted to studying the behavior of the solution of (1.7) as the sound speed goes to infinity, which is known to be the incompressible limit. Physically, the sound speed is defined by \(c_{s}:=\sqrt{\frac{\partial p}{\partial\rho}}\). Mathematically, it is convenient to view the sound speed as a parameter \(\lambda\). This being said, a typical choice of the equation of state \(p_{\lambda}(\rho,S)\) would be the polytropic gas

\[p_{\lambda}(\rho,S)=\lambda^{2}\left(\rho^{\gamma}\exp(S/C_{V})-1\right),\ \ \ \gamma\geq 1,\ \ C_{V}>0. \tag{1.8}\]

When viewing the density as a function of the pressure and the entropy, this indicates

\[\rho_{\lambda}(p/\lambda^{2},S)=\left(\left(1+\frac{p}{\lambda^{2}}\right)e^{-\frac{S}{C_{V}}}\right)^{\frac{1}{\gamma}},\quad\text{and}\quad\log\left(\rho_{\lambda}(p/\lambda^{2},S)\right)=\gamma^{-1}\log\left(\left(1+\frac{p}{\lambda^{2}}\right)e^{-\frac{S}{C_{V}}}\right). \tag{1.9}\]

Hence, we can view \(\mathcal{F}=\log\rho\) as a parametrized family \(\{\mathcal{F}_{\varepsilon}(p,S)\}\) as well, where \(\varepsilon=\frac{1}{\lambda}\). Indeed, we have

\[\mathcal{F}_{\varepsilon}(p,S)=\gamma^{-1}\log\left((1+\varepsilon^{2}p)e^{-\frac{S}{C_{V}}}\right),\quad\text{so that}\quad\frac{\partial\mathcal{F}_{\varepsilon}}{\partial p}(p,S)=\frac{\varepsilon^{2}}{\gamma(1+\varepsilon^{2}p)}.
\tag{1.10}\]

Since we work in the regime where the entropy and velocity are both bounded (later we will assume \(u,S\in H^{4}(\Omega)\)), we slightly abuse the terminology, calling \(\lambda\) the sound speed and \(\varepsilon\) the Mach number, so that \(M=O(\varepsilon)\). Furthermore, there exists \(C>0\) such that

\[C^{-1}\varepsilon^{2}\leq\frac{\partial\mathcal{F}_{\varepsilon}}{\partial p}(p,S)=\frac{1}{\rho}\frac{\partial\rho}{\partial p}(p,S)\leq C\varepsilon^{2}. \tag{1.11}\]

Also, we assume

\[|\partial_{S}^{s}\mathcal{F}_{\varepsilon}(p,S)|\leq C,\quad|\partial_{p}^{s}\mathcal{F}_{\varepsilon}(p,S)|\leq C|\partial_{p}\mathcal{F}_{\varepsilon}(p,S)|^{s}\leq C\varepsilon^{2s} \tag{1.12}\]

hold for \(0\leq s\leq 8\). Thus, when considering the incompressible limit, that is, when \(\varepsilon>0\) is sufficiently small, it is more convenient to reformulate the compressible MHD system by replacing \(\frac{\partial\mathcal{F}}{\partial p}\) with \(\varepsilon^{2}\) as follows

\[\begin{cases}\varepsilon^{2}D_{t}p+\nabla\cdot u=0&\text{in}\ [0,T]\times\Omega,\\ \rho D_{t}u=B\cdot\nabla B-\nabla P,\ \ P:=p+\frac{1}{2}|B|^{2}&\text{in}\ [0,T]\times\Omega,\\ D_{t}B=B\cdot\nabla u-B(\nabla\cdot u)&\text{in}\ [0,T]\times\Omega,\\ \nabla\cdot B=0&\text{in}\ [0,T]\times\Omega,\\ D_{t}S=0&\text{in}\ [0,T]\times\Omega,\\ p=p(\rho,S),\ \frac{\partial\rho}{\partial p}>0&\text{in}\ [0,T]\times\bar{\Omega},\\ u_{d}=B_{d}=0&\text{on}\ [0,T]\times\Sigma,\\ (u,B,\rho,S)|_{t=0}=(u_{0},B_{0},\rho_{0},S_{0})&\text{on}\ \{t=0\}\times\Omega.\end{cases} \tag{1.13}\]

### 1.2 An overview of previous results

The incompressible limit of compressible inviscid fluids is considered to be a type of singular limit of hyperbolic systems: the pressure for compressible fluids is a variable of the hyperbolic system, whereas the pressure for incompressible fluids is a Lagrangian multiplier and the equation of state is no longer valid. Early works about compressible Euler equations can be dated back to Klainerman-Majda [14, 15] when the domain is the whole space \(\mathbb{R}^{d}\) or the periodic domain \(\mathbb{T}^{d}\), Ebin [5] and Schochet [23, 24] when the domain is bounded, and Isozaki [9] when considering an exterior domain. The abovementioned papers consider the case of "well-prepared" initial data, which means the compressible initial data is exactly a small perturbation of a given incompressible initial data. When the initial data is "ill-prepared", that is, the compressible initial data is the small perturbation of incompressible initial data plus a highly oscillatory part, we refer to [34, 2, 9, 25, 8, 29, 18, 1] and references therein. The precise definitions of "well-prepared data" and "ill-prepared data" can be found in [29, 18, 1].

For compressible ideal MHD, the incompressible limit in the whole space was studied by Jiang-Ju-Li [10]. However, when the domain has a boundary, the study of the incompressible limit for ideal compressible MHD is much more subtle. The first difficulty is that the vorticity estimate for ideal compressible MHD cannot be closed when using standard Sobolev spaces, and thus the method presented in [23, 24] is no longer valid. When the magnetic field is not tangential to the boundary, one can use such transversality to compensate the loss of normal derivative arising in the vorticity analysis. See Yanagisawa [35] for the well-posedness result under the condition \(B\times N|_{\partial\Omega}=\mathbf{0}\).
As for the singular limits, we refer to Ju-Schochet-Xu [12] for the singular limits when both the Mach number and the Alfvén number converge to zero, and Jiang-Ju-Xu [11] for the low Alfvén number limit of incompressible ideal MHD, whose local existence was proved via a low-Mach-number regime. Unfortunately, when the magnetic field is tangential to the boundary, that is, \(B\cdot N|_{\partial\Omega}=0\), Ohno-Shirota [20] proved that the linearized problem near a non-zero magnetic field is ill-posed in standard Sobolev spaces. In this case, one has to use suitable anisotropic Sobolev spaces introduced by Chen [3] to prove the well-posedness. See Yanagisawa-Matsumura [36] and Secchi [26, 28]. Therefore, an essential difficulty in establishing the incompressible limit for ideal MHD with \(B\cdot N|_{\partial\Omega}=0\) is the "incompatibility of function spaces for the existence": compressible ideal MHD flow with the perfectly conducting wall condition may not be well-posed in standard Sobolev spaces, while the corresponding incompressible problem is well-posed (see, for example, Gu-Wang [7]).

So far, there is only a partial answer to this incompressible limit problem. Ju and the first author [13] proved the incompressible limit in a suitable closed subspace of the standard Sobolev space, introduced by Secchi [27], by adding more restrictive constraints for the boundary value of initial data. Specifically, [27, 13] require the initial data to satisfy

\[\partial_{3}^{2k}(u_{3},B_{3})|_{\Sigma}=0,\ \partial_{3}^{2k+1}(p,u_{1},u_{2},B_{1},B_{2},S)|_{\Sigma}=0,\ \ k=0,1,\cdots,\lfloor\frac{m-1}{2}\rfloor.\]

However, the physical interpretation of these "extra constraints" is still unclear. In other words, it is still unknown how to thoroughly overcome the difficulty caused by the abovementioned "incompatibility". The aim of this paper is to give a definitive answer to the incompressible limit problem of ideal MHD with the perfectly conducting wall condition when the initial data is "well-prepared". Indeed, in the vorticity analysis, there is a hidden structure brought by the Lorentz force term. To the best of our knowledge, such an observation has not appeared in previous works; it allows us to prove the uniform estimates in Mach number with little effort and also illustrates why the anisotropic Sobolev spaces are naturally introduced and necessary for compressible MHD but are not needed for incompressible MHD.

### 1.3 The main theorems

Before stating our results, we should first define the anisotropic Sobolev space \(H^{m}_{*}(\Omega)\) for \(m\in\mathbb{N}\) and \(\Omega=\mathbb{T}^{d-1}\times[-1,1]\). Let \(\omega=\omega(x_{d})\) be a cutoff function1 on \([-1,1]\) defined by \(\omega(x_{d})=(1-x_{d})(1+x_{d})\). Then we define \(H^{m}_{*}(\Omega)\) for \(m\in\mathbb{N}^{*}\) as follows

Footnote 1: The choice of \(\omega(x_{d})\) is not unique. We only need \(\omega(x_{d})\) to vanish on \(\Sigma\) and to be comparable to the distance function near \(\Sigma\).

\[H^{m}_{*}(\Omega):=\left\{f\in L^{2}(\Omega)\bigg{|}(\omega\partial_{d})^{\alpha_{d+1}}\partial_{1}^{\alpha_{1}}\cdots\partial_{d}^{\alpha_{d}}f\in L^{2}(\Omega),\ \ \forall\alpha\ \text{with}\ \sum_{j=1}^{d-1}\alpha_{j}+2\alpha_{d}+\alpha_{d+1}\leq m\right\},\]

equipped with the norm

\[\|f\|^{2}_{H^{m}_{*}(\Omega)}:=\sum_{\sum_{j=1}^{d-1}\alpha_{j}+2\alpha_{d}+\alpha_{d+1}\leq m}\|(\omega\partial_{d})^{\alpha_{d+1}}\partial_{1}^{\alpha_{1}}\cdots\partial_{d}^{\alpha_{d}}f\|^{2}_{L^{2}(\Omega)}.
\tag{1.14}\]

For any multi-index \(\alpha:=(\alpha_{0},\alpha_{1},\cdots,\alpha_{d},\alpha_{d+1})\in\mathbb{N}^{d+2}\), we define

\[\partial_{*}^{\alpha}:=\partial_{t}^{\alpha_{0}}(\omega\partial_{d})^{\alpha_{d+1}}\partial_{1}^{\alpha_{1}}\cdots\partial_{d}^{\alpha_{d}},\ \ \langle\alpha\rangle:=\sum_{j=0}^{d-1}\alpha_{j}+2\alpha_{d}+\alpha_{d+1},\]

and define the **space-time anisotropic Sobolev norm** \(\|\cdot\|_{m,*}\) to be

\[\|f\|^{2}_{m,*}:=\sum_{\langle\alpha\rangle\leq m}\|\partial_{*}^{\alpha}f\|^{2}_{L^{2}(\Omega)}=\sum_{\alpha_{0}\leq m}\|\partial_{t}^{\alpha_{0}}f\|^{2}_{H^{m-\alpha_{0}}_{*}(\Omega)}. \tag{1.15}\]

We also denote the interior Sobolev norm to be \(\|f\|_{s}:=\|f(t,\cdot)\|_{H^{s}(\Omega)}\) for any function \(f(t,x)\) on \([0,T]\times\Omega\) and denote the boundary Sobolev norm to be \(|f|_{s}:=|f(t,\cdot)|_{H^{s}(\Sigma)}\) for any function \(f(t,x)\) on \([0,T]\times\Sigma\).

From now on, we assume the dimension to be \(d=3\), that is, \(\Omega=\mathbb{T}^{2}\times(-1,1)\) and \(\Sigma_{\pm}=\{x_{3}=\pm 1\}\). In the proof, we will see that the 2D case follows in the same manner as the 3D case up to a slight modification in the vorticity analysis. First, we establish a local-in-time estimate that is uniform in the Mach number \(\varepsilon\).

**Theorem 1.1** (Uniform estimate in \(\varepsilon\)).: Let \(\varepsilon\in(0,1)\) be fixed. Let \((u_{0},B_{0},\rho_{0},S_{0})\in H^{8}(\Omega)\times H^{8}(\Omega)\times H^{8}(\Omega)\times H^{8}(\Omega)\) be the initial data of (1.13) satisfying the compatibility conditions (1.5) up to 7-th order and

\[E(0)\leq M \tag{1.16}\]

for some \(M>0\) independent of \(\varepsilon\). Then there exists \(T>0\) depending only on \(M\), such that (1.13) admits a unique solution \((p(t),u(t),B(t),S(t))\) that verifies the energy estimate

\[\sup_{t\in[0,T]}E(t)\leq P(E(0)), \tag{1.17}\]

where \(P(\cdots)\) is a generic polynomial in its arguments, and the energy \(E(t)\) is defined to be

\[\begin{split} E(t)&=E_{4}(t)+E_{5}(t)+E_{6}(t)+E_{7}(t)+E_{8}(t),\\ E_{4}(t)&=\sum_{k=0}^{4}\left\|\left(\varepsilon^{(k-1)_{+}}\partial_{t}^{k}u,\ \varepsilon^{(k-1)_{+}}\partial_{t}^{k}B,\ \varepsilon^{(k-1)_{+}}\partial_{t}^{k}S,\ \varepsilon^{k}\partial_{t}^{k}p\right)\right\|_{4-k}^{2},\\ E_{4+l}(t)&=\sum_{\begin{subarray}{c}\langle\alpha\rangle=2l,\\ \alpha_{0}<2l\end{subarray}}\sum_{k=0}^{4-l}\left\|\varepsilon^{(k-1)_{+}+2l}\mathcal{T}^{\alpha}\partial_{t}^{k}(u,B,S,p)\right\|_{4-k-l}^{2}\\ &\quad+\sum_{k=0}^{4-l}\left\|\varepsilon^{(k-1)_{+}+2l}\partial_{t}^{k+2l}(u,B,S),\ \varepsilon^{k+2l}\partial_{t}^{k+2l}p\right\|_{4-k-l}^{2},\quad 1\leq l\leq 4,\end{split} \tag{1.18}\]

where \(K_{+}:=\max\{K,0\}\) and we denote \(\mathcal{T}^{\alpha}:=(\omega(x_{3})\partial_{3})^{\alpha_{4}}\partial_{t}^{\alpha_{0}}\partial_{1}^{\alpha_{1}}\partial_{2}^{\alpha_{2}}\) to be a high-order tangential derivative for the multi-index \(\alpha=(\alpha_{0},\alpha_{1},\alpha_{2},0,\alpha_{4})\) with length (for the anisotropic Sobolev spaces) \(\langle\alpha\rangle=\alpha_{0}+\alpha_{1}+\alpha_{2}+2\times 0+\alpha_{4}\). In the rest of this paper, we sometimes write \(\mathcal{T}^{k}\) to represent a tangential derivative \(\mathcal{T}^{\alpha}\) with order \(\langle\alpha\rangle=k\) when we do not need to specify what the derivative \(\mathcal{T}^{\alpha}\) contains.

**Remark 1.2** (Correction of \(E_{4}(t)\)).: We note that the norm \(\|p\|_{4}^{2}\) in \(E_{4}(t)\) defined by (1.18) should be replaced by \(\|\varepsilon p\|_{0}^{2}+\|\nabla p\|_{3}^{2}\) because we do not have \(L^{2}\) estimates of \(p\) without \(\varepsilon\) weight.
We still write \(\|p\|_{4}^{2}\) as above for simplicity of notations.

**Remark 1.3** ("Prepared" initial data).: The above estimate only requires \(\nabla\cdot u_{0}=O(\varepsilon)\) and \(\partial_{t}u|_{t=0}=O(1)\). In this case, the compressible data \(u_{0}\) is a small perturbation of an incompressible data \(u_{0}^{0}\), and this perturbation is _completely contributed by the compressibility_. Such compressible data are usually called "well-prepared initial data"2.

Footnote 2: One can find the definitions of "well-prepared" and "ill-prepared" in [18, 1] for the rescaled Euler system, which is equivalent to the statement in our paper.

**Remark 1.4** (Relations with anisotropic Sobolev space).: The energy functional \(E(t)\) above is considered as a variant of the \(\|\cdot\|_{8,*}\) norm at time \(t>0\). For different multi-indices \(\alpha\), we impose different Mach number weights according to the number of tangential derivatives that appear in \(\partial_{*}^{\alpha}\), such that the energy estimate for the slightly modified \(\|\cdot\|_{8,*}\) norm is uniform in \(\varepsilon>0\).

The next main theorem concerns the incompressible limit. We consider the incompressible MHD equations together with a transport equation satisfied by \((u^{0},B^{0},\pi,S^{0})\) with incompressible initial data \((u_{0}^{0},B_{0}^{0})\) and \(S_{0}^{0}\):

\[\begin{cases}\varrho(\partial_{t}u^{0}+u^{0}\cdot\nabla u^{0})-B^{0}\cdot\nabla B^{0}+\nabla(\pi+\frac{1}{2}|B^{0}|^{2})=0&\text{in }[0,T]\times\Omega,\\ \partial_{t}B^{0}+u^{0}\cdot\nabla B^{0}-B^{0}\cdot\nabla u^{0}=0&\text{in }[0,T]\times\Omega,\\ \partial_{t}S^{0}+u^{0}\cdot\nabla S^{0}=0&\text{in }[0,T]\times\Omega,\\ \nabla\cdot u^{0}=\nabla\cdot B^{0}=0&\text{in }[0,T]\times\Omega,\\ u_{3}^{0}=B_{3}^{0}=0&\text{on }[0,T]\times\Sigma,\\ (u^{0},B^{0},S^{0})|_{t=0}=(u_{0}^{0},B_{0}^{0},S_{0}^{0})&\text{on }\{t=0\}\times\Omega.\end{cases} \tag{1.19}\]

**Theorem 1.2** (Incompressible limit).: Under the hypothesis of Theorem 1.1, we assume that \((u_{0},B_{0},S_{0})\to(u_{0}^{0},B_{0}^{0},S_{0}^{0})\) in \(H^{4}(\Omega)\) as \(\varepsilon\to 0\), where \(\nabla\cdot u_{0}^{0}=\nabla\cdot B_{0}^{0}=0\) in \(\Omega\) and \(u_{0,3}^{0}=B_{0,3}^{0}=0\) on \(\Sigma\). Then it holds that

\[(u,B,S)\to (u^{0},B^{0},S^{0})\quad\text{weakly-$\ast$ in $L^{\infty}([0,T];H^{4}(\Omega))$ and strongly in $C([0,T];H^{4-\delta}(\Omega))$}\]

for any \(\delta>0\). \((u^{0},B^{0},S^{0})\) solves (1.19), that is, the incompressible MHD equations together with a transport equation of \(S^{0}\). Here \(\varrho\) satisfies the transport equation

\[\partial_{t}\varrho+u^{0}\cdot\nabla\varrho=0,\ \ \varrho|_{t=0}=\rho(0,S_{0}^{0}),\]

where we consider the equation of state (1.9) as \(\rho=\rho(\varepsilon^{2}p,S)\), that is, a function of \(\varepsilon^{2}p\) and \(S\). The function \(\pi\), satisfying \(\nabla\pi\in C([0,T];H^{3}(\Omega))\), represents the fluid pressure for the incompressible MHD system (1.19).

**Remark 1.5** (Choice of regularity).: We propose \(H^{4}\) regularity for the energy functional because lots of commutator estimates require the bound for \(\|\overline{\partial}^{2}(u,B)\|_{L^{\infty}}\). Recall that \(H^{\frac{d}{2}+\delta}\hookrightarrow L^{\infty}\), so it would be convenient to choose \(H^{2+\lceil\frac{d}{2}+\delta\rceil}\) regularity, that is, \(H^{4}\) for \(d=2,3\). The initial data are set in \(H^{8}(\Omega)\) in order to guarantee they satisfy the compatibility conditions up to 7-th order and the uniform-in-\(\varepsilon\) bound \(E(0)\leq M\).
Our results can be easily generalized to the case when \(H^{4}\) and \(H_{\ast}^{8}\) are replaced by \(H^{m}\) and \(H_{\ast}^{2m}\) (\(m\geq 4\)) respectively.

**Remark 1.6** (The space for the convergence of initial data).: The compressible initial data converges to the incompressible data in \(H^{4}(\Omega)\) instead of \(H_{\ast}^{8}(\Omega)\) because the higher-order energy \(E_{5}(0)\sim E_{8}(0)\) automatically vanishes as \(\varepsilon\to 0\). Thus, it suffices to require the convergence in \(H^{4}(\Omega)\).

**Remark 1.7** (Boundedness of the domain).: In the proof of the main theorem, we do not use the boundedness of the domain, such as Poincaré's inequality, to close the energy estimate. Thus, after making necessary modifications (including setting \(\rho_{0}-1\in H^{8}(\Omega)\) instead of \(\rho_{0}\) itself and assuming the initial data to be "localized"), our proof is still valid for the case of unbounded reference domains such as \(\mathbb{R}^{d-1}\times(-1,1)\) and the half space. Indeed, using a partition of unity as in [4], the proof can be directly generalized to the domains which are diffeomorphic to the reference domains via \(H^{8}(\Omega)\)-diffeomorphisms.

### 1.4 Organization of the paper

This paper is organized as follows. In section 2, we discuss the main difficulties and briefly introduce our strategies to tackle the problem. Then section 3 is devoted to the proof of uniform estimates in Mach number. Combined with the compactness argument, we conclude the incompressible limit in section 4.

Acknowledgment. The research of Jiawei Wang is supported by the NSFC (Grants 12071044 and 12131007). Jiawei Wang would like to thank his Ph.D. advisor Prof. Qiangchang Ju for helpful suggestions. Junyan Zhang would like to thank Chenyun Luo for the kind hospitality during his visit at The Chinese University of Hong Kong.

List of Notations

* (Tangential derivatives) \(\mathcal{T}_{0}=\partial_{t}\) denotes the time derivative, \(\mathcal{T}_{j}=\overline{\partial}_{j}\) (\(1\leq j\leq d-1\)) denotes the tangential spatial derivatives and \(\mathcal{T}_{d+1}:=\omega(x_{d})\partial_{d}\) with \(\omega(x_{d})=(1-x_{d})(1+x_{d})\).
* (Tangential components) \(\bar{u}:=(u_{1},\cdots,u_{d-1})\) and \(\bar{B}:=(B_{1},\cdots,B_{d-1})\). We also write \(\overline{\nabla}:=(\overline{\partial}_{1},\overline{\partial}_{2})\) as the tangential components of the operator \(\nabla\).
* (\(L^{\infty}\)-norm) \(\|\cdot\|_{\infty}:=\|\cdot\|_{L^{\infty}(\Omega)}\), \(|\cdot|_{\infty}:=\|\cdot\|_{L^{\infty}(\Sigma)}\).
* (Interior Sobolev norm) \(\|\cdot\|_{s}\): We denote \(\|f\|_{s}:=\|f(t,\cdot)\|_{H^{s}(\Omega)}\) for any function \(f(t,x)\) on \([0,T]\times\Omega\).
* (Boundary Sobolev norm) \(|\cdot|_{s}\): We denote \(|f|_{s}:=|f(t,\cdot)|_{H^{s}(\Sigma)}\) for any function \(f(t,x)\) on \([0,T]\times\Sigma\).
* (Anisotropic Sobolev norms) \(\|\cdot\|_{m,*}\): For any function \(f(t,x)\) on \([0,T]\times\Omega\), \(\|f\|_{m,*}^{2}:=\sum_{\langle\alpha\rangle\leq m}\|\partial_{*}^{\alpha}f(t,\cdot)\|_{0}^{2}\) denotes the \(m\)-th order space-time anisotropic Sobolev norm of \(f\).
* (Polynomials) \(P(\cdots)\) denotes a generic polynomial in its arguments.
* (Commutators) \([T,f]g=T(fg)-f(Tg)\), \([f,T]g=-[T,f]g\), \([T,f,g]:=T(fg)-T(f)g-fT(g)\), where \(T\) is a differential operator and \(f,g\) are functions.
* (Equality modulo lower order terms) \(A\overset{L}{=}B\) means \(A=B\) modulo lower order terms.
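As a concrete illustration of the anisotropic bookkeeping (an example we include for orientation; it is an immediate instance of definition (1.14)), take \(d=3\) and \(m=2\). The constraint \(\alpha_{1}+\alpha_{2}+2\alpha_{3}+\alpha_{4}\leq 2\) admits the full normal derivative \(\partial_{3}\) only at top order, so

\[\|f\|_{H^{2}_{*}(\Omega)}^{2}=\sum_{\mathcal{D}}\|\mathcal{D}f\|_{0}^{2},\qquad\mathcal{D}\in\{1,\ \partial_{1},\ \partial_{2},\ \omega\partial_{3},\ \partial_{1}^{2},\ \partial_{1}\partial_{2},\ \partial_{2}^{2},\ \partial_{1}(\omega\partial_{3}),\ \partial_{2}(\omega\partial_{3}),\ (\omega\partial_{3})^{2},\ \partial_{3}\},\]

that is, one normal derivative \(\partial_{3}\) "costs" as much as two tangential derivatives. This is exactly the trade encoded in the reductions of Section 2 below.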
## 2 Difficulties and strategies

Before going to the detailed proofs, we will discuss the main difficulties in this incompressible limit problem and briefly introduce our strategies to derive energy estimates that are uniform in the Mach number. Note that our initial data is assumed to be well-prepared, so the uniform estimates together with a compactness argument are enough to derive the incompressible limit.

### 2.1 Choice of function spaces

Compressible ideal MHD with perfectly conducting wall conditions is a first-order quasilinear hyperbolic system with _characteristic_ boundary conditions of constant multiplicity, for which there is a potential normal derivative loss. In fact, inside a rigid wall with the slip boundary condition, Euler equations and elastodynamic equations with degenerate deformation tensor on the boundary (see [38]) also have characteristic boundary conditions, but their vorticity can be controlled in the setting of standard Sobolev spaces because there are no other quantities involved in the pressure part. The loss of normal derivatives is then compensated via the control of vorticity and divergence. However, the vorticity estimates for compressible ideal MHD cannot be closed in the setting of standard Sobolev spaces, which was explicitly presented in [37].

On the one hand, it has been observed that such normal derivative loss can be compensated if the vertical component of the magnetic field does not vanish on the boundary. See Yanagisawa [35] for the well-posedness and [19, 32, 33] for the study of corresponding free-boundary problems (MHD contact discontinuities). But the transversality of magnetic fields violates the perfectly conducting wall condition. On the other hand, Chen [3] first introduced the anisotropic Sobolev spaces defined in (1.14)-(1.15) in the study of compressible gas dynamics inside a rigid wall. Under this setting, a normal derivative is considered as a second-order derivative, which exactly compensates the derivative loss. For a detailed analysis of MHD in this setting, we refer to the second author's paper [16, Sect. 2.5]. Using such function spaces, Yanagisawa-Matsumura [36] and Secchi [26, 28] proved the local well-posedness; see also [30, 31, 16] for the study of free-boundary problems. However, the estimates obtained in these previous works are not uniform in the Mach number.

As stated in Section 1.2, the essential difficulty in proving the incompressible limit is that the function spaces for the well-posedness of the compressible problem are not "compatible" with the ones for the incompressible problem. Since our initial data is well-prepared, that is, a given incompressible data plus a slight perturbation, we shall still start with the incompressible counterpart and try to find out the relationships between the incompressible problem and the compressible problem.

### 2.2 Key observation: hidden structure of Lorentz force in the vorticity analysis

First, the entropy is easy to control thanks to \(D_{t}S=0\), so it suffices to analyze the relations between \((u,B)\) and \(p\). We start with the control of \(\|(u,B)\|_{4}\). Using div-curl decomposition, the \(H^{4}\) Sobolev norms are bounded by \(\|\nabla\times u,\ \nabla\times B\|_{3}\), \(\|\nabla\cdot u,\ \nabla\cdot B\|_{3}\) and the normal traces \(|u_{3},\ B_{3}|_{3.5}\). The boundary conditions (1.4) eliminate the normal traces, and the divergence part is reduced to tangential derivatives \(\|\varepsilon^{2}D_{t}p\|_{3}\) thanks to the continuity equation and the divergence constraint \(\nabla\cdot B=0\).
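Throughout the vorticity analysis, the following standard vector identity will be used repeatedly (a routine index computation, recorded here for convenience): since \(\epsilon_{ijk}\epsilon_{klm}=\delta_{il}\delta_{jm}-\delta_{im}\delta_{jl}\), we have

\[\left(B\times(\nabla\times B)\right)_{i}=\epsilon_{ijk}\epsilon_{klm}B_{j}\partial_{l}B_{m}=B_{m}\partial_{i}B_{m}-B_{l}\partial_{l}B_{i},\quad\text{i.e.,}\quad B\times(\nabla\times B)=\frac{1}{2}\nabla|B|^{2}-(B\cdot\nabla)B.\]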
As for the vorticity, taking curl in the momentum equation yields the evolution equation of vorticity

\[\rho D_{t}(\nabla\times u)-(B\cdot\nabla)(\nabla\times B)=\cdots\,, \tag{2.1}\]

where we find that the term \(\nabla(p+\frac{1}{2}|B|^{2})\) is already eliminated. In the \(H^{3}\)-control of \(\nabla\times u\), we invoke the evolution equation of \(B\) to get

\[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\rho|\partial^{3}(\nabla\times u)|^{2}+|\partial^{3}(\nabla\times B)|^{2}\,\mathrm{d}x=-\int_{\Omega}\partial^{3}\nabla\times(B(\nabla\cdot u))\cdot\partial^{3}(\nabla\times B)\,\mathrm{d}x+\text{ controllable terms}, \tag{2.2}\]

where we find that there are 5 derivatives falling on \(u\), and thus the vorticity estimates cannot be closed in the setting of 4-th order standard Sobolev spaces. Such derivative loss in the vorticity analysis must appear because taking the curl operator eliminates the term \(\nabla(\frac{1}{2}|B|^{2})\) in the momentum equation but does not eliminate the term \(B(\nabla\cdot u)\) in the evolution equation of \(B\). However, in the \(L^{2}\) estimate, the contribution of \(\nabla(\frac{1}{2}|B|^{2})\) is cancelled by the contribution of \(B(\nabla\cdot u)\). If we further analyze this problematic term by using the structure of the MHD system, we will find that the anisotropic Sobolev space is naturally introduced in order to close the energy estimates.

Using the continuity equation \(\varepsilon^{2}D_{t}p=-\nabla\cdot u\), we find that the highest order term in (2.2) is \(\varepsilon^{2}B\times(\partial^{3}\nabla D_{t}p)\). Then commuting \(\nabla\) with \(D_{t}\) and invoking the momentum equation \(\rho D_{t}u+B\times(\nabla\times B)=-\nabla p\), we get

\[\varepsilon^{2}B\times(\partial^{3}\nabla D_{t}p)\overset{L}{=}\varepsilon^{2}B\times(\partial^{3}D_{t}\nabla p)\overset{L}{=}-\varepsilon^{2}\rho B\times(\partial^{3}D_{t}^{2}u)-\varepsilon^{2}\underbrace{B\times(B\times\partial^{3}D_{t}(\nabla\times B))}_{=0}.\]

The above analysis for compressible ideal MHD shows that the vorticity estimates cannot be closed in standard Sobolev spaces, but in fact we _trade one normal derivative_ (in \(\nabla\times\)) _for two tangential derivatives_ \(\varepsilon^{2}D_{t}^{2}\) thanks to the special structure of the Lorentz force \(B\times(\nabla\times B)\) that eliminates the term \(B\times(B\times\partial^{3}D_{t}(\nabla\times B))\). This exactly illustrates why the anisotropic Sobolev space is naturally introduced to study compressible ideal MHD. It should also be noted that this difficulty never occurs for incompressible ideal MHD because the divergence-free condition automatically eliminates the bad term above. Besides, to prove the incompressible limit of ideal MHD with (1.4), an extra Mach number weight \(\varepsilon^{2}\) must be added together with the two tangential derivatives. So, we find that the "anisotropic part" of the energy, namely the terms \(E_{5}\sim E_{8}\) in (1.18), will automatically vanish when we take the incompressible limit \(\varepsilon\to 0_{+}\). The remaining term \(E_{4}(t)\) exactly consists of the standard Sobolev norms, which coincides with the energy for incompressible ideal MHD.

**Remark 2.1** (Slight modifications in 2D).: When the dimension is \(d=2\), we have to replace the curl operator \(\nabla\times\) by \(\nabla^{\perp}\cdot\) where \(\nabla^{\perp}:=(-\partial_{2},\partial_{1})\).
Then (2.2) reads \[\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\rho|\partial^{3}(\nabla^{\perp} \cdot u)|^{2}+|\partial^{3}(\nabla^{\perp}\cdot B)|^{2}\,\mathrm{d}x=-\int_{ \Omega}\Big{(}(B\cdot\nabla^{\perp})\partial^{3}\nabla\cdot u\Big{)}(\partial ^{3}\nabla^{\perp}\cdot B)\,\mathrm{d}x+\text{ controllable terms}, \tag{2.3}\] where the highest order term can be reduced as follows after invoking the first two equations of (1.13) \[-(B\cdot\nabla^{\perp})\partial^{3}\nabla\cdot u \overset{L}{=}\varepsilon^{2}B\cdot\partial^{3}D_{t}\nabla^{ \perp}p\overset{L}{=}\varepsilon^{2}(B_{2}\partial^{3}D_{t}\partial_{1}p-B_{ 1}\partial^{3}D_{t}\partial_{2}p)\] \[\overset{L}{=}-\varepsilon^{2}\rho(B_{2}\partial^{3}D_{t}^{2}u_{ 1}-B_{1}\partial^{3}D_{t}^{2}u_{2})-\varepsilon^{2}(B_{2}^{2}+B_{1}^{2})D_{t} \partial^{3}(\nabla^{\perp}\cdot B),\] where we note that the momentum equations can be written as \(\partial_{1}p=-\rho D_{t}u_{1}-B_{2}(\nabla^{\perp}\cdot B)\) and \(\partial_{2}p=-\rho D_{t}u_{2}+B_{1}(\nabla^{\perp}\cdot B)\). The first term can be treated as in the 3D case, and the contribution of the second term gives another energy term \(-\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\varepsilon^{2}|B|^{2} |\partial^{3}(\nabla^{\perp}\cdot B)|^{2}\,\mathrm{d}x\) which also automatically vanishes when we take the incompressible limit \(\varepsilon\to 0_{+}\). It should be emphasized that our analysis is very different from the previous works about well-posedness [36, 26, 16]. Indeed, these previous works [36, 26, 16] avoided using div-curl analysis to reduce the normal derivatives of \(u\) and \(B\). Instead, they controlled the normal derivatives by directly computing the corresponding \(L^{2}\)-type estimates which yields non-zero boundary integrals such as \(-\int_{\Sigma}\partial_{3}^{4}P\ \partial_{3}^{4}v_{3}\,\mathrm{d}x^{\prime}\) and only gives the estimates for \((u,B,\varepsilon p)\), not \((u,B,p)\). Since taking normal derivatives does not preserve the boundary conditions, one has to use the MHD equations to replace \(\partial_{3}P\) and \(\partial_{3}u_{3}\) by tangential derivatives of the other variables, and then uses Gauss-Green formula to rewrite the boundary integrals into the interior. During this process, more time derivatives might be produced without extra weights of Mach number (e.g., the product of the underlined terms in \(-\nabla P\to D_{t}v-(B\cdot\overline{\nabla})B\) and \(\partial_{3}v_{3}\to-\varepsilon^{2}D_{t}p-\overline{\nabla}\cdot\bar{v}\)). However, the continuity equation indicates that one may have to add more Mach number weights to higher-order time derivatives. Thus, following the methods in [36, 26, 16] might cause a potential of loss of Mach number weight, and one can find related details in [16, Sect. 2.5]. In other words, these previous works [36, 26, 16] only revealed that one should trade one normal derivative for two derivatives but ignored the necessity of adding extra weight of Mach number \(\varepsilon^{2}\). One of the advantages of our method is that all normal derivatives are reduced via div-curl analysis which does not involve any boundary estimates; and every time when we reduce a normal derivative, we never lose weights of Mach number because the continuity equation must be used in the reduction. 
### 2.3 Reduction of pressure and design of energy functional

The main idea of proving the uniform-in-\(\varepsilon\) estimates is to repeatedly reduce normal derivatives to tangential derivatives until all derivatives are tangential (with suitable Mach number weights), and to reduce spatial derivatives of \(p\) to tangential derivatives of \(u,B\) until all derivatives on \(p\) are time derivatives. Once we only have tangential derivatives, it suffices to mimic the \(L^{2}\) estimates to close the tangential estimates because taking tangential derivatives preserves the boundary conditions. From the definition of \(D_{t}\) and \(u_{3}|_{\Sigma}=B_{3}|_{\Sigma}=0\), we know both \(D_{t}\) and \(B\cdot\nabla\) are tangential derivatives. Based on this fact and the analysis in section 2.2, we have the following reductions:

* (a) Vorticity: \(\nabla\times(u,B)\to\varepsilon^{2}\mathcal{T}^{2}u\).
* (b) Divergence: \(\nabla\cdot(u,B)\to\varepsilon^{2}\mathcal{T}p\).
* (c) Reduction of pressure: The momentum equation \(-\nabla(p+|B|^{2}/2)=\rho D_{t}u-B\cdot\nabla B\) gives the relation \(\nabla P\to\mathcal{T}(u,B)\), and thus \(\nabla p\to\mathcal{T}(u,B)\) plus \(\nabla B\), where \(\nabla B\) is further reduced by using (a) and (b). This indicates that \(\partial_{t}^{k}\nabla p\) should have the same Mach number weight as \(\partial_{t}^{k+1}(u,B,S)\).
* (d) Tangential estimates: When estimating \(E_{4+l}(t)\) (defined in (1.18)), \(\mathcal{T}^{\alpha}(u,B)\) is controlled together with \(\varepsilon\mathcal{T}^{\alpha}p\) in the estimates of full tangential derivatives, i.e., when \(\langle\alpha\rangle=4+l\).

In the above subsection we start with \(E_{4}\) and reduce \(\|u,B\|_{4}\) (part of \(E_{4}\)) to \(\|\varepsilon^{2}\mathcal{T}p\|_{3}\) (still a part of \(E_{4}\)) and \(\|\varepsilon^{2}\mathcal{T}^{2}u\|_{3}\) (part of \(E_{5}\)) via the div-curl analysis. The divergence part introduces a time derivative, but the number of derivatives does not exceed \(4\), so we can repeatedly use (c) to reduce the spatial derivatives of \(p\) to tangential derivatives of \(u,B\) until there is no spatial derivative falling on \(p\). Finally, it suffices to control \(4\)-th order time derivatives of \(u,B\) and \(p\) with suitable Mach number weights. The Mach number weights of these quantities are determined by the relation (d) and the "preparedness" of initial data. See also diagram (2.5) below.

For the control of terms that have \(5\) derivatives, for example the term \(\|\varepsilon^{2}\mathcal{T}^{2}u\|_{3}\), we repeat the div-curl analysis to get \(\|\varepsilon^{4}\mathcal{T}^{3}p\|_{2}\) (still a part of \(E_{5}\)) from the divergence part and \(\|\varepsilon^{4}\mathcal{T}^{4}u\|_{2}\) (part of \(E_{6}\)) from the curl part. Again, the divergence part is reduced to \(E_{5}(t)\), which is controlled in a similar way as \(E_{4}(t)\); and then we do a further div-curl decomposition for \(\|\varepsilon^{4}\mathcal{T}^{4}u\|_{2}\), which then requires the control of \(E_{7}(t)\). Repeatedly, we finally need to control the \(L^{2}\) norms of \(\varepsilon^{8}\mathcal{T}^{8}(u,B)\), which are controlled together with \(\varepsilon^{8}\mathcal{T}^{8}p\) when \(\mathcal{T}^{8}\) is not a purely time derivative, and \(\varepsilon^{9}\partial_{t}^{8}p\) when \(\mathcal{T}^{8}\) is a purely time derivative. Then the control of \(\mathcal{T}^{8}\) is completely parallel to the proof of \(L^{2}\) energy conservation because there isn't any normal derivative. We present the above complicated reduction procedure in the following diagrams.
For more details of the reduction procedures, we refer to section 3.2.3. We also list an example of the reduction of \(E_{4+l}(t)\) by repeatedly using (c) and (d) when \(l=0\). The other cases follow in the same manner.

[Diagram (2.4): From standard Sobolev space to anisotropic Sobolev space.]

[Diagram (2.5): Reduction of \(E_{4}(t)\) via (c) and (d).]

Recall that the vorticity estimates for compressible Euler equations can be closed under the setting of standard Sobolev spaces. As presented in [17], the reduction scheme for Euler equations is merely the control of \(E_{4}(t)\) but does not involve any higher-order energies. In other words, the reduction scheme is finished within the first column of diagram (2.4), or equivalently diagram (2.5) with \(E_{5}(t)\) replaced by \(E_{4}(t)\). Compared with Euler equations, the extra difficulty for compressible ideal MHD is that the vorticity analysis for \(E_{4+l}(t)\) requires the control of \(E_{4+l+1}(t)\) until there is no normal derivative on \(u,B\). That's why we have the first row in diagram (2.4). Then, as presented in each column of diagram (2.4), we need to mimic the scheme in [17], that is, run the scheme presented in diagram (2.5), for each \(E_{4+l}(t)\) to close the energy estimates.

### 2.4 Further applications of our method

Based on the above analysis, we give a complete and definitive answer to the incompressible limit problem of ideal MHD under the perfectly conducting wall condition when the initial data is well-prepared. Besides, compared with the previous work [13] by Ju and the first author, we give a clear illustration of the "incompatibility" of the function spaces of well-posedness and thoroughly resolve this issue. It should be noted that, in the uniform-in-\(\varepsilon\) estimates, the "preparedness" of initial data is only used to guarantee \(E(0)<\infty\). If the initial data is not "prepared", that is, \(\nabla\cdot u_{0}=O(1)\) is not small and \(\partial_{t}u|_{t=0}=O(1/\varepsilon)\) is unbounded, we believe that the uniform-in-\(\varepsilon\) estimates can be proven in a similar way after suitable technical modifications. Hence, one can expect to generalize our result to the case when the initial data is not "prepared".

It should also be noted that the reduction procedure presented in diagram (2.5) is parallel to the case of compressible water waves, which is presented in the paper [17, Sect. 2.1.3] by Luo and the second author. Indeed, it is exactly the appearance of the magnetic field and the perfectly conducting wall condition that force us to do further vorticity analysis as stated in section 2.2 and presented in diagram (2.4). Therefore, one can expect to extend our results to free-boundary problems. In fact, we believe that our method together with the techniques about free-boundary problems presented in [17] can be generalized to study the current-vortex sheets in ideal compressible MHD, which will be presented in a forthcoming paper by the second author.

## 3 Uniform energy estimates

### 3.1 \(L^{2}\) estimate

First, we establish the \(L^{2}\) energy estimate for (1.13).
Invoking the momentum equation and integrating by parts, we have:

\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\varepsilon^{2}p^{2}+\rho|u|^{2}+|B|^{2}+S^{2}\,\mathrm{d}x\]
\[=\frac{1}{2}\int_{\Omega}\nabla\cdot u(\varepsilon^{2}p^{2}+|B|^{2}+S^{2})\,\mathrm{d}x-\int_{\Omega}\nabla\cdot(pu)\,\mathrm{d}x\]
\[+\int_{\Omega}(B\cdot\nabla B)\cdot u+(B\cdot\nabla u)\cdot B\,\mathrm{d}x-\int_{\Omega}\frac{1}{2}\nabla|B|^{2}\cdot u+(B\cdot B)\nabla\cdot u\,\mathrm{d}x\]
\[=\frac{1}{2}\int_{\Omega}\nabla\cdot u(\varepsilon^{2}p^{2}+|B|^{2}+S^{2})\,\mathrm{d}x, \tag{3.1}\]

which gives the energy estimate:

\[\frac{\mathrm{d}}{\mathrm{d}t}\|(\varepsilon p,u,B,S)\|_{0}^{2}\leq P(E_{4}(t)). \tag{3.2}\]

### 3.2 Reduction of normal derivatives

#### 3.2.1 Reduction of pressure

We show how to reduce the control of the pressure to that of the velocity and magnetic field when there is at least one spatial derivative on \(P\). This follows from using the momentum equation

\[-\nabla(p+|B|^{2}/2)=\rho D_{t}u-B\cdot\nabla B. \tag{3.3}\]

Then

\[\|\nabla(p+|B|^{2}/2)\|_{0}\lesssim\|\rho\|_{\infty}\|D_{t}u\|_{0}+\|B\cdot\nabla B\|_{0}, \tag{3.4}\]

where \(D_{t}u=\partial_{t}u+\bar{u}\cdot\overline{\nabla}u+u_{3}\partial_{3}u\) and \(B\cdot\nabla B=\bar{B}\cdot\overline{\nabla}B+B_{3}\partial_{3}B\). Similarly, using (3.3), we also have the reduction for the \(L^{\infty}\) norm

\[\|\nabla(p+|B|^{2}/2)\|_{\infty}\lesssim\|\rho\|_{\infty}\|D_{t}u\|_{\infty}+\|B\cdot\nabla B\|_{\infty}. \tag{3.5}\]

Recall that \(D_{t}=\partial_{t}+\bar{u}\cdot\overline{\nabla}+u_{3}\partial_{3}\) and \(B\cdot\nabla=\bar{B}\cdot\overline{\nabla}+B_{3}\partial_{3}\) are both tangential derivatives thanks to \(u_{3}=B_{3}=0\) on \(\Sigma\). The above inequalities (3.4)-(3.5) show that \(\nabla P\) is reduced to \(\partial_{t}u\), \(\overline{\partial}u\), \(\omega(x_{3})\partial_{3}u\), \(\overline{\partial}B\) and \(\omega(x_{3})\partial_{3}B\) for some weight function \(\omega(x_{3})\) vanishing on \(\Sigma\). Thus, the momentum equation reduces a _normal_ derivative of \(P\) to a _tangential_ derivative of \(u,B\).

Also, one can reduce the Sobolev norm of \(\nabla P\) in the same way. Let \(\mathcal{T}=\partial_{t}\) or \(\overline{\partial}\) or \(\omega(x_{3})\partial_{3}\) and \(D=\partial\) or \(\partial_{t}\). The above estimate yields the control of \(\|\mathcal{T}^{\alpha}D^{k}\nabla p\|_{0}\) after applying \(\mathcal{T}^{\alpha}D^{k}\), \(k+\langle\alpha\rangle\geq 1\), to (3.3). Specifically, at the leading order, \(\|\mathcal{T}^{\alpha}D^{k}\nabla(p+|B|^{2}/2)\|_{0}\) is controlled by

\[\|\rho\|_{\infty}\|\mathcal{T}^{\alpha}D^{k}\mathcal{T}u\|_{0}+\|\mathcal{T}^{\alpha}D^{k}\mathcal{T}B\|_{0}. \tag{3.6}\]

As for the fluid pressure \(p\), by definition we have \(\partial_{i}p=\partial_{i}P-\partial_{i}B\cdot B\). Thus, to control the Sobolev norms of \(\nabla p\), we still need to do further analysis for \(\nabla B\), which will be controlled together with \(\nabla u\).

#### 3.2.2 Div-Curl analysis

From the analysis of section 3.2.1, in order to control the Sobolev norms defined in (1.18), we shall reduce the normal derivatives of \(u\) and \(B\) and then analyze the tangential derivatives. We use the following lemma to reduce the normal derivatives of \(u,B\) to their divergence and curl.
**Lemma 3.1** (Hodge elliptic estimates).: For any sufficiently smooth vector field \(X\) and any real number \(s\geq 1\), one has

\[\|X\|_{s}^{2}\lesssim\|X\|_{0}^{2}+\|\nabla\cdot X\|_{s-1}^{2}+\|\nabla\times X\|_{s-1}^{2}+|\overline{\partial}X\cdot N|_{s-\frac{3}{2}}^{2}. \tag{3.7}\]

We start from the control of \(E_{4}(t)\). We apply Lemma 3.1 to \(\|\varepsilon^{(k-1)_{+}}\partial_{t}^{k}(u,B)\|_{4-k}\) for \(0\leq k\leq 3\) to get

\[\|\varepsilon^{(k-1)_{+}}\partial_{t}^{k}u\|_{4-k}^{2}\lesssim\|\varepsilon^{(k-1)_{+}}\partial_{t}^{k}u\|_{0}^{2}+\|\varepsilon^{(k-1)_{+}}\nabla\times\partial_{t}^{k}u\|_{3-k}^{2}+\|\varepsilon^{(k-1)_{+}}\nabla\cdot\partial_{t}^{k}u\|_{3-k}^{2}, \tag{3.8}\]
\[\|\varepsilon^{(k-1)_{+}}\partial_{t}^{k}B\|_{4-k}^{2}\lesssim\|\varepsilon^{(k-1)_{+}}\partial_{t}^{k}B\|_{0}^{2}+\|\varepsilon^{(k-1)_{+}}\nabla\times\partial_{t}^{k}B\|_{3-k}^{2}, \tag{3.9}\]

where we already use \(\nabla\cdot B=0\) and the boundary condition \(u_{3}=B_{3}=0\) on \(\Sigma\). The \(L^{2}\) norm of \(u\) and \(B\) has been controlled in (3.2), while the control of \(\|\varepsilon^{(k-1)_{+}}\partial_{t}^{k}(u,B)\|_{0}\) (\(1\leq k\leq 3\)) is parallel to the case of \(k=0\) and will be postponed to later sections. The divergence part is reduced to the estimates of \(p\) by using the continuity equation

\[\|\varepsilon^{(k-1)_{+}}\nabla\cdot\partial_{t}^{k}u\|_{3-k}^{2}=\left\|\varepsilon^{(k-1)_{+}+2}\partial_{t}^{k}(D_{t}p)\right\|_{3-k}^{2}, \tag{3.10}\]

which will be further reduced to the tangential estimates of \((u,B)\) by using the argument in Section 3.2.1. According to the reduction procedure in section 3.2.1 and (b)-(d) in section 2.3, the control of \(E_{4}(t)\) will be reduced to the control of \(\mathcal{T}^{\alpha}(u,B)\) and \(\partial_{t}^{4}p\) with suitable Mach number weights, plus the curl part. Next, we analyze the curl part via the evolution equation of vorticity.

**Lemma 3.2**.: \[\sum_{k=0}^{3}\frac{\mathrm{d}}{\mathrm{d}t}\left(\left\|\varepsilon^{(k-1)_{+}}\nabla\times\partial_{t}^{k}u\right\|_{3-k}^{2}+\left\|\varepsilon^{(k-1)_{+}}\nabla\times\partial_{t}^{k}B\right\|_{3-k}^{2}\right)\leq P(E_{4}(t),E_{5}(t)).\] (3.11)

Proof.: First, we take \(\nabla\times\) in the momentum equation \(\rho D_{t}u-(B\cdot\nabla)B=-\nabla P\) to get the evolution equation of vorticity

\[\rho D_{t}(\nabla\times u)-(B\cdot\nabla)(\nabla\times B)=\rho[D_{t},\nabla\times]u-[B\cdot\nabla,\nabla\times]B-(\nabla\rho)\times(D_{t}u), \tag{3.12}\]

where we notice that the right side only contains first-order derivatives and does not lose Mach number weight. Note that the equation of state is smooth, so \(\partial_{S}\rho\) is bounded, and

\[[D_{t},\partial_{i}](\cdot)=-\partial_{i}u_{j}\partial_{j}(\cdot),\ \ [B\cdot\nabla,\partial_{i}](\cdot)=-\partial_{i}B_{k}\partial_{k}(\cdot),\ \ \nabla\rho=\frac{\partial\rho}{\partial p}\nabla p+\partial_{S}\rho\nabla S=O(\varepsilon^{2})\nabla p+\partial_{S}\rho\nabla S.\]

We first prove the case when \(k=0\). Indeed, the cases for \(1\leq k\leq 3\) follow in the same manner. In order to control the \(H^{3}\) norms of the curl part, we take \(\partial^{3}\) in (3.12) to get

\[\rho D_{t}\partial^{3}(\nabla\times u)-(B\cdot\nabla)\partial^{3}\nabla\times B=\underbrace{\partial^{3}(\text{RHS of (3.12)})+[\rho D_{t},\partial^{3}](\nabla\times u)-[B\cdot\nabla,\partial^{3}](\nabla\times B)}_{\mathcal{R}_{1}}, \tag{3.13}\]

where the order of derivatives on the right side must be \(\leq 4\).
Now, a standard \(L^{2}\)-type estimate yields that

\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\rho|\partial^{3}\nabla\times u|^{2}\,\mathrm{d}x=\int_{\Omega}\rho(\partial_{t}\partial^{3}\nabla\times u)\cdot\partial^{3}\nabla\times u\,\mathrm{d}x+\frac{1}{2}\int_{\Omega}\partial_{t}\rho\,|\partial^{3}\nabla\times u|^{2}\,\mathrm{d}x\]
\[=\int_{\Omega}\rho D_{t}\partial^{3}\nabla\times u\cdot\partial^{3}\nabla\times u\,\mathrm{d}x\]
\[\underbrace{-\int_{\Omega}\rho(u\cdot\nabla)\partial^{3}\nabla\times u\cdot\partial^{3}\nabla\times u\,\mathrm{d}x-\frac{1}{2}\int_{\Omega}\left(\frac{\partial\rho}{\partial p}\partial_{t}p+\partial_{S}\rho\partial_{t}S\right)|\partial^{3}\nabla\times u|^{2}\,\mathrm{d}x}_{\coloneqq\mathcal{I}_{1}}. \tag{3.14}\]

Then invoking (3.13) gives us the following terms

\[\int_{\Omega}\rho D_{t}\partial^{3}\nabla\times u\cdot\partial^{3}\nabla\times u\,\mathrm{d}x\]
\[=\int_{\Omega}\left((B\cdot\nabla)\partial^{3}\nabla\times B\right)\cdot\partial^{3}\nabla\times u\,\mathrm{d}x\underbrace{+\int_{\Omega}\mathcal{R}_{1}\cdot\partial^{3}\nabla\times u\,\mathrm{d}x}_{\coloneqq\mathcal{I}_{2}}\]
\[\overset{B\cdot\nabla}{=}-\int_{\Omega}(\partial^{3}\nabla\times B)\cdot\partial^{3}\nabla\times(B\cdot\nabla u)\,\mathrm{d}x\underbrace{-\int_{\Omega}(\partial^{3}\nabla\times B)\cdot[B\cdot\nabla,\partial^{3}\nabla\times]u\,\mathrm{d}x}_{\coloneqq\mathcal{I}_{3}}+\mathcal{I}_{2} \tag{3.15}\]

Next, we insert the evolution equation of \(B\), that is, \(D_{t}B=(B\cdot\nabla)u-B(\nabla\cdot u)\), to get

\[-\int_{\Omega}(\partial^{3}\nabla\times B)\cdot\partial^{3}\nabla\times(B\cdot\nabla u)\,\mathrm{d}x\]
\[=-\int_{\Omega}\partial^{3}\nabla\times B\cdot\partial^{3}\nabla\times D_{t}B\,\mathrm{d}x-\int_{\Omega}\partial^{3}\nabla\times B\cdot\partial^{3}\nabla\times(B\nabla\cdot u)\,\mathrm{d}x\]
\[=-\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}|\partial^{3}\nabla\times B|^{2}\,\mathrm{d}x\underbrace{-\int_{\Omega}\partial^{3}\nabla\times B\cdot([\partial^{3}\nabla\times,D_{t}]B+(u\cdot\nabla)\partial^{3}\nabla\times B)\,\mathrm{d}x}_{\coloneqq\mathcal{I}_{4}}\]
\[\underbrace{+\int_{\Omega}\partial^{3}\nabla\times B\cdot\partial^{3}\nabla\times(\varepsilon^{2}BD_{t}p)\,\mathrm{d}x}_{\coloneqq\mathcal{K}}. \tag{3.16}\]

Based on the concrete form of the commutators, a straightforward product estimate for \(\mathcal{I}_{j}\) (\(1\leq j\leq 4\)) gives us

\[\sum_{j=1}^{4}\mathcal{I}_{j}\leq P(E_{4}(t)). \tag{3.17}\]

The most difficult term is \(\mathcal{K}\) because its highest-order term carries a 5-th order derivative and thus cannot be controlled by \(E_{4}(t)\):

\[\varepsilon^{2}\partial^{3}\nabla\times(BD_{t}p)=\varepsilon^{2}\partial^{3}((\nabla D_{t}p)\times B+D_{t}p(\nabla\times B))=-\varepsilon^{2}B\times(\partial^{3}\nabla D_{t}p)\underbrace{-\varepsilon^{2}[\partial^{3},B\times](\nabla D_{t}p)+\partial^{3}(\varepsilon^{2}D_{t}p(\nabla\times B))}_{\mathcal{R}_{2}},\]

where a straightforward calculation shows that \(\|\mathcal{R}_{2}\|_{0}\leq P(E_{4}(t))\). We then further analyze the problematic term \(-\varepsilon^{2}B\times(\partial^{3}\nabla D_{t}p)\).
We invoke a simple vector identity (in 3D) \((B\cdot\nabla)B-\frac{1}{2}\nabla|B|^{2}=-B\times(\nabla\times B)\) to rewrite the momentum equation as

\[\rho D_{t}u+B\times(\nabla\times B)=-\nabla p.\]

So, we commute \(D_{t}\) with \(\nabla\) and insert this equation into \(\varepsilon^{2}B\times(\partial^{3}\nabla D_{t}p)\) to get

\[-\varepsilon^{2}B\times(\partial^{3}\nabla D_{t}p)=-\varepsilon^{2}B\times(\partial^{3}D_{t}\nabla p)-\varepsilon^{2}B\times(\partial^{3}(\nabla u_{i}\partial_{i}p))\]
\[=\varepsilon^{2}B\times\partial^{3}D_{t}(\rho D_{t}u)+\varepsilon^{2}B\times\partial^{3}D_{t}(B\times(\nabla\times B))-\varepsilon^{2}B\times(\partial^{3}(\nabla u_{i}\partial_{i}p))\]
\[=\varepsilon^{2}\rho B\times(\partial^{3}D_{t}^{2}u)+\varepsilon^{2}\underbrace{B\times(B\times(\partial^{3}D_{t}(\nabla\times B)))}_{=0}\]
\[\underbrace{+\varepsilon^{2}B\times([\partial^{3}D_{t},\rho]D_{t}u)+\varepsilon^{2}B\times([\partial^{3}D_{t},B\times]\nabla\times B)-\varepsilon^{2}B\times(\partial^{3}(\nabla u_{i}\partial_{i}p))}_{\mathcal{R}_{3}}, \tag{3.18}\]

where \(\|\mathcal{R}_{3}\|_{0}\leq P(E_{4}(t))\). Therefore, we control the term \(\mathcal{K}\) with both \(E_{4}(t)\) and \(E_{5}(t)\) as follows:

\[\mathcal{K}\leq\|B\|_{4}(P(E_{4}(t))+\|\rho B\|_{\infty}\|\varepsilon^{2}D_{t}^{2}u\|_{3})\leq P(E_{4}(t),E_{5}(t)), \tag{3.19}\]

which gives us the energy estimate

\[\frac{\mathrm{d}}{\mathrm{d}t}\left(\|\nabla\times u\|_{3}^{2}+\|\nabla\times B\|_{3}^{2}\right)\leq P(E_{4}(t),E_{5}(t)). \tag{3.20}\]

Recall that \(D_{t}=\partial_{t}+u\cdot\nabla=\partial_{t}+\bar{u}\cdot\overline{\nabla}+u_{3}\partial_{3}\) and \(u_{3}|_{\Sigma}=0\) imply that \(D_{t}\) is a tangential derivative. So, the above analysis shows that the curl estimate for compressible ideal MHD cannot be closed in standard Sobolev spaces, but in fact we can trade one normal derivative (\(\nabla\times\)) for two tangential derivatives together with a square Mach number weight, that is, \(\varepsilon^{2}\mathcal{T}^{2}\). This step naturally introduces the so-called anisotropic Sobolev space and also explains why we add the \(\varepsilon^{2}\) weight to \(E_{5}(t)\).

Similarly, we can prove the same conclusion for \(\partial^{\alpha}\partial_{t}^{k}\) with \(k+|\alpha|=3\) by replacing \(\partial^{3}\) with \(\varepsilon^{(k-1)_{+}}\partial^{\alpha}\partial_{t}^{k}\). Indeed, the highest order derivatives in the above commutators do not exceed 4-th order, and there is no loss of Mach number weight because none of the above steps creates a negative power of the Mach number. Besides, the problematic term can still be analyzed in the same manner:

\[\varepsilon^{(k-1)_{+}+2}B\times(\partial^{\alpha}\partial_{t}^{k}\nabla D_{t}p)\overset{L}{=}-\varepsilon^{(k-1)_{+}}\rho B\times\partial^{\alpha}\partial_{t}^{k}(\varepsilon^{2}D_{t}^{2}u),\]

where the \(L^{2}\) norms of the omitted lower-order terms are controlled by \(P(E_{4}(t))\). Hence, we can conclude the vorticity analysis for \(E_{4}(t)\) with the following inequality

\[\sum_{k=0}^{3}\frac{\mathrm{d}}{\mathrm{d}t}\left(\left\|\varepsilon^{(k-1)_{+}}\nabla\times\partial_{t}^{k}u\right\|_{3-k}^{2}+\left\|\varepsilon^{(k-1)_{+}}\nabla\times\partial_{t}^{k}B\right\|_{3-k}^{2}\right)\leq P(E_{4}(t),E_{5}(t)). \tag{3.21}\]

The above div-curl analysis shows that it is necessary to control \(\|\varepsilon^{2+(k-1)_{+}}\mathcal{T}^{2}\partial_{t}^{k}u\|_{3-k}\), which should be analyzed together with \(\|\varepsilon^{2+(k-1)_{+}}\mathcal{T}^{2}\partial_{t}^{k}B\|_{3-k}\).
When \(k=3\), \(\mathcal{T}^{2}\partial_{t}^{3}\) is a purely tangential derivative, and we postpone the corresponding tangential estimates to the next subsection. When \(0\leq k\leq 2\), we again apply Lemma 3.1 to get \[\|\varepsilon^{2+(k-1)_{+}}\partial_{t}^{k}\mathcal{T}^{2}u\|_{3-k}^{2}\leq\|\varepsilon^{2+(k-1)_{+}}\partial_{t}^{k}\mathcal{T}^{2}u\|_{0}^{2}+\|\varepsilon^{2+(k-1)_{+}}\partial_{t}^{k}\nabla\cdot\mathcal{T}^{2}u\|_{2-k}^{2}+\|\varepsilon^{2+(k-1)_{+}}\partial_{t}^{k}\nabla\times\mathcal{T}^{2}u\|_{2-k}^{2}, \tag{3.22}\] where we have already used the boundary conditions and the divergence constraint to eliminate the boundary normal traces and \(\nabla\cdot B\). The divergence part is reduced to estimates of \(p\) by using the continuity equation \[\|\varepsilon^{2+(k-1)_{+}}\partial_{t}^{k}\nabla\cdot\mathcal{T}^{2}u\|_{2-k}^{2}\leq\|\varepsilon^{2+(k-1)_{+}}\partial_{t}^{k}\mathcal{T}^{2}\nabla\cdot u\|_{2-k}^{2}+\|\varepsilon^{2+(k-1)_{+}}[\mathcal{T}^{2},\nabla\cdot]\partial_{t}^{k}u\|_{2-k}^{2} \tag{3.23}\] \[=\left\|\varepsilon^{4+(k-1)_{+}}\partial_{t}^{k}\mathcal{T}^{2}(D_{t}p)\right\|_{2-k}^{2}+\left\|\varepsilon^{2+(k-1)_{+}}[\mathcal{T}^{2},\nabla\cdot]\partial_{t}^{k}u\right\|_{2-k}^{2}, \tag{3.24}\] where \(\|\varepsilon^{2+(k-1)_{+}}[\mathcal{T}^{2},\nabla\cdot]\partial_{t}^{k}u\|_{2-k}^{2}\) can be controlled by \(\varepsilon^{4}E_{4}(t)\leq\varepsilon^{4}E_{4}(0)+\int_{0}^{t}E_{5}(\tau)\,\mathrm{d}\tau\). Indeed, such commutators only appear when we commute the third component of \(\nabla\cdot\) with \(\omega\partial_{3}\). We have the following identity, which can be shown by induction: \[[(\omega\partial_{3})^{k},\partial_{3}](\cdot)=\sum_{\ell\leq k-1}c_{k,\ell}(\omega\partial_{3})^{\ell}\partial_{3}(\cdot)=\sum_{\ell\leq k-1}d_{k,\ell}\partial_{3}(\omega\partial_{3})^{\ell}(\cdot), \tag{3.25}\] where \(c_{k,\ell}\) and \(d_{k,\ell}\) are smooth functions that depend on \(k\), \(\ell\) and the derivatives (up to order \(k\)) of \(\omega\). For example, take \(k=0\) and \(\mathcal{T}^{2}=(\omega\partial_{3})\overline{\partial}\), for which the highest-order term in the commutator is \(\|\varepsilon^{2}(\partial_{3}\omega(x_{3}))\partial_{3}\overline{\partial}u_{3}\|_{2}\leq\varepsilon^{4}E_{4}(t).\) This term can either be controlled by \(\varepsilon^{4}E_{4}(0)+\int_{0}^{t}E_{5}(\tau)\,\mathrm{d}\tau\) or absorbed by \(E_{4}(t)\) when \(\varepsilon\) is sufficiently small. Similar estimates apply to the commutator \(\|\varepsilon^{2+(k-1)_{+}}[\mathcal{T}^{2},\nabla\cdot]\partial_{t}^{k}B\|_{2-k}^{2}\), so we omit the details. The term \(\|\varepsilon^{4+(k-1)_{+}}\partial_{t}^{k}\mathcal{T}^{2}(D_{t}p)\|_{2-k}^{2}\) will be further reduced to tangential estimates of \((u,B)\) by using the argument in Section 3.2.1. Finally, we need to control the tangential derivatives of \(u\) and \(B\) (including time derivatives) in \(E_{5}(t)\) and the \(L^{2}\) norm of \(\partial_{t}^{5}p\) with suitable Mach number weights. Next, we analyze the vorticity term; the proof is parallel to Lemma 3.2.
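Before turning to the vorticity term, we record a quick sanity check of identity (3.25); this computation is illustrative and not part of the proof. For \(k=1\),
\[[\omega\partial_{3},\partial_{3}]f=\omega\partial_{3}\partial_{3}f-\partial_{3}(\omega\partial_{3}f)=-(\partial_{3}\omega)\,\partial_{3}f,\]
so \(c_{1,0}=d_{1,0}=-\partial_{3}\omega\). The general case follows by induction on \(k\), with the Leibniz rule generating the smooth coefficients \(c_{k,\ell}\) and \(d_{k,\ell}\) from derivatives of \(\omega\).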
**Lemma 3.3**.: \[\sum_{k=0}^{2}\frac{\mathrm{d}}{\mathrm{d}t}\left(\left\|\varepsilon^{2+(k-1)_{+}}\nabla\times\mathcal{T}^{2}\partial_{t}^{k}u\right\|_{2-k}^{2}+\left\|\varepsilon^{2+(k-1)_{+}}\nabla\times\mathcal{T}^{2}\partial_{t}^{k}B\right\|_{2-k}^{2}\right)\leq P(E_{4}(t),E_{5}(t),E_{6}(t)). \tag{3.26}\] Proof.: For the case \(k=0\), \(\alpha_{3}=\alpha_{4}=0\), \(\mathcal{T}^{\alpha}=\partial_{t}^{\alpha_{0}}\partial_{1}^{\alpha_{1}}\partial_{2}^{\alpha_{2}}\), we take \(\partial^{2}\nabla\times\mathcal{T}^{2}\) in the momentum equation \(\rho D_{t}u-(B\cdot\nabla)B=-\nabla(p+\frac{1}{2}|B|^{2})\) to get \[\rho D_{t}(\partial^{2}\nabla\times\mathcal{T}^{2}u)-(B\cdot\nabla)(\partial^{2}\nabla\times\mathcal{T}^{2}B)=[\rho D_{t},\partial^{2}\nabla\times\mathcal{T}^{2}]u-[B\cdot\nabla,\partial^{2}\nabla\times\mathcal{T}^{2}]B, \tag{3.27}\] and then we have \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\rho|\varepsilon^{2}\partial^{2}\nabla\times\mathcal{T}^{2}u|^{2}\,\mathrm{d}x\] \[=\int_{\Omega}\rho D_{t}(\varepsilon^{2}\partial^{2}\nabla\times\mathcal{T}^{2}u)\cdot(\varepsilon^{2}\partial^{2}\nabla\times\mathcal{T}^{2}u)\,\mathrm{d}x\] \[\underbrace{-\int_{\Omega}\rho(u\cdot\nabla)(\varepsilon^{2}\partial^{2}\nabla\times\mathcal{T}^{2}u)\cdot(\varepsilon^{2}\partial^{2}\nabla\times\mathcal{T}^{2}u)\,\mathrm{d}x-\int_{\Omega}\Big{(}\frac{\partial\rho}{\partial p}\partial_{t}p+\partial_{S}\rho\,\partial_{t}S\Big{)}|\varepsilon^{2}\partial^{2}\nabla\times\mathcal{T}^{2}u|^{2}\,\mathrm{d}x}_{:=\mathcal{I}'_{1}}\] \[=\int_{\Omega}\varepsilon^{2}(B\cdot\nabla)(\partial^{2}\nabla\times\mathcal{T}^{2}B)\cdot(\varepsilon^{2}\partial^{2}\nabla\times\mathcal{T}^{2}u)\,\mathrm{d}x\] \[\underbrace{+\int_{\Omega}\varepsilon^{2}\left([\rho D_{t},\partial^{2}\nabla\times\mathcal{T}^{2}]u-[B\cdot\nabla,\partial^{2}\nabla\times\mathcal{T}^{2}]B\right)\cdot(\varepsilon^{2}\partial^{2}\nabla\times\mathcal{T}^{2}u)\,\mathrm{d}x}_{:=\mathcal{I}'_{2}}+\mathcal{I}'_{1}. \tag{3.28}\]
Then we integrate \(B\cdot\nabla\) by parts and invoke the evolution equation of \(B\), namely \(D_{t}B=(B\cdot\nabla)u-B(\nabla\cdot u)\), to get \[\int_{\Omega}\varepsilon^{2}(B\cdot\nabla)(\partial^{2}\nabla\times\mathcal{T}^{2}B)\cdot(\varepsilon^{2}\partial^{2}\nabla\times\mathcal{T}^{2}u)\,\mathrm{d}x\] \[= -\int_{\Omega}\varepsilon^{2}(\partial^{2}\nabla\times\mathcal{T}^{2}B)\cdot\varepsilon^{2}(\partial^{2}\nabla\times\mathcal{T}^{2}(B\cdot\nabla u))\,\mathrm{d}x-\int_{\Omega}\varepsilon^{2}(\partial^{2}\nabla\times\mathcal{T}^{2}B)\cdot\varepsilon^{2}([B\cdot\nabla,\partial^{2}\nabla\times\mathcal{T}^{2}]u)\,\mathrm{d}x\] \[= -\int_{\Omega}\varepsilon^{2}(\partial^{2}\nabla\times\mathcal{T}^{2}B)\cdot\varepsilon^{2}(\partial^{2}\nabla\times\mathcal{T}^{2}(D_{t}B))\,\mathrm{d}x-\int_{\Omega}\varepsilon^{2}(\partial^{2}\nabla\times\mathcal{T}^{2}B)\cdot\varepsilon^{2}(\partial^{2}\nabla\times\mathcal{T}^{2}(B\,\nabla\cdot u))\,\mathrm{d}x\] \[\quad-\int_{\Omega}\varepsilon^{2}(\partial^{2}\nabla\times\mathcal{T}^{2}B)\cdot\varepsilon^{2}([B\cdot\nabla,\partial^{2}\nabla\times\mathcal{T}^{2}]u)\,\mathrm{d}x\] \[= -\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}|\varepsilon^{2}\partial^{2}\nabla\times\mathcal{T}^{2}B|^{2}\,\mathrm{d}x\underbrace{+\int_{\Omega}\varepsilon^{2}\partial^{2}\nabla\times\mathcal{T}^{2}B\cdot\varepsilon^{4}\partial^{2}\nabla\times\mathcal{T}^{2}(BD_{t}p)\,\mathrm{d}x}_{:=\mathcal{K}'}\] \[\underbrace{-\int_{\Omega}\varepsilon^{2}\partial^{2}\nabla\times\mathcal{T}^{2}B\cdot\varepsilon^{2}[\partial^{2}\nabla\times\mathcal{T}^{2},D_{t}]B\,\mathrm{d}x-\int_{\Omega}\varepsilon^{2}(\partial^{2}\nabla\times\mathcal{T}^{2}B)\cdot\varepsilon^{2}([B\cdot\nabla,\partial^{2}\nabla\times\mathcal{T}^{2}]u)\,\mathrm{d}x}_{:=\mathcal{I}'_{3}}. \tag{3.29}\] A straightforward product estimate for \(\mathcal{I}'_{1}\) gives us \[\mathcal{I}'_{1}\leq P(E_{4}(t),E_{5}(t)). \tag{3.30}\] For \(\mathcal{I}'_{2}\), we first consider the commutator \(\varepsilon^{2}[\rho u\cdot\nabla,\partial^{2}\nabla\times\mathcal{T}^{2}]u\). Since \[[\rho u\cdot\nabla,\partial^{2}\nabla\times\mathcal{T}^{2}]u=\partial(\rho u_{j})\partial^{2}\mathcal{T}^{2}\partial_{j}u_{i}+\mathcal{T}(\rho u_{j})\partial^{3}\mathcal{T}\partial_{j}u_{i}+\text{lower-order terms}, \tag{3.31}\] we obtain \[\varepsilon^{2}\|\partial(\rho u_{j})\partial^{2}\mathcal{T}^{2}\partial_{j}u_{i}\|_{0}\leq\|\partial(\rho u)\|_{\infty}\|\varepsilon^{2}\mathcal{T}^{2}u\|_{3}\leq P(E_{4}(t),E_{5}(t)), \tag{3.32}\] \[\varepsilon^{2}\|\mathcal{T}(\rho u_{j})\partial^{3}\mathcal{T}\partial_{j}u_{i}\|_{0}\leq\|\mathcal{T}(\rho u)\|_{\infty}\|\varepsilon^{2}\mathcal{T}\partial_{j}u\|_{3}\leq P(E_{4}(t),E_{5}(t)),\quad j=1,2. \tag{3.33}\] Since \(u_{3}|_{\Sigma}=0\), the fundamental theorem of calculus gives \(u_{3}(x_{3})=u_{3}(-1)+\int_{-1}^{x_{3}}\partial_{3}u_{3}(\xi_{3})\,\mathrm{d}\xi_{3}\) (W.L.O.G. \(x_{3}<0\)), so the length of the interval \([-1,x_{3}]\) is comparable to the weight function \(\omega(x_{3})=(1-x_{3})(1+x_{3})\). Since \(\mathcal{T}(\rho u_{3})\) also vanishes on \(\Sigma\), we have \[|\mathcal{T}(\rho u_{3})(x_{3})|\leq\int_{-1}^{x_{3}}\|\partial_{3}\mathcal{T}(\rho u_{3})(\xi_{3})\|_{\infty}\,\mathrm{d}\xi_{3}\lesssim\omega(x_{3})\|\partial_{3}\mathcal{T}(\rho u_{3})\|_{\infty}.\] Then we can combine this \(\omega(x_{3})\) with \(\partial_{3}\) to convert a normal derivative into a tangential derivative.
\[\varepsilon^{2}\|\mathcal{T}(\rho u_{3})\partial^{3}\mathcal{T}\partial_{3}u_{i}\|_{0}\leq\|\partial_{3}\mathcal{T}(\rho u_{3})\|_{\infty}\|\varepsilon^{2}\omega(x_{3})\partial^{3}\mathcal{T}\partial_{3}u\|_{0}\leq\|\partial_{3}\mathcal{T}(\rho u_{3})\|_{\infty}\|\varepsilon^{2}\partial^{3}\mathcal{T}\omega(x_{3})\partial_{3}u\|_{0}+P(E_{4}(t),E_{5}(t))\leq P(E_{4}(t),E_{5}(t)). \tag{3.34}\] Thus, \[\|\varepsilon^{2}[\rho u\cdot\nabla,\partial^{2}\nabla\times\mathcal{T}^{2}]u\|_{0}\leq P(E_{4}(t),E_{5}(t)). \tag{3.35}\] Similarly, since \(B_{3}|_{\Sigma}=0\), \[\|\varepsilon^{2}[\rho\partial_{t},\partial^{2}\nabla\times\mathcal{T}^{2}]u\|_{0}+\|\varepsilon^{2}[B\cdot\nabla,\partial^{2}\nabla\times\mathcal{T}^{2}]B\|_{0}\leq P(E_{4}(t),E_{5}(t)), \tag{3.36}\] and then \[\mathcal{I}'_{2}\leq P(E_{4}(t),E_{5}(t)). \tag{3.37}\] We can also obtain the estimate of \(\mathcal{I}'_{3}\) in a similar way: \[\mathcal{I}'_{3}\leq P(E_{4}(t),E_{5}(t)). \tag{3.38}\] For \(\mathcal{K}'\), we consider the estimate of \(\varepsilon^{4}\partial^{2}\nabla\times\mathcal{T}^{2}(BD_{t}p)\). Since \[\varepsilon^{4}\partial^{2}\nabla\times\mathcal{T}^{2}(BD_{t}p)=\varepsilon^{4}\partial^{2}\mathcal{T}^{2}(D_{t}\nabla p\times B)\underbrace{+\varepsilon^{4}\partial^{2}\mathcal{T}^{2}(\nabla u\cdot\nabla p\times B)+\varepsilon^{4}\partial^{2}\mathcal{T}^{2}(D_{t}p\,\nabla\times B)}_{:=\mathcal{R}'_{1}}\] \[=\varepsilon^{4}\partial^{2}\mathcal{T}^{2}D_{t}\nabla p\times B\underbrace{-\varepsilon^{4}[\partial^{2}\mathcal{T}^{2},B\times](D_{t}\nabla p)}_{:=\mathcal{R}'_{2}}+\mathcal{R}'_{1}\] \[= -\varepsilon^{4}\partial^{2}\mathcal{T}^{2}D_{t}(\rho D_{t}u)\times B+\varepsilon^{4}\partial^{2}\mathcal{T}^{2}D_{t}((\nabla\times B)\times B)\times B+\mathcal{R}'_{1}+\mathcal{R}'_{2}\] \[= -\varepsilon^{4}\rho\,\partial^{2}\mathcal{T}^{2}D_{t}^{2}u\times B+\underbrace{\varepsilon^{4}[\partial^{2}\mathcal{T}^{2}D_{t}(\nabla\times B)\times B]\times B}_{=0}+\mathcal{R}'_{2}\] \[\underbrace{-\varepsilon^{4}([\partial^{2}\mathcal{T}^{2}D_{t},\rho]D_{t}u)\times B-\varepsilon^{4}([\partial^{2}\mathcal{T}^{2}D_{t},B\times](\nabla\times B))\times B}_{:=\mathcal{R}'_{3}}, \tag{3.39}\] we get that \[\|\mathcal{R}'_{1}+\mathcal{R}'_{2}+\mathcal{R}'_{3}\|_{0}\lesssim P(E_{4}(t))\left(\|\varepsilon^{2}\mathcal{T}^{2}B\|_{3}+\|\varepsilon^{2}\mathcal{T}^{3}B\|_{2}+\|\varepsilon^{4}\mathcal{T}^{2}D_{t}p\|_{2}+\|\varepsilon^{4}\mathcal{T}^{2}u\|_{3}\right), \tag{3.40}\] and thus \[\mathcal{K}'\leq P(E_{4}(t))\|\varepsilon^{4}\mathcal{T}^{2}D_{t}^{2}u\|_{2}+P(E_{4}(t),E_{5}(t)). \tag{3.41}\] So we obtain the vorticity estimates for \(k=0\) and \(\alpha_{4}=0\): \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\|\varepsilon^{2}\nabla\times\mathcal{T}^{2}u\|_{2}^{2}+\left\|\varepsilon^{2}\nabla\times\mathcal{T}^{2}B\right\|_{2}^{2}\right)\leq P(E_{4}(t),E_{5}(t))\|\varepsilon^{4}\mathcal{T}^{2}D_{t}^{2}u\|_{2}\leq P(E_{4}(t),E_{5}(t),E_{6}(t)). \tag{3.42}\] For the case \(\alpha_{4}\geq 1\), since \([\partial_{3},\omega\partial_{3}]\neq 0\), we need to reconsider the estimates of the commutators. First, we consider the commutators \([\mathcal{T}^{2},\nabla\times](\cdot)\) and \([\mathcal{T}^{2},\nabla\cdot](\cdot)\).
According to the identity (3.25), such commutators contain at most first-order derivatives, which can be controlled by \[\|\varepsilon^{2}[\mathcal{T}^{2},\nabla\times]u\|_{2}+\|\varepsilon^{2}[\mathcal{T}^{2},\nabla\times]B\|_{2}+\|\varepsilon^{2}[\mathcal{T}^{2},\nabla\cdot]u\|_{2}\leq\varepsilon^{2}\left(E_{4}(0)+\int_{0}^{t}P(E_{4}(\tau))\,\mathrm{d}\tau\right). \tag{3.43}\] Next, for example, we consider the commutator \(\varepsilon^{2}[B\cdot\nabla,\partial^{2}\nabla\times(\omega\partial_{3})\mathcal{T}]B\) in \(\mathcal{I}'_{2}\) (we take \(\alpha_{4}=1\) and \(\mathcal{T}^{\alpha}=(\omega\partial_{3})\partial_{t}^{\alpha_{0}}\partial_{1}^{\alpha_{1}}\partial_{2}^{\alpha_{2}}\) with \(\alpha_{0}+\alpha_{1}+\alpha_{2}=1\), without loss of generality): \[[B\cdot\nabla,\partial^{2}\nabla\times(\omega\partial_{3})\mathcal{T}]B=[B_{3}\partial_{3},\omega\partial_{3}^{4}\mathcal{T}]B+\text{lower-order terms}\overset{L}{=}B_{3}\partial_{3}(\omega\partial_{3}^{4}\mathcal{T}B)-\omega\partial_{3}^{4}\mathcal{T}(B_{3}\partial_{3}B)\overset{L}{=}B_{3}(\partial_{3}\omega)\partial_{3}^{3}\mathcal{T}B, \tag{3.44}\] where the "lower-order terms" represent the terms that do not carry the leading-order derivative, as well as the terms in which \(\partial^{2}\nabla\times\) contributes tangential derivatives instead of \(\partial_{3}^{3}\) as above. Since \(B_{3}|_{\Sigma}=0\), we again use the fundamental theorem of calculus to get \[\varepsilon^{2}\|B_{3}(\partial_{3}\omega)\partial_{3}^{3}\mathcal{T}B\|_{0}\lesssim\|\varepsilon^{2}|B_{3}(x_{3})|\partial_{3}^{3}\mathcal{T}B\|_{0}\lesssim\|\partial_{3}B_{3}\|_{\infty}\|\varepsilon^{2}\omega(x_{3})\partial_{3}^{3}\mathcal{T}B\|_{0}\leq P(E_{4}(t)). \tag{3.45}\] All other terms in \(\mathcal{I}'_{2}\) and \(\mathcal{I}'_{3}\) can be treated similarly. Thus, we obtain \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\left\|\varepsilon^{2}\nabla\times\mathcal{T}^{2}u\right\|_{2}^{2}+\left\|\varepsilon^{2}\nabla\times\mathcal{T}^{2}B\right\|_{2}^{2}\right)\leq P(E_{4}(t),E_{5}(t))\|\varepsilon^{4}\mathcal{T}^{4}u\|_{2}\leq P(E_{4}(t),E_{5}(t),E_{6}(t)). \tag{3.46}\] For \(k=1,2\), we obtain the following energy estimates in a similar way as in the proof of Lemma 3.2: \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\left\|\varepsilon^{2}\nabla\times\mathcal{T}^{2}\partial_{t}u\right\|_{1}^{2}+\left\|\varepsilon^{2}\nabla\times\mathcal{T}^{2}\partial_{t}B\right\|_{1}^{2}\right)\leq P(E_{4}(t),E_{5}(t))\|\varepsilon^{4}\mathcal{T}^{4}\partial_{t}u\|_{1}\leq P(E_{4}(t),E_{5}(t),E_{6}(t)), \tag{3.47}\] \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\left\|\varepsilon^{3}\nabla\times\mathcal{T}^{2}\partial_{t}^{2}u\right\|_{0}^{2}+\left\|\varepsilon^{3}\nabla\times\mathcal{T}^{2}\partial_{t}^{2}B\right\|_{0}^{2}\right)\leq P(E_{4}(t),E_{5}(t))\|\varepsilon^{5}\mathcal{T}^{4}\partial_{t}^{2}u\|_{0}\leq P(E_{4}(t),E_{5}(t),E_{6}(t)). \tag{3.48}\] The proof is completed. The proof of Lemma 3.3 shows that we need to trade one normal derivative for two more tangential derivatives with Mach number weight \(\varepsilon^{2}\); that is, we need to control \(\|\varepsilon^{4}\mathcal{T}^{4}(\partial_{t}^{k}u,\partial_{t}^{k}B)\|_{2-k}\). We again carry out the analogous div-curl analysis and reduce it to \(\|\varepsilon^{6}\mathcal{T}^{6}(\partial_{t}^{k}u,\partial_{t}^{k}B)\|_{1-k}\), and then repeat once more, so that we finally need to control \(\|\varepsilon^{8}\mathcal{T}^{8}(u,B)\|_{0}\), which is a purely tangential estimate.
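As a consistency check (a remark for the reader, not needed for the proof), the Mach number weights in (3.46)-(3.48) follow the pattern \(\varepsilon^{2+(k-1)_{+}}\) used throughout this subsection:
\[k=0:\ \varepsilon^{2+(0-1)_{+}}=\varepsilon^{2},\qquad k=1:\ \varepsilon^{2+(1-1)_{+}}=\varepsilon^{2},\qquad k=2:\ \varepsilon^{2+(2-1)_{+}}=\varepsilon^{3},\]
matching the weights appearing in (3.46), (3.47) and (3.48), respectively.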
To sum up, we can obtain the following estimates by mimicking the proofs of Lemma 3.2 and Lemma 3.3. **Lemma 3.4**.: The div-curl analysis and vorticity estimates for \(\|\varepsilon^{4}\mathcal{T}^{4}(u,B)\|_{2}\), \(\|\varepsilon^{4}\partial_{t}\mathcal{T}^{4}(u,B)\|_{1}\) and \(\|\varepsilon^{6}\mathcal{T}^{6}(u,B)\|_{1}\) are listed below: \[\|\varepsilon^{4}\mathcal{T}^{4}\partial_{t}^{k}u\|_{2-k}^{2}\lesssim\|\varepsilon^{4}\mathcal{T}^{4}\partial_{t}^{k}u\|_{0}^{2}+\|\varepsilon^{4}\nabla\times\mathcal{T}^{4}\partial_{t}^{k}u\|_{1-k}^{2}+\|\varepsilon^{4}\nabla\cdot\mathcal{T}^{4}\partial_{t}^{k}u\|_{1-k}^{2},\ k=0,1, \tag{3.49}\] \[\|\varepsilon^{4}\mathcal{T}^{4}\partial_{t}^{k}B\|_{2-k}^{2}\lesssim\|\varepsilon^{4}\mathcal{T}^{4}\partial_{t}^{k}B\|_{0}^{2}+\|\varepsilon^{4}\nabla\times\mathcal{T}^{4}\partial_{t}^{k}B\|_{1-k}^{2}+\|\varepsilon^{4+(k-1)_{+}}[\mathcal{T}^{4},\nabla\cdot]\partial_{t}^{k}B\|_{1-k}^{2},\ k=0,1, \tag{3.50}\] \[\sum_{k=0,1}\frac{\mathrm{d}}{\mathrm{d}t}\left(\left\|\varepsilon^{4}\nabla\times\partial_{t}^{k}\mathcal{T}^{4}u\right\|_{1-k}^{2}+\left\|\varepsilon^{4}\nabla\times\partial_{t}^{k}\mathcal{T}^{4}B\right\|_{1-k}^{2}\right)\leq P(E_{4}(t),E_{5}(t),E_{6}(t))\|\varepsilon^{6}\mathcal{T}^{6}\partial_{t}^{k}u\|_{1-k}, \tag{3.51}\] \[\|\varepsilon^{6}\mathcal{T}^{6}u\|_{1}^{2}\lesssim\|\varepsilon^{6}\mathcal{T}^{6}u\|_{0}^{2}+\|\varepsilon^{6}\nabla\times\mathcal{T}^{6}u\|_{0}^{2}+\|\varepsilon^{6}\nabla\cdot\mathcal{T}^{6}u\|_{0}^{2}, \tag{3.52}\] \[\|\varepsilon^{6}\mathcal{T}^{6}B\|_{1}^{2}\lesssim\|\varepsilon^{6}\mathcal{T}^{6}B\|_{0}^{2}+\|\varepsilon^{6}\nabla\times\mathcal{T}^{6}B\|_{0}^{2}+\|\varepsilon^{6}[\mathcal{T}^{6},\nabla\cdot]B\|_{0}^{2}, \tag{3.53}\] \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\left\|\varepsilon^{6}\nabla\times\mathcal{T}^{6}u\right\|_{0}^{2}+\left\|\varepsilon^{6}\nabla\times\mathcal{T}^{6}B\right\|_{0}^{2}\right)\leq P(E_{4}(t),E_{5}(t),E_{6}(t),E_{7}(t))\|\varepsilon^{8}\mathcal{T}^{8}u\|_{0}. \tag{3.54}\] Besides, the divergence part is reduced to estimates of \(p\) by using the continuity equation \[\|\varepsilon^{4}\partial_{t}^{k}\nabla\cdot\mathcal{T}^{4}u\|_{1-k}^{2}\lesssim\left\|\varepsilon^{6}\partial_{t}^{k}\mathcal{T}^{4}(D_{t}p)\right\|_{1-k}^{2}+\|\varepsilon^{4}[\mathcal{T}^{4},\nabla\cdot]\partial_{t}^{k}u\|_{1-k}^{2},\quad k=0,1,\] \[\|\varepsilon^{6}\nabla\cdot\mathcal{T}^{6}u\|_{0}^{2}\lesssim\left\|\varepsilon^{8}\mathcal{T}^{6}(D_{t}p)\right\|_{0}^{2}+\|\varepsilon^{6}[\mathcal{T}^{6},\nabla\cdot]u\|_{0}^{2}, \tag{3.55}\] where the commutators can be controlled, by using the identity (3.25), as \[\sum_{k=0,1}\|\varepsilon^{4}[\mathcal{T}^{4},\nabla\cdot]\partial_{t}^{k}u\|_{1-k}^{2}\leq\varepsilon^{4}E_{5}(t)\lesssim\varepsilon^{4}E_{5}(0)+\int_{0}^{t}E_{6}(\tau)\,\mathrm{d}\tau, \tag{3.56}\] \[\|\varepsilon^{6}[\mathcal{T}^{6},\nabla\cdot]u\|_{0}^{2}\leq\varepsilon^{4}E_{6}(t)\lesssim\varepsilon^{4}E_{6}(0)+\int_{0}^{t}E_{7}(\tau)\,\mathrm{d}\tau. \tag{3.57}\] Moreover, \(\left\|\varepsilon^{6}\partial_{t}^{k}\mathcal{T}^{4}(D_{t}p)\right\|_{1-k}^{2}\) and \(\left\|\varepsilon^{8}\mathcal{T}^{6}(D_{t}p)\right\|_{0}^{2}\) will be further reduced to tangential estimates of \((u,B)\) by using the argument in Section 3.2.1. #### 3.2.3 The remaining tangential estimates Recall that the entropy \(S\) is directly controlled via \(D_{t}S=0\), so we can temporarily set it aside.
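To illustrate why the entropy is harmless, here is a minimal sketch of the lowest-order estimate; it assumes only \(D_{t}S=0\), \(u_{3}|_{\Sigma}=0\) and integration by parts:
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}|S|^{2}\,\mathrm{d}x=\int_{\Omega}S\,\partial_{t}S\,\mathrm{d}x=-\int_{\Omega}S\,(u\cdot\nabla)S\,\mathrm{d}x=\frac{1}{2}\int_{\Omega}(\nabla\cdot u)|S|^{2}\,\mathrm{d}x\leq\frac{1}{2}\|\nabla\cdot u\|_{\infty}\|S\|_{0}^{2},\]
with no boundary term since \(u\cdot n=u_{3}=0\) on \(\Sigma\). The weighted higher-order entropy estimates recorded in (3.86) below follow the same pattern after commuting the derivatives with \(D_{t}\).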
Let us summarize which tangential derivatives we must control in order to close the estimates uniformly in the Mach number \(\varepsilon\). Again we start with \(\|\varepsilon^{(k-1)_{+}}(\partial_{t}^{k}u,\partial_{t}^{k}B)\|_{4-k}\) in \(E_{4}(t)\). We set such Mach number weights based on the following facts:

* \(H^{4}\) regularity: Several commutator estimates require a bound for \(\|\overline{\partial}^{2}(u,B)\|_{L^{\infty}}\). Recall that \(H^{\frac{d}{2}+\delta}\hookrightarrow L^{\infty}\), so it is convenient to choose \(H^{2+\lceil\frac{d}{2}+\delta\rceil}\) regularity, that is, \(H^{4}\) for \(d=2,3\) (a short numerical check is given at the end of this summary).
* The initial data is well-prepared, so \(\|\nabla p(0)\|_{3}\) should be bounded uniformly in \(\varepsilon\), and we add Mach number weights to \(\partial_{t}^{k}p\) when \(k\geq 1\).
* The \(L^{2}\) estimate in Section 3.1 suggests that \(u,B,\varepsilon p\) should share the same Mach number weights, because taking tangential derivatives preserves the boundary conditions.
* The reduction procedure in Section 3.2.1 shows that \(\nabla p\) is converted to \(\mathcal{T}(u,B)\) and \(\nabla B\). When \(\mathcal{T}\) is a spatial derivative, \(\mathcal{T}(u,B)\) and \(\nabla B\) should be controlled via div-curl analysis as shown in Lemma 3.2. When \(\mathcal{T}=\partial_{t}\), we find that \(\partial_{t}^{k}p\) and \(\partial_{t}^{k+1}(u,B)\) should share the same Mach number weights.

Thus, \(E_{4}(t)\) consists of the following quantities:
\[\|u,B,S,p\|_{4},\ \|\partial_{t}u,\partial_{t}B,\partial_{t}S,\varepsilon\partial_{t}p\|_{3},\ \|\varepsilon\partial_{t}^{2}u,\varepsilon\partial_{t}^{2}B,\varepsilon\partial_{t}^{2}S,\varepsilon^{2}\partial_{t}^{2}p\|_{2},\]
\[\|\varepsilon^{2}\partial_{t}^{3}u,\varepsilon^{2}\partial_{t}^{3}B,\varepsilon^{2}\partial_{t}^{3}S,\varepsilon^{3}\partial_{t}^{3}p\|_{1},\ \|\varepsilon^{3}\partial_{t}^{4}u,\varepsilon^{3}\partial_{t}^{4}B,\varepsilon^{3}\partial_{t}^{4}S,\varepsilon^{4}\partial_{t}^{4}p\|_{0}.\]
The Sobolev norms \(\|\varepsilon^{(k-1)_{+}}(\partial_{t}^{k}u,\partial_{t}^{k}B)\|_{4-k}\) for \(0\leq k\leq 3\) are controlled via div-curl analysis. The divergence part can be absorbed by \(E_{4}(t)\) itself because the continuity equation produces an extra \(\varepsilon^{2}\) weight and \(\varepsilon^{2}<\varepsilon\) since we assume the Mach number is small. Even if we do not assume \(\varepsilon\) is small, we can alternatively control the divergence by repeatedly using the reduction of the pressure so that only tangential derivatives remain. The curl part, thanks to the special structure of the Lorentz force, is reduced to \(\|\varepsilon^{(k-1)_{+}+2}D_{t}^{2}\partial_{t}^{k}u\|_{3-k}\), which is a part of \(E_{5}(t)\). So, it remains to control \(E_{5}(t)\) and the full time derivatives, namely \(\|\varepsilon^{3}\partial_{t}^{4}u,\varepsilon^{3}\partial_{t}^{4}B,\varepsilon^{4}\partial_{t}^{4}p\|_{0}\). See also diagram (2.5).
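For concreteness, the numerology behind the first bullet point in the list above is the routine chain (written for \(d=3\) and any fixed \(0<\delta<\frac{1}{2}\)):
\[\|\overline{\partial}^{2}(u,B)\|_{L^{\infty}}\lesssim\|\overline{\partial}^{2}(u,B)\|_{H^{\frac{3}{2}+\delta}}\lesssim\|(u,B)\|_{H^{\frac{7}{2}+\delta}}\leq\|(u,B)\|_{4},\]
which is why \(H^{4}\) (rather than \(H^{3}\)) is the natural base regularity here.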
Then \(E_{5}(t)\) is designed in the same manner as \(E_{4}(t)\): \[\|\varepsilon^{2}\mathcal{T}^{2}(u,B,S,p)\|_{3}\ (\text{produced from the vorticity analysis of}\ \|u,B\|_{4}),\] \[\|\varepsilon^{2}\mathcal{T}^{2}(\partial_{t}u,\partial_{t}B,\partial_{t}S,\varepsilon\partial_{t}p)\|_{2}\ (\text{produced from the vorticity analysis of}\ \|\partial_{t}u,\partial_{t}B\|_{3}),\] \[\|\varepsilon^{2}\mathcal{T}^{2}(\varepsilon\partial_{t}^{2}u,\varepsilon\partial_{t}^{2}B,\varepsilon\partial_{t}^{2}S,\varepsilon^{2}\partial_{t}^{2}p)\|_{1}\ (\text{produced from the vorticity analysis of}\ \|\varepsilon\partial_{t}^{2}u,\varepsilon\partial_{t}^{2}B\|_{2}),\] \[\|\varepsilon^{2}\mathcal{T}^{2}(\varepsilon^{2}\partial_{t}^{3}u,\varepsilon^{2}\partial_{t}^{3}B,\varepsilon^{2}\partial_{t}^{3}S,\varepsilon^{3}\partial_{t}^{3}p)\|_{0}\ (\text{produced from the vorticity analysis of}\ \|\varepsilon^{2}\partial_{t}^{3}u,\varepsilon^{2}\partial_{t}^{3}B\|_{1}).\] The div-curl analysis of \(E_{5}(t)\) produces \(\|\varepsilon^{(k-1)_{+}+4}\mathcal{T}^{2}D_{t}^{2}\partial_{t}^{k}u\|_{2-k}\), which is part of \(E_{6}(t)\). In order to close the energy estimates, we now need to control \(E_{6}(t)\) and also the full time derivatives \(\|\varepsilon^{2}\mathcal{T}^{2}(\varepsilon^{2}\partial_{t}^{3}u,\varepsilon^{2}\partial_{t}^{3}B,\varepsilon^{3}\partial_{t}^{3}p)\|_{0}\). Repeating the above procedure once more, we see that \(E_{6}(t)\) should be designed as \[\|\varepsilon^{4}\mathcal{T}^{4}(u,B,S,p)\|_{2}\ (\text{produced from the vorticity analysis of}\ \|\varepsilon^{2}\mathcal{T}^{2}(u,B)\|_{3}),\] \[\|\varepsilon^{4}\mathcal{T}^{4}(\partial_{t}u,\partial_{t}B,\partial_{t}S,\varepsilon\partial_{t}p)\|_{1}\ (\text{produced from the vorticity analysis of}\ \|\varepsilon^{2}\mathcal{T}^{2}(\partial_{t}u,\partial_{t}B)\|_{2}),\] \[\|\varepsilon^{4}\mathcal{T}^{4}(\varepsilon\partial_{t}^{2}u,\varepsilon\partial_{t}^{2}B,\varepsilon\partial_{t}^{2}S,\varepsilon^{2}\partial_{t}^{2}p)\|_{0}\ (\text{produced from the vorticity analysis of}\ \|\varepsilon^{2}\mathcal{T}^{2}(\varepsilon\partial_{t}^{2}u,\varepsilon\partial_{t}^{2}B)\|_{1}).\] Combining the result of the vorticity estimate, it remains to control \(\|\varepsilon^{6}\mathcal{T}^{4}D_{t}^{2}\partial_{t}^{k}u\|_{1-k}\) (part of \(E_{7}(t)\)) and also the full time derivatives \(\|\varepsilon^{4}\mathcal{T}^{4}(\varepsilon\partial_{t}^{2}u,\varepsilon\partial_{t}^{2}B,\varepsilon\partial_{t}^{2}S,\varepsilon^{2}\partial_{t}^{2}p)\|_{0}\). Repeating the above procedure once more, we see that \(E_{7}(t)\) should be designed as \[\|\varepsilon^{6}\mathcal{T}^{6}(u,B,S,p)\|_{1}\ (\text{produced from the vorticity analysis of}\ \|\varepsilon^{4}\mathcal{T}^{4}(u,B)\|_{2}),\] \[\|\varepsilon^{6}\mathcal{T}^{6}(\partial_{t}u,\partial_{t}B,\partial_{t}S,\varepsilon\partial_{t}p)\|_{0}\ (\text{produced from the vorticity analysis of}\ \|\varepsilon^{4}\mathcal{T}^{4}(\partial_{t}u,\partial_{t}B)\|_{1}).\] **Discussion about the weights of \(p\).** For the pressure \(p\), at this step we only have control of \(\|\varepsilon^{k+2l}\mathcal{T}^{\alpha}\partial_{t}^{k}p\|_{4-k-l}\), which has one more \(\varepsilon\)-weight than what we defined in (1.18) when \(k\geq 1\), \(l\geq 1\) and \(\alpha_{0}<2l\).
To replace \(\varepsilon^{k+2l}\) by \(\varepsilon^{(k-1)_{+}+2l}\) when \(l\geq 1\) and \(\alpha_{0}<2l\), we just need to notice that \(\mathcal{T}^{\alpha}\) must now contain at least one tangential spatial derivative, and then we can use the momentum equation \[-\overline{\partial}_{i}p=B_{k}\overline{\partial}_{i}B_{k}+\rho D_{t}u_{i}-(B\cdot\nabla)B_{i},\qquad-\omega\partial_{3}p=B_{k}\underbrace{\omega\partial_{3}B_{k}}_{\text{tangential}}+\omega\rho D_{t}u_{3}-(\omega B\cdot\nabla)B_{3}\] to reduce the control of \(\|\varepsilon^{(k-1)_{+}+2l}\mathcal{T}^{\alpha}\partial_{t}^{k}p\|_{4-k-l}\ (\langle\alpha\rangle=2l,\ \alpha_{0}<2l)\) to \(\|\varepsilon^{(k-1)_{+}+2l}\mathcal{T}^{\beta}\partial_{t}^{k}(u,B)\|_{4-k-l}\) for some \(\beta\) satisfying \(\langle\beta\rangle=2l\) and \(\alpha_{0}\leq\beta_{0}\leq\alpha_{0}+1\), which has been included in \(E_{4+l}(t)\). Combining the result of the vorticity estimate, it remains to control \(\|\varepsilon^{6}\mathcal{T}^{\alpha}(\partial_{t}u,\partial_{t}B,\partial_{t}p)\|_{0}\) for \(\langle\alpha\rangle=6,\ \alpha_{0}<6\), \(\|\varepsilon^{6}\partial_{t}^{7}(u,B,\varepsilon p)\|_{0}\), and \(\|\varepsilon^{8}\mathcal{T}^{\alpha}D_{t}^{2}u\|_{0}\) for \(\langle\alpha\rangle=6\). This leads us to define \(E_{8}(t)\) as \[E_{8}(t)=\sum_{\langle\alpha\rangle=8,\alpha_{0}<8}\left\|\varepsilon^{8}\mathcal{T}^{\alpha}(u,B,S,p)\right\|_{0}^{2}+\left\|\varepsilon^{8}\partial_{t}^{8}(u,B,S,\varepsilon p)\right\|_{0}^{2}.\] To sum up, the remaining estimates are all tangential, and they can be proved by analyzing the \(L^{2}\) estimates after applying the following tangential derivatives to the MHD system (1.13): \[\varepsilon^{3}\partial_{t}^{4},\ \varepsilon^{4}\mathcal{T}^{2}\partial_{t}^{3},\ \varepsilon^{5}\mathcal{T}^{4}\partial_{t}^{2},\ \varepsilon^{6}\mathcal{T}^{6}\partial_{t},\ \varepsilon^{8}\mathcal{T}^{8}. \tag{3.58}\] ### Tangential estimates #### 3.3.1 The \(\mathcal{T}^{\alpha}\)-differentiated equations By the div-curl analysis, the crucial step is to study the higher-order tangential energy estimates of (1.13). In particular, we define the following tangential derivatives \[\mathcal{T}_{0}=\partial_{t},\quad\mathcal{T}_{1}=\partial_{1},\quad\mathcal{T}_{2}=\partial_{2},\quad\mathcal{T}_{4}=\omega(x_{3})\partial_{3}. \tag{3.59}\] We take \(\mathcal{T}^{\alpha}\) in the momentum equation \(\rho D_{t}u=-\nabla p-B\times(\nabla\times B)\) to get \[\rho D_{t}(\mathcal{T}^{\alpha}u)-(B\cdot\nabla)(\mathcal{T}^{\alpha}B)+\mathcal{T}^{\alpha}\nabla(p+|B|^{2}/2)=[\rho D_{t},\mathcal{T}^{\alpha}]u-[B\cdot\nabla,\mathcal{T}^{\alpha}]B, \tag{3.60}\] where \(\mathcal{T}^{\alpha}:=\mathcal{T}_{0}^{\alpha_{0}}\mathcal{T}_{1}^{\alpha_{1}}\mathcal{T}_{2}^{\alpha_{2}}\mathcal{T}_{4}^{\alpha_{4}}\) and \(\langle\alpha\rangle=\alpha_{0}+\alpha_{1}+\alpha_{2}+2\alpha_{3}+\alpha_{4}\leq 8\) (here \(\alpha_{3}=0\), since \(\mathcal{T}^{\alpha}\) contains no pure normal derivative \(\partial_{3}\)). According to Section 3.2.3, it suffices to consider the following cases (cross-checked against (3.58) below):
* \(\langle\alpha\rangle=\alpha_{0}=4\) with \(\varepsilon^{3}\) weight,
* \(\langle\alpha\rangle=5,\ \alpha_{0}\geq 3\) with \(\varepsilon^{4}\) weight,
* \(\langle\alpha\rangle=6,\ \alpha_{0}\geq 2\) with \(\varepsilon^{5}\) weight,
* \(\langle\alpha\rangle=7,\ \alpha_{0}\geq 1\) with \(\varepsilon^{6}\) weight,
* \(\langle\alpha\rangle=8\) with \(\varepsilon^{8}\) weight.
Since these derivatives are all tangential, taking any of them preserves the boundary conditions.
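As a cross-check, each operator in (3.58) realizes exactly one case of the preceding list:
\[\varepsilon^{3}\partial_{t}^{4}:\ \langle\alpha\rangle=\alpha_{0}=4;\qquad\varepsilon^{4}\mathcal{T}^{2}\partial_{t}^{3}:\ \langle\alpha\rangle=5,\ \alpha_{0}\geq 3;\qquad\varepsilon^{5}\mathcal{T}^{4}\partial_{t}^{2}:\ \langle\alpha\rangle=6,\ \alpha_{0}\geq 2;\]
\[\varepsilon^{6}\mathcal{T}^{6}\partial_{t}:\ \langle\alpha\rangle=7,\ \alpha_{0}\geq 1;\qquad\varepsilon^{8}\mathcal{T}^{8}:\ \langle\alpha\rangle=8,\]
so the tangential estimates below exhaust all the required cases.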
For simplicity, we will only show the details of the following cases (either full spatial derivatives or full time derivatives) \[\varepsilon^{8}\mathcal{T}^{\alpha}\ \text{with}\ \langle\alpha\rangle=8\ \text{and}\ \alpha_{0}=0,\quad\varepsilon^{8}\partial_{t}^{8},\quad\varepsilon^{k-1}\partial_{t}^{k}\ (4\leq k\leq 7),\] and the other cases (space-time mixed derivatives) can be proven in the same manner. #### 3.3.2 Tangential energy estimate with full spatial derivatives In this subsection we study the spatially-differentiated equations, i.e., the equations obtained by commuting \(\mathcal{T}^{\alpha}\), with \(\alpha_{0}=0\) and \(\langle\alpha\rangle=8\), with (1.13). We aim to prove the following estimate. **Proposition 3.5**.: For \(\mathcal{T}^{\alpha}\) with multi-index \(\alpha\) satisfying \(\alpha_{0}=0\) and \(\langle\alpha\rangle=8\), we have the energy inequality: \[\sum_{\langle\alpha\rangle=8,\alpha_{0}=0}\frac{\mathrm{d}}{\mathrm{d}t}\left\|\varepsilon^{8}\mathcal{T}^{\alpha}(u,B,p)\right\|_{0}^{2}\leq P(E(t)). \tag{3.61}\] Proof.: We first consider the case \(\alpha_{4}=0\), that is, \(\mathcal{T}^{\alpha}=\overline{\partial}_{1}^{\alpha_{1}}\overline{\partial}_{2}^{\alpha_{2}}\) with \(\alpha_{1}+\alpha_{2}=8\). To simplify our notation, we will write \(\overline{\partial}^{8}\) to represent such \(\mathcal{T}^{\alpha}\). \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\rho|\varepsilon^{8}\overline{\partial}^{8}u|^{2}\,\mathrm{d}x=\int_{\Omega}\varepsilon^{16}\rho\,\overline{\partial}^{8}\partial_{t}u\cdot\overline{\partial}^{8}u\,\mathrm{d}x+\frac{1}{2}\int_{\Omega}\partial_{t}\rho|\varepsilon^{8}\overline{\partial}^{8}u|^{2}\,\mathrm{d}x\] \[=\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}(\rho D_{t}u)\cdot(\varepsilon^{8}\overline{\partial}^{8}u)\,\mathrm{d}x-\underbrace{\int_{\Omega}\varepsilon^{16}\left(\rho\overline{\partial}^{8}(u\cdot\nabla u)+[\overline{\partial}^{8},\rho]D_{t}u+\Big{(}\frac{\partial\rho}{\partial p}\partial_{t}p+\partial_{S}\rho\,\partial_{t}S\Big{)}\overline{\partial}^{8}u\right)\cdot\overline{\partial}^{8}u\,\mathrm{d}x}_{:=\mathcal{J}_{1}}. \tag{3.62}\] The first integral above, after invoking the momentum equation, is equal to \[\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}(B\cdot\nabla B)\cdot(\varepsilon^{8}\overline{\partial}^{8}u)\,\mathrm{d}x-\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}(\nabla(p+|B|^{2}/2))\cdot(\varepsilon^{8}\overline{\partial}^{8}u)\,\mathrm{d}x. \tag{3.63}\] For the first term, we integrate \(B\cdot\nabla\) by parts: \[\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}(B\cdot\nabla B)\cdot(\varepsilon^{8}\overline{\partial}^{8}u)\,\mathrm{d}x=-\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}B\cdot(\varepsilon^{8}\overline{\partial}^{8}(B\cdot\nabla u))\,\mathrm{d}x\underbrace{+\int_{\Omega}\varepsilon^{8}[\overline{\partial}^{8},B\cdot\nabla]B\cdot(\varepsilon^{8}\overline{\partial}^{8}u)\,\mathrm{d}x-\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}B\cdot(\varepsilon^{8}[B\cdot\nabla,\overline{\partial}^{8}]u)\,\mathrm{d}x}_{:=\mathcal{J}_{2}}. \tag{3.64}\]
For the second term, we integrate by parts and invoke the continuity equation to get \[-\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}(\nabla(p+|B|^{2}/2))\cdot(\varepsilon^{8}\overline{\partial}^{8}u)\,\mathrm{d}x=\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}(p+|B|^{2}/2)\,(\varepsilon^{8}\overline{\partial}^{8}\nabla\cdot u)\,\mathrm{d}x\] \[=\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}(|B|^{2}/2)\,(\varepsilon^{8}\overline{\partial}^{8}\nabla\cdot u)\,\mathrm{d}x-\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}p\,(\varepsilon^{8}\overline{\partial}^{8}(\varepsilon^{2}D_{t}p))\,\mathrm{d}x, \tag{3.65}\] where the boundary integral on \(\Sigma\) vanishes thanks to the boundary condition \(u_{3}=B_{3}=0\) on \(\Sigma\). The first integral in (3.64), after inserting the evolution equation of \(B\), is equal to \[-\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}|\varepsilon^{8}\overline{\partial}^{8}B|^{2}\,\mathrm{d}x\underbrace{-\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}B\cdot\varepsilon^{8}\left((u\cdot\nabla)\overline{\partial}^{8}B+[\overline{\partial}^{8},u\cdot\nabla]B\right)\,\mathrm{d}x}_{:=\mathcal{J}_{3}}\underbrace{-\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}B\cdot(\varepsilon^{8}\overline{\partial}^{8}(B\,\nabla\cdot u))\,\mathrm{d}x}_{:=\mathcal{K}_{1}}, \tag{3.66}\] which gives the energy of \(B\). The last term on the right side of (3.65) gives the energy of \(\varepsilon p\): \[-\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}p\,(\varepsilon^{8}\overline{\partial}^{8}(\varepsilon^{2}D_{t}p))\,\mathrm{d}x=-\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\left|\varepsilon^{8}\overline{\partial}^{8}(\varepsilon p)\right|^{2}\,\mathrm{d}x\underbrace{-\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}p\,(\varepsilon^{10}[\overline{\partial}^{8},u\cdot\nabla]p)\,\mathrm{d}x}_{:=\mathcal{J}_{4}}. \tag{3.67}\] Then we analyze the first term on the right side of (3.65). Since \(\overline{\partial}(\frac{1}{2}|B|^{2})=\overline{\partial}B\cdot B\), we have \[\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}(|B|^{2}/2)\,(\varepsilon^{8}\overline{\partial}^{8}\nabla\cdot u)\,\mathrm{d}x=\underbrace{\int_{\Omega}(\varepsilon^{8}B)\cdot(\overline{\partial}^{8}B)\,(\varepsilon^{8}\overline{\partial}^{8}\nabla\cdot u)\,\mathrm{d}x}_{:=\mathcal{K}_{2}}+\underbrace{\sum_{0<|\alpha|<8}\int_{\Omega}\varepsilon^{8}C_{\alpha}(\overline{\partial}^{\alpha}B)\cdot(\overline{\partial}^{8-\alpha}B)\,(\varepsilon^{8}\overline{\partial}^{8}\nabla\cdot u)\,\mathrm{d}x}_{:=\mathcal{J}_{5}}, \tag{3.68}\] where \(C_{\alpha}\) are some positive constants. Now let us control the commutators \(\mathcal{J}_{1}\sim\mathcal{J}_{5}\). For \(\mathcal{J}_{1}\sim\mathcal{J}_{4}\), since \(u_{3}=B_{3}=0\) on \(\Sigma\), it suffices to analyze one of the following two types of commutators \[\varepsilon^{8}[\overline{\partial}^{8},u\cdot\nabla]f,\quad\varepsilon^{8}[\overline{\partial}^{8},B\cdot\nabla]f.\] For example,
we expand the first one to find that \[\varepsilon^{8}[\overline{\partial}^{8},u\cdot\nabla]f=\varepsilon^{8}\overline{\partial}^{8}u\cdot\nabla f+\sum_{j=1,2}\varepsilon^{8}(\overline{\partial}\bar{u}_{j})(\overline{\nabla}\overline{\partial}^{7}f)+\varepsilon^{8}(\overline{\partial}u_{3})(\partial_{3}\overline{\partial}^{7}f)+\sum_{k=2}^{7}\binom{8}{k}\varepsilon^{8}\overline{\partial}^{k}u_{j}\,\partial_{j}\overline{\partial}^{8-k}f,\] and it is easy to see that the last term is directly controlled by \(E(t)\). For the first two terms, we have \[\|\varepsilon^{8}\overline{\partial}^{8}u\cdot\nabla f+\varepsilon^{8}(\overline{\partial}\bar{u})\cdot(\overline{\nabla}\overline{\partial}^{7}f)\|_{0}\leq\|\varepsilon^{8}\overline{\partial}^{8}u\|_{0}\|\nabla f\|_{\infty}+\|\overline{\partial}u\|_{L^{\infty}}\|\varepsilon^{8}\overline{\partial}^{8}f\|_{0},\] which is directly controlled by \(E(t)\). For the third term, note that \(\overline{\partial}u_{3}|_{\Sigma}=0\); using the fundamental theorem of calculus, we have \[|\overline{\partial}u_{3}(x_{3})|\lesssim\omega(x_{3})\|\overline{\partial}\partial_{3}u_{3}\|_{L^{\infty}},\] and thus \[\|\varepsilon^{8}(\overline{\partial}u_{3})(\partial_{3}\overline{\partial}^{7}f)\|_{0}\lesssim\|\overline{\partial}\partial_{3}u_{3}\|_{L^{\infty}}\|\varepsilon^{8}(\omega\partial_{3})\overline{\partial}^{7}f\|_{0},\] which is again controlled by \(E(t)\). As for \(\mathcal{J}_{5}\), we integrate \(\overline{\partial}\) by parts to get \[\mathcal{J}_{5}\lesssim\sum_{0<|\alpha|<8}\|\varepsilon^{8}\overline{\partial}^{\alpha+1}B\cdot\overline{\partial}^{8-\alpha}B\|_{0}\|\varepsilon^{8}\overline{\partial}^{7}(\nabla\cdot u)\|_{0}\lesssim E(t)\|\varepsilon^{10}\overline{\partial}^{7}D_{t}p\|_{0}\leq P(E(t)).\] Thus, all the commutators \(\mathcal{J}_{i}\) (\(i=1,\cdots,5\)) are directly controlled: \[\mathcal{J}_{1}+\mathcal{J}_{2}+\mathcal{J}_{3}+\mathcal{J}_{4}+\mathcal{J}_{5}\leq P(E(t)). \tag{3.69}\] Next, we show that \(\mathcal{K}_{1}\) cancels with \(\mathcal{K}_{2}\). We have \[\mathcal{K}_{1}=-\sum_{i,j=1}^{3}\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}B_{i}\,(\varepsilon^{8}\overline{\partial}^{8}(B_{i}\partial_{j}u_{j}))\,\mathrm{d}x\] \[=\underbrace{-\int_{\Omega}\varepsilon^{8}B\cdot\overline{\partial}^{8}B\,(\varepsilon^{8}\overline{\partial}^{8}\nabla\cdot u)\,\mathrm{d}x}_{=-\mathcal{K}_{2}}-\sum_{i,j=1}^{3}\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}B_{i}\,(\varepsilon^{8}\overline{\partial}B_{i}\,\overline{\partial}^{7}\partial_{j}u_{j})\,\mathrm{d}x-\underbrace{\sum_{i,j=1}^{3}\sum_{2\leq|\alpha|\leq 8}\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}B_{i}\,(\varepsilon^{8}C_{\alpha}\overline{\partial}^{\alpha}B_{i}\,\overline{\partial}^{8-\alpha}\partial_{j}u_{j})\,\mathrm{d}x}_{:=\mathcal{J}_{6}}. \tag{3.70}\] The last term \(\mathcal{J}_{6}\) is again directly controlled: \(\mathcal{J}_{6}\leq P(E(t))\). The first term cancels with \(\mathcal{K}_{2}\). For the second term, we need to further analyze the case \(j=3\).
Thanks to the continuity equation, we have \(-\partial_{3}u_{3}=\varepsilon^{2}D_{t}p+\overline{\nabla}\cdot\bar{u}\), and thus \[-\sum_{i=1}^{3}\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}B_{i}\,(\varepsilon^{8}\overline{\partial}B_{i}\,\overline{\partial}^{7}\partial_{3}u_{3})\,\mathrm{d}x=\sum_{i=1}^{3}\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}B_{i}\,\left(\varepsilon^{8}\overline{\partial}B_{i}\,\overline{\partial}^{7}(\varepsilon^{2}D_{t}p+\overline{\nabla}\cdot\bar{u})\right)\,\mathrm{d}x\lesssim P(E(t)). \tag{3.71}\] When \(j=1,2\), the second term is again directly controlled by \(P(E(t))\) because all the derivatives are tangential: \[-\sum_{i=1}^{3}\sum_{j=1,2}\int_{\Omega}\varepsilon^{8}\overline{\partial}^{8}B_{i}\,(\varepsilon^{8}\overline{\partial}B_{i}\,\overline{\partial}^{7}\overline{\partial}_{j}\bar{u}_{j})\,\mathrm{d}x\leq P(E(t)). \tag{3.72}\] Thus, we conclude that \[\mathcal{K}_{1}+\mathcal{K}_{2}\leq P(E(t)). \tag{3.73}\] Combining (3.63)-(3.73), we conclude that \[\frac{\mathrm{d}}{\mathrm{d}t}\left\|\varepsilon^{8}\overline{\partial}^{8}(\varepsilon p,u,B)\right\|_{0}^{2}\leq P(E(t)). \tag{3.74}\] To control \(\|\varepsilon^{8}\overline{\partial}^{8}p\|_{0}\), it suffices to invoke the momentum equation \(-\overline{\partial}_{i}p=B\cdot(\overline{\partial}_{i}B)+\rho D_{t}u_{i}-(B\cdot\nabla)B_{i}\), which converts \(\varepsilon^{8}\overline{\partial}^{8}p\) into \(\varepsilon^{8}\overline{\partial}^{7}\mathcal{T}(u,B)\), a part of \(E_{8}(t)\). We then conclude that \[\left\|\varepsilon^{8}\overline{\partial}^{8}(p,u,B)(t)\right\|_{0}^{2}\leq E_{8}(0)+\int_{0}^{t}P(E(\tau))\,\mathrm{d}\tau. \tag{3.75}\] For the case \(\alpha_{4}\geq 1\), since \([\partial_{3},\omega\partial_{3}]\neq 0\), we need to reconsider the estimates of the commutators. For commutators of the type \([\mathcal{T}^{\alpha},\partial_{3}](\cdot)\), we can use identity (3.25). For example, we need to control the extra terms arising when commuting \(\mathcal{T}^{\alpha}\) with \(\nabla\): \[-\int_{\Omega}(\varepsilon^{8}\mathcal{T}^{\alpha}u_{i})(\varepsilon^{8}[\mathcal{T}^{\alpha},\partial_{i}]P)\,\mathrm{d}x\quad\text{and}\quad\int_{\Omega}(\varepsilon^{8}[\partial_{i},\mathcal{T}^{\alpha}]u_{i})(\varepsilon^{8}\mathcal{T}^{\alpha}P)\,\mathrm{d}x,\] where \(P:=p+\frac{1}{2}|B|^{2}\). Without loss of generality, we assume \(\mathcal{T}^{\alpha}=(\omega(x_{3})\partial_{3})\overline{\partial}^{7}\). In the first integral above, the highest-order term is \[-\int_{\Omega}(\varepsilon^{8}\mathcal{T}^{\alpha}u_{3})(\varepsilon^{8}\,\partial_{3}\omega\,\partial_{3}\overline{\partial}^{7}P)\,\mathrm{d}x.\] Now we invoke the momentum equation to replace \(-\partial_{3}P\) with \(\rho D_{t}u_{3}-(B\cdot\nabla)B_{3}\). Recall that \(D_{t}\) and \(B\cdot\nabla\) are both tangential derivatives thanks to \(u_{3}|_{\Sigma}=B_{3}|_{\Sigma}=0\). So the above integral is controlled by \(\|\varepsilon^{8}\mathcal{T}^{\alpha}u\|_{0}(\|\varepsilon^{8}\mathcal{T}^{\alpha'}u\|_{0}+\|\varepsilon^{8}\mathcal{T}^{\alpha''}B\|_{0})\) plus lower-order terms, where \(\langle\alpha'\rangle=\langle\alpha''\rangle=8\). Similarly, for the second integral above, one can invoke the continuity equation, which reads \(\partial_{3}u_{3}=-\overline{\nabla}\cdot\bar{u}-\varepsilon^{2}D_{t}p\), to finish the control. A slight difference is that we do not have the energy for \(\|\varepsilon^{8}\mathcal{T}^{\alpha}p\|_{0}\).
Luckily, at this step \(\mathcal{T}^{\alpha}\) must contain the spatial derivative \(\omega(x_{3})\partial_{3}\). So, it suffices to again invoke the momentum equation \[-\omega\partial_{3}p=\underbrace{\omega\partial_{3}}_{\text{tangential}}(\tfrac{1}{2}|B|^{2})+\omega\rho D_{t}u_{3}-\omega(B\cdot\nabla B_{3}),\] so that all terms containing the fluid pressure are replaced by the velocity and the magnetic field. As for the other commutators, consider for example \(\varepsilon^{8}[\omega\partial_{3}\overline{\partial}^{7},B\cdot\nabla]B\) in the analogue of \(\mathcal{J}_{2}\) (W.L.O.G. \(\alpha_{4}=1\)). We have \[[\omega\partial_{3}\overline{\partial}^{7},B\cdot\nabla]B=[\omega\partial_{3}\overline{\partial}^{7},B_{3}\partial_{3}]B+[\omega\partial_{3}\overline{\partial}^{7},\bar{B}\cdot\overline{\nabla}]B=\omega\partial_{3}\overline{\partial}^{7}(B_{3}\partial_{3}B)-B_{3}\partial_{3}(\omega\partial_{3}\overline{\partial}^{7}B)+[\omega\partial_{3}\overline{\partial}^{7},\bar{B}\cdot\overline{\nabla}]B. \tag{3.76}\] Compared with the case \(\alpha_{4}=0\), the potential risk is that \(\partial_{3}\) may fall on \(\omega(x_{3})\) and convert the tangential derivative \(\omega\partial_{3}\) into the normal derivative \(\partial_{3}\). So, it suffices to analyze the term \(-B_{3}(\partial_{3}\omega)(\partial_{3}\overline{\partial}^{7}B)\). Luckily, there is a factor \(B_{3}\) (replaced by \(u_{3}\) if we commute \(\mathcal{T}^{\alpha}\) with \(u\cdot\nabla\)). Since \(B_{3}|_{\Sigma}=0\), we again use the fundamental theorem of calculus to obtain \[|B_{3}(x_{3})|\lesssim\omega(x_{3})\|\partial_{3}B_{3}\|_{\infty},\] and thus \[\varepsilon^{8}\|B_{3}(\partial_{3}\omega)(\partial_{3}\overline{\partial}^{7}B)\|_{0}\lesssim\|\partial_{3}B_{3}\|_{\infty}\|\varepsilon^{8}(\omega\partial_{3})\overline{\partial}^{7}B\|_{0}\leq E(t). \tag{3.77}\] Other terms arising from the commutators can be treated similarly. Therefore, we obtain the energy estimate \[\sum_{\langle\alpha\rangle=8,\alpha_{0}=0}\frac{\mathrm{d}}{\mathrm{d}t}\left\|\varepsilon^{8}\mathcal{T}^{\alpha}(u,B,p)\right\|_{0}^{2}\leq P(E(t)). \tag{3.78}\] The proof is completed. #### 3.3.3 Tangential energy estimate with full time derivatives In this subsection we study the time-differentiated equations, i.e., the equations obtained by commuting \(\partial_{t}^{k}\) with (1.13). We aim to prove the following proposition, which gives stronger estimates for full time derivatives than what we need in (3.58). **Proposition 3.6**.: The following tangential estimate for full time derivatives holds: \[\frac{\mathrm{d}}{\mathrm{d}t}\left(\sum_{k=4}^{7}\left\|\varepsilon^{k-1}\partial_{t}^{k}(u,B,\varepsilon p)\right\|_{0}^{2}+\left\|\varepsilon^{8}\partial_{t}^{8}(u,B,\varepsilon p)\right\|_{0}^{2}\right)\leq P(E(t)). \tag{3.79}\] Proof.: Let us first consider the case \(k=8\), that is, the \(\varepsilon^{8}\partial_{t}^{8}\)-estimates. We just need to replace \(\overline{\partial}^{8}\) by \(\partial_{t}^{8}\) in the proof of Proposition 3.5 and then check whether the analogues of the commutators \(\mathcal{J}_{1}\sim\mathcal{J}_{6}\) can be controlled by \(P(E(t))\). From (3.68) and (3.70), we know that the analogues of \(\mathcal{J}_{5}\) and \(\mathcal{J}_{6}\) can be controlled in the same way, because \(\partial_{t}^{8}(u,B)\) and \(\overline{\partial}^{8}(u,B)\) carry the same Mach number weights. Next, let us analyze the commutators in the analogues of \(\mathcal{J}_{1}\sim\mathcal{J}_{3}\).
For example, we consider \[\varepsilon^{8}[\partial_{t}^{8},B\cdot\nabla]f,\quad f=u\text{ or }B.\] This commutator is equal to \[\varepsilon^{8}(\partial_{t}^{8}B_{j}\,\partial_{j}f+8\,\partial_{t}B_{j}\,\partial_{j}\partial_{t}^{7}f)+\sum_{k=2}^{7}\varepsilon^{8}\binom{8}{k}\partial_{t}^{k}B_{j}\,\partial_{j}\partial_{t}^{8-k}f=(\varepsilon^{8}\partial_{t}^{8}B_{j})(\partial_{j}f)+8(\partial_{t}B_{j})\,\partial_{j}(\varepsilon^{8}\partial_{t}^{7}f)+\sum_{k=2}^{7}\binom{8}{k}(\varepsilon^{k-1}\partial_{t}^{k}B_{j})(\varepsilon^{9-k}\partial_{j}\partial_{t}^{8-k}f),\] where the \(L^{2}(\Omega)\) norms of the first and third terms are directly controlled by \(E(t)\). For the second term, when \(j=1,2\) it can be directly controlled; when \(j=3\), we again use \(\partial_{t}B_{3}|_{\Sigma}=0\) and the fundamental theorem of calculus to create the weight function \(\omega(x_{3})\): \[\|(\partial_{t}B_{j})\partial_{j}(\varepsilon^{8}\partial_{t}^{7}f)\|_{0}\leq\|\partial_{t}B\|_{\infty}\|\varepsilon^{8}\overline{\partial}\partial_{t}^{7}f\|_{0}+\|\partial_{3}\partial_{t}B_{3}\|_{\infty}\|\varepsilon^{8}(\omega\partial_{3})\partial_{t}^{7}f\|_{0}\lesssim\sqrt{E_{4}(t)E_{8}(t)}\leq E(t).\] It remains to analyze the commutator \(\mathcal{J}_{4}\) arising from (3.67). The difference is that \(\partial_{t}^{8}p\) needs one more \(\varepsilon\) weight. Luckily, this term is produced when we invoke the continuity equation, which automatically provides an \(\varepsilon^{2}\) weight. So, the analogue of \(\mathcal{J}_{4}\) is controlled in the following way: \[-\int_{\Omega}\varepsilon^{8}\partial_{t}^{8}p\left(\varepsilon^{10}[\partial_{t}^{8},u\cdot\nabla]p\right)\,\mathrm{d}x=-\int_{\Omega}\varepsilon^{9}\partial_{t}^{8}p\left(\varepsilon^{9}[\partial_{t}^{8},u\cdot\nabla]p\right)\,\mathrm{d}x \tag{3.80}\] \[=-\int_{\Omega}\varepsilon^{9}\partial_{t}^{8}p\left(\varepsilon^{9}(\partial_{t}^{8}u_{j}\,\partial_{j}p+8\,\partial_{t}u_{j}\,\partial_{j}\partial_{t}^{7}p)\right)\,\mathrm{d}x-\int_{\Omega}(\varepsilon^{9}\partial_{t}^{8}p)\left(\sum_{k=2}^{7}\binom{8}{k}(\varepsilon^{k-1}\partial_{t}^{k}u_{j})(\varepsilon^{10-k}\partial_{j}\partial_{t}^{8-k}p)\right)\,\mathrm{d}x\lesssim E(t).\] Therefore, we conclude the \(\varepsilon^{8}\partial_{t}^{8}\)-estimates as follows: \[\frac{\mathrm{d}}{\mathrm{d}t}\left\|\varepsilon^{8}\partial_{t}^{8}(u,B,\varepsilon p)\right\|_{0}^{2}\leq P(E(t)). \tag{3.81}\] For \(4\leq k\leq 7\), the proof is still parallel to Proposition 3.5 if we replace \(\varepsilon^{8}\partial_{t}^{8}\) by \(\varepsilon^{k-1}\partial_{t}^{k}\) (\(4\leq k\leq 7\)). We only need to reconsider the estimates of the commutators arising in the analogues of \(\mathcal{J}_{1}\sim\mathcal{J}_{6}\). For example, we look at the case \(k=7\). Similarly as above, we first consider the \(L^{2}\) estimates of the following commutator \[\varepsilon^{6}[\partial_{t}^{7},B\cdot\nabla]f,\quad f=u\text{ or }B,\] which is equal to \[\varepsilon^{6}(\partial_{t}^{7}B_{j}\,\partial_{j}f+7\,\partial_{t}B_{j}\,\partial_{j}\partial_{t}^{6}f)+\sum_{k=2}^{6}\varepsilon^{6}\binom{7}{k}\partial_{t}^{k}B_{j}\,\partial_{j}\partial_{t}^{7-k}f=(\varepsilon^{6}\partial_{t}^{7}B_{j})(\partial_{j}f)+7(\partial_{t}B_{j})\,\partial_{j}(\varepsilon^{6}\partial_{t}^{6}f)+\sum_{k=2}^{6}\binom{7}{k}(\varepsilon^{k-1}\partial_{t}^{k}B_{j})(\varepsilon^{7-k}\partial_{j}\partial_{t}^{7-k}f).\] Again, the first and third terms are directly controlled by \(E(t)\).
For the second term, using \(\partial_{t}B_{3}|_{\Sigma}=0\) and the fundamental theorem of calculus, we obtain \[\|(\partial_{t}B_{j})\partial_{j}(\varepsilon^{6}\partial_{t}^{6}f)\|_{0}\leq\|\partial_{t}B\|_{\infty}\|\varepsilon^{6}\overline{\partial}\partial_{t}^{6}f\|_{0}+\|\partial_{3}\partial_{t}B_{3}\|_{\infty}\|\varepsilon^{6}(\omega\partial_{3})\partial_{t}^{6}f\|_{0}\lesssim\sqrt{E_{4}(t)E_{7}(t)}\lesssim E(t).\] As for the analogue of \(\mathcal{J}_{4}\), the continuity equation gives us an extra \(\varepsilon^{2}\) weight, so the estimate is uniform in the Mach number: \[-\int_{\Omega}\varepsilon^{6}\partial_{t}^{7}p\left(\varepsilon^{8}[\partial_{t}^{7},u\cdot\nabla]p\right)\,\mathrm{d}x=-\int_{\Omega}\varepsilon^{7}\partial_{t}^{7}p\left(\varepsilon^{7}[\partial_{t}^{7},u\cdot\nabla]p\right)\,\mathrm{d}x \tag{3.82}\] \[=-\int_{\Omega}\varepsilon^{7}\partial_{t}^{7}p\left(\varepsilon^{7}(\partial_{t}^{7}u_{j}\,\partial_{j}p+7\,\partial_{t}u_{j}\,\partial_{j}\partial_{t}^{6}p)\right)\,\mathrm{d}x-\int_{\Omega}(\varepsilon^{7}\partial_{t}^{7}p)\left(\sum_{k=2}^{6}\binom{7}{k}(\varepsilon^{k-1}\partial_{t}^{k}u_{j})(\varepsilon^{8-k}\partial_{j}\partial_{t}^{7-k}p)\right)\,\mathrm{d}x\lesssim E(t).\] Therefore, we obtain the energy estimate \[\frac{\mathrm{d}}{\mathrm{d}t}\left\|\varepsilon^{6}\partial_{t}^{7}(\varepsilon p,u,B)\right\|_{0}^{2}\leq P(E(t)). \tag{3.83}\] The case \(4\leq k\leq 6\) can be proved in the same way, so we conclude the following energy estimate for full time derivatives: \[\frac{\mathrm{d}}{\mathrm{d}t}\sum_{k=4}^{7}\left\|\varepsilon^{k-1}\partial_{t}^{k}(u,B,\varepsilon p)\right\|_{0}^{2}\leq P(E(t)). \tag{3.84}\] The proof is completed. #### 3.3.4 Space-time mixed tangential derivatives We still need to prove the tangential estimates for \(\varepsilon^{8}\mathcal{T}^{\alpha}\) (\(0<\alpha_{0}<8\)). Indeed, they can be proved in the same way as Propositions 3.5 and 3.6, for the following reasons:

* The derivatives \(\varepsilon^{8}\mathcal{T}^{\alpha}\) (\(0<\alpha_{0}<8\)) are tangential, so they preserve the boundary conditions and thus all boundary integrals vanish.
* The commutators arising in the proofs of Propositions 3.5 and 3.6 do not produce extra time derivatives, so the weight \(\varepsilon^{8}\) is enough to close the estimate.
* When \(\alpha_{4}\neq 0\), that is, when \((\omega\partial_{3})^{\alpha_{4}}\) appears in \(\mathcal{T}^{\alpha}\), it suffices to use the same strategy as presented in Proposition 3.5. Indeed, we only commute the \((\omega\partial_{3})\)'s with either \(u\cdot\nabla\) or \(B\cdot\nabla\), so there must be a \(u_{3}\) or \(B_{3}\) appearing whenever \(\partial_{3}\) falls on \(\omega\). Thus, we can use the boundary conditions for \(u_{3},\ B_{3}\) to reproduce the weight function \(\omega(x_{3})\).

Based on the above analysis, we conclude the tangential estimates in the following proposition. **Proposition 3.7**.: We have the energy inequality: \[\frac{\mathrm{d}}{\mathrm{d}t}\bigg{(}\sum_{\langle\alpha\rangle=8,\,\alpha_{0}<8}\big{\|}\varepsilon^{8}\mathcal{T}^{\alpha}(u,B,p)\big{\|}_{0}^{2}+\big{\|}\varepsilon^{8}\partial_{t}^{8}(u,B,\varepsilon p)\big{\|}_{0}^{2} \tag{3.85}\] \[\quad+\sum_{l=0}^{3}\sum_{\langle\alpha\rangle=2l,\,\alpha_{0}<2l}\big{\|}\varepsilon^{3+l}\mathcal{T}^{\alpha}\partial_{t}^{4-l}(u,B,p)\big{\|}_{0}^{2}+\sum_{l=0}^{3}\big{\|}\varepsilon^{3+l}\partial_{t}^{4+l}(u,B,\varepsilon p)\big{\|}_{0}^{2}\bigg{)}\leq P(E(t)).\] ### Uniform estimates Since \(D_{t}S=0\), we can easily prove the estimates for \(S\) by directly commuting \(D_{t}\) with \(\partial_{*}^{\alpha}\) and the corresponding Mach number weights.
The proof does not involve any boundary term because we do not integrate by parts, so we omit the details: \[\frac{\mathrm{d}}{\mathrm{d}t}\sum_{l=0}^{4}\sum_{k=0}^{4-l}\sum_{\langle\alpha\rangle=2l}\big{\|}\varepsilon^{(k-1)_{+}+2l}\mathcal{T}^{\alpha}\partial_{t}^{k}S\big{\|}_{4-k-l}^{2}\leq P(E(t)). \tag{3.86}\] Combining the tangential estimates presented in Propositions 3.5-3.7 with the div-curl analysis in Section 3.2.2, the reduction of the pressure in Section 3.2.1, the \(L^{2}\) estimates, and the summary in Section 3.2.3, we obtain the following energy inequality \[\frac{\mathrm{d}}{\mathrm{d}t}E(t)\leq P(E(t)), \tag{3.87}\] where \(E(t)\) is defined by (1.18). Since the right side of the energy inequality does not depend on \(\varepsilon\), we can use Grönwall's inequality to prove that there exists some \(T>0\) independent of \(\varepsilon\) such that \[\sup_{t\in[0,T]}E(t)\leq P(E(0)). \tag{3.88}\] Theorem 1.1 is proven. ### The case of 2D MHD The previous sections are devoted to the incompressible limit of 3D MHD. We now explain how to modify the proof so that it is valid for 2D MHD flows. The proof in the 2D case is essentially the same as in 3D: we only need a slight modification in the vorticity analysis, because the vorticity in 2D is a scalar function, not a vector. First, Lemma 3.1 should be modified as follows. **Lemma 3.8** (Hodge elliptic estimates).: For any sufficiently smooth vector field \(X\in\mathbb{R}^{2}\) and any real number \(s\geq 1\), one has \[\|X\|_{s}^{2}\lesssim\|X\|_{0}^{2}+\|\nabla\cdot X\|_{s-1}^{2}+\|\nabla^{\perp}\cdot X\|_{s-1}^{2}+|\overline{\partial}_{1}X\cdot N|_{s-\frac{3}{2}}^{2}, \tag{3.89}\] where \(\nabla=(\partial_{1},\partial_{2})\) and \(\nabla^{\perp}=(-\partial_{2},\partial_{1})\). Taking \(\nabla^{\perp}\cdot\) in the momentum equation, we obtain the analogue of (3.12) as \[\rho D_{t}(\nabla^{\perp}\cdot u)-(B\cdot\nabla)(\nabla^{\perp}\cdot B)=\rho[D_{t},\nabla^{\perp}\cdot]u-[B\cdot\nabla,\ \nabla^{\perp}\cdot]B-(\nabla^{\perp}\rho)\cdot(D_{t}u), \tag{3.90}\] which has the same structure as (3.12). Thus, one can follow the analysis in Section 3.2.2. The only slight difference is the treatment of the Lorentz force term. We take the \(\partial^{3}\)-estimate of \(\nabla^{\perp}\cdot(u,B)\) as an example. Following the proof of Lemma 3.2, we need to control the following integral \[\mathcal{K}'':=\int_{\Omega}\partial^{3}(\nabla^{\perp}\cdot B)\,\partial^{3}\nabla^{\perp}\cdot(\varepsilon^{2}BD_{t}p)\,\mathrm{d}x, \tag{3.91}\] where the problematic term is \[\varepsilon^{2}(B\cdot\nabla^{\perp})\partial^{3}D_{t}p=\varepsilon^{2}(-B_{1}\partial^{3}\partial_{2}D_{t}p+B_{2}\partial^{3}\partial_{1}D_{t}p)\] after inserting the continuity equation. Commuting \(\nabla^{\perp}\) with \(D_{t}\), we have \[\varepsilon^{2}(-B_{1}\partial^{3}\partial_{2}D_{t}p+B_{2}\partial^{3}\partial_{1}D_{t}p)=\varepsilon^{2}\left(-B_{1}\partial^{3}D_{t}(\partial_{2}p)+B_{2}\partial^{3}D_{t}(\partial_{1}p)\right)-\varepsilon^{2}\left(B_{1}\partial^{3}(\partial_{2}u_{j}\,\partial_{j}p)-B_{2}\partial^{3}(\partial_{1}u_{j}\,\partial_{j}p)\right),\] where the \(L^{2}(\Omega)\) norm of the second term is directly controlled by \(E_{4}(t)\).
For the first term, we again invoke the momentum equation, which in 2D reads \[-\partial_{1}p=\rho D_{t}u_{1}-B_{1}\partial_{1}B_{1}-B_{2}\partial_{2}B_{1}+\partial_{1}(|B|^{2}/2)=\rho D_{t}u_{1}+B_{2}(\partial_{1}B_{2}-\partial_{2}B_{1})=\rho D_{t}u_{1}+B_{2}(\nabla^{\perp}\cdot B),\] \[-\partial_{2}p=\rho D_{t}u_{2}-B_{1}\partial_{1}B_{2}-B_{2}\partial_{2}B_{2}+\partial_{2}(|B|^{2}/2)=\rho D_{t}u_{2}-B_{1}(\partial_{1}B_{2}-\partial_{2}B_{1})=\rho D_{t}u_{2}-B_{1}(\nabla^{\perp}\cdot B).\] Now, we have \[\varepsilon^{2}(-B_{1}\partial^{3}\partial_{2}D_{t}p+B_{2}\partial^{3}\partial_{1}D_{t}p)=\varepsilon^{2}\rho(B_{1}\partial^{3}D_{t}^{2}u_{2}-B_{2}\partial^{3}D_{t}^{2}u_{1})\underbrace{-\varepsilon^{2}(B_{1}^{2}+B_{2}^{2})\partial^{3}D_{t}(\nabla^{\perp}\cdot B)}_{\text{not zero, but }\varepsilon^{2}|B|^{2}\text{ has a definite sign}}+L^{2}(\Omega)\text{-controllable terms},\] and thus \[\mathcal{K}''\overset{L}{=}\int_{\Omega}\partial^{3}(\nabla^{\perp}\cdot B)\,\varepsilon^{2}\rho(B_{1}\partial^{3}D_{t}^{2}u_{2}-B_{2}\partial^{3}D_{t}^{2}u_{1})\,\mathrm{d}x-\int_{\Omega}\partial^{3}(\nabla^{\perp}\cdot B)\,\varepsilon^{2}|B|^{2}\partial^{3}D_{t}(\nabla^{\perp}\cdot B)\,\mathrm{d}x \tag{3.92}\] \[\overset{L}{=}\int_{\Omega}\partial^{3}(\nabla^{\perp}\cdot B)\,\varepsilon^{2}\rho(B_{1}\partial^{3}D_{t}^{2}u_{2}-B_{2}\partial^{3}D_{t}^{2}u_{1})\,\mathrm{d}x-\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\left|\varepsilon|B|\,\partial^{3}(\nabla^{\perp}\cdot B)\right|^{2}\,\mathrm{d}x-\frac{1}{2}\int_{\Omega}(u\cdot\nabla)\left(\left|\varepsilon|B|\,\partial^{3}(\nabla^{\perp}\cdot B)\right|^{2}\right)\,\mathrm{d}x,\] where the last term on the right side can be directly controlled, and the first term requires control of \(\left\|\varepsilon^{2}D_{t}^{2}u\right\|_{3}\). The second term, however, is no longer zero as in the 3D case; instead, it can be moved to the left side as part of the energy of \(\nabla^{\perp}\cdot B\). The conclusion of Lemma 3.2 is then modified to \[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\int_{\Omega}\rho\left|\partial^{3}(\nabla^{\perp}\cdot u)\right|^{2}+\left(1+\varepsilon^{2}|B|^{2}\right)\left|\partial^{3}(\nabla^{\perp}\cdot B)\right|^{2}\,\mathrm{d}x\leq P(E_{4}(t))+E_{5}(t). \tag{3.93}\] ## 4 Incompressible limit This section is devoted to showing that the solution of (1.13) converges to a solution of the incompressible system (1.19), provided the initial data converge. In other words, we study the behavior of the solution of (1.13) as the Mach number \(\varepsilon\) tends to \(0\). Recall that the Mach number \(\varepsilon\) is defined in Section 1.1. The energy estimate presented in Theorem 1.1 implies that \(\sum\limits_{k=0,1}\|\partial_{t}^{k}(u,B,S)\|_{4-k}\) is bounded uniformly in \(\varepsilon\) on the time interval \([0,T]\). Thus, by the Aubin-Lions lemma, up to a subsequence, we have \[(u,B,S)\to(u^{0},B^{0},S^{0})\quad\text{weakly-* in }L^{\infty}([0,T];H^{4}(\Omega))\text{ and strongly in }C([0,T];H^{4-\delta}(\Omega)) \tag{4.1}\] for any \(\delta>0\). Moreover, by using the continuity equation \(\varepsilon^{2}D_{t}p+\nabla\cdot u=0\), we have \[\nabla\cdot u\to\nabla\cdot u^{0}=0\quad\text{in }L^{\infty}([0,T];H^{3}(\Omega))\text{ and strongly in }C([0,T];H^{3-\delta}(\Omega)) \tag{4.2}\] for any \(\delta>0\), because \(\|\varepsilon\partial_{t,x}p\|_{3}\) and \(\|u\|_{4}\) are uniformly bounded on \([0,T]\).
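For clarity, here is a simple quantitative version of this last observation (a routine consequence of the uniform bounds above): by the continuity equation,
\[\|\nabla\cdot u\|_{3}=\varepsilon^{2}\|D_{t}p\|_{3}\leq\varepsilon\|\varepsilon\partial_{t}p\|_{3}+\varepsilon^{2}\|u\cdot\nabla p\|_{3}\lesssim\varepsilon\left(\|\varepsilon\partial_{t}p\|_{3}+\varepsilon\|u\|_{4}\|p\|_{4}\right)\longrightarrow 0\]
uniformly on \([0,T]\) as \(\varepsilon\to 0\), since the quantities in parentheses are bounded by the uniform energy estimate (here we use that \(H^{3}(\Omega)\) is an algebra for \(d\leq 3\)).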
Now, \((u^{0},B^{0},S^{0})\in L^{\infty}([0,T];H^{4}(\Omega))\) solves the incompressible MHD equations together with a transport equation \[\begin{cases}\varrho(\partial_{t}u^{0}+u^{0}\cdot\nabla u^{0})-B^{0}\cdot\nabla B^{0}+\nabla(\pi+\frac{1}{2}|B^{0}|^{2})=0&\text{in }[0,T]\times\Omega,\\ \partial_{t}B^{0}+u^{0}\cdot\nabla B^{0}-B^{0}\cdot\nabla u^{0}=0&\text{in }[0,T]\times\Omega,\\ \nabla\cdot u^{0}=\nabla\cdot B^{0}=0&\text{in }[0,T]\times\Omega,\\ \partial_{t}S^{0}+u^{0}\cdot\nabla S^{0}=0&\text{in }[0,T]\times\Omega,\\ u_{3}^{0}=B_{3}^{0}=0&\text{on }[0,T]\times\Sigma,\\ (u^{0},B^{0},S^{0})|_{t=0}=(u_{0}^{0},B_{0}^{0},S_{0}^{0}),\end{cases} \tag{4.3}\] for a suitable fluid pressure function \(\pi\) satisfying \(\nabla\pi\in C([0,T];H^{3}(\Omega))\). Here \(\varrho\) satisfies \(\partial_{t}\varrho+u^{0}\cdot\nabla\varrho=0\), with initial data \(\varrho_{0}:=\rho(0,S_{0}^{0})\). Moreover, the uniqueness of the limit function implies that the convergence holds as \(\varepsilon\to 0\) without restricting to a subsequence. Theorem 1.2 is then proven.
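For completeness, the Gronwall step behind (3.88) can be made explicit with a short comparison argument; the sketch below assumes only that \(P\) is increasing (e.g., a polynomial with nonnegative coefficients), which is how such generic bounds are normally used, and it is not a verbatim excerpt from the proof.

```latex
% Comparison/Gronwall sketch (assumes P increasing; schematic only).
% Let y solve the comparison ODE with the same initial datum:
%   y'(t) = P(y(t)),  y(0) = E(0).
% Since E'(t) <= P(E(t)) and P is increasing, a standard comparison
% principle gives E(t) <= y(t) on the interval of existence of y.
% The blow-up time T*(E(0)) of y depends only on E(0) -- in particular,
% not on the Mach number eps -- so fixing any T < T*(E(0)) yields
\[
  \sup_{t\in[0,T]} E(t) \;\le\; y(T),
\]
% and y(T) can be absorbed into a generic polynomial P(E(0)).
```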
2306.00928
ACLM: A Selective-Denoising based Generative Data Augmentation Approach for Low-Resource Complex NER
Complex Named Entity Recognition (NER) is the task of detecting linguistically complex named entities in low-context text. In this paper, we present ACLM (Attention-map aware keyword selection for Conditional Language Model fine-tuning), a novel data augmentation approach based on conditional generation to address the data scarcity problem in low-resource complex NER. ACLM alleviates the context-entity mismatch issue, a problem existing NER data augmentation techniques suffer from, which often generates incoherent augmentations by placing complex named entities in the wrong context. ACLM builds on BART and is optimized on a novel text reconstruction or denoising task - we use selective masking (aided by attention maps) to retain the named entities and certain keywords in the input sentence that provide contextually relevant additional knowledge or hints about the named entities. Compared with other data augmentation strategies, ACLM can generate more diverse and coherent augmentations preserving the true word sense of complex entities in the sentence. We demonstrate the effectiveness of ACLM both qualitatively and quantitatively on monolingual, cross-lingual, and multilingual complex NER across various low-resource settings. ACLM outperforms all our neural baselines by a significant margin (1%-36%). In addition, we demonstrate the application of ACLM to other domains that suffer from data scarcity (e.g., biomedical). In practice, ACLM generates more effective and factual augmentations for these domains than prior methods. Code: https://github.com/Sreyan88/ACLM
Sreyan Ghosh, Utkarsh Tyagi, Manan Suri, Sonal Kumar, S Ramaneswaran, Dinesh Manocha
2023-06-01T17:33:04Z
http://arxiv.org/abs/2306.00928v1
ACLM: A Selective-Denoising based Generative Data Augmentation Approach for Low-Resource Complex NER ###### Abstract Complex Named Entity Recognition (NER) is the task of detecting linguistically complex named entities in low-context text. In this paper, we present ACLM (**A**ttention-map aware keyword selection for **C**onditional **L**anguage **M**odel fine-tuning), a novel data augmentation approach, based on conditional generation, to address the data scarcity problem in low-resource complex NER. ACLM alleviates the context-entity mismatch issue, a problem existing NER data augmentation techniques suffer from and often generates incoherent augmentations by placing complex named entities in the wrong context. ACLM builds on BART and is optimized on a novel text reconstruction or denoising task - we use _selective masking_ (aided by attention maps) to retain the named entities and certain _keywords_ in the input sentence that provide contextually relevant additional knowledge or hints about the named entities. Compared with other data augmentation strategies, ACLM can generate more diverse and coherent augmentations preserving the true word sense of complex entities in the sentence. We demonstrate the effectiveness of ACLM both qualitatively and quantitatively on monolingual, cross-lingual, and multilingual complex NER across various low-resource settings. ACLM outperforms all our neural baselines by a significant margin (1%-36%). In addition, we demonstrate the application of ACLM to other domains that suffer from data scarcity (e.g., biomedical). In practice, ACLM generates more effective and factual augmentations for these domains than prior methods.1 Footnote 1: Code: [https://github.com/Sreyan88/ACLM](https://github.com/Sreyan88/ACLM) ## 1 Introduction Named Entity Recognition (NER) is a fundamental task in Natural Language Processing (NLP) that aims to detect various types of named entities (NEs) from text. Recently, there has been considerable progress in NER using neural learning methods that achieve state-of-the-art (SOTA) performance Wang et al. (2021); Zhou and Chen (2021) on well-known benchmark datasets, including CoNLL 2003 Tjong Kim Sang and De Meulder (2003) and OntoNotes Schwartz et al. (2012). However, these datasets are designed to evaluate the performance on detecting "relatively easy" NEs like _proper names_ (e.g., people such as "Barack Obama," locations such as "New York," or organizations such as "IBM") in well-formed, context-rich text that comes from news articles Augenstein et al. (2017). On the other hand, complex NER benchmarks like MultiCoNER Malmasi et al. (2022) present several contemporary challenges in NER, including short low-context texts with emerging and semantically ambiguous complex entities (e.g., movie names in online comments) that reduce the performance of SOTA methods previously evaluated only on the existing NER benchmark datasets. Our experiments reveal that the performance of the current SOTA NER method Zhou and Chen (2021) (previously evaluated only on the CoNLL 2003 dataset) drops by 23% when evaluated on MultiCoNER and 31.8% when evaluated on a low-resource setting with just 500 training samples (more details in Table 8). Thus, we emphasize that research on building systems that can effectively detect complex NEs in the text is currently understudied in the field of NLP. 
In the past, researchers have made several attempts at building supervised approaches to detect complex and compositional noun phrase entities in sentences Doddington et al. (2004); Biggio et al. (2010); Magnolini et al. (2019). However, the scarcity of annotated training data for building effective systems has always been a challenge. Data augmentation has been shown to be an effective solution for low-resource NER Ding et al. (2020); Liu et al. (2021); Zhou et al. (2022). In practice, though these systems perform well and generate coherent augmentations on common NER benchmark datasets with easy proper noun NEs, they fail to be effective for complex NER, often generating incoherent augmentations. We first argue that certain types of complex NEs follow specific linguistic patterns and appear only in specific contexts (examples in Appendix 4), and augmentations that do not follow these patterns impede a NER model from learning such patterns effectively. This sometimes also leads to augmentations with context-entity mismatch, further hurting the learning process. For example, unlike proper names, substituting complex NEs from other sentences in the corpus or replacing them with synonyms Dai and Adel (2020) often leads to augmentations where the NE does not fit into the new context (e.g., swapping proper names across sentences might still keep the sentence coherent _but_ swapping the name of a book with a movie (both _creative work_ entity) _or_ the name of a football team with a political party (both _group_ entity) makes it incoherent). Fine-tuning pre-trained language models (PLMs), similar to prior work Ding et al. (2020); Liu et al. (2021); Zhou et al. (2022), fails to generate new context around complex NEs or completely new NEs with the desired linguistic patterns due to low-context sentences and the lack of existing knowledge of such linguistically complex NEs (examples in Fig. 3). This leads to incoherent augmentations and poses a severe problem in knowledge-intensive tasks like biomedical NER, where non-factual augmentations severely hurt learning. Our experiments also reveal that introducing new context patterns around NEs proves to be a more effective data augmentation technique for complex NER than diversifying NEs (ACLM vs. MELM in Table 1). **Main Results:** To overcome the aforesaid problems, we formulate data augmentation as a conditional generation task and propose ACLM, a conditional text generation model that generates augmentation samples by introducing new and diverse context patterns around an NE. ACLM builds on BART Lewis et al. (2020) and is fine-tuned on a modification of the text reconstruction from corrupted text task, a common denoising-based PLM pre-training objective. In contrast to other PLM pre-training strategies, which randomly mask a portion of the text for corruption, our modified objective is based on _selective masking_, wherein we mask all other words in the sentence except the NEs and a small percentage of _keywords_ related to the NEs. We refer to this corrupted sentence as a _template_, and it serves as input to the model for both the training and generation phases. These keywords are other non-NE tokens in the sentence that provide contextually relevant additional knowledge or hints to BART about the complex NEs without the need to retrieve knowledge from any external sources. 
We select these keywords using attention maps obtained from a transformer model fine-tuned on the NER task, and they help the PLM overcome the problem where it might not possess enough knowledge about a semantically ambiguous complex NE (example in Fig. 3). Training ACLM on this modified objective allows us to generate diverse, coherent, factual, and high-quality augmentations given templates. We also propose _mixner_, a novel algorithm that mixes two templates during the augmentation generation phase and boosts the diversity of augmentations. Our primary contributions are as follows: * We propose ACLM, a novel data augmentation framework specially designed for low-resource complex NER. Compared with previous methods in the literature, ACLM effectively alleviates the context-entity mismatch problem by preserving the true sense of semantically ambiguous NEs in augmentations. Additionally, to accompany ACLM, we propose _mixner_, which boosts the diversity of ACLM generations. * We qualitatively and quantitatively show the benefits of ACLM for monolingual, cross-lingual, and multilingual complex NER across various low-resource settings on the MultiCoNER dataset. Our proposed ACLM outperforms all other baselines in the literature by a significant margin (1%-36%) and generates more diverse, coherent, and high-quality augmentations compared to them. * We perform extensive experiments to study the application of ACLM in three other domains, including science and medicine. ACLM outperforms all our baselines in these domains (absolute gains in the range of 1%-11%) and generates more factual augmentations. ## 2 Background and Related Work **Complex NER Background:** Complex NER is a relatively understudied task in the field of NLP. Building on insights from Augenstein et al. (2017), we discuss key reasons behind high performance on common NER benchmark datasets and try to understand why modern SOTA NER algorithms do not work well on complex NER benchmarks: (1) **Context**: Most of the common benchmark datasets are curated from articles in the news domain. This gives them several advantages, including rich context and surface features like proper punctuation and capitalized nouns, all of which are major drivers of success in these datasets (Mayhew et al., 2019). In contrast, for entity recognition beyond news text, like search queries or voice commands, the context is less informative and lacks surface features (Guo et al., 2009; Carmel et al., 2014); (2) **Entity Complexity**: Data from news articles contain _proper names_ or "easy" entities with simple syntactic structures, thus allowing pre-trained models to perform well due to their existing knowledge of such entities. On the other hand, complex NEs like movie names are syntactically ambiguous and linguistically complex, which makes Complex NER a difficult task (Ashwini and Choi, 2014). Examples of such entities include noun phrases (e.g., Eternal Sunshine of the Spotless Mind), gerunds (e.g., Saving Private Ryan), infinitives (e.g., To Kill a Mockingbird), or full clauses (e.g., Mr. Smith Goes to Washington); (3) **Entity Overlap**: Models trained on these common benchmark datasets suffer from memorization effects due to the large overlap of entities between the train and test sets. Unseen and emerging entities pose a huge challenge to complex NER (Bernier-Colborne and Langlais, 2020). 
**Complex NER:** Prior work has mostly focused on solving the entity complexity problem by learning to detect complex nominal entities in sentences (Magnolini et al., 2019; Meng et al., 2021; Fetahu et al., 2022; Chen et al., 2022). Researchers have often explored integrating external knowledge in the form of gazetteers for this task. Gazetteers have also proven to be effective for low-resource NER (Rijhwani et al., 2020). GemNet (Meng et al., 2021), the current SOTA system for complex NER, conditionally combines the contextual and gazetteer features using a Mixture-of-Experts (MoE) gating mechanism. However, gazetteers are difficult to build and maintain and prove to be ineffective for complex NER due to their limited entity coverage and the nature of unseen and emerging entities in complex NER. **Data Augmentation for Low-Resource NER:** Data augmentation to handle data scarcity for low-resource NLP is a well-studied problem in the literature and is built on word-level modifications, including simple synonym replacement strategies (Wei and Zou, 2019), or more sophisticated learning techniques like LSTM-based language models (Kobayashi, 2018), Masked Language Modeling (MLM) using PLMs (Kumar et al., 2020), auto-regressive PLMs (Kumar et al., 2020), or constituent-based tagging schemes (Zhou et al., 2019). However, most of these methods, though effective for classification tasks, suffer from token-label misalignment when applied to token-level tasks such as NER and might require complex pre-processing steps (Bari et al., 2020; Zhong and Cambria, 2021). One of the first works to explore effective data augmentation for NER replaces NEs with existing NEs of the same type or replaces tokens in the sentence with one of their synonyms retrieved from WordNet (Dai and Adel, 2020). Following this, many neural learning systems were proposed that either modify the Masked Language Modelling (MLM) training objective using PLMs (Zhou et al., 2022; Liu et al.) or use generative language modeling with LSTM LMs (Ding et al., 2020) or mBART (Liu et al., 2021), to produce entirely new sentences from scratch. However, all these systems were designed for low-resource NER on common benchmark datasets and failed to generate effective augmentations for low-resource complex NER with semantically ambiguous and complex entities. ## 3 Methodology In this section, we give an overview of our approach. Fig. 1 represents the entire workflow of our ACLM data augmentation framework. A sentence is first passed through an XLM-RoBERTa model fine-tuned only on gold data to generate the attention map for each token in the sentence. This attention map is then used to selectively mask the sentence and create a template. This template is then used as an input to optimize the model on the text reconstruction objective for fine-tuning ACLM: the model is asked to reconstruct the entire original sentence from only the content in the template. While generating augmentations, ACLM follows the same template generation process in addition to joining two templates through _mixner_, which we discuss in detail in Section 3.3. ### Template Creation To corrupt a sentence and create a template, we follow a 4-step process described below: 1. **Keyword Selection**: For each sentence in our training corpus, we first obtain a set of non-NE tokens in the sentence that are most attended by its NEs. We call these tokens _keywords_. 
For our research, we consider a non-NE token as a keyword if the NEs in the sentence contextually depend on them the most. We measure contextual dependency between NE and non-NE tokens using attention scores from attention maps extracted from a transformer-based NER model fine-tuned only on gold data. We hypothesize that the attention heads in a transformer fine-tuned for NER, formulated as a token-level tagging task, tend to pay the highest attention to the most contextually relevant tokens around an NE. Thus, formally put, consider a sentence with a total of \(T\) tokens comprised of \(t_{other}\) non-NE and \(t_{entity}\) NE tokens. Our primary aim is to find the top \(p\%\) of \(t_{other}\) tokens, which we call keywords. To calculate the total attention score that each token in the sentence assigns to each other token, we sum up the attention scores across each of the heads in the transformer network and across the last \(a\) layers (\(a\) = 4 in our case). Different heads in different layers tend to capture different properties of language, and taking the average attention scores across the last 4 layers ensures that diverse linguistic relations are taken into account while choosing the keywords (e.g., syntactic, semantic, etc.). This also makes the keyword selection process more robust, as in low-resource conditions the attention maps may be noisy, and the NEs might not always focus on the right context. Additionally, the choice of just the last four layers is inspired by the fact that the lower layers have very broad attention and spend at most 10% of their attention mass on a single token (Clark et al., 2019). Note that \(t_{entity}\) might consist of (1) multiple contiguous tokens forming an individual NE and (2) multiple such individual NEs. To handle the first case, inspired by Clark et al. (2019), we sum up the attention scores over all the individual tokens in the NE. For the second case, we find \(t_{attn}\) for each individual NE and take a set union of the tokens in these \(t_{attn}\). Thus, as an extra pre-processing step, to improve robustness, we also ignore punctuation, stop words, and other NEs among the top \(p\%\) of \(t_{other}\) tokens to obtain our final keywords. We provide examples of templates in Appendix C. 2. **Selective Masking**: After selecting the top \(p\%\) of \(t_{other}\) tokens in the sentence as keywords, we now have \(K\) non-NE keyword tokens and \(E\) entity tokens. To create the template, we now substitute each non-NE token not belonging to the \(K\) with the mask token and remove contiguous mask tokens.

Figure 1: **Overview of ACLM:** ACLM follows a 4-step template creation process, which serves as an input to the model during fine-tuning and generation. (1) **Keyword Selection:** The most important keywords (in red) associated with the NEs (in bold) in the sentence are first extracted using attention maps obtained from a fine-tuned NER model. (2) **Selective Masking:** All words except the NEs and the keywords obtained from the previous step are replaced with mask tokens [**M**]. (3) **Labeled Sequence Linearization:** Label tokens are added before and after each entity in the sentence. (4) **Dynamic Masking:** The template goes through further masking where a small portion of the keywords is dynamically masked at each training iteration. During generation we also apply _mixner_, which randomly joins two templates after step 3 and before step 4. 
3. **Labeled Sequence Linearization**: After we have our initial template, inspired by Zhou et al. (2022), we perform labeled sequence linearization to explicitly take label information into consideration during fine-tuning and augmentation generation. Similar to Zhou et al. (2022), as shown in Figure 1, we add label tokens before and after each entity token and treat them as part of the normal context in the sentence. Additionally, these label tokens before and after each NE provide boundary supervision for NEs with multiple tokens. 4. **Dynamic Masking**: Post labeled sequence linearization, our template goes through further masking wherein we dynamically mask a small portion of the \(K\) keywords during each iteration of training and generation. To be precise, we first sample a dynamic masking rate \(\varepsilon\) from a Gaussian distribution \(\mathcal{N}(\mu,\,\sigma^{2})\), where the Gaussian variance \(\sigma\) is set to 1/\(K\). Next, we randomly sample tokens from the \(K\) keywords in the sentence according to the masking rate \(\varepsilon\) and replace them with mask tokens, followed by removing consecutive mask tokens. At every round of generation, dynamic masking helps boost 1) context diversity, by conditioning ACLM generation on different templates with a different set of keywords, and 2) length diversity, by asking ACLM to infill a different number of mask tokens (see the illustrative sketch at the end of this section). ### Fine-tuning ACLM As discussed earlier, ACLM is fine-tuned on a novel text-reconstruction-from-corrupted-text task wherein the created templates serve as our corrupted text and ACLM learns to recover the original text from the template. Text reconstruction from corrupted text is a common denoising objective that PLMs like BART and BERT are pre-trained on. For this work, we use it as our fine-tuning objective and differ from other existing pre-training objectives by our _selective masking_ strategy for creating templates. ### Data Generation Post fine-tuning on the text reconstruction task, we utilize ACLM to generate synthetic data for data augmentation. For each sentence in the training dataset, we apply steps 1-4 in the Template Creation pipeline for \(R\) rounds to randomly corrupt the sentence and obtain a template, which is then passed through the fine-tuned ACLM model to generate a total of \(R\times\) augmented training samples. Additionally, to boost diversity, during auto-regressive generation, we randomly sample the next word from the _top-k_ most probable words and choose the most probable sequence with beam search. Post generating augmentations with ACLM, the generated augmentations are concatenated with the gold data and used to fine-tune our final NER model. **mixner:** During the \(R\) rounds of augmentation on our training dataset, we propose the use of _mixner_, a novel template mixing algorithm that helps ACLM generate diverse sentences with new context and multiple NEs in the sentence. More specifically, given the template for any arbitrary sentence \(a\) in the training set in step 3 of the template creation process, we retrieve the template for another sentence \(b\) that is semantically similar to \(a\) and join both templates before passing the result on to step 4. We show examples of sentences generated with _mixner_ in Fig. 3 and Section D.1. Note that we apply _mixner_ only in the generation step and not during fine-tuning. 
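To make the template-creation steps and the _mixner_ join above concrete, here is a simplified Python sketch; the function names, the `[M]` mask string, and the per-token label linearization are our own illustrative simplifications, not the released ACLM code, and how the semantically similar partner template is retrieved is described next.

```python
# Illustrative sketch of ACLM template creation (steps 1, 2, 4) and the
# mixner join. Function names, the "[M]" mask string, and per-token label
# linearization are our own simplifications, not the official ACLM code.
import numpy as np

MASK = "[M]"

def select_keywords(attn, ne_idx, other_idx, p=0.3, last_layers=4):
    """attn: (layers, heads, T, T) attention maps from the fine-tuned NER
    model. Sum over heads, average over the last `last_layers` layers, then
    rank non-NE tokens by the attention mass the NE tokens place on them."""
    a = attn[-last_layers:].sum(axis=1).mean(axis=0)      # (T, T)
    scores = a[ne_idx].sum(axis=0)                        # mass from NEs to each token
    k = max(1, int(p * len(other_idx)))
    ranked = sorted(other_idx, key=lambda i: -scores[i])  # punctuation/stop words
    return set(ranked[:k])                                # should also be filtered out

def make_template(tokens, labels, keywords, rng, mu=0.5):
    """Selective masking + label linearization + dynamic masking (step 4).
    ACLM wraps whole NE spans with label tokens; per-token here for brevity."""
    keep = set(keywords)
    if keep:
        eps = float(np.clip(rng.normal(mu, 1.0 / len(keep)), 0.0, 1.0))
        drop = rng.choice(sorted(keep), size=int(eps * len(keep)), replace=False)
        keep -= set(int(i) for i in drop)
    out = []
    for i, (tok, lab) in enumerate(zip(tokens, labels)):
        if lab != "O":
            out += [f"<{lab}>", tok, f"</{lab}>"]         # labeled linearization
        elif i in keep:
            out.append(tok)
        else:
            out.append(MASK)
    # remove contiguous mask tokens, keeping one
    return [t for j, t in enumerate(out)
            if t != MASK or j == 0 or out[j - 1] != MASK]

def mixner_join(template_a, template_b):
    """mixner: concatenate two templates (between steps 3 and 4 in the paper)."""
    return template_a + template_b
```

In the actual pipeline, the resulting template is detokenized and fed to mBART-50 for reconstruction; this sketch only covers the corruption side.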
As mentioned earlier, to retrieve \(b\) from the training set, we randomly sample a sentence from the _top-k_ sentences with the highest semantic similarity to \(a\). To calculate semantic similarity between each pair of sentences in the training set, we first take the embedding \(e\) for each sentence from a multi-lingual Sentence-BERT Reimers and Gurevych (2019) and then calculate semantic similarity by: \[\mathrm{sim}(e_{i},e_{j})=\frac{e_{i}\cdot e_{j}}{\left\|e_{i}\right\|\left\|e_{j}\right\|} \tag{1}\] where \(\mathrm{sim}(\,.\,)\) is the cosine similarity between two embeddings, \(i,j\in\{1,\dots,N\}\) with \(i\neq j\), and \(N\) is the size of the training set. Additionally, we don't apply _mixner_ on all rounds \(R\) but sample a probability \(\gamma\) from a Gaussian distribution \(\mathcal{N}(\mu,\,\sigma^{2})\) and only apply _mixner_ if \(\gamma\) crosses a set threshold \(\beta\).

Figure 2: **Overview of _mixner_:** During the augmentation generation process, for a particular sentence in the training dataset, we retrieve another semantically similar sentence and concatenate them before step 4.

#### 3.3.1 Post-Processing As a post-processing step, we remove augmentations similar to the original sentence and also the extra label tokens added in the labeled sequence linearization step. Finally, we concatenate the augmented data with the original data to fine-tune our NER model. ## 4 Experiments and Results ### Dataset All our experiments were conducted on the MultiCoNER dataset (Malmasi et al., 2022), a large multilingual dataset for complex NER. MultiCoNER covers 3 domains, including Wiki sentences, questions, and search queries, across 11 distinct languages. The dataset represents contemporary challenges in NER discussed in Section 2 and is labeled with six distinct types of entities: **person**, **location**, **corporation**, **groups** (political party names such as _indian national congress_), **product** (consumer products such as _apple iPhone 6_), and **creative work** (movie/song/book titles such as _on the beach_). We conduct experiments on a set of 10 languages \(\mathbb{L}\) where \(\mathbb{L}\) = {English (**En**), Bengali (**Bn**), Hindi (**Hi**), German (**De**), Spanish (**Es**), Korean (**Ko**), Dutch (**Nl**), Russian (**Ru**), Turkish (**Tr**), Chinese (**Zh**)}. Language-wise dataset statistics can be found in Table 12. We would also like to highlight that the number of sentences in the MultiCoNER test sets ranges from **133,119 - 217,887**, which is much higher than the test sets of other existing NER datasets. For more details on the dataset, we refer our readers to Malmasi et al. (2022). For monolingual and cross-lingual low-resource experiments, we perform iterative stratified sampling over all the sentences by using the entity classes in a sample as its target label across four low-resource settings (100, 200, 500, and 1000). We downsample the development set accordingly. For multi-lingual experiments, we combine all the data sampled for our monolingual settings. We evaluate all our systems and baselines on the original MultiCoNER test sets. We report micro-averaged F1 scores averaged across 3 runs for 3 different random seeds. ### Experimental Setup **ACLM.** We use mBart-50-large (Tang et al., 2020) with a conditional generation head to fine-tune ACLM. We fine-tune ACLM for 10 epochs using the Adam optimizer (Kingma and Ba, 2014) with a learning rate of \(1e^{-5}\) and a batch size of 32. **NER.** We use XLM-RoBERTa-large with a linear head as our NER model. 
Though the field of NER has grown enormously, in this paper, we adhere to the simplest formulation and treat the task as a token-level classification task with a BIO tagging scheme. We use the Adam optimizer to optimize our model, set the learning rate to \(1e^{-2}\), and train with a batch size of 16. The NER model is trained for 100 epochs, and the model with the best performance on the dev set is used for testing. **Hyper-parameter Tuning.** For template creation during fine-tuning and generation, we set the selection rate \(p\) and the Gaussian \(\mu\) to be 0.3 and 0.5, respectively. The number of augmentation rounds \(R\) is set as 5. For _mixner_ we set the Gaussian \(\mu\) and \(\beta\) to be 0.5 and 0.7, respectively. All hyper-parameters are tuned on the development set with grid search. More details can be found in Appendix A. ### Baselines To prove the effectiveness of our proposed ACLM, we compare it with several strong NER augmentation baselines in the literature. In this sub-section, we briefly describe each of these baselines. All baselines were run for \(R\) rounds. **Gold-Only.** The NER model is trained using only gold data from the MultiCoNER dataset without any augmentation. **Label-wise token replacement (LwTR).** (Dai and Adel, 2020b) A token in a sentence is replaced with another token with the same label; the token is randomly selected from the training set. **DAGA.** (Ding et al., 2020) Data Augmentation with a Generation Approach (DAGA) proposes to train a one-layer LSTM-based recurrent neural network language model (RNNLM) by maximizing the probability of the next-token prediction with linearized sentences. During generation, they use random sampling to generate entirely new sentences with only the [**BOS**] token fed to the model. **MulDA.** (Liu et al., 2021) Multilingual Data Augmentation Framework (MulDA) builds on DAGA and trains a pre-trained mBART model on next-token prediction with linearized sentences for generation-based multilingual data augmentation. For a fair comparison, we replace mBART in MulDA with mBART-50. **MELM.** (Zhou et al., 2022) Masked Entity Language Modeling (MELM) proposes fine-tuning a transformer-encoder-based PLM on linearized labeled sequences using masked language modeling. MELM outperforms all other baselines and prior art in low-resource settings on the CoNLL 2003 NER dataset across four languages in mono-lingual, cross-lingual, and multi-lingual settings. **ACLM _random_.** We train and infer ACLM with templates created with randomly sampled _keywords_ instead of taking _keywords_ with high attention scores. This baseline proves the effectiveness of our _keyword_ selection algorithm, which provides NEs in the template with rich context. **ACLM _only entity_.** We train and infer ACLM with templates created with only linearized entities and no _keywords_. This baseline proves the effectiveness of the additional context in our templates. ### Experimental Results **Monolingual Complex NER.** Table 1 compares the performance of all our baselines with ACLM on the MultiCoNER test sets under various low-resource settings for 10 languages. As clearly evident, ACLM outperforms all our baselines in all settings by consistently achieving the best results in all individual languages. Moreover, ACLM improves over our neural baselines (MELM and DAGA) by a significant margin (absolute gains in the range of 1.5% - 22% across individual languages). 
Although LwTR performs better than ACLM in rare instances, we emphasize that (1) LwTR generates nonsensical, incoherent augmentations (discussed further in Section D.1) and (2) being based on a learning paradigm, ACLM shows bigger margins over LwTR at slightly higher numbers of gold training samples (200 and 500), which we acknowledge is a reasonable size in real-world conditions. **Cross-lingual Complex NER.** We also study the cross-lingual transferability of a NER model trained on a combination of gold and generated augmentations. Thus, we evaluated a model, trained on **En**, on 4 other languages, including **Hi**, **Bn**, **De**, and **Zh** in a zero-shot setting. ACLM outperforms our neural baselines by a significant margin (absolute gains in the range of 1% - 21%). None of these systems perform well in cross-lingual transfer to **Zh**, which was also observed by Hu et al. (2021). **Multi-lingual Complex NER.** Table 2 compares the performance of all our baselines with ACLM on the MultiCoNER test sets under various multi-lingual low-resource settings. As clearly evident, ACLM outperforms all our baselines by a significant margin (absolute gains in the range of 1%-21% across individual languages). All our baselines, including our Gold-Only baseline, also perform better than their monolingual counterparts, which demonstrates the effectiveness of multi-lingual fine-tuning for low-resource complex NER. ## 5 Further Analysis ### Generation Quality **Quantitative Analysis.** Table 3 compares augmentations from various systems on the quantitative measures of perplexity and diversity. Perplexity Jelinek et al. (1977) is a common measure of text fluency, and we measure it using GPT2 Radford et al. (2019). We calculate 3 types of diversity metrics: for Diversity-E and Diversity-N, we calculate the average percentage of new NE and non-NE words in the generated samples compared with the original samples, respectively. For Diversity-L, we calculate the average absolute difference between the number of tokens in the generated samples and the original samples. ACLM achieves the lowest perplexity and the highest non-NE and length diversity compared with the other baselines. NE diversity in ACLM is achieved with _mixner_, where ACLM fares well compared to MELM, which just replaces NEs. LwTR achieves the highest perplexity, thereby reaffirming that it generates incoherent augmentations. **Qualitative Analysis.** Fig. 3 illustrates the superiority of augmentations generated by ACLM when compared with our other baselines. As clearly evident, while MELM generates just minor changes in NEs, augmentations produced by LwTR often tend to be nonsensical and incoherent. On the other hand, ACLM generates meaningful and diverse sentences around NEs, which is further boosted with _mixner_. We provide examples in Appendix D.1. ### Application to other domains To evaluate the transferability of ACLM to other domains, we evaluate ACLM on 4 more datasets beyond MultiCoNER. These datasets include CoNLL 2003 Tjong Kim Sang and De Meulder (2003) (news), BC2GM Smith et al. (2008) (bio-medical), NCBI Disease Dogan et al. (2014) (bio-medical) and TDMSci Hou et al. (2021) (science). 
Table 4 compares our baselines with ACLM across 2 low-resource settings on all 4 datasets. ACLM outperforms all our baselines on all settings except LwTR on CoNLL 2003. This occurs because LwTR generates a large variety of effective augmentations with NE replacement on easy entities in CoNLL 2003. The results demonstrate the effectiveness of ACLM over diverse domains, including domains with an acute scarcity of data (bio-medical). Additionally, we also emphasize that ACLM produces more factual augmentations and, unlike our other baselines, avoids context-entity mismatch, which makes the NER model store wrong knowledge in data-sensitive domains. We show samples of generated augmentations in Fig. 3 and Appendix D.1. ## 6 Conclusion In this paper, we propose ACLM, a novel data augmentation framework for low-resource complex NER. ACLM is fine-tuned on a novel text reconstruction task and is able to generate diverse augmentations while preserving the NEs in the sentence and their original word sense. ACLM effectively alleviates the context-entity mismatch problem and generates diverse, coherent, and high-quality augmentations that prove to be extremely effective for low-resource complex NER. Additionally, we also show that ACLM can be used as an effective data augmentation technique for low-resource NER in the domains of medicine and science due to its ability to generate extremely reliable augmentations. ## Limitations We list some potential limitations of ACLM: 1) PLMs are restricted by their knowledge to generate entirely new complex entities due to their syntactically ambiguous nature. Adding to this, substituting complex NEs in existing sentences leads to context-entity mismatch. Thus, as part of future work, we would like to explore if integrating external knowledge into ACLM can help generate sentences with new complex entities in diverse contexts. 2) We do not conduct experiments in the language Farsi from the MultiCoNER dataset as neither mBart-50-large nor XLM-RoBERTa-large was pre-trained on this language. 3) The use of mBart-50-large for generation also restricts ACLM from being transferred to code-switched settings, and we would like to explore this as part of future work.
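As a closing illustration of Section 3.3, the semantic retrieval behind _mixner_ (Eq. (1)) can be sketched as follows; the Sentence-BERT checkpoint name and the value of k are assumptions for illustration, not the exact released configuration.

```python
# Hedged sketch of the Eq. (1) retrieval behind mixner: embed every training
# sentence with a multilingual Sentence-BERT, then sample a partner from the
# top-k most cosine-similar sentences. The checkpoint name and k here are
# illustrative assumptions rather than the released configuration.
import numpy as np
from sentence_transformers import SentenceTransformer

def build_retriever(sentences, k=5, seed=0,
                    model_name="paraphrase-multilingual-MiniLM-L12-v2"):
    model = SentenceTransformer(model_name)
    emb = model.encode(sentences, convert_to_numpy=True)
    emb /= np.linalg.norm(emb, axis=1, keepdims=True)   # unit-normalize rows
    sims = emb @ emb.T                                  # cosine similarity, Eq. (1)
    rng = np.random.default_rng(seed)

    def retrieve(i):
        order = np.argsort(-sims[i])
        candidates = [int(j) for j in order if j != i][:k]  # exclude sentence i
        return rng.choice(candidates)
    return retrieve
```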
2306.05688
ModeT: Learning Deformable Image Registration via Motion Decomposition Transformer
The Transformer structures have been widely used in computer vision and have recently made an impact in the area of medical image registration. However, the use of Transformer in most registration networks is straightforward. These networks often merely use the attention mechanism to boost the feature learning as the segmentation networks do, but are not sufficiently adapted to the registration task. In this paper, we propose a novel motion decomposition transformer (ModeT) to explicitly model multiple motion modalities by fully exploiting the intrinsic capability of the Transformer structure for deformation estimation. The proposed ModeT naturally transforms the multi-head neighborhood attention relationship into the multi-coordinate relationship to model multiple motion modes. Then the competitive weighting module (CWM) fuses multiple deformation sub-fields to generate the resulting deformation field. Extensive experiments on two public brain magnetic resonance imaging (MRI) datasets show that our method outperforms current state-of-the-art registration networks and Transformers, demonstrating the potential of our ModeT for the challenging non-rigid deformation estimation problem. The benchmarks and our code are publicly available at https://github.com/ZAX130/SmileCode.
Haiqiao Wang, Dong Ni, Yi Wang
2023-06-09T06:00:05Z
http://arxiv.org/abs/2306.05688v1
# ModeT: Learning Deformable Image Registration via Motion Decomposition Transformer ###### Abstract The Transformer structures have been widely used in computer vision and have recently made an impact in the area of medical image registration. However, the use of Transformer in most registration networks is straightforward. These networks often merely use the attention mechanism to boost the feature learning as the segmentation networks do, but are not sufficiently adapted to the registration task. In this paper, we propose a novel motion decomposition Transformer (ModeT) to explicitly model multiple motion modalities by fully exploiting the intrinsic capability of the Transformer structure for deformation estimation. The proposed ModeT naturally transforms the multi-head neighborhood attention relationship into the multi-coordinate relationship to model multiple motion modes. Then the competitive weighting module (CWM) fuses multiple deformation sub-fields to generate the resulting deformation field. Extensive experiments on two public brain magnetic resonance imaging (MRI) datasets show that our method outperforms current state-of-the-art registration networks and Transformers, demonstrating the potential of our ModeT for the challenging non-rigid deformation estimation problem. _The benchmarks and our code are publicly available at_ [https://github.com/ZAX130/SmileCode](https://github.com/ZAX130/SmileCode). Keywords: Deformable image registration · Motion decomposition · Transformer · Attention · Pyramid structure. ## 1 Introduction Deformable image registration has always been an important focus in the medical imaging community, and it is essential for preoperative planning, intraoperative information fusion, disease diagnosis and follow-ups [10, 22]. Deformable registration solves for a non-rigid deformation field that warps the moving image, so that the warped image is anatomically similar to the fixed image. Let \(I_{f},I_{m}\in\mathbb{R}^{H\times W\times L}\) be the fixed and moving images (\(H,W,L\) denote the image size). In the deep-learning-based registration paradigm, it is often necessary to employ a spatial transformer network (STN) [12] to apply the estimated sampling grid \(G\in\mathbb{R}^{H\times W\times L\times 3}\) to the moving image, where \(G\) is obtained by adding the regular grid and the deformation field. For any position \(p\in\mathbb{R}^{3}\) in the sampling grid, \(G(p)\) represents the corresponding relation, which means that the voxel at position \(p\) in the fixed image corresponds to the voxel at position \(G(p)\) in the moving image. That is to say, image registration can be understood as finding the corresponding voxels between the moving and fixed images, and converting this into the relative positional relationship between voxels, which is very similar to the calculation method of Transformer [8]. Transformers have been successfully used in the computer vision community and have recently made an impact in the field of medical image computing [11, 16]. In medical image registration, there are also several related studies that employ Transformers to enhance network structures to obtain better registration performance, such as Transmorph [5], Swin-VoxelMorph [25], Vit-V-Net [6], etc. The use of Transformer in these networks, however, often merely leverages the self-attention mechanism in Transformers to boost the feature learning (the same as the segmentation tasks do), but they are not sufficiently designed for the registration task. 
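To ground the sampling-grid convention \(G(p)\) described above, the following is a minimal PyTorch-style sketch of STN warping of a moving volume; the axis ordering and the normalization to \([-1,1]\) are our own illustrative choices and differ across implementations, so this should be read as a schematic rather than the paper's code.

```python
# Minimal sketch of STN-style warping: apply the sampling grid
# G = regular grid + deformation field to a moving volume. The axis order
# and [-1, 1] normalization are illustrative; conventions differ between
# implementations, so treat this as a schematic rather than ModeT's code.
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """moving: (N, C, H, W, L) volume; flow: (N, 3, H, W, L) displacement
    in voxel units, ordered like the spatial axes (H, W, L)."""
    N, _, H, W, L = moving.shape
    axes = [torch.arange(s, device=moving.device, dtype=moving.dtype)
            for s in (H, W, L)]
    grid = torch.stack(torch.meshgrid(*axes, indexing="ij"))  # (3, H, W, L)
    new = grid.unsqueeze(0) + flow                            # G(p) = p + phi(p)
    for i, size in enumerate((H, W, L)):                      # normalize to [-1, 1]
        new[:, i] = 2.0 * new[:, i] / (size - 1) - 1.0
    # grid_sample expects (N, H, W, L, 3) with the fastest-varying axis first
    new = new.permute(0, 2, 3, 4, 1).flip(-1)
    return F.grid_sample(moving, new, align_corners=True)
```

In training, `flow` would be the (composed) output of the decoder pyramid, and the same routine deforms the intermediate moving feature maps \(M_{4},M_{3},\dots\) described in Section 2.1 below.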
Some other methods use cross-attention to model the corresponding relationship between moving and fixed images, such as Attention-Reg [21] and Xmorpher [20]. The cross-attention Transformer (CAT) module is used in the bottom layer of Attention-Reg [21] and each layer in Xmorpher [20] to establish the relationship between the features of moving and fixed images. However, the usage of Transformer in [20, 21] is still limited to improving the feature learning, with no additional consideration given to the relationship between the attention mechanism and the deformation estimation. Furthermore, due to the large network structure of [20], only small windows can be created for similarity calculation, which may result in performance degradation. Few studies consider the relationship between attention and deformation estimation, such as Coordinate Translator [17] and Deformer [4]. Deformer [4] uses the Transformer-style multiplication of the attention map and the Value matrix to weight a predicted basis and generate the deformation field, but its attention map is computed only by concatenating and projecting the moving and fixed feature maps, without a similarity calculation. Coordinate Translator [17] calculates the matching score of the fixed feature map and the moving feature map. Then the computed scores are employed to re-weight the deformation field. However, for feature maps with coarse-level resolution, a voxel often has multiple possibilities of different motion modes [24], which is not considered in [17]. In this study, we propose a novel motion decomposition transformer (ModeT) to explicitly model multiple motion modalities by fully exploiting the intrinsic capability of the Transformer structure for deformation estimation. Experiments on two public brain magnetic resonance imaging (MRI) datasets demonstrate that our method outcompetes several cutting-edge registration networks and Transformers. The main contributions of our work are summarized as follows: * We propose to leverage the Transformer structure to naturally model the correspondence between images and convert it into the deformation field, thus explicitly separating the two tasks of feature extraction and deformation estimation in deep-learning-based registration networks, which makes the registration procedure more sensible. * The proposed ModeT makes full use of the multi-head neighborhood attention mechanism to efficiently model multiple motion modalities, and then the competitive weighting module (CWM) fuses multiple deformation sub-fields in a competitive way, which can improve the interpretability and consistency of the resulting deformation field. * The pyramid structure is employed for feature extraction and deformation propagation, and is beneficial for reducing the scope of attention calculation required at each level. ## 2 Method ### Network Overview The proposed deformable registration network is illustrated in Fig. 1. We employ a pyramidal registration structure, which has the advantage of reducing the scope of attention calculation required at each decoding level and therefore alleviating the computational consumption. Given the fixed image \(I_{f}\) and moving image \(I_{m}\) as input, the encoder extracts hierarchical features using a 5-layer convolutional block, which doubles the number of channels in each layer. This generates two sets of feature maps \(F_{1},F_{2},F_{3},F_{4},F_{5}\) and \(M_{1}\), \(M_{2}\), \(M_{3}\), \(M_{4}\), \(M_{5}\). 
The feature maps \(M_{5}\) and \(F_{5}\) are sent into the ModeT to generate multiple deformation sub-fields, and then the generated deformation sub-fields are input into the CWM to obtain the fused deformation field \(\varphi_{1}\) of the coarsest decoding layer as the initial value of the total deformation field \(\phi\). The moving feature map \(M_{4}\) is deformed using \(\phi\), and the deformed moving feature map is fed into the ModeT along with \(F_{4}\) to generate multiple sub-fields, which are input into the CWM to get \(\varphi_{2}\). Then \(\varphi_{2}\) is compounded with the previous total deformation field to generate the updated \(\phi\). The feature maps \(M_{3}\) and \(F_{3}\) go through similar operations. As the decoding feature maps become finer, the number of motion modes at position \(p\) decreases, along with the number of attention heads we need to model. At the \(F_{2}/M_{2}\) and \(F_{1}/M_{1}\) levels, we no longer generate multiple deformation sub-fields, i.e., the number of attention heads in ModeT is 1. Finally, the obtained total deformation field \(\phi\) is used to warp \(I_{m}\) to obtain the registered image.

Figure 1: Illustration of the proposed deformable registration network. The encoder takes the fixed image \(I_{f}\) and moving image \(I_{m}\) as input to extract hierarchical features \(F_{1}\)-\(F_{5}\) and \(M_{1}\)-\(M_{5}\). The motion decomposition transformer (ModeT) is used to generate multiple deformation sub-fields and the competitive weighting module (CWM) fuses them. Finally the decoding pyramid outputs the total deformation field \(\phi\).

To guide the network training, the normalized cross correlation \(\mathcal{L}_{\text{ncc}}\) [18] and the deformation regularization \(\mathcal{L}_{\text{reg}}\) [3] are used: \[\mathcal{L}_{\text{train}}=\mathcal{L}_{\text{ncc}}(I_{f},I_{m}\circ\phi)+\lambda\mathcal{L}_{\text{reg}}(\phi), \tag{1}\] where \(\circ\) is the warping operation, and \(\lambda\) is the weight of the regularization term. ### Motion Decomposition Transformer (ModeT) In deep-learning-based registration networks, a position \(p\) in the low-resolution feature map contains semantic information of a large area in the original image and therefore may often have multiple possibilities of different motion modalities. To model these possibilities, we employ a multi-head neighborhood attention mechanism to decompose different motion modalities at the low-resolution level. The illustration of the motion decomposition is shown in Fig. 2. Let \(F,M\in\mathbb{R}^{c\times h\times w\times l}\) stand for the fixed and moving feature maps from a specific level of the hierarchical encoder, where \(h,w,l\) denote the feature map size and \(c\) is the channel number. The feature maps \(F\) and \(M\) go through linear projection (\(proj\)) and LayerNorm (\(LN\)) [2] to get \(Q\) (\(query\)) and \(K\) (\(key\)): \[Q=LN(proj(F)),\quad K=LN(proj(M)), \tag{2}\] \[Q=\{Q^{(1)},Q^{(2)},\dots,Q^{(S)}\},\] \[K=\{K^{(1)},K^{(2)},\dots,K^{(S)}\},\] where the projection weights are shared, the weight initialization is sampled from \(N(0,10^{-5})\), and the bias is initialized to 0. The \(Q\) and \(K\) are then divided according to channels, and \(S\) represents the number of divided heads. We then calculate the neighborhood attention map. We use \(c(p)\) to denote the neighborhood of voxel \(p\). For a neighborhood of size \(n\times n\times n\), \(||c(p)||=n^{3}\). 
The neighborhood attention map of multiple heads is obtained by: \[NA(p,s)=softmax(Q^{(s)}_{p}\cdot K^{(s)T}_{c(p)}+B^{(s)}), \tag{3}\] where \(B\in\mathbb{R}^{S\times n\times n\times n}\) is a learnable relative positional bias, initialized to all zeros. We pad the moving feature map with zeros to calculate boundary voxels because the registration task sometimes requires voxels outside the field-of-view to be warped. Equation (3) shows how the neighborhood attention is computed for the \(s\)-th head at position \(p\), so that the semantic information of voxels at low resolution can be decomposed to compute similarity one by one, in preparation for modeling different motion modalities. Moreover, the neighborhood attention operation narrows the scope of attention calculation to reduce the computational effort, which is very friendly to volumetric processing. The next step is to obtain the multiple sub-fields at this level by computing the regular displacement field weighted via the neighborhood attention map: \[\varphi_{p}^{(s)}=NA(p,s)V, \tag{4}\] where \(\varphi^{(s)}\in\mathbb{R}^{h\times w\times l\times 3}\), \(V\in\mathbb{R}^{n\times n\times n}\), and \(V\) (\(value\)) represents the relative position coordinates for the neighborhood centroid, which is not learned so that the multi-head attention relationship can be naturally transformed into a multi-coordinate relationship. With the above steps, we obtain a series of deformation sub-fields for this level: \[\varphi^{(1)},\varphi^{(2)},\ldots,\varphi^{(S)} \tag{5}\] ### Competitive Weighting Module (CWM) Multiple low-resolution deformation fields need to be reasonably fused when deforming a high-resolution feature map. As shown in Fig. 3, we first upsample these deformation sub-fields, then convolve them in three layers to get the score of each sub-field, and use softmax so that the motion modalities compete for each voxel. The convolution uses \(3\times 3\times 3\) kernels rather than a direct projection because deformation fields often require correlation of adjacent displacements to determine if they are reasonable.

Figure 2: Illustration of the proposed motion decomposition transformer, which employs the multi-head neighborhood attention mechanism to decompose different motion modalities (\(S=3\) in this illustration).

We formulate the above competitive weighting operation to obtain the deformation field \(\varphi\) at this level as follows: \[\begin{split} w^{(1)},w^{(2)},\dots,w^{(S)}=WConv(cat(\varphi^{(1)},\varphi^{(2)},\dots,\varphi^{(S)})),\\ \varphi=w^{(1)}\varphi^{(1)}+w^{(2)}\varphi^{(2)}+\cdots+w^{(S)}\varphi^{(S)},\end{split} \tag{6}\] where \(w^{(s)}\in\mathbb{R}^{h\times w\times l}\), and \(\varphi^{(s)}\) has already been upsampled. \(WConv\) represents the ConvBlock used to calculate weights, as shown in the right part of Fig. 3. ## 3 Experiments **Datasets.** Experiments were carried out on two public brain MRI datasets, including LPBA [19] and Mindboggle [15]. For LPBA, each MRI volume contains 54 manually labeled regions-of-interest (ROIs). All volumes in LPBA were rigidly pre-aligned to mni305. 30 volumes (\(30\times 29\) pairs) were employed for training and 10 volumes (\(10\times 9\) pairs) were used for testing. For Mindboggle, each volume contains 62 manually labeled ROIs. All volumes in Mindboggle were affinely aligned to mni152. 42 volumes (\(42\times 41\) pairs from the NKI-RS-22 and NKI-TRT-20 subsets) were employed for training, and 20 volumes from OASIS-TRT-20 (\(20\times 19\) pairs) were used for testing. 
All volumes were pre-processed by min-max normalization and skull-stripping using FreeSurfer [9]. The final size of each volume was \(160\times 192\times 160\) after a center-cropping operation. #### 3.0.1 Evaluation Metrics. To quantitatively evaluate the registration performance, the Dice score (DSC) [7] was calculated as the primary similarity metric to evaluate the degree of overlap between corresponding regions. In addition, the average symmetric surface distance (ASSD) [23] was evaluated, which can reflect the similarity of the region contours. The quality of the predicted deformation \(\phi\) was assessed by the percentage of voxels with non-positive Jacobian determinant (i.e., folded voxels). All of the above metrics were calculated in 3D. A better registration should have a larger DSC and smaller ASSD and Jacobian metrics.

Figure 3: Illustration of the proposed competitive weighting module (CWM).

**Implementation Details.** Our method was implemented with PyTorch, using a GPU of NVIDIA Tesla V100 with 32GB memory. The regularization term \(\lambda\) and neighborhood size \(n\) were set as \(1\) and \(3\). For the encoder part, we used the same convolution structure as [17]. In the pyramid decoder, from coarse to fine, the number of attention heads was set as \(8,4,2,1,1\), respectively. We used \(6\) channels for each attention head. The Adam optimizer [14] with a learning rate decay strategy was employed as follows: \[lr_{m}=lr_{init}\cdot(1-\frac{m-1}{M})^{0.9},\quad m=1,2,\dots,M \tag{7}\] where \(lr_{m}\) represents the learning rate of the \(m\)-th epoch and \(lr_{init}=1\) represents the learning rate of the initial epoch. We set the batch size as \(1\) and \(M\) as \(30\) for training. **Comparison Methods.** We compared our method with several state-of-the-art registration methods: (1) SyN[1]: a classical traditional approach, using the \(SyNOnly\) setting in ANTS. (2) VoxelMorph(VM)[3]: a popular single-stage registration network. (3) TransMorph(TM)[5]: a single-stage registration network with a SwinTransformer-enhanced encoder. (4) PR++[13]: a pyramid registration network using a 3D correlation layer. (5) XMorpher(XM)[20]: a registration network using CAT modules at each level of the encoder and decoder. (6) Im2grid(I2G)[17]: a pyramid network using a coordinate translator. (7) DMR[4]: a registration network using a Deformer and a multi-resolution refinement module.

\begin{table} \begin{tabular}{l c c c|c c c} \hline \hline & \multicolumn{3}{c}{Mindboggle (62 ROIs)} & \multicolumn{3}{c}{LPBA (54 ROIs)} \\ \cline{2-7} & DSC (\%) & ASSD & \(\%|J_{\phi}|\leq 0\) & DSC (\%) & ASSD & \(\%|J_{\phi}|\leq 0\) \\ \hline SyN[1] & \(56.7\pm 1.5\) & \(1.38\pm 0.09\) & \(<0.00001\%\) & \(70.1\pm 6.2\) & \(1.72\pm 0.12\) & \(<0.0004\%\) \\ VM[3] & \(56.0\pm 1.6\) & \(1.49\pm 0.11\) & \(<1\%\) & \(64.3\pm 3.2\) & \(2.03\pm 0.21\) & \(<0.7\%\) \\ TM[5] & \(60.7\pm 1.5\) & \(1.35\pm 0.10\) & \(<0.9\%\) & \(67.0\pm 3.0\) & \(1.90\pm 0.20\) & \(<0.6\%\) \\ I2G[17] & \(59.8\pm 1.3\) & \(1.30\pm 0.07\) & \(<0.03\%\) & \(71.0\pm 1.4\) & \(1.64\pm 0.10\) & \(<0.01\%\) \\ PR++[13] & \(61.1\pm 1.4\) & \(1.34\pm 0.10\) & \(<0.5\%\) & \(69.5\pm 2.2\) & \(1.76\pm 0.17\) & \(<0.2\%\) \\ XM[20] & \(53.6\pm 1.5\) & \(1.46\pm 0.09\) & \(<1\%\) & \(66.3\pm 2.0\) & \(1.92\pm 0.15\) & \(<0.1\%\) \\ DMR[4] & \(60.6\pm 1.4\) & \(1.34\pm 0.09\) & \(<0.7\%\) & \(69.2\pm 2.4\) & \(1.79\pm 0.18\) & \(<0.4\%\) \\ Ours & \(\mathbf{62.8\pm 1.2}\) & \(\mathbf{1.22\pm 0.07}\) & \(<0.03\%\) & \(\mathbf{72.1\pm 1.4}\) & \(\mathbf{1.58\pm 0.11}\) & \(<0.007\%\) \\ \hline \hline \end{tabular} \end{table} Table 1: The numerical results of different registration methods on two datasets. 
Figure 4: Visualized registration results from different methods on Mindboggle (top row) and LPBA (bottom row).

#### 3.3.3 Quantitative and Qualitative Analysis. The numerical results of different methods on the Mindboggle and LPBA datasets are reported in Table 1. It can be observed that our method consistently attained the best registration accuracy with respect to the DSC and ASSD metrics. For the DSC results, our method surpassed the second-best networks by 1.7% and 1.1% on Mindboggle and LPBA, respectively. We further investigated the statistical significance of our method over the comparison methods on the DSC and ASSD metrics, by conducting the paired and two-sided Wilcoxon signed-rank test. The null hypotheses for all pairs (our method _vs._ each comparison method) were rejected at the 0.05 level. As a result, our method can be regarded as significantly better than all comparison methods on the DSC and ASSD metrics. Table 1 also lists the percentage of voxels with non-positive Jacobian determinant (\(\%|J_{\phi}|\leq 0\)). Our method achieved satisfactory performance, which was the best among all deep-learning-based networks. Fig. 4 visualizes the registered images from different methods on the two datasets. Our method generated more accurate registered images, and internal structures are consistently preserved using our method. Fig. 5 takes the registration of one image pair as an example to show the multi-level deformation fields generated by our method. Our ModeT effectively modeled multiple motion modalities and our CWM fused them together at low-resolution levels. The final deformation field \(\phi\) accurately warped the moving image to register with the fixed image. ## 4 Conclusion We present a motion decomposition Transformer (ModeT) to naturally model the correspondence between images and convert this into the deformation field, which improves the interpretability of the deep-learning-based registration network. The proposed ModeT employs the multi-head neighborhood attention mechanism to identify various motion patterns of a voxel in the low-resolution feature map. Then, with the help of the competitive weighting module and the pyramid structure, the motion modes contained in a voxel can be gradually fused and determined in the coarse-to-fine pyramid decoder. The experimental results have proven the superior performance of the proposed method.

Figure 5: Visualization of the generated multi-level deformation fields (\(\varphi_{1}\)-\(\varphi_{5}\)) to register one image pair. At low-resolution levels, multiple deformation sub-fields are decomposed to effectively model different motion modalities.

## Acknowledgements This work was supported in part by the National Natural Science Foundation of China under Grants 62071305, 61701312, 81971631 and 62171290, in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2022A1515011241, and in part by the Shenzhen Science and Technology Program (No. SGDX 20201103095613036).
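As a compact summary of Eqs. (3)-(6), the following NumPy sketch shows how neighborhood attention is converted into displacement sub-fields and then fused; the learned bias \(B\) and the three-layer ConvBlock are simplified away, and all names here are illustrative rather than the released implementation.

```python
# Schematic of ModeT (Eqs. (3)-(5)) and CWM (Eq. (6)) on a flattened voxel
# axis, in NumPy. The learnable bias B and the 3x3x3 ConvBlock that produces
# the weights are simplified away; names are ours, not the released code.
import numpy as np

def modet_subfields(Q, K_neigh, V_offsets):
    """Q: (S, P, c) per-head queries; K_neigh: (S, P, n3, c) keys gathered
    from each voxel's n^3 neighborhood; V_offsets: (n3, 3) fixed relative
    coordinates. Returns (S, P, 3): one displacement sub-field per head."""
    logits = np.einsum("spc,spkc->spk", Q, K_neigh)      # Q_p . K_{c(p)}^T
    attn = np.exp(logits - logits.max(-1, keepdims=True))
    attn /= attn.sum(-1, keepdims=True)                  # softmax over neighbors
    return np.einsum("spk,kd->spd", attn, V_offsets)     # NA(p, s) V

def cwm_fuse(subfields, scores):
    """subfields: (S, P, 3); scores: (S, P), e.g. from a small conv net.
    Softmax over S lets the motion modes compete per voxel (Eq. (6))."""
    w = np.exp(scores - scores.max(0, keepdims=True))
    w /= w.sum(0, keepdims=True)
    return (w[..., None] * subfields).sum(0)             # (P, 3) fused field
```

Keeping `V_offsets` fixed (not learned) is what turns the multi-head attention relationship directly into a multi-coordinate relationship, as described in Section 2.2.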
2303.16994
Designing transparent conductors using forbidden optical transitions
Many semiconductors present weak or forbidden transitions at their fundamental band gaps, inducing a widened region of transparency. This occurs in high-performing n-type transparent conductors (TCs) such as Sn-doped In2O3 (ITO), however thus far the presence of forbidden transitions has been neglected in searches for new p-type TCs. To address this, we first compute high-throughput absorption spectra across ~18,000 semiconductors, showing that over half exhibit forbidden or weak optical transitions at their band edges. Next, we demonstrate that compounds with highly localized band edge states are more likely to present forbidden transitions. Lastly, we search this set for p-type and n-type TCs with forbidden or weak transitions. Defect calculations yield unexplored TC candidates such as ambipolar BeSiP2, Zr2SN2 and KSe, p-type BAs, Au2S, and AuCl, and n-type Ba2InGaO5, GaSbO4, and KSbO3, among others. We share our data set via the MPContribs platform, and we recommend that future screenings for optical properties use metrics representative of absorption features rather than band gap alone.
Rachel Woods-Robinson, Yihuang Xiong, Jimmy-Xuan Shen, Nicholas Winner, Matthew K. Horton, Mark Asta, Alex M. Ganose, Geoffroy Hautier, Kristin A. Persson
2023-03-29T19:46:40Z
http://arxiv.org/abs/2303.16994v2
# Designing transparent conductors using forbidden optical transitions ###### Abstract Many semiconductors present weak or forbidden transitions at their fundamental band gaps, inducing a widened region of transparency. This occurs in high-performing n-type transparent conductors (TCs) such as Sn-doped In\({}_{2}\)O\({}_{3}\) (ITO), however thus far the presence of forbidden transitions has been neglected in searches for new p-type TCs. To address this, we first compute high-throughput absorption spectra across \(\sim\)18,000 semiconductors, showing that over half exhibit forbidden or weak optical transitions at their band edges. Next, we demonstrate that compounds with highly localized band edge states are more likely to present forbidden transitions. Lastly, we search this set for p-type and n-type TCs with forbidden or weak transitions. Defect calculations yield unexplored TC candidates such as ambipolar BeSiP\({}_{2}\), Zr\({}_{2}\)SN\({}_{2}\) and KSe, p-type BAs, Au\({}_{2}\)S, and AuCl, and n-type Ba\({}_{2}\)InGaO\({}_{5}\), GaSbO\({}_{4}\), and KSbO\({}_{3}\), among others. We share our data set via the MPContribs platform, and we recommend that future screenings for optical properties use metrics representative of absorption features rather than band gap alone. ## I Introduction It is often assumed in semiconductors that a strong absorption onset occurs at the direct fundamental band gap. This is indeed the case for many materials, however some materials have forbidden transitions at their band edges such that the onset of their absorption edge occurs at higher energies than their direct gap. Four scenarios of absorption in semiconductors are depicted schematically in **Figure 1**, following the optical type (OT) classification as outlined by Yu and Zunger[1] for four hypothetical materials with similar band structures. In OT1 the fundamental band gap \(E_{\rm G}\) is direct and allowed ("da"), in OT2 \(E_{\rm G}\) is direct but forbidden ("df"), in OT3 \(E_{\rm G}\) is indirect and the direct gap is allowed ("ia"), and in OT4 \(E_{\rm G}\) is indirect and the direct gap is forbidden ("if"). The presence of forbidden optical transitions can be detrimental in certain applications (e.g., LEDs, solar cell absorbers), however for others it may present a useful design criterion. In this study we focus on transparent conductors (TCs) -- materials combining wide optical transparency with high mobility and doping -- which require weak absorption within a given range of wavelengths (usually within the visible) such that forbidden transitions could be advantageous to increase transparency. In fact, many of the high-performing, commercially-available n-type transparent conducting oxides (TCOs) have dipole-forbidden transitions at their band edges that induce this behavior.
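The OT1-OT4 taxonomy above reduces to two binary questions: is the fundamental gap direct, and is the lowest direct transition allowed? A minimal Python sketch of such a classifier is shown below; the function name, tolerance, and input convention (gaps in eV) are our own illustrative choices, not part of the paper's code.

```python
def optical_type(e_gap, e_direct, e_direct_allowed, tol=1e-3):
    """Classify a compound into OT1-OT4 following Yu and Zunger's scheme.

    e_gap            : fundamental band gap E_G (eV)
    e_direct         : direct band gap E_G^d (eV)
    e_direct_allowed : direct *allowed* gap E_G^da (eV)
    """
    is_direct = abs(e_direct - e_gap) < tol           # fundamental gap is direct
    is_allowed = (e_direct_allowed - e_direct) < tol  # no forbidden window at E_G^d
    if is_direct:
        return "OT1 (da)" if is_allowed else "OT2 (df)"
    return "OT3 (ia)" if is_allowed else "OT4 (if)"
```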
A notable example occurs in the most common TCO, n-type Sn-doped In\({}_{2}\)O\({}_{3}\) (ITO), with weak absorption in the upper-most 0.8 eV of the valence band (VB), allowing for an increased transparency in addition to the increase from the Burstein-Moss effect.[2] Other wide-gap oxide materials with reported forbidden transitions include SnO\({}_{2}\) and F-doped SnO\({}_{2}\) (FTO),[3; 4] spinels SnZn\({}_{2}\)O\({}_{4}\), SnCd\({}_{2}\)O\({}_{4}\) and CdIn\({}_{2}\)O\({}_{4}\),[5] Tl\({}_{2}\)O\({}_{3}\),[6] and TiO\({}_{2}\).[7] Additionally, dipole-forbidden transitions have been reported in Cu-based p-type TCs including delafossites CuAlO\({}_{2}\), CuGaO\({}_{2}\), and CuInO\({}_{2}\), as well as cuprite Cu\({}_{2}\)O.[8] Meanwhile, it is of considerable interest to identify new high-performing p-type TCs for applications in photovoltaics and beyond. Over the past decade, high-throughput screening studies have proposed several n-type or p-type TC candidates such as ZnSb\({}_{2}\)O\({}_{6}\), ZrOS, BP, Ba\({}_{2}\)BiTaO\({}_{6}\), and CaTe.[9; 10; 11; 12; 13] Experimental confirmation of exceptional properties has been demonstrated in some of those computationally-identified materials such as the p-type Ba\({}_{2}\)BiTaO\({}_{6}\) and n-type ZnSb\({}_{2}\)O\({}_{6}\),[11; 14] but still no predicted p-type TC has experimentally confirmed properties on par with n-type ITO. Most high-throughput screenings for TCs to date assume a wide electronic band gap or direct band gap as a proxy for transparency.[15; 16; 17; 18; 12] This assumption does not consider whether the associated optical transitions are actually allowed or strong, thus overlooking materials with a small fundamental band gap but a wide absorption edge which could enable optical transparency. We note that several screenings for solar absorbers have explicitly considered forbidden transitions,[1; 19; 20] _excluding_ materials with forbidden edges to design for a sharp absorption onset; in contrast, a screening for TCs would _include_ such materials. Therefore, in this work we leverage forbidden optical transitions at band edges (referred to hereafter as simply "forbidden transitions") to improve high-throughput searches for TCs. First, we benchmark and compute optical absorption edges for \(\sim\)18,000 inorganic compounds in the Materials Project (MP) database, and classify optical types across MP to assess whether the fundamental gaps are optically allowed or forbidden. We show that over half of the selected semiconductors in MP exhibit a weak absorption edge, and that, in special cases involving transitions between localized states, the presence of forbidden transitions can be explained by orbital character. With this data, we introduce a series of high-throughput descriptors for p-type TCs to estimate the direct allowed band gap (often referred to in the literature as the "optical gap"), absorption edge onset, and average absorption spectra in the visible spectrum. Using these descriptors, we perform a high-throughput screening (as outlined in **Figure 2**) for promising p-type and n-type TCs with disperse band edges that may be transparent in the visible regime. Such compounds have low fundamental band gaps, and therefore may have previously been overlooked. To assess dopability and mobility for materials with good computed optical properties, we perform defect formation energy calculations and compute transport properties for the most promising candidates.
We highlight some ambipolar TC candidates including BeSiP\({}_{2}\), p-type TC candidates including boron arsenide (BAs), and n-type TC candidates including barium indium gallium oxide (Ba\({}_{2}\)InGaO\({}_{5}\)), and share our data for further exploration. ## Materials and Methods Density functional theory (DFT) calculations were performed using the projector augmented wave (PAW) method[45; 46] as implemented in the Vienna _Ab Initio_ Simulation Package (VASP)[47; 48], first within the Perdew-Burke-Ernzerhof (PBE) Generalized Gradient Approximation (GGA) formulation of the exchange-correlation functional.[49] Cutoff, convergence, and correction criteria have been benchmarked and are used throughout the MP infrastructure, as described elsewhere.[50; 51] Effective mass (\(m^{*}\)) was computed from GGA calculations using the BoltzTraP2 package,[52] assuming dopings of \(10^{18}\) cm\({}^{-3}\) as described in the Supplementary Materials (SM). The HSE06 screened hybrid functional[53] was used to calculate gap corrections and apply scissor shifts in "screen 2." Branch point energy (BPE) was computed from GGA calculations with an HSE gap correction; the BPE ratio range \(\sigma_{\rm BPE}\) was computed by varying the number of valence bands (\(N_{\rm VB}\)) and number of conduction bands (\(N_{\rm CB}\)) from \(N_{\rm VB}\):\(N_{\rm CB}\)=2:4 to \(N_{\rm VB}\):\(N_{\rm CB}\)=8:4, with details described elsewhere.[25] The site-projected wave function character of orbitals at the band edges was assessed to compute the inverse participation ratios (IPRs) and the orbital overlaps (see SM). Optical absorption coefficients were calculated with VASP using the independent-particle approximation (IPA). Using the IPA, the dielectric matrix elements are calculated using a k-point reciprocal density of 1,000 Å\({}^{-3}\), which we have benchmarked and optimized for high-throughput screenings of \(E_{\rm edge}\) (for optimization of precision in the extended absorption spectrum, see Yang et al.[54]). The cutoff for a transition to be considered "allowed" was selected following the convention of Fabini et al.[20] Details and calculation parameters for this method are reported in the SM. Focusing on compounds that are likely to be synthesizable and are tractable for further defect calculations, we "pre-screen" (see Figure 2) the MP database using a series of filters. We include compounds in which the MP-computed GGA fundamental band gap (\(E_{\rm G}\)) is greater than 0.5 eV and the energy above the convex hull (\(E_{\rm hull}\)) is less than 0.1 eV/atom.[55; 56] Large compounds, with more than 5 elements or more than 12 symmetrically inequivalent sites, were filtered out (see pymatgen.symmetry.analyzer). Figure 1: (a) Band schematics and (b) cartoon of the resulting absorption spectra of the four optical types (OTs) in semiconductor materials. Band schematics on left are inspired by Yu and Zunger.[1] The grey regions in OT2 and OT4 correspond to the forbidden region where transitions do not occur. "Fund. gap" stands for fundamental electronic band gap and "dir. gap" stands for direct band gap. Compounds with heavy elements (\(Z\) > 82) and f-block elements are also filtered out (except for La). GGA absorption spectra of \(\sim\)800 MP compounds from Fabini et al.'s search for PV absorber materials are publicly available on MPContribs and included in our set.[20; 57] For compounds that emerge from "screen 2," defect formation energy calculations are performed using the pycdt package.[58]
Hybrid density functional theory calculations of defect formation energies are performed using the CP2K software package and the HSE06 functional.[59; 53; 60] Charge-carrier mobility is calculated using the _ab initio_ scattering and transport package (amset),[38] which solves the linearized Boltzmann transport equation under the constant relaxation time approximation. Details for each of these methods are described in the SM. ## Results ### Forbidden or weak transitions are common As a result of the pre-screening, we obtain a data set of \(\sim\)18,000 semiconductor compounds for which optical absorption spectra and descriptors are assembled. Statistics and corresponding descriptors are summarized in **Figure 3**, grouped by optical type. We first assess the distribution of optical types (OTs) and forbidden optical transitions across the set. To our knowledge, this has not been assessed across known semiconductor materials, except for the several hundred from Fabini et al.[20] Figure 3(a) plots a histogram of the descriptor "forbidden energy difference" \(\Delta^{\rm d}\), defined as: \[\Delta^{\rm d}=E_{\rm G}^{\rm da}-E_{\rm G}^{\rm d}, \tag{1}\] where the direct allowed band gap \(E_{\rm G}^{\rm da}\) is defined as the energy at which dipole transition matrix elements become significant (adopting what constitutes "significant" from the literature;[20] see Supplementary Materials, SM). We demonstrate that nearly 50% of compounds have forbidden transitions (i.e., \(\Delta^{\rm d}\) > 0 eV) at the band edges. A large subset shows a strong impact of forbidden transitions, \(\sim\)18% with \(\Delta^{\rm d}\) > 0.2 eV and 7% with \(\Delta^{\rm d}\) > 0.5 eV. It is observed that OT3 (indirect gap, allowed direct transition) is the most common optical type, followed closely by OT4 (indirect fundamental gap, forbidden direct transition). Figure 2: (a) The screening method for TCs pursued in this paper, focusing on compounds from the MP database with forbidden optical transitions. The targeted property is listed in the screen, and the descriptor and cutoff value are given on the right-hand side. Note that these descriptors are computed with both GGA (for screen 1) and HSE06 (for screen 2) functionals, and are described in more detail in the manuscript. Figure 3(b) reports the distribution of the "edge energy difference" \(\Delta^{\rm d}_{\rm edge}\), defined as: \[\Delta^{\rm d}_{\rm edge}=E_{\rm edge}-E^{\rm d}_{\rm G}, \tag{2}\] where the absorption edge energy (\(E_{\rm edge}\)) is defined as the energy at which the GGA IPA absorption coefficient exceeds \(10^{4}\) cm\({}^{-1}\) (see SM). Some materials may have transitions at the band edges that are "allowed" but are only weakly absorbing; in these cases \(E_{\rm edge}\) can provide a better metric than \(E^{\rm d}_{\rm G}\) for where the strong edge onset actually occurs. For example, in In\({}_{2}\)O\({}_{3}\) (dashed lines) our computed \(\Delta^{\rm d}_{\rm edge}\) of 0.68 eV corresponds better to the literature-reported \(\Delta^{\rm d}\) than our computed GGA \(\Delta^{\rm d}\) of 0.22 eV. Third, in Figure 3(c) we plot the distribution of the average absorption in the visible, \(\bar{\alpha}_{\rm vis}\), defined as: \[\bar{\alpha}_{\rm vis}=\int_{h\nu_{\rm vis}^{\rm min}}^{h\nu_{\rm vis}^{\rm max}}\alpha(h\nu)\,d(h\nu), \tag{3}\] i.e., the integral of the absorption spectrum across the visible regime. Since the GGA Kohn-Sham gap underestimates the fundamental band gap, we define the limits of the integral for GGA calculations using an empirical gap correction from Morales et al. (see SM).[21] It is observed that in more than 50% of compounds, \(\bar{\alpha}_{\rm vis}\) is less than that of In\({}_{2}\)O\({}_{3}\). Of interest to this study is the set of compounds with a significantly widened absorption edge due to forbidden transitions and correspondingly low absorption coefficients. In particular, we are interested in materials in which forbidden transitions raise the absorption edge outside of the visible spectrum and lead to optical transparency. However, we note this data set may be valuable for other investigations as well.
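The three descriptors of Eqs. (1)-(3) are straightforward to evaluate on a sampled absorption spectrum. The sketch below is our own illustrative implementation: the array names, the \(10^{4}\) cm\({}^{-1}\) edge threshold quoted in the text, and the visible-window bounds (here taken as 1.6-3.2 eV) are stated assumptions, and normalizing the integral by the window width to yield an average is our choice.

```python
import numpy as np

def forbidden_energy_difference(e_direct_allowed, e_direct):
    # Eq. (1): Delta^d = E_G^da - E_G^d
    return e_direct_allowed - e_direct

def edge_energy(energies, alpha, threshold=1e4):
    # E_edge: first photon energy (eV) at which alpha exceeds 1e4 cm^-1
    above = np.nonzero(alpha > threshold)[0]
    return energies[above[0]] if above.size else np.inf

def alpha_vis(energies, alpha, lo=1.6, hi=3.2):
    # Eq. (3): integral of alpha over the visible window, normalized here
    # by the window width to give an average absorption coefficient
    mask = (energies >= lo) & (energies <= hi)
    return np.trapz(alpha[mask], energies[mask]) / (hi - lo)
```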
### Underlying chemical trends Forbidden and weak optical transitions in semiconductors can arise from a variety of physical, structural, and chemical phenomena. Inversion symmetry at the band extrema can induce parity-forbidden dipole transitions,[22] and a series of selection rules determine whether transitions can occur between states of different irreducible representation. Figure 3: (Left) Schematics highlighting new optical screening descriptors: (a) "forbidden energy difference" \(\Delta^{\rm d}\), (b) edge energy difference \(\Delta^{\rm d}_{\rm edge}\), and (c) average absorption in the visible regime \(\bar{\alpha}_{\rm vis}\). (Right) Histograms of these three optical descriptors are reported across the set of 18,000 semiconductors. Corresponding values for In\({}_{2}\)O\({}_{3}\), the best performing n-type TC, are denoted for reference. (d) Optical type (OT) distribution, showing over half of this set has a forbidden direct optical transition, 18% exhibit \(\Delta^{\rm d}\) > 0.2 eV, and 7% exhibit \(\Delta^{\rm d}\) > 0.5 eV. Parity-forbidden transitions are invoked, e.g., for In\({}_{2}\)O\({}_{3}\), among other materials, to explain in part why experimental optical band gaps exceed the fundamental gap. According to Fermi's Golden Rule, if symmetry allows, transitions between localized states composed of similar chemical orbitals (i.e., with significant overlap \(\langle\psi_{\rm i}|\psi_{\rm f}\rangle\)) have weak dipole transition matrix elements \(\langle\psi_{\rm i}|r|\psi_{\rm f}\rangle\).[22] However, due to the delocalized nature of wavefunctions in solids, understanding the mechanisms behind forbidden and allowed transitions is less straightforward in semiconductors than in molecules with discrete, localized states. Here, we explore whether the nature of forbidden transitions between the direct band edges can be correlated with two simple orbital-based descriptors, described in **Figure 5**: 1. **Inverse participation ratio of the direct VBM and CBM states, \(t_{\rm IPR}^{\rm d}\)**: We consider the inverse participation ratio (IPR) across all compounds as a proxy for localization of states at the band edges (a high IPR indicates strong localization). As shown in Figure 5(a), "D" indicates a delocalized state and "L" indicates a localized state, and values of the descriptor \(t_{\rm IPR}^{\rm d}\) are assigned as shown in the call-out circle (e.g., \(t_{\rm IPR}^{\rm d}\) = "L\(\rightarrow\)L" indicates a transition from a strongly localized VBM to a strongly localized CBM). 2.
**Orbital overlap of the direct VBM and CBM states, \(\sigma^{\rm d}\)**: For each compound, we consider the dominant contributors to the density of states at the direct VBM and CBM, \(\sigma(l,m)^{\rm d}\) (\(l\) is the angular momentum quantum number \(s\), \(p\), \(d\), or \(f\) for each element, and \(m\) is the magnetic quantum number; see SM for details). With this data, we compute a descriptor \(\sigma^{\rm d}\) (an approximation to the overlap \(\langle\psi_{\rm i}|\psi_{\rm f}\rangle\)) to describe the similarity of the CB-edge and VB-edge orbital contributions. This is depicted in the call-out circle in Figure 5(b) as: \[\sigma^{\rm d}=\sum_{l,m}\sigma(l,m)_{\rm VBM}^{\rm d}\,\sigma(l,m)_{\rm CBM}^{\rm d} \tag{4}\] We will refer to this descriptor as "orbital overlap" in this paper. Basic trends between these descriptors and the forbidden nature of the gap are summarized in Figure 5. In the violin plot in (a), it is shown that in compounds in which states at both band edges are delocalized (D\(\rightarrow\)D), the forbidden energy difference \(\Delta^{\rm d}\) is low with a mean close to zero. In compounds where at least one band edge is delocalized (D\(\rightarrow\)L and L\(\rightarrow\)D), the average \(\Delta^{\rm d}\) increases slightly. However, compounds where both edges are highly localized (L\(\rightarrow\)L) are significantly more likely to have wide forbidden transitions; the average \(\Delta^{\rm d}\) across such compounds is \(\sim\)0.5 eV, and quartiles range from 0.1 to 0.6 eV. Therefore, transitions between two highly localized states are likely to lead to a forbidden transition. To inspect cases with localized transitions in more detail, we compute the orbital overlap \(\sigma^{\rm d}\) for the L\(\rightarrow\)L subset from (a), and report the distribution of \(\sigma^{\rm d}\) across the four optical types in the violin plot in Figure 5(b). It is observed that, systematically, compounds with allowed edges (OT1 and OT3) have significantly lower orbital overlaps than compounds with forbidden edges (OT2 and OT4). This is consistent with Fermi's Golden Rule: low transition dipole matrix elements (i.e., forbidden or weak transitions) should occur between localized states with similar orbital contributions. We note that a weaker trend occurs when \(\sigma^{\rm d}\) is plotted across all compounds (see SM), most likely because the selection rules from Fermi's Golden Rule break down in transitions between delocalized states. Therefore, in cases with highly localized band edges, \(\sigma^{\rm d}\) is a useful predictor for the origin of forbidden transitions. However, these L\(\rightarrow\)L cases are only a small subset (\(\sim\)10%) of compounds in which we predict forbidden transitions, and there are other factors that arise, for example, due to the delocalized or hybridized nature of edge states. Ultimately, due to the relatively cheap computational cost of high-throughput IPA calculations and the variety of mechanisms contributing to optical transition matrix elements, we recommend further DFT calculations at this stage.
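Since \(\sigma^{\rm d}\) in Eq. (4) is just a dot product over matching orbital contributions, it can be sketched in a few lines of Python. The dictionary keying by (element, \(l\), \(m\)) and the normalization of the contributions are illustrative assumptions on our part, not the exact SM implementation; the example values are made up.

```python
def orbital_overlap(vbm, cbm):
    # Eq. (4): sigma^d = sum over (l, m) of sigma(l, m)_VBM * sigma(l, m)_CBM,
    # where each dict maps (element, l, m) -> fractional DOS contribution
    return sum(w * cbm.get(k, 0.0) for k, w in vbm.items())

# A pair of band edges both dominated by Cu d states gives a large sigma^d,
# consistent with a forbidden (OT2/OT4) edge:
vbm = {("Cu", "d", 0): 0.7, ("O", "p", 0): 0.3}
cbm = {("Cu", "d", 0): 0.6, ("Cu", "s", 0): 0.4}
print(orbital_overlap(vbm, cbm))  # 0.7 * 0.6 = 0.42
```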
### Screening for TCs with forbidden transitions Using this data set, we perform a high-throughput screening for transparent conductors with forbidden optical transitions at the band edges, which may have been excluded from previous screenings. We assess candidates for both p-type and n-type TCs. Our basic screening methodology is depicted in Figure 2(a). We note that the pre-screening steps restrict compounds to those with \(E_{\rm hull}\) < 0.1 eV/atom as a proxy for stability, and to those with 12 or fewer symmetrically inequivalent sites to allow for subsequent hybrid and defect calculations. #### Screen 1: High-throughput absorption calculations In screen 1 we filter compounds based on high-throughput GGA optical absorption calculations and corresponding effective masses at the band edges, as shown in the red-colored screening steps of Figure 2(a). Rather than considering the fundamental band gap \(E_{\rm G}\) or direct band gap \(E_{\rm G}^{\rm d}\), as done in previous screenings for p-type TCs,[13; 15; 16; 17; 18; 19; 20; 21] materials are filtered by either their direct allowed gap \(E_{\rm G}^{\rm da}\), the onset of the absorption edge \(E_{\rm edge}\), or the average absorption coefficient in the visible \(\bar{\alpha}_{\rm vis}\). Schematics of these descriptors are depicted in Figure 2(b). We also prioritize compounds with large \(\Delta^{\rm d}\) or \(\Delta^{\rm d}_{\rm edge}\), which indicate whether there is a strong presence of optically forbidden transitions at the band edges that could lead to a widening of the absorption edge. In **Figure 4**(a) and (b), the effective mass \(m^{*}\) is plotted as a function of a GGA energy edge descriptor, either \(E_{\rm G}^{\rm da}\) or \(E_{\rm edge}\), depending on which value is higher. We restrict our screen to compounds with the GGA energy edge descriptor within the range of 1.5-3.2 eV. Note that for transparency in the visible, absorption edges greater than 3.0 eV are desirable; however, PBE can underestimate the band gap by a factor of \(\sim\)1-2 (depending on chemistry and structure),[24] hence the cutoff is reduced by a factor of two. We also include compounds in which the average absorption coefficient \(\bar{\alpha}_{\rm vis}\) is less than that of the reference compound In\({}_{2}\)O\({}_{3}\) (2.7 \(\times\) 10\({}^{3}\) cm\({}^{-1}\) using the GGA functional). In this case, even if a material's absorption edge occurs below 1.5 eV, if its total absorption is low enough we still include it in the screen. For the n-type TC screening, we restrict \(m_{\rm e}^{*}\) to less than 1; however, this tolerance is loosened to \(m_{\rm h}^{*}\) < 2 for the p-type TC screening to reflect the much smaller distribution of low \(m_{\rm h}^{*}\) than low \(m_{\rm e}^{*}\) across materials.[25; 9] Figure 4(a) and (b) plot candidates emerging from screen 1. Pink-colored markers correspond to compounds with large forbidden transitions that would have likely been excluded from previous screens: 579 disperse valence band compounds (plausible p-type TCs) and 790 disperse conduction band compounds (plausible n-type TCs). In total, this amounts to 854 compounds, since most low \(m_{\rm h}^{*}\) materials also exhibit low \(m_{\rm e}^{*}\). Grey-colored markers correspond to other materials within this range with small or no \(\Delta^{\rm d}\), and these amount to 5,209 compounds. Figure 4: Screen 1 outputs using a GGA functional for (a) plausible p-type TC candidates (focusing on \(m_{\rm h}^{*}<2\)) and (b) plausible n-type TC candidates (focusing on \(m_{\rm e}^{*}<1\)). Screen 2 outputs using an HSE functional for (c) p-type TC candidates (focusing on \(m_{\rm h}^{*}<2\)) and (d) plausible n-type TC candidates (focusing on \(m_{\rm e}^{*}<1\)).
In all four plots, computed \(m^{*}\) is plotted as a function of either \(E_{\rm G}^{\rm da}\) or \(E_{\rm edge}\), depending on which value is higher, to reflect the screening procedure. Marker color denotes whether compounds would have been included (blue) or excluded (red) from a conventional screening in which the allowed or forbidden nature of the direct gap is not considered. Candidates emerging from screen 3 are highlighted in (c) and (d). In screen 2, band gap refinement calculations are applied to the outputs of screen 1 to better approximate the direct allowed gap and the absorption coefficient. This approach assumes that the GGA forbidden energy difference \(\Delta^{\rm d}\) is a sufficient proxy for the difference between the HSE \(E_{\rm G}^{\rm d}\) and HSE \(E_{\rm G}^{\rm da}\); however, this has not been benchmarked to our knowledge, and \(\Delta^{\rm d}\) may scale differently depending on the functional. Using these HSE-shifted energies and spectra, compounds are filtered that fulfill at least one of three criteria as proxies for transparency, as shown in Figure 2: \(E_{\rm G}^{\rm da}\geq 2.9\) eV, \(E_{\rm edge}\geq 2.9\) eV, or \(\bar{\alpha}_{\rm vis}\) less than that of ITO (\(2.7\times 10^{3}\) cm\({}^{-1}\)). Outputs are reported in Figure 4(c) and (d), yielding 184 previously excluded p-type TC candidates and 243 previously excluded n-type TC candidates. At this stage, the BPE ratio \(\sigma_{\rm BPE}\) is also computed as a guideline for whether dopability may be possible. Specifically, BPE energies that lie in the upper quartile of the band gap near the conduction band minimum (CBM; i.e., \(\sigma_{\rm BPE}\) > 0.75) have been shown to correlate with unlikely p-type dopability, whereas BPE energies that lie in the lower quartile of the band gap near the valence band maximum (VBM; i.e., \(\sigma_{\rm BPE}\) < 0.25) have been shown to correlate negatively with n-type dopability.[25] Therefore, we restrict defect calculations in screen 3 to compounds with \(\sigma_{\rm BPE}\) < 0.75 for p-type candidates and \(\sigma_{\rm BPE}\) > 0.25 for n-type candidates. Most compounds have mid-gap BPEs so are not excluded from either set.
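The screen-2 logic reduces to a simple predicate over per-compound records. The sketch below encodes the three transparency proxies and the effective-mass/BPE guidelines from the text; the record field names and the single-function packaging are our own assumptions.

```python
ALPHA_ITO = 2.7e3  # cm^-1, reference average visible absorption of ITO

def passes_screen2(c):
    # Transparency: at least one of the three proxies listed in the text
    transparent = (c["e_gap_direct_allowed"] >= 2.9   # HSE-shifted E_G^da (eV)
                   or c["e_edge"] >= 2.9              # HSE-shifted E_edge (eV)
                   or c["alpha_vis"] < ALPHA_ITO)
    # Carrier-type guidelines: effective-mass cutoffs plus the BPE heuristic
    p_candidate = c["m_h"] < 2 and c["bpe_ratio"] < 0.75
    n_candidate = c["m_e"] < 1 and c["bpe_ratio"] > 0.25
    return transparent and (p_candidate or n_candidate)
```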
We emphasize that this metric is a guideline that has been demonstrated to correlate with dopability, _not_ to predict it, so screens based on BPE should be used with caution.[25] \begin{table} \begin{tabular}{l l l c c c c c c c c c c c c c} \hline \hline **Material ID (mpid)** & **Formula** & **Space group** & \(\mathbf{E_{\rm hull}}\) (eV/at.)\({}^{\dagger}\) & \(\mathbf{E_{\rm G}}\) (eV)\({}^{\dagger}\) & \(\mathbf{E_{\rm G}}\) (eV)\({}^{\ddagger}\) & \(\mathbf{E_{\rm G}^{\rm d}}\) (eV)\({}^{\S}\) & \(\mathbf{E_{\rm G}^{\rm da}}\) (eV)\({}^{\S}\) & \(\mathbf{E_{\rm edge}}\) (eV)\({}^{\S}\) & \(\mathbf{\Delta^{\rm d}}\) (eV)\({}^{\dagger}\) & \(\mathbf{\Delta^{\rm d}_{\rm edge}}\) (eV)\({}^{\S}\) & \(\mathbf{\bar{\alpha}_{\rm vis}}\) (cm\({}^{-1}\))\({}^{\S}\) & \(\mathbf{m_{\rm e}^{*}}\)\({}^{\dagger}\) & \(\mathbf{m_{\rm h}^{*}}\)\({}^{\dagger}\) & **Doping** & **\# ICSD** \\ \hline mp-1009085 & BeSiP\({}_{2}\) & \(I\bar{4}2d\) & 0.000 & 1.15 & 1.84 & 1.84 & 3.04 & 2.78 & 1.20 & 0.94 & 7.1\(\times 10^{3}\) & 0.35 & 0.47 & p\&n & 2 \\ mp-9268 & KSe & \(P\bar{6}2m\) & 0.000 & 0.72 & 1.66 & 2.14 & 2.14 & 3.31 & 0.00 & 1.17 & 4.8\(\times 10^{3}\) & 0.33 & 1.73 & p\&n & 1 \\ mp-11583 & Zr\({}_{2}\)SN\({}_{2}\) & \(P6_{3}/mmc\) & 0.000 & 0.56 & 1.45 & 2.65 & 3.06 & 2.76 & 0.41 & 0.24 & 8.3\(\times 10^{3}\) & 0.41 & 0.44 & p\&n & 1 \\ \hline mp-984718 & BAs & \(P6_{3}mc\) & 0.090 & 1.15 & 1.82 & 2.72 & 3.23 & 3.05 & 0.51 & 0.32 & 4.6\(\times 10^{3}\) & 0.28 & 0.38 & p-type & 0 \\ mp-947 & Au\({}_{2}\)S & \(Pn\bar{3}m\) & 0.000 & 1.91 & 3.00 & 3.00 & 3.30 & 3.31 & 0.30 & 0.13 & 3.4\(\times 10^{3}\) & 0.42 & 1.55 & p-type & 2 \\ mp-32780 & AuCl & \(I4_{1}/amd\) & 0.000 & 1.93 & 3.04 & 3.04 & 3.26 & 3.39 & 0.23 & 0.35 & 1.9\(\times 10^{3}\) & 1.13 & 0.91 & p-type & 2 \\ \hline mp-1072104 & GeO\({}_{2}\) & \(Pnnm\) & 0.006 & 1.40 & 3.23 & 3.23 & 3.38 & 4.02 & 0.15 & 0.79 & 2.2\(\times 10^{3}\) & 0.19 & 1.62 & n-type & 6 \\ mp-1224786 & GaSbO\({}_{4}\) & \(Cmmn\) & 0.000 & 0.80 & 2.47 & 2.47 & 2.83 & 3.34 & 0.36 & 0.87 & 7.7\(\times 10^{3}\) & 0.16 & 1.45 & n-type & 0 \\ mp-16293 & KSbO\({}_{3}\) & \(Fd\bar{3}m\) & 0.045 & 1.12 & 2.58 & 2.89 & 3.35 & 3.32 & 0.46 & 0.43 & 1.0\(\times 10^{3}\) & 0.24 & 0.34 & n-type & 1 \\ mp-1106089 & Ba\({}_{2}\)InGaO\({}_{5}\) & \(Ima2\) & 0.000 & 1.42 & 2.68 & 2.69 & 2.37 & 3.39 & 0.59 & 0.71 & 1.1\(\times 10^{3}\) & 0.22 & 0.77 & n-type & 1 \\ mp-1029868 & Sr\({}_{5}\)(SiN\({}_{3}\))\({}_{2}\) & \(C12/c1\) & 0.000 & 1.41 & 2.43 & 2.59 & 2.90 & 3.02 & 0.31 & 0.43 & 3.7\(\times 10^{3}\) & 0.34 & 1.20 & n-type & 0 \\ \hline mp-856 & SnO\({}_{2}\) & \(P4_{2}/mnm\) & 0.000 & 0.66 & 2.33 & 2.33 & 3.06 & 3.20 & 0.74 & 0.87 & 1.0\(\times 10^{3}\) & 0.14 & 1.56 & n (ref.) & 42 \\ mp-22598 & In\({}_{2}\)O\({}_{3}\) & \(Ia\bar{3}\) & 0.000 & 0.93 & 2.34 & 2.34 & 2.56 & 3.02 & 0.22 & 0.68 & 2.7\(\times 10^{3}\) & 0.13 & 6.44 & n (ref.) & 18 \\ \hline \end{tabular} \({}^{\dagger}\)GGA calculations. \({}^{\ddagger}\)HSE06 calculations. \({}^{\S}\)GGA calculations with HSE06 gap correction.
\end{table} Table 1: Most promising predicted TC compounds with forbidden optical transitions (see columns \(\Delta^{\rm d}\) and \(\Delta^{\rm d}_{\rm edge}\)), and a summary of their computed optical and electronic properties. See SM for full table. Figure 5: A schematic of band edge orbital descriptors and violin plots correlating each descriptor to the forbidden transitions data set for (a) \(t_{\rm IPR}^{\rm d}\), the transition localization at the VBM and CBM from the inverse participation ratio (IPR) and (b) \(\Sigma\sigma^{\rm d}\), the orbital overlap of the VBM and CBM states at \(E_{\rm G}^{\rm d}\). In (a), “D” indicates a delocalized state and “L” indicates a localized state, and the scenario in which there are transitions from a highly localized state at the VBM to another highly localized state at the CBM (L\(\rightarrow\)L) is highlighted. In (b), only compounds with L\(\rightarrow\)L are plotted, and the violin plot shows the distribution of \(\Sigma\sigma^{\rm d}\) across different optical types (note that OT2 and OT4 indicate forbidden transitions, i.e., \(\Delta^{\rm d}\) > 0 eV.) In both violin plots, #### Screen 3: Low-throughput defect and mobility calculations In the final screening step, GGA defect formation energy calculations are performed (with HSE VBM and CBM corrections[23]) to assess accessible intrinsic carrier concentrations on \(\sim\)100 of the most interesting TC candidates to emerge from screen 2. Many of these compounds have unstable defects or intrinsic pinning defects such that they are likely not highly dopable (some may be extrinsically dopable, although this has not been investigated here). We identify a subset of compounds with promising dopability, summarized in **Table I** and **Figure 6**. We will refer to these materials herein as "candidates", as each has shown the potential for dopability, but true dopability remains to be confirmed using higher levels of theory and, for instance, hybrid functionals. Figure 6: PBE defect formation energy diagrams for a representative set of (a,b,c) candidate ambipolar dopable TCs, (d,e,f) candidate p-type TCs, and (g,h,i) candidate n-type TCs. For each material, only a single representative chemical potential condition is plotted (see SM for other conditions). The KSe diagram plotted here is for K\({}_{2}\)Se\({}_{3}\)-KSe and depicts p-type dopability, whereas n-type dopability is computed at a different chemical potential condition (K\({}_{2}\)Se-KSe; see SM). We highlight that these are PBE defect calculations (with an HSE band edge “correction”[23]), and await confirmation of dopability from higher levels of theory. First, our predicted p-type dopable TC candidates include BeSiP\({}_{2}\), KSe, Zr\({}_{2}\)SN\({}_{2}\), BAs (metastable polymorph with space group \(P6_{3}mc\)), Au\({}_{2}\)S, and AuCl. All except BAs are on the convex hull and have been synthesized as bulk materials, while the latter two Au\({}_{2}\)S and AuCl have been synthesized as thin films. Au\({}_{2}\)S is a known p-type semiconductor,[26] and the dopability of AuCl is unknown. BAs is a metastable polymorph with space group \(P6_{3}mc\) (its stable cubic polymorph has had recent attention due to its high thermal conductivity[27; 28], and exhibits p-type conductivity[29]). Compounds KAITe\({}_{2}\), Cd\({}_{3}\)(BO\({}_{3}\))\({}_{2}\), ScIO, and KCuO are potentially p-type dopable within a smaller window of tolerance; each have been synthesized in bulk but not thin film form. 
Defect calculations of ScIO (and KCuO) suggested hole-killing behavior, but only limited chemical potentials were assessed.[30] A few of p-type candidates appear also n-type dopable at various conditions -- BeSiP\({}_{2}\), Zr\({}_{2}\)SN\({}_{2}\), and KSe (at K\({}_{2}\)Se-KSe, see SM) -- and therefore may be ambipolar dopable semiconductors, however in each case this remains to be confirmed experimentally. We highlight the variety of non-oxide chemistries emerging here; typically forbidden transitions have been studied in oxides, but we demonstrate the importance of looking beyond oxides. In chalcogenides reported here, p-type dopability is limited by anion vacancies. Compounds in which defects suggest candidate n-type TCs include rutile GeO\({}_{2}\), GaSbO\({}_{4}\), KSbO\({}_{3}\), Ba\({}_{2}\)InGaO\({}_{5}\), Sr\({}_{5}\)(SiN\({}_{3}\))\({}_{2}\), and LiYS\({}_{2}\) (as well as ambipolar candidates BeSiP\({}_{2}\), KSe, and Zr\({}_{2}\)SN\({}_{2}\)), while Rb\({}_{2}\)SnBr\({}_{6}\), Sr(YS\({}_{2}\))\({}_{2}\), and GaBiO\({}_{3}\) are potentially n-type dopable within a smaller window of tolerance (see SM). Many of these outputs corroborate literature findings. Notably, the two highest-performing, commercially-available n-type TCs -- In\({}_{2}\)O\({}_{3}\) and SnO\({}_{2}\) -- emerge from the screening at this stage; both have \(\Delta^{\rm d}\) > 0 eV, and we also include them in Table 1 for reference. This is important, as the use of screening parameters from previous studies would have filtered out the best TCs, likely due to their low GGA band gaps (0.66 and 0.93, respectively) and the presence of forbidden transition states at the band edges.[25] Rutile GeO\({}_{2}\) has been recently studied as an ultra-wide-band-gap material and has been shown to be experimentally ambipolar dopable,[31; 32] while GeO\({}_{2}\)-derived germanates (e.g., SrGeO\({}_{3}\)) have been explored as n-type TCOs.[33] Sb-based GaSbO\({}_{4}\) has been explored preliminarily as an n-type TCO.[34] Perovskite Rb\({}_{2}\)SnBr\({}_{6}\) has been recently confirmed experimentally as n-type but not yet studied as a TC[35], while SrY\({}_{2}\)S\({}_{4}\) has been grown as a thin film but doping not confirmed. To our knowledge Ba\({}_{2}\)GaInO\({}_{5}\) has been grown in bulk but not thin film form [36] (a similar compound, brownmillerite Ba\({}_{2}\)In\({}_{2}\)O\({}_{5}\), has shown both n-type and p-type doping and ionic conductivity[37]). All reported oxides are in the main group, corroborating literature consensus on conditions for low effective mass in TCOs;[10] we report several non-oxides here, but in each case \(m_{\rm e}^{*}\) is not as high as in oxides. To assess charge transport, the mobilities of a few representative candidates are computed using the amset package.[38] As shown in **Figure 7**, of these compounds, high hole computed mobilities \(\mu_{\rm h}\) (greater than 10 cm\({}^{2}\)/Vs at 300 K at both moderate and degenerate dopings) are exhibited in BeSiP\({}_{2}\), BAs, and Au\({}_{2}\)S, with the former two exhibiting \(\mu_{\rm h}\) > 100 cm\({}^{2}\)/Vs. BeSiP\({}_{2}\) also has a computed electron mobility \(\mu_{\rm e}\) higher than that of standards of In\({}_{2}\)O\({}_{3}\) and SnO\({}_{2}\), as shown in the SM. Computed mobilities incorporate polar optical phonon, ionized impurity, and acoustic deformation potential scattering modes. 
These calculations can be interpreted as an upper limit for scattering in high-quality thin films; we do not assess grain-boundary scattering, which is common in thin films. To confirm dopability in the two most promising p-type TCs to emerge from the screening -- BeSiP\({}_{2}\) and BAs -- hybrid defect calculations are performed, as summarized in the SM. These calculations corroborate the PBE defect calculations, suggesting ambipolar dopability in BeSiP\({}_{2}\) (limited by phosphorus vacancies V\({}_{\rm P}\) for p-type doping and beryllium vacancies V\({}_{\rm Be}\) for n-type doping) and p-type dopability in BAs (limited by boron vacancies, V\({}_{\rm B}\)). ### Identification of candidate TCs with forbidden transitions Here, we summarize the optical and electronic properties of the most promising candidates to emerge from GGA defect calculations, as reported in Table 1, and highlight a few examples. Each emerging compound has a GGA gap \(E_{\rm G}\) below 2 eV, but either an HSE-corrected \(E_{\rm G}^{\rm da}\) or \(E_{\rm edge}\) greater than 3 eV due to the presence of forbidden transitions or a high \(\Delta_{\rm edge}^{\rm d}\). Au\({}_{2}\)S, AuCl, and GeO\({}_{2}\) have HSE gaps \(E_{\rm G}\) greater than 3 eV so may have emerged from previous screenings but are included here due to their forbidden transitions; KSe does not have forbidden transitions, but \(\Delta^{\rm d}_{\rm edge}\) is greater than 1 eV so it is included as well. Figure 7: Hole mobility \(\mu_{\rm h}\) computed with the amset code for a representative subset of p-type TC candidates, as a function of doping concentration \(n_{\rm h}\). See SM for electron mobility calculations. Compounds in which \(\bar{\alpha}_{\rm vis}\) is less than that of In\({}_{2}\)O\({}_{3}\), and which likely exhibit a high degree of transparency, are GeO\({}_{2}\), AuCl, KSbO\({}_{3}\), and Ba\({}_{2}\)InGaO\({}_{5}\). The former has been investigated in depth, but the latter three would be particularly interesting candidates for follow-up studies. In **Figure 8** we (a-c) highlight the crystal structure and (d-l) optical properties for the candidates chalcopyrite BeSiP\({}_{2}\) and wurtzite BAs, selected as representative materials since charge transport and hybrid defect calculations have confirmed p-type dopability and mobility, as well as predicted brownmillerite Ba\({}_{2}\)InGaO\({}_{5}\) as an n-type example. The electronic band structure diagrams of BeSiP\({}_{2}\) (d), BAs (e), and Ba\({}_{2}\)InGaO\({}_{5}\) (f) demonstrate that the direct allowed gap \(E^{\rm da}_{\rm G}\) (pink arrow) is larger than \(E^{\rm d}_{\rm G}\) (black arrow), and the band extrema at \(\Gamma\) are very disperse, which leads to low effective masses and high mobilities. The grey shading depicts regions in which optical transitions are forbidden. For example, in Ba\({}_{2}\)InGaO\({}_{5}\) (f) transitions between the upper two VBs along the \(\Gamma\)-X, \(\Gamma\)-Y, \(\Gamma\)-Z paths, as well as the L-T-W path, are forbidden. Thus, the third-highest VB is the highest VB at which transitions to the CBM are allowed, and \(E_{\rm G}^{\rm da,HSE}\) occurs at the \(\Gamma\)-point at approximately -0.6 eV. Figure 8: (a-c) Crystal structure, (d-f) HSE-corrected electronic band diagram, (g-i) computed absorption coefficient, and (j-l) computed transmittance for three representative candidates from our screening with forbidden optical transitions: ambipolar-dopable BeSiP\({}_{2}\), p-type dopable BAs, and n-type dopable Ba\({}_{2}\)InGaO\({}_{5}\). HSE direct and direct allowed gaps \(E^{\rm d}_{\rm G}\) and \(E^{\rm da}_{\rm G}\) are denoted with black and pink lines, respectively, and in (b) grey shading indicates the region in which optical transitions are forbidden between the VB and CB states. Rainbow shading in (c) and (d) corresponds to the visible spectrum ("vis.").
These examples demonstrate three different scenarios in which forbidden transitions can occur. In Ba\({}_{2}\)InGaO\({}_{5}\) the \(E_{\rm G}^{\rm da}\) and \(E_{\rm G}^{\rm d}\) occur at the same k-point (\(\Gamma\)) and states are forbidden at the VBM, in BAs the \(E_{\rm G}^{\rm da}\) and \(E_{\rm G}^{\rm d}\) occur at the same k-point (\(\Gamma\)) and states are forbidden at the CBM, while in BeSiP\({}_{2}\) \(E_{\rm G}^{\rm da}\) occurs at a different k-point (Z) than \(E_{\rm G}^{\rm d}\) (\(\Gamma\)) and states are forbidden both at the VBM and the CBM. Panels (g-i) show the HSE-corrected absorption coefficient as a function of photon energy, with the absorption edge energy \(E_{\rm edge}\) denoted. In each example material, \(E_{\rm edge}\) is within a few tens of meV of \(E_{\rm G}^{\rm da}\), although in other candidates this is not necessarily the case (see Table 1, e.g., In\({}_{2}\)O\({}_{3}\)). Importantly, in both cases \(E_{\rm edge}\) is at the violet edge of the visible spectrum, which indicates a likelihood of transparency in the visible. Panels (j-l) report transmittance from the Beer-Lambert law (see SM), and the color of the trace corresponds to the thickness of a thin film. As expected, thinner films are more transparent; however, the decrease in transparency as thickness increases is material-dependent. For example, although both BeSiP\({}_{2}\) and Ba\({}_{2}\)InGaO\({}_{5}\) have \(>\)99% transmittance for 10 nm thick films, 1 \(\mu\)m thick films of Ba\({}_{2}\)InGaO\({}_{5}\) have a \(T_{\rm vis}\) of \(\sim\)75% while \(T_{\rm vis}\) drops to less than 50% in BeSiP\({}_{2}\). Therefore this metric is important when selecting materials for real device applications.
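The thickness dependence quoted above follows directly from the Beer-Lambert law, \(T = e^{-\alpha d}\), neglecting reflection losses (a simplification relative to the SM treatment). A minimal sketch:

```python
import numpy as np

def transmittance(alpha_cm, thickness_nm):
    # Beer-Lambert law T = exp(-alpha * d); film thickness converted nm -> cm
    return np.exp(-alpha_cm * thickness_nm * 1e-7)

# For an average visible absorption of 2.7e3 cm^-1, a 1 um film transmits
# exp(-0.27) ~ 76%, while a 10 nm film transmits > 99%:
for d_nm in (10, 100, 1000):
    print(d_nm, "nm:", round(transmittance(2.7e3, d_nm), 3))
```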
## Discussion ### Synthesis considerations So far we have used simulations to predict properties; the next step for the TC community is to synthesize these materials as thin films and assess their properties experimentally. The final column of Table 1 reports the number of experimental ICSD database entries, showing that all but four (BAs, Sr\({}_{5}\)(SiN\({}_{3}\))\({}_{2}\), AlSbO\({}_{4}\), LiYS\({}_{2}\)) have been previously synthesized (although AlSbO\({}_{4}\) has been recently reported[34]). However, in many compounds with ICSD entries, thin films have not yet been grown or characterized, and dopability has not been assessed experimentally (e.g., BeSiP\({}_{2}\), KSe, Ba\({}_{2}\)InGaO\({}_{5}\), KAlTe\({}_{2}\), ScIO, KCuO, etc.). Some of our candidates have known synthesis challenges, in particular since thin film synthesis often occurs at non-equilibrium conditions and presents other difficulties. In perovskite oxides KSbO\({}_{3}\) and GaBiO\({}_{3}\), low \(m_{\rm e}^{*}\) has been highlighted[10] but phase-pure thin film synthesis has proven challenging, so doping remains to be confirmed.[39] Non-oxide chalcogenides present particular synthesis barriers due to decomposition: KSe has not been synthesized as a thin film to our knowledge, and KAlTe\({}_{2}\) is likely challenging to synthesize due to oxidation. For candidates that do not have ICSD entries, synthesis may have been attempted but was not successful for various reasons. Wurtzite BAs has been challenging to crystallize and has not yet been synthesized as a thin film to our knowledge; it is similar in chemistry to zincblende BP, which was predicted computationally as a p-type TC[18] and has since been synthesized as a thin film.[40] We acknowledge that some of these candidates are also likely not practical or safe to scale up into device applications. In particular, Be and Be-containing compounds are toxic to humans and the environment,[41] so although BeSiP\({}_{2}\) has been synthesized it is most likely not a practical TC material. However, since this compound has a common chalcopyrite crystal structure with a small unit cell, understanding the physics behind its large forbidden transition and disperse valence band is demonstrative and could inspire design criteria for other p-type TCs (see Figure 8). ### Challenges and context From the set of 18,000 materials with absorption calculations, we have proposed a set of TC candidates with forbidden optical transitions at their band edges and plausible dopability and high mobility. There is a general correlation across all semiconductors that, as the fundamental electronic gap increases, doping becomes more challenging and band edges become less disperse.[42; 43] Previous searches for p-type TCs have endeavored to identify cases in which a single state at the VBM is both disperse over k-space and facilitates transitions which lead to a wide gap. In contrast, by decoupling these two parameters such that allowed transitions do not need to occur at the band edge, our metric could enable better electronic properties while the optical gap is widened. One challenge with this design metric is that localized band edges, which we have shown to correlate with forbidden transitions, tend to lead to _higher_ effective masses and therefore lower mobilities. However, we have also demonstrated many candidate TCs with delocalized band edges and forbidden transitions. The absorption spectra we have computed are first-order, high-throughput approximations, and therefore interpretation of results must consider their limitations. We do not include the effects of spin-orbit coupling, which may influence the orbital character of the band edges or induce spin-forbidden transitions. The IPA accounts only for interband absorption at a fixed k-point, and therefore intraband absorption matrix elements and phonon-assisted transitions (such as those across an indirect gap) are not considered. This may be sufficient for a first-order approximation since indirect absorption tends to be weak, but in heavily doped TCs, strong free-carrier absorption can arise due to intraband transitions.[44]
Off-stoichiometries and dopants can also introduce shallow defect levels within the gap that reduce optical transparency, and absorption from excitons may also become significant at energies just below the fundamental absorption edge, leading to a reduction of transparency.[44] Despite these limitations, our calculations and data have added information and improved design metrics towards furthering the search for novel TCs. ## IV Conclusion In this study, we have described the absorption edge and optical type for \(\sim\)18,000 semiconductors in the Materials Project database, and we have shared this data publicly on the MPContribs platform. Using a set of descriptors for absorption and orbital character, we have demonstrated correlations between the presence of forbidden optical transitions, localized band edges, and orbital overlap. From this set of materials, we have screened for n-type or p-type TC materials, and propose a set of candidates with forbidden band edge transitions and promising optical and electronic properties such as chalcopyrite BeSiP\({}_{2}\) and wurtzite BAs. Notably, high-performance TCs such as ITO emerge from this screening, while they would be excluded from screenings based on the fundamental gap alone. Since over half of the set of \(\sim\)18,000 semiconductors have forbidden optical transitions at their band edges (OT2 or OT4), we recommend that future high-throughput screenings for optical properties use metrics representative of absorption spectra rather than band gap alone. ## V Acknowledgements This work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC02-05-CH11231 (Materials Project program KC23MP). R.W.R. was supported by the U.C. Berkeley Chancellor's Fellowship and the National Science Foundation (NSF) Graduate Research Fellowship under Grants No. DGE1106400 and DGE175814. A.M.G. was supported by EPSRC Fellowship EP/T033231/1. We acknowledge compute resources from the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility. We thank Doug Fabini for helpful discussions, and Ruoxi Yang, Jason Munro, and David Mrdjenovich for fruitful discussion and insights. ## VI Author contributions R.W.R.: Conceptualization, Methodology, Coding, Computational Investigation, Writing - Original Draft, Writing - Review & Editing, Funding Acquisition; Y.X.: Computational Investigation (PBE defect formation energy calculations); J.X.S.: Supervision, Coding; M.K.H.: Supervision, Methodology, Coding; N.W.: Computational Investigation (HSE defect formation energy calculations); M.A.: Funding Acquisition, Resources, Supervision; A.G.: Computational Investigation (amset calculations); G.H.: Conceptualization, Supervision, Writing - Review & Editing; K.A.P.: Funding Acquisition, Project Administration, Resources, Supervision, Writing - Review & Editing.
2301.09015
E$^3$Pose: Energy-Efficient Edge-assisted Multi-camera System for Multi-human 3D Pose Estimation
Multi-human 3D pose estimation plays a key role in establishing a seamless connection between the real world and the virtual world. Recent efforts adopted a two-stage framework that first builds 2D pose estimations in multiple camera views from different perspectives and then synthesizes them into 3D poses. However, the focus has largely been on developing new computer vision algorithms on the offline video datasets without much consideration on the energy constraints in real-world systems with flexibly-deployed and battery-powered cameras. In this paper, we propose an energy-efficient edge-assisted multiple-camera system, dubbed E$^3$Pose, for real-time multi-human 3D pose estimation, based on the key idea of adaptive camera selection. Instead of always employing all available cameras to perform 2D pose estimations as in the existing works, E$^3$Pose selects only a subset of cameras depending on their camera view qualities in terms of occlusion and energy states in an adaptive manner, thereby reducing the energy consumption (which translates to extended battery lifetime) and improving the estimation accuracy. To achieve this goal, E$^3$Pose incorporates an attention-based LSTM to predict the occlusion information of each camera view and guide camera selection before cameras are selected to process the images of a scene, and runs a camera selection algorithm based on the Lyapunov optimization framework to make long-term adaptive selection decisions. We build a prototype of E$^3$Pose on a 5-camera testbed, demonstrate its feasibility and evaluate its performance. Our results show that a significant energy saving (up to 31.21%) can be achieved while maintaining a high 3D pose estimation accuracy comparable to state-of-the-art methods.
Letian Zhang, Jie Xu
2023-01-21T21:53:33Z
http://arxiv.org/abs/2301.09015v1
E\({}^{3}\)Pose: Energy-Efficient Edge-assisted Multi-camera System for Multi-human 3D Pose Estimation ###### Abstract. Multi-human 3D pose estimation plays a key role in establishing a seamless connection between the real world and the virtual world. Recent efforts adopted a two-stage framework that first builds 2D pose estimations in multiple camera views from different perspectives and then synthesizes them into 3D poses. However, the focus has largely been on developing new computer vision algorithms on the offline video datasets without much consideration on the energy constraints in real-world systems with flexibly-deployed and battery-powered cameras. In this paper, we propose an energy-efficient edge-assisted multiple-camera system, dubbed E\({}^{3}\)Pose, for real-time multi-human 3D pose estimation, based on the key idea of adaptive camera selection. Instead of always employing all available cameras to perform 2D pose estimations as in the existing works, E\({}^{3}\)Pose selects only a subset of cameras depending on their camera view qualities in terms of occlusion and energy states in an adaptive manner, thereby reducing the energy consumption (which translates to extended battery lifetime) and improving the estimation accuracy. To achieve this goal, E\({}^{3}\)Pose incorporates an attention-based LSTM to predict the occlusion information of each camera view and guide camera selection before cameras are selected to process the images of a scene, and runs a camera selection algorithm based on the Lyapunov optimization framework to make long-term adaptive selection decisions. We build a prototype of E\({}^{3}\)Pose on a 5-camera testbed, demonstrate its feasibility and evaluate its performance. Our results show that a significant energy saving (up to 31.21%) can be achieved while maintaining a high 3D pose estimation accuracy comparable to state-of-the-art methods. Edge-assisted multi-camera network, Multi-human 3D pose estimation, Energy efficiency ## 1. Introduction Multi-human 3D pose estimation is an important yet challenging computer vision problem that has a wide range of applications such as action recognition (Das et al., 2017), sports analysis (Das et al., 2018) and human-computer interaction (Das et al., 2018). It is also perceived as a key technology to enable the seamless and immersive interaction between the real world and the virtual world in the so-called "Metaverse" (Das et al., 2018). Despite the rapid development of deep learning methods, monocular-view-based approaches (Das et al., 2018; Liu et al., 2019; Liu et al., 2019) still suffer from large errors in practice due to occlusion, motion blur and the lack of an absolute world coordinate in the single-camera setup. Recent efforts have shifted to studying multi-view approaches (Sandhi et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) where the 3D poses are constructed from multiple camera views. Typically, a two-stage framework is employed where 2D poses are first estimated from the individual camera views and then 3D poses are reconstructed based on these 2D poses. Although much progress has been made, the majority of existing works study the problem from a pure computer vision perspective, neglecting the system constraints in a real-world deployment scenario for real-time multi-human 3D pose estimation.
The two-stage framework of multi-view 3D pose estimation naturally fits into an edge-assisted multi-camera system (Kang et al., 2018). Specifically, distributed smart cameras, which are not only able to capture high-resolution video data but are also equipped with hardware accelerators to execute deep-learning-based video processing, perform multi-human 2D pose estimation on their individual monocular views. The 2D pose estimation results are then transmitted wirelessly to a relatively powerful edge server for the 3D pose reconstruction. Compared to sending the raw video frames from the cameras to the edge server, sending only the 2D pose estimation results can significantly reduce the transmitted data size of the cameras and overcome the wireless bottleneck. On the flip side, however, moving the workload of 2D pose estimation to the cameras demands much higher energy usage of the cameras. For instance, running light-weight deep neural network (DNN)-based multi-human 2D pose estimation requires twice the power consumption of the standby mode on an NVIDIA Jetson Xavier NX (Nguyen et al., 2018), a popular DL-capable embedded system. Therefore, excessive energy consumption may become a major obstacle to the flexible deployment of an edge-assisted 3D pose estimation system with battery-powered cameras. In this paper, we present the design, implementation and evaluation of an energy-efficient edge-assisted multi-camera system for multi-human 3D pose estimation, dubbed E\({}^{3}\)Pose. At the core of E\({}^{3}\)Pose is a camera selection scheme that adaptively selects a subset of cameras to perform the 3D pose estimation task depending on the cameras' energy states and their view qualities (i.e., occlusions). Obviously, performing the 3D pose estimation using only a subset of cameras allows some cameras to enter a power-saving mode, thereby extending the battery lifetime of the cameras and the overall system. Less obviously, a careful selection of the subset of cameras to participate in 3D pose estimation can still obtain a comparable, sometimes even higher, estimation accuracy compared to the default full participation case. This is because, in theory, 2D poses from just a few clear camera views are sufficient to synthesize the final 3D poses and, in practice, cameras with clear views vary over time depending on the scene. To realize the function of E\({}^{3}\)Pose, two key challenges must be addressed as follows. **How to predict the cameras' view qualities of a future scene at the selection time?** Because of the various delays in data processing and transmission, camera selection decisions must be made before a scene comes up. More critically, because the selection decision is made before the selected cameras perform 2D pose estimation on a scene, the unselected cameras have no way to evaluate their view qualities in terms of occlusion since they are not supposed to perform 2D estimation. Thus, for the purpose of camera selection, the cameras' view qualities of a future scene of interest must be _predicted_. To cope with this challenge, we design an attention-based LSTM network to predict 3D human poses in a future scene (at a future time) based on the 3D human pose information in the current scene (at the current time). The predicted 3D pose results are then projected to 2D views for the individual cameras to calculate the occlusion value of each camera. Although the predicted 3D pose results are not (and need not be) perfectly accurate, they provide sufficiently valuable occlusion information for camera selection.
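To make the view-quality step concrete, the hedged sketch below projects predicted 3D joints into a camera view with a 3×4 projection matrix and scores occlusion as the pairwise overlap of the resulting 2D bounding boxes. This is an illustrative stand-in for E\({}^{3}\)Pose's occlusion value; the function names and the box-overlap heuristic are our own assumptions, not the paper's exact definition.

```python
import numpy as np

def project(P, joints3d):
    # Pinhole projection of (J, 3) world-space joints with a 3x4 matrix P
    h = np.hstack([joints3d, np.ones((len(joints3d), 1))])
    uv = (P @ h.T).T
    return uv[:, :2] / uv[:, 2:3]

def occlusion_score(P, people3d):
    # Sum of pairwise 2D bounding-box overlap areas across all people
    boxes = []
    for joints in people3d:                  # one (J, 3) array per person
        uv = project(P, joints)
        boxes.append((uv[:, 0].min(), uv[:, 1].min(),
                      uv[:, 0].max(), uv[:, 1].max()))
    score = 0.0
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            x0, y0, x1, y1 = boxes[i]
            u0, v0, u1, v1 = boxes[j]
            w = max(0.0, min(x1, u1) - max(x0, u0))
            h = max(0.0, min(y1, v1) - max(y0, v0))
            score += w * h
    return score
```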
Although the predicted 3D pose results are not (and need not be) perfectly accurate, they provide sufficiently valuable occlusion information for camera selection.

**How to select the cameras to achieve long-term energy efficiency while considering their view qualities?** To extend the battery lifetime of the overall system, the 2D pose estimation workload must be balanced among the distributed cameras without creating processing hotspots on just a few cameras. To this end, we develop a camera scheduler deployed on the edge server that makes camera selection decisions in an online fashion by leveraging the Lyapunov optimization framework. When making the selection decision, the scheduler takes into consideration both the current battery state and the predicted view quality of the cameras to make a trade-off between the long-term energy consumption and the 3D pose estimation accuracy. The edge server acts as a convenient anchor point to make the selection decision without putting extra work on the energy-limited cameras.

We implement E\({}^{3}\)Pose on a testbed, where 5 smart camera devices connect wirelessly to an edge server. Evaluations on existing real-world datasets and live video streams on the testbed show that E\({}^{3}\)Pose is able to achieve a significant energy consumption reduction while maintaining a high multi-human 3D pose estimation accuracy.

## 2. Motivational Experiments

The generic setup of an edge-assisted multi-camera system for multi-human 3D pose estimation is shown in Figure 1. First, distributed smart cameras perform 2D pose estimation on their captured video frames. The 2D human pose results are then streamed over the wireless network to a central edge server, where data association, cross-view matching, multi-view triangulation and post-processing are performed to synthesize the 2D pose results into 3D skeletons. We consider an energy-limited (e.g., battery-powered) multi-camera system that allows flexible deployment, and hence energy efficiency is a key design objective for usability. Although energy consumption is less of a concern for plug-in cameras, a stable power supply may not always be available, and wiring represents a major obstacle for outdoor deployment. In fact, increasingly many outdoor and in-field camera systems are powered by batteries and/or energy harvesting devices (e.g., solar panels) (Sutskever et al., 2016; Sutskever et al., 2016). In what follows, we provide experimental evidence to demonstrate the opportunities for designing an energy-efficient edge-assisted 3D pose estimation system.

### Power Consumption

To understand how much benefit in terms of energy saving can be obtained by selecting only a subset of cameras to perform 3D pose estimation, we conduct experiments to measure the power consumption of the smart camera devices in the Power-Intensive Mode (PIM) and the Power-Saving Mode (PSM). These two modes are defined as follows. **Power-Intensive Mode**: the camera captures the video frames, runs a DNN to obtain the 2D human pose result on the local device and sends the 2D pose estimation result to the edge server via wireless. **Power-Saving Mode**: the camera captures the frames but does not perform 2D human pose estimation or communicate the results with the edge server. A back-of-envelope estimate of the resulting battery lifetime extension is sketched below.
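To make the potential saving concrete, the following minimal Python sketch estimates the expected per-camera power under duty-cycled selection. The wattages are illustrative assumptions (encoding only the observation, reported below, that PSM draws roughly half the power of PIM), not the measured values of Figure 2.

```python
# Rough estimate of battery lifetime extension from camera selection.
# The wattages below are illustrative assumptions (PSM ~ half of PIM),
# not the measured values reported in Figure 2.
def avg_power(p_pim_w, p_psm_w, c_selected, n_cameras):
    """Average per-camera power if selections are balanced over time,
    so each camera is active for a fraction C/N of the slots."""
    duty = c_selected / n_cameras
    return duty * p_pim_w + (1.0 - duty) * p_psm_w

p = avg_power(p_pim_w=6.0, p_psm_w=3.0, c_selected=3, n_cameras=5)
print(f"{p:.2f} W on average -> lifetime x{6.0 / p:.2f}")  # 4.80 W, x1.25
```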
Figure 2 illustrates the power consumption profiles of two camera devices used in our testbed, one based on NVIDIA Jetson Xavier NX (Sutskever et al., 2016) and the other based on NVIDIA Jetson TX2 (Sutskever et al., 2016), measured using the power consumption monitors embedded in the NVIDIA Jetson devices. In particular, Figure 2(a) shows the total power consumption (VDD_IN) of Xavier NX in the two modes, and the breakdown in terms of processing units (CPU + GPU), base system (SOC) and other components (Others). TX2 offers finer-grained breakdown information and hence Figure 2(b) shows the total power consumption (VDD_IN) and the power consumption of GPU, CPU, WiFi, DDR, SOC and Others. As can be seen, on both devices, PSM reduces power consumption by roughly half compared to PIM, where the power saving mainly comes from the processing units and the wireless data transmission. These results suggest a great potential of letting some cameras enter the PSM to save energy, thereby extending the battery lifetime of the overall system. Note that the switching cost (mostly delay) between the two modes is negligible, as the DNN for 2D human pose estimation remains cached in memory even in the PSM.

Figure 1. Illustration of edge-assisted 3D pose estimation. Distributed cameras first perform 2D pose estimation on the captured image and then the edge server synthesizes 3D poses based on the estimated 2D poses.

Figure 2. Power consumption of cameras in different modes. (The power consumption monitors of NVIDIA Jetson Xavier NX and NVIDIA Jetson TX2 give measurement results in different granularity.)
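For reference, reading these embedded monitors amounts to a simple sysfs read. The node path below is a hypothetical placeholder, as it varies across Jetson models and L4T releases, so treat this as a sketch rather than a recipe.

```python
# Sketch of polling a Jetson's on-board INA3221 power monitor via sysfs.
# ASSUMPTION: the sysfs node path differs across Jetson models and L4T
# versions; the one below is a hypothetical example only.
import time

RAIL = "/sys/bus/i2c/drivers/ina3221x/0-0041/iio:device0/in_power0_input"

def read_power_mw(path=RAIL):
    with open(path) as f:
        return int(f.read().strip())      # instantaneous power in mW

def average_power_mw(duration_s=10.0, hz=10):
    samples = []
    deadline = time.time() + duration_s
    while time.time() < deadline:
        samples.append(read_power_mw())
        time.sleep(1.0 / hz)
    return sum(samples) / len(samples)
```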
### 3D Pose Estimation Accuracy

Multi-human 3D pose estimation solutions (Wang et al., 2016; Wang et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2018; Wang et al., 2019) in the computer vision literature focus on how to improve the estimation accuracy with a _given_ set of camera views. As a straightforward application of these solutions to an edge-assisted multi-camera system, all smart cameras by default perform 2D human pose estimation and send the results to the edge server (Wang et al., 2018). In the previous subsection, we showed that using only a subset of cameras can potentially achieve a significant energy consumption reduction, but what is unclear is whether such an energy saving comes at the cost of a lowered estimation accuracy. In this set of experiments, we investigate the impact of activating different subsets of cameras on the 3D pose estimation accuracy.

We find that it is not always necessary to use all cameras' 2D pose results to obtain an accurate 3D pose estimation result. In fact, a judiciously selected subset of cameras can even outperform the entire set of cameras in many cases. This is because errors in the 2D estimation results of cameras with severe occlusions can bring down the accuracy of the 3D pose estimation when these cameras are included in the 3D pose synthesis. To support the above findings, we illustrate the 3D pose estimation results on the Shelf dataset (Wang et al., 2017), which contains 3200 scenes from 5 camera views. Figure 3 shows the estimated 2D poses in all 5 camera views for a representative scene. As can be seen, depending on the relative positions and postures of the human objects in the scene, different camera views manifest different occlusion patterns. In particular, since the occlusions in Camera 1 and Camera 4 are more severe, the respective 2D pose estimation results also contain larger errors. Furthermore, a visual inspection of Figure 4 shows that the 3D poses reconstructed using only the 2D results of Cameras 0, 2 and 3 incur a much smaller error than those reconstructed using all cameras' 2D results.

Figure 5 further provides numerical results to demonstrate that a subset of cameras often outperforms the entire set of cameras for 3D pose estimation. We measure the 3D pose estimation performance by both the Mean Per Joint Position Error (MPJPE) and the Percentage of Correct Parts (PCP) between the estimated 3D pose and the ground truth, following the same evaluation protocol in the literature (Wang et al., 2016; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). For each scene, we test all camera combinations (an exhaustive search sketched below) and report the average best estimation results for different numbers of cameras in the selected subset in Figure 5(a). We observe that the average estimation performance in terms of both MPJPE and PCP is best when a subset of 3 cameras is selected to process the scene, rather than using all 5 cameras. Note, however, that the best subset of 3 cameras is not static but changes over time depending on the scene. In Figure 5(b), we show that the best combination (i.e., the highest PCP) of a 3-camera subset varies across scenes over time. This suggests that although deploying only 3 cameras reduces the total energy consumption compared to deploying 5 cameras, it neither extends the battery lifetime of the system (because every camera is in the PIM all the time) nor guarantees an improved 3D pose estimation accuracy. On the contrary, we advocate a design where a relatively large number of cameras are deployed (considering the increasingly affordable device cost and the deployment flexibility due to wireless transmission and battery power supply), but through _adaptive_ camera selection, the system battery lifetime can be extended and the 3D pose estimation accuracy can be improved.

Figure 3. One representative scene in the Shelf dataset and the estimated 2D poses in five camera views. Due to occlusion, the estimation results in Camera 1 and Camera 4 have larger errors.

Figure 4. 3D pose estimation results obtained on different combinations of cameras. The estimation error is smaller with only a subset of cameras in this specific case.

Figure 5. Accuracy performance of different camera sets. (a) 3-camera subsets achieve the highest PCP and the lowest MPJPE. (b) The best subset(s) of 3 cameras that achieve the highest PCP varies across scenes (time slots).
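The exhaustive search referenced above can be sketched as follows; `estimate_3d` and `pcp` are hypothetical stand-ins for the triangulation pipeline of Section 3.1 and the PCP metric, not functions from our released code.

```python
# Enumerate all camera subsets of a given size and keep the one with the
# highest PCP for a scene. estimate_3d() and pcp() are hypothetical
# placeholders for the triangulation pipeline and the accuracy metric.
from itertools import combinations

def best_subset(scene_2d_poses, gt_3d, n_cameras=5, subset_size=3):
    best_score, best_cams = -1.0, None
    for cams in combinations(range(n_cameras), subset_size):
        est = estimate_3d(scene_2d_poses, cams)   # triangulate from subset
        score = pcp(est, gt_3d)                   # Percentage of Correct Parts
        if score > best_score:
            best_score, best_cams = score, cams
    return best_cams, best_score
```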
## 3. System Design

In this section, we design E\({}^{3}\)Pose, an energy-efficient edge-assisted multi-camera system that enables real-time multi-human 3D pose estimation based on the core idea of adaptive camera selection. We consider a typical setup where the edge server is a relatively powerful machine powered by a stable power source (e.g., the power grid), while the cameras are lightweight and battery-powered to enable flexible deployment. A typical number of cameras is 5 or 6, but more cameras can also be deployed as the cost of cameras becomes increasingly lower, in order to extend the system battery lifetime while achieving a high 3D pose estimation accuracy. Figure 6 shows the system architecture and workflow of E\({}^{3}\)Pose. In what follows, we first provide an overview of the data plane and the control plane of E\({}^{3}\)Pose and then describe the design details.

Figure 6. The system architecture and workflow of E\({}^{3}\)Pose.

**Data Plane**. For each input frame captured by the camera, the camera either runs the DNN to estimate the 2D human poses if it is in the PIM (the _Active_ switch is Yes) or skips processing this frame if it is in the PSM (the _Active_ switch is No). The mode switch is controlled by the camera scheduler module residing at the edge server. Once the cameras in the PIM finish the 2D pose estimation, they send the results to the edge server via wireless for 3D pose fusion. Note that compared with the raw input images, the 2D poses are represented by just several pixel coordinates of human joints. Therefore, sending the 2D pose results to the edge server incurs a very small wireless transmission cost. Then, using the received 2D poses from the smart cameras in the PIM, the edge server matches the human poses across these 2D poses and synthesizes the matched multi-view 2D poses into 3D poses. The edge server also saves the 3D poses of each time slot for future user query.

**Control Plane**. The camera scheduler runs a prediction module to predict the occlusion in each camera view of a scene in a future time slot \(t^{\prime}\). This is done by first predicting the 3D poses at \(t^{\prime}\) based on the 3D poses in the current time slot \(t\) and the past time slots. The predicted 3D poses are then projected onto each camera's 2D view. Next, bounding boxes on the projected 2D poses are generated, which are used to calculate the occlusion of persons in each camera's view. Based on the calculated occlusion and the energy state of the cameras, the camera scheduler makes the control decision to select the subset of cameras (i.e., turn the _Active_ switch on/off at time \(t^{\prime}\)). Note that we intentionally make the scheduling decisions ahead of time to ensure that the control signals can arrive at the cameras before time \(t^{\prime}\). Clearly, the time advance \(t^{\prime}-t\) must be larger than the total delay due to prediction computation and control signal transmission.

_Remark on Terminology_: We clarify the difference between "3D Pose Estimation" and "3D Pose Prediction" used in this paper. 3D Pose Estimation is the main function of E\({}^{3}\)Pose, which refers to using the 2D poses estimated by individual cameras on their captured images from different perspectives of the scene to estimate the 3D poses. As part of E\({}^{3}\)Pose, the 3D Pose Prediction module is designed to obtain a rough idea of the 3D poses in a future scene so that the occlusion information of the camera views can be calculated for that scene. However, since the purpose of 3D pose prediction is merely to calculate/predict occlusion, its accuracy does not need to be as high as that of 3D pose estimation.

### Data Plane

The data plane in E\({}^{3}\)Pose has two main modules: cross-view matching and multi-view triangulation. Cross-view matching collects the 2D poses from the cameras and determines which 2D poses from different cameras are indeed the same person. Then, based on the matched 2D poses, multi-view triangulation fuses the 2D poses into the 3D poses.

#### 3.1.1. **Cross-view matching**

Consider an E\({}^{3}\)Pose system containing \(N\) smart cameras with known projection matrices \(P_{n}\in\mathbb{R}^{3\times 4},\forall n=1,...,N\), which can be easily obtained during system setup. We also assume that all cameras have the same frame rate.
For convenience, we consider a time-slotted system where in each time slot a frame is captured by each camera. We assume that the clocks of the smart cameras and the edge server are software-synchronized and that each 2D pose message includes a timestamp representing the capture time of the corresponding frame. Therefore, frames from cameras in the same time slot are image captures of the same scene at the same time from different perspectives. Before performing 3D pose synthesis, the detected 2D poses must be matched across views so that the 2D poses in all views belonging to the same person are found. We use an efficient iterative greedy matching algorithm proposed in (Krizhevsky et al., 2017) to associate the 2D poses across camera views. The key idea of this method is to make sure that the associated 2D poses from different views satisfy the epipolar constraint, i.e., a human body joint in one view should lie on the epipolar line associated with its correspondence in another view. For the matching, we select a camera as the starting point and choose all 2D human poses in this camera view as person candidates. We then iterate over all the other cameras and match their 2D poses with the current list of person candidates in a greedy manner, using the distance between the epipolar lines and the joint locations (a sketch is given below).
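A minimal sketch of this greedy association follows, assuming a hypothetical `epipolar_distance` helper that measures how far a joint in one view lies from the epipolar line induced by its candidate correspondence in another view (computable from the known projection matrices).

```python
# Greedy cross-view association sketch. epipolar_distance() is a
# hypothetical helper returning the average joint-to-epipolar-line
# distance between two 2D poses seen from two different cameras.
import numpy as np

def greedy_match(views, thresh=20.0):
    """views: list over cameras of lists of 2D poses (J x 2 arrays).
    Returns person candidates, each a dict {camera_index: pose}."""
    persons = [{0: p} for p in views[0]]          # seed with the first camera
    for cam in range(1, len(views)):
        for pose in views[cam]:
            # distance of this pose to each existing person candidate
            dists = [
                np.mean([epipolar_distance(pose, q, cam, c)
                         for c, q in cand.items()])
                for cand in persons
            ]
            best = int(np.argmin(dists)) if dists else -1
            if best >= 0 and dists[best] < thresh and cam not in persons[best]:
                persons[best][cam] = pose         # same person, new view
            else:
                persons.append({cam: pose})       # new person candidate
    return persons
```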
#### 3.1.2. **Multi-view triangulation**

Once cross-view matching is done, each human object is associated with a set of 2D poses. Consider a representative human object in a particular time slot, who is associated with a set of 2D poses \(\mathbf{x}_{n}=\{x_{n}^{j}\in\mathbb{R}^{2};j=1,2,...,J\},\forall n\in\mathcal{H}\), where \(\mathcal{H}\) is the set of cameras that are selected and contain the human object in that time slot, and \(J\) is the number of human joints used to represent the human pose skeleton. Each point \(x_{n}^{j}=[u_{n}^{j},v_{n}^{j}]\) in the 2D pose \(\mathbf{x}_{n}\) represents the 2D pixel coordinate of the \(j\)-th human body joint. These 2D poses are used to synthesize the 3D pose of the human object, denoted by \(\mathbf{X}=\{X^{j}\in\mathbb{R}^{3};j=1,2,...,J\}\). Specifically, the relation between \(\{\mathbf{x}_{n}\}_{n\in\mathcal{H}}\) and \(\mathbf{X}\) is as follows:

\[\omega^{j}=\left(\frac{c_{n}^{j}}{\|(A^{j})_{n}^{1}\|},\frac{c_{n}^{j}}{\|(A^{j})_{n}^{2}\|}\right)_{n\in\mathcal{H}}\in\mathbb{R}^{2|\mathcal{H}|} \tag{1}\]

\[(\omega^{j}\circ A^{j})\bar{X}^{j}=0,\ j=1,\ldots,J \tag{2}\]

\[A^{j}=\begin{bmatrix}u_{n}^{j}p_{n,3}-p_{n,1}\\ v_{n}^{j}p_{n,3}-p_{n,2}\end{bmatrix}_{n\in\mathcal{H}}\in\mathbb{R}^{2|\mathcal{H}|\times 4} \tag{3}\]

where \(\bar{X}^{j}\in\mathbb{R}^{4}\) is the homogeneous coordinate vector of \(X^{j}\), \(p_{n,r}\) denotes the \(r\)-th row of the projection matrix \(P_{n}\in\mathbb{R}^{3\times 4}\), \(c_{n}^{j}\) is the heatmap confidence value of joint \(j\) in camera \(n\)'s 2D human pose estimation, \((A^{j})_{n}^{1}\) and \((A^{j})_{n}^{2}\) are the rows of \(A^{j}\) corresponding to camera \(n\), and \(\circ\) is the Hadamard product. When calculating \(\omega^{j}\), \(c_{n}^{j}\) is divided by the \(L_{2}\)-norm of the corresponding row of \(A^{j}\) to compensate for the different image locations of the joints in each view.

According to the Direct Linear Transform (DLT) algorithm (DLT, 2015), if there are at least two views, Equation (2) is overdetermined and can be solved by a singular value decomposition (SVD) of \(\omega^{j}\circ A^{j}\), taking the unit singular vector corresponding to the smallest singular value of \(\omega^{j}\circ A^{j}\) as the solution for \(\bar{X}^{j}\). Finally, \(\bar{X}^{j}\) is divided by its fourth coordinate to obtain the 3D joint \(X^{j}=\bar{X}^{j}/(\bar{X}^{j})_{4}\).
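Concretely, the weighted DLT step can be sketched in a few lines of NumPy (function and variable names are ours):

```python
# Weighted DLT triangulation of one joint from multiple views, following
# Equations (1)-(3): stack two rows per camera, scale each row by the
# heatmap confidence divided by the row's L2 norm, and take the right
# singular vector of the smallest singular value.
import numpy as np

def triangulate_joint(uv, conf, P):
    """uv: (H, 2) pixel coords of the joint in the selected views.
    conf: (H,) heatmap confidences. P: list of H (3, 4) projection matrices."""
    rows = []
    for (u, v), c, Pn in zip(uv, conf, P):
        r1 = u * Pn[2] - Pn[0]
        r2 = v * Pn[2] - Pn[1]
        rows.append((c / np.linalg.norm(r1)) * r1)   # Equation (1) weighting
        rows.append((c / np.linalg.norm(r2)) * r2)
    A = np.stack(rows)                      # (2H, 4) weighted system, Eq. (2)
    _, _, Vt = np.linalg.svd(A)
    X_bar = Vt[-1]                          # smallest singular value's vector
    return X_bar[:3] / X_bar[3]             # back from homogeneous coords
```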
### Control Plane

At the core of the control plane is the camera scheduler that selects the subset of cameras to perform multi-human 3D pose estimation for each scene. Consider the scheduling problem for the scene in time slot \(t\). We use the Intersection over Union (IoU) of the bounding boxes, denoted by \(\text{IoU}_{n}^{t}\) for camera \(n\) in time slot \(t\), to quantify the occlusion in camera \(n\)'s view. Originally, IoU is used to measure the overlapping area of the predicted bounding boxes and the ground truth in object detection. Here, we repurpose IoU to measure the overlapping area among the predicted bounding boxes of different human objects in the camera view. Specifically, let \(R_{n,1}^{t},\ldots,R_{n,K_{n}^{t}}^{t}\) be the areas of the \(K_{n}^{t}\) bounding boxes in camera \(n\)'s view in time slot \(t\). Then, \(\text{IoU}_{n}^{t}\) can be calculated as

\[\text{IoU}_{n}^{t}=\frac{1}{\binom{K_{n}^{t}}{2}}\sum_{k_{1}=1}^{K_{n}^{t}}\sum_{k_{2}=k_{1}+1}^{K_{n}^{t}}\frac{R_{n,k_{1}}^{t}\cap R_{n,k_{2}}^{t}}{R_{n,k_{1}}^{t}\cup R_{n,k_{2}}^{t}} \tag{4}\]

where \(\binom{K_{n}^{t}}{2}\) is the number of 2-combinations of \(K_{n}^{t}\) elements. Note that \(\text{IoU}_{n}^{t}\in[0,1]\), with the value 0 representing no occlusion and the value 1 representing complete overlap.

Let \(s_{n}^{t}\in\{0,1\}\) be the binary selection variable for camera \(n\) in time slot \(t\), where \(s_{n}^{t}=1\) means that camera \(n\) is selected and \(s_{n}^{t}=0\) otherwise. The goal of the camera scheduler is to select a subset of \(C\) cameras so that the total occlusion of the selected cameras is minimized. Ideally, the objective function should be to minimize the 3D pose estimation error. However, it is extremely difficult, if not impossible, to construct a 3D pose estimation error function depending on the selected cameras or even their estimated 2D poses. As such, we minimize the total occlusion of the selected cameras instead, which is much easier to calculate given the 2D bounding boxes in each camera view. In our experiments on the Shelf dataset (Shi et al., 2017) and the Panoptic dataset (Peng et al., 2019), we found that the 3D pose estimation error is monotonically related to the total occlusion, thereby justifying the choice of our objective function. Specifically, Figure 7 illustrates the MPJPE as a function of the total occlusion for different numbers of cameras in the selected subset. In all cases, the MPJPE monotonically increases with the IoU.

Figure 7. Relation of the occlusion (i.e., IoU) and the 3D pose estimation accuracy (i.e., MPJPE).

Formally, the camera scheduler aims to solve the following long-term optimization problem over \(T\) time slots:

\[\begin{split}\min_{\mathbf{s}^{1},\ldots,\mathbf{s}^{T}}&\ \frac{1}{T}\sum_{t=1}^{T}\sum_{n=1}^{N}\text{IoU}_{n}^{t}\cdot s_{n}^{t}\\ s.t.&\ \frac{1}{T}\sum_{t=1}^{T}s_{n}^{t}\leq E_{n},\ n=1,\ldots,N\\ &\ \sum_{n=1}^{N}s_{n}^{t}=C,\ \forall t\end{split} \tag{5}\]

where the first constraint is a long-term energy constraint determined by each camera's (normalized) battery energy capacity \(E_{n}\), and the second constraint specifies the number of cameras to select in each time slot. Note that \(E_{n}\) must satisfy \(E_{n}\geq C/N\).

There are two main challenges that impede the derivation of the optimal solution of the scheduling problem. Firstly, although IoU is a meaningful metric to measure occlusion, it can only be calculated when the 2D bounding boxes in a camera view are available. This is a chicken-and-egg problem, because the 2D bounding boxes cannot be obtained without having the camera perform the 2D pose estimation on the frame in the first place. Secondly, optimally solving the above optimization problem requires information about all future scenes, which is impossible to know in advance. Moreover, the long-term energy constraint couples the camera selection decisions across time: consuming more energy in the current time slot reduces the available energy for future use. Therefore, a greedy algorithm that myopically minimizes the occlusion in the current time slot may create processing hotspots on some cameras, thereby quickly depleting their battery energy and reducing the battery lifetime of the overall system. Next, we describe our designs to cope with these challenges.

#### 3.2.1. **How to calculate IoU without performing 2D pose estimation?**

To calculate the IoUs before cameras are selected to perform 2D pose estimation on their captured images, our idea is to let the edge server _predict_ the 3D poses based on the past 3D pose estimation results, and then project the predicted 3D poses onto each camera's view to obtain the 2D poses. Using the projected 2D poses (and their corresponding bounding boxes), the camera scheduler then calculates the IoU for each camera. Because the IoU calculation is executed entirely by the edge server and does not require any computation or message exchange on the camera side, the cameras incur no extra power consumption and the transmission delay is eliminated.

**Fast 3D Pose Prediction**. We design an attention-based LSTM as the 3D pose predictor because of its powerful capability of learning rich spatial and temporal features from a series of historical 3D poses. In addition, it incurs a very low prediction delay: experiments on our testbed report a 5.6 ms prediction delay on average. In time slot \(t\), the camera scheduler aims to predict the real-world coordinates of all 3D poses in the scene in a future time slot \(t+\tau\), given the estimated 3D poses of the past \(M\) time slots. The time advance \(\tau\) accounts for the delay incurred during prediction and transferring the control signals to the cameras, and ensures that the cameras are informed of the selection decision before the scene of interest comes up. Moreover, \(\tau\) need not be a fixed value but can adapt to the deployment scenario and the dynamic wireless channel conditions. Thus, the predictor may predict the 3D poses several time slots beyond the immediate next time slot.
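A minimal PyTorch sketch of this predictor's structure is given below; the layer sizes and the exact form of the attention are illustrative assumptions rather than the precise architecture of Figure 8.

```python
# Illustrative encoder-attention-decoder predictor with the tau-step
# feedback loop described above. Sizes and attention form are assumptions.
import torch
import torch.nn as nn

class PosePredictor(nn.Module):
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)        # scores each encoder state
        self.decoder = nn.LSTMCell(dim + hidden, hidden)
        self.head = nn.Linear(hidden, dim)      # fully-connected regressor

    def forward(self, past, tau):
        """past: (B, M, dim) flattened historical 3D poses -> (B, tau, dim)."""
        enc, _ = self.encoder(past)                     # (B, M, hidden)
        w = torch.softmax(self.attn(enc), dim=1)        # attention weights
        ctx = (w * enc).sum(dim=1)                      # context vector
        h = past.new_zeros(past.size(0), ctx.size(-1))
        c = h.clone()
        x, outs = past[:, -1], []
        for _ in range(tau):                            # feed prediction back
            h, c = self.decoder(torch.cat([x, ctx], dim=-1), (h, c))
            x = self.head(h)
            outs.append(x)
        return torch.stack(outs, dim=1)
```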
Mathematically, we consider a loss minimization problem for training the attention-based LSTM network, denoted by \(f(\cdot;\theta)\) and parameterized by \(\theta\), as follows:

\[\begin{split}\min_{\theta}\ \mathcal{L}(\hat{\mathcal{X}},\mathcal{X})&=\min_{\theta}\ \sum_{l=1}^{\tau}(\hat{X}^{t+l}-X^{t+l})^{2}\\ \text{s.t.}\ \ \hat{\mathcal{X}}&=f(\{X^{t-M+1},X^{t-M+2},\ldots,X^{t}\};\theta)\end{split} \tag{6}\]

where \(\hat{\mathcal{X}}=\{\hat{X}^{t+1},\ldots,\hat{X}^{t+\tau}\}\) denotes the predicted 3D poses and \(\mathcal{X}=\{X^{t+1},\ldots,X^{t+\tau}\}\) the ground-truth 3D poses. Recently, attention-based LSTM models (Zhu et al., 2017; Wang et al., 2018; Wang et al., 2019) have shown their effectiveness in predicting time series data. As shown in Figure 8, our attention-based LSTM network consists of three modules: an encoder, an attention module and a decoder. The encoder takes the spatial and temporal information (i.e., the 3D poses' coordinates in the past \(M\) time slots) of each human pose \(\{X^{t-M+1},...,X^{t}\}\) as input, and encodes it into a feature map by passing it through the input attention module and the LSTM module. Subsequently, the attention module, which is a fully-connected layer, is adopted to select the most relevant encoded features and generate the context vector. Finally, the decoder processes the context vector associated with the historical 3D poses through a fully-connected layer, an LSTM model and a fully-connected regressor to output the final 3D pose prediction. Traditional dual-stage attention-based LSTM models usually only make the prediction for the immediate next time slot \(t+1\). In our design, in order to make the prediction for a future time slot \(t+\tau\), we feed the prediction \(\hat{X}\) back to the decoder \(\tau\) times to yield the predicted 3D poses \(\hat{X}^{t+\tau}\) in time slot \(t+\tau\).

Figure 8. The adopted attention-based LSTM network. We modify this network by adding a feedback loop to enable the prediction for the scene at slot \(t+\tau\).

**Projection.** Next, the predicted 3D poses \(\hat{X}^{t+\tau}\) are projected onto each camera \(n\)'s view to obtain the predicted 2D poses \(\hat{x}^{t+\tau}_{n}\) using camera \(n\)'s projection matrix \(P_{n}\) as follows:

\[q[\hat{x}^{t+\tau}_{n},1]^{\top}=P_{n}[\hat{X}^{t+\tau},1]^{\top} \tag{7}\]

where \(q\) is the scaling constant between the pixel coordinate system and the world coordinate system.

**Occlusion Calculation**. With the projected 2D poses, the camera scheduler generates the bounding boxes of each 2D pose in each camera's view. However, because of the discrepancy between the 2D pose and the actual human body in an image, we slightly expand (by 10% in our implementation) the bounding box of the 2D pose to obtain the bounding box of the human object before calculating the occlusion in each camera view. Using the expanded bounding boxes, the IoU in each camera view is calculated by Equation (4), as sketched below.
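Put together, the projection and occlusion computation amounts to the following sketch (helper names are ours; the 10% expansion mirrors the implementation choice above):

```python
# Server-side occlusion prediction: project predicted 3D joints into a
# camera view (Equation (7)), build a 10%-expanded bounding box per person,
# and average the pairwise box IoUs (Equation (4)).
from itertools import combinations
import numpy as np

def project(X, P):
    """X: (J, 3) predicted 3D joints; P: (3, 4) projection matrix."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    uvq = (P @ Xh.T).T
    return uvq[:, :2] / uvq[:, 2:3]              # divide out the scale q

def bbox(uv, expand=0.10):
    lo, hi = uv.min(0), uv.max(0)
    pad = expand * (hi - lo)
    return np.concatenate([lo - pad, hi + pad])  # [x1, y1, x2, y2]

def pair_iou(a, b):
    x1, y1 = np.maximum(a[:2], b[:2])
    x2, y2 = np.minimum(a[2:], b[2:])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def view_occlusion(poses_3d, P):
    boxes = [bbox(project(X, P)) for X in poses_3d]
    pairs = list(combinations(boxes, 2))
    return sum(pair_iou(a, b) for a, b in pairs) / len(pairs) if pairs else 0.0
```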
#### 3.2.2. **How to make the camera selection decisions without future information?**

We leverage the Lyapunov optimization technique (Zhu et al., 2017) to make camera selection decisions without knowing far-future information, balancing the long-term 3D estimation performance and the energy consumption. Specifically, E\({}^{3}\)Pose converts the long-term optimization problem (5) into a sequence of per-slot optimization problems that can be easily solved using the predicted occlusion information and the current energy state. To handle the long-term energy constraint that couples the camera selection decisions across time slots, we construct a (virtual) energy deficit queue for each camera to guide the camera selection decisions towards the long-term energy constraint. Let \(q^{t}_{n}\) denote the energy deficit queue length of camera \(n\) in time slot \(t\), with \(q^{0}_{n}=0\), evolving as

\[q^{t+1}_{n}=\max\{q^{t}_{n}-E_{n},0\}+s^{t}_{n} \tag{8}\]

Thus \(q^{t}_{n}\) indicates the deviation of the current energy consumption from the long-term energy constraint. Following the drift-plus-penalty framework in Lyapunov optimization (Zhu et al., 2017), in time slot \(t\) the camera scheduler makes the selection decision for the scene in time slot \(t+\tau\) by solving the following problem:

\[\begin{split}\min_{\mathbf{s}^{t+\tau}}&\ \sum_{n=1}^{N}\left(V\cdot\text{IoU}^{t+\tau}_{n}\cdot s^{t+\tau}_{n}+q^{t+\tau}_{n}\cdot s^{t+\tau}_{n}\right)\\ s.t.&\ \sum_{n=1}^{N}s^{t+\tau}_{n}=C\end{split} \tag{9}\]

The optimal solution can be easily obtained by first calculating \(V\cdot\text{IoU}^{t+\tau}_{n}+q^{t+\tau}_{n}\) for each camera \(n\), and then selecting the \(C\) cameras with the smallest values, as sketched below. The first term in the objective function (9) minimizes the occlusion of the selected cameras and the second term is added to satisfy the long-term energy constraint. The positive control parameter \(V\) adjusts the trade-off between these two purposes. In particular, by considering the additional term \(q^{t+\tau}_{n}\), the camera scheduler takes into account the (time-varying) energy deficit of the camera: when \(q^{t+\tau}_{n}\) is larger, minimizing the energy deficit is more critical and hence cameras with a larger \(q^{t+\tau}_{n}\) will be less likely to be selected. Thus, the scheduler follows the philosophy of "if the energy budget is violated, use less energy", and hence the long-term energy constraint can be satisfied in the long run without foreseeing all future information. We summarize the camera scheduling algorithm in Algorithm 1.

```
Given the control parameter V; initialize the energy deficit queues q^0 = 0.
for t = 0 to t = M-1 do
    Estimate the 3D poses using all cameras.
end for
for t = M to t = T do
    Estimate the 3D poses based on the selected cameras.
    Predict the 3D poses in time slot t+tau.
    Solve problem (9) to get the camera selection decision for time slot t+tau.
    Update the deficit q^{t+tau} for all cameras:
        q_n^{t+tau+1} = max{q_n^{t+tau} - E_n, 0} + s_n^{t+tau}
    Send the camera selection decision to each camera.
end for
```

**Algorithm 1** Camera scheduling in E\({}^{3}\)Pose
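As the following sketch shows, the per-slot decision reduces to scoring and sorting (variable names are ours):

```python
# Per-slot selection (problem (9)) plus the deficit update (Equation (8)).
import numpy as np

def select_cameras(iou_pred, q, C, V):
    """iou_pred, q: length-N predicted occlusions and deficit queues."""
    scores = V * np.asarray(iou_pred) + np.asarray(q)
    s = np.zeros(len(scores), dtype=int)
    s[np.argsort(scores)[:C]] = 1       # C cameras with the smallest scores
    return s

def update_deficits(q, s, E):
    """Penalize cameras that exceed their (normalized) energy budget E_n."""
    return np.maximum(np.asarray(q) - np.asarray(E), 0.0) + s

q = np.zeros(5)
s = select_cameras(iou_pred=[0.20, 0.05, 0.40, 0.10, 0.30], q=q, C=3, V=10.0)
q = update_deficits(q, s, E=[0.8] * 5)   # here s selects cameras 0, 1 and 3
```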
### Discussion on the Choice of Method

The workflow of E\({}^{3}\)Pose is shown in Figure 9(a). A key innovation of our proposed E\({}^{3}\)Pose system is to predict the future 3D poses and then project the prediction results onto each individual camera view to obtain the 2D bounding boxes that enable the IoU calculation. This is because the IoU must be calculated before the 2D poses in the scene of interest are estimated. In what follows, we discuss the rationale of our proposed prediction method.

Figure 9. Comparison of different system frameworks.

**Why not directly perform 2D bounding box prediction?** Since the purpose of 3D pose prediction is to calculate the IoU in the 2D camera view, an alternative approach is to directly predict the future 2D bounding boxes using the current 2D bounding boxes in each camera view, without performing 3D pose prediction, using, e.g., 2D motion vector techniques (Zang et al., 2018; Wang et al., 2019). We argue that this approach is not suitable for real-time multi-human 3D pose estimation in a multi-camera system. Consider the two possible implementations of this alternative approach. In the first implementation, shown in Figure 9(b), each camera performs the bounding box prediction and sends the prediction results to the edge server, which then makes the camera selection decision. In this implementation, not only is the camera assigned the extra work of bounding box prediction (which entails extra energy consumption and delay), but more wireless network bandwidth is also consumed because of the extra message exchange. Moreover, because the cameras need to wait for the selection decisions of the current scene to come back, the frame rate is limited by the round-trip transmission time and the edge server processing speed. In the second implementation, shown in Figure 9(c), each camera performs the bounding box prediction, calculates the IoU, and decides by itself whether to perform 2D pose estimation based on the calculated IoU (e.g., using a threshold method). In this implementation, camera selection is a fully distributed decision and hence fewer control messages are exchanged. However, because of the lack of central coordination, there is no guarantee on the number of cameras that perform 2D pose estimation, nor on an efficient trade-off between the 3D pose estimation accuracy and the system energy efficiency. For example, it is entirely possible that not a single camera decides to perform 2D pose estimation due to the autonomous decision making, resulting in a 3D pose estimation failure.

**Why not use 3D motion vectors to predict 3D poses?** To predict the 3D poses, one may alternatively use a 3D motion vector-based predictor instead of our proposed attention-based LSTM. However, our experiments show that 3D motion vectors can achieve a good prediction result for the immediate next time slot, but fail to perform well for time slots further into the future. This is because 3D motion vectors represent a transient motion trajectory, and a large error arises if this transient motion trajectory is extrapolated to obtain the future 3D poses after a relatively long period of time. Hence, E\({}^{3}\)Pose uses a light-weight attention-based LSTM to predict the future 3D human poses, which can learn the long-range motion dependency from historical 3D human poses and use it to obtain a better prediction result. To further compare the 3D pose prediction performance of 3D motion vectors and the attention-based LSTM, we incorporate 3D motion vectors into E\({}^{3}\)Pose and use it as a baseline in Section 5.3.

**Why not use the results of 3D pose prediction as the final 3D human poses?** E\({}^{3}\)Pose uses 3D pose prediction to calculate the occlusions in the 2D camera views, thereby guiding the camera selection. Thus, 3D pose prediction only generates coarse 3D poses, whose accuracy is not comparable with that of the 3D poses estimated using the actual images of the scene.
Due to the use of prediction-based camera selection, E\({}^{3}\)Pose is also able to achieve a higher frame rate.

## 4. System Implementation

We implement a prototype of E\({}^{3}\)Pose in Python for easy integration with deep learning modules. Our implementation adopts a multi-thread parallel design with four camera threads and four edge server threads, which we introduce in more detail below.

**Camera Threads.** The Video Capture thread uses OpenCV (Vaswani et al., 2017) to capture the live video stream from the on-device camera and puts the frames into the frame queue. In the 2D Pose Estimation thread, we adopt the CNN architecture in (Vaswani et al., 2017) with the significantly more lightweight ResNet18 (Vaswani et al., 2017) as the backbone feature extractor to obtain the 2D poses. The 2D pose estimation model is first trained offline on the COCO dataset (Vaswani et al., 2017) using PyTorch. After training, the model is converted to a TensorRT-compatible model (Vaswani et al., 2017), since TensorRT enables on-device machine learning inference with low latency and a small binary size. The 2D Pose Estimation thread gets frames from the global frame queue and runs 2D pose estimation if the camera is selected for the current frame. The detected 2D poses are then put into the 2D pose queue. The network communication is implemented using the Socket library (Vaswani et al., 2017) in Python in two separate threads: the Client Sending thread and the Client Receiving thread. The Client Sending thread sends the 2D pose results to the edge server via wireless, and the Client Receiving thread receives the camera selection decision from the edge server and updates the camera mode switch. A sketch of the camera-side loop is given after this section.

**Edge Server Threads.** In the 3D Pose Fusion thread, we use the efficient iterative greedy matching algorithm proposed in (Garshan et al., 2016) to associate the 2D poses across multiple camera views, and use the triangulation method in computer vision to build the 3D human poses. The 3D pose results are stored in the 3D pose queue. In the Camera Selection thread, we modify the attention-based LSTM network (Vaswani et al., 2017) for 3D pose prediction and solve the optimization problem (9) to obtain the camera selection decision. The network communication at the edge server is also separated into two threads: the Server Sending thread and the Server Receiving thread. The Server Sending thread gets the camera selection result from the decision queue and sends it to each camera. The Server Receiving thread receives the 2D pose results from the cameras and stores them in the multi-view 2D pose queue.
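The camera-side loop can be sketched as follows. This is a hypothetical skeleton, not our released code: the server address, message format and `estimate_2d()` (standing in for the TensorRT pose model) are placeholders, and the `active` flag is assumed to be toggled by the Client Receiving thread.

```python
# Hypothetical sketch of the camera-side loop: capture frames with OpenCV,
# run 2D pose inference only when this camera is selected (PIM), and ship
# the timestamped results to the edge server over a TCP socket.
import json, socket, time
import cv2

SERVER = ("192.168.1.10", 9000)   # placeholder edge-server address
active = True                      # toggled by the Client Receiving thread

def camera_loop(cam_id=0):
    cap = cv2.VideoCapture(cam_id)
    sock = socket.create_connection(SERVER)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if active:                              # Power-Intensive Mode
            poses = estimate_2d(frame)          # hypothetical TensorRT model
            msg = {"cam": cam_id, "ts": time.time(), "poses": poses}
            sock.sendall(json.dumps(msg).encode() + b"\n")
        # in PSM we keep capturing but skip inference and transmission
```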
## 5. Evaluation

### Experiment Setup

**Hardware Testbed**. We build a prototype of E\({}^{3}\)Pose on a hardware testbed consisting of five cameras and one edge server. We use two NVIDIA Jetson Xavier NX (referred to as Xavier from now on) devices and three NVIDIA Jetson TX2 (referred to as TX2 from now on) devices as the smart cameras. Each camera is equipped with a Logitech C270 HD Webcam. To remove the impact of DVFS (Dynamic Voltage and Frequency Scaling) (Vaswani et al., 2017) and allow repeatable experiments, the CPU and the GPU are set to their highest frequencies on all the camera devices. A Dell desktop computer is employed as the edge server, which is equipped with an Intel Core i7-8700K CPU at 3.70GHz\(\times\)12, two NVIDIA GeForce GTX 1080 Ti GPUs, and 11 GB memory. The smart cameras and the edge server are connected wirelessly by WiFi. Figure 10 shows a picture of the server and a camera device on our testbed.

Figure 10. Evaluation testbed. Left: the edge server and the camera devices. Right: the setup of a camera and the runtime monitor.

**Datasets**. Two public multi-camera datasets are used to evaluate the performance of E\({}^{3}\)Pose. **Shelf** (Vaswani et al., 2017): the Shelf dataset contains 3200 video frames captured by five cameras in an indoor environment, with four persons interacting with each other. We follow the previous works (Vaswani et al., 2017; Garshan et al., 2016; Garshan et al., 2016; Vaswani et al., 2017; Vaswani et al., 2017) to evaluate the accuracy of 3D pose estimation. **Panoptic** (Vaswani et al., 2017): the Panoptic dataset is captured in a closed studio with 480 VGA cameras and 31 HD cameras. The hundreds of cameras are distributed over the surface of a geodesic sphere of about 5 meters in width and 4 meters in height. The studio is designed to simulate and capture the social activities of multiple people. We use the same training and testing sequences captured by the same set of five HD cameras (3, 6, 12, 13, 23) as in (Vaswani et al., 2017; Vaswani et al., 2017) for evaluation. **Live Videos from Cameras**: we also develop a test case that runs E\({}^{3}\)Pose on our hardware testbed over the live videos captured by the Logitech C270 HD Webcams. The number of people in the live videos ranges from two to three. For offline training of the attention-based LSTM network in this case, five video clips are captured from different views and recorded. At runtime, we use the live videos to evaluate the power consumption of the system.

**Training the 3D Pose Prediction Model**. The attention-based LSTM model for 3D pose prediction is trained offline. Although the Shelf dataset has 3200 frames, only 280 frames have the 3D human pose ground truth. As a result, we do not have enough ground truth data for training. Similar issues also exist for the Panoptic dataset. To overcome this issue, we create an expanded training dataset using the estimated 3D poses as the ground truth. Specifically, for each scene in each time slot, 3D human poses are estimated (in an offline fashion) by all possible camera combinations. Each training input thus consists of the 3D poses of all human objects in the scene for \(M\) consecutive time slots, where each human object's 3D pose in a time slot is selected randomly from the expanded training dataset of the corresponding time slot. For our live videos, due to the lack of ground truth, we pick the subset of cameras with small occlusion and use their estimated 3D poses as the ground truth. Note that the attention-based LSTM can also be trained/updated online as more sequences of 3D pose estimates are generated by the system over time.

**Baselines.** E\({}^{3}\)Pose is compared with the following baselines. **Select-All (SA)**: all cameras perform 2D pose estimation for each video frame and send the results to the edge server for 3D pose fusion; no camera scheduling is performed. **Random Selection (RS)**: the edge server simply randomly chooses 3 cameras to perform the 2D pose estimation and synthesizes the results to estimate the 3D poses. **Independent Decision (ID)**: this is the baseline shown in Figure 9(c). Each camera uses the motion vector method in (Kumar et al., 2017) to predict the 2D bounding boxes and calculates the IoU based on the predicted 2D bounding boxes.
Then the cameras decide independently by themselves whether to perform 2D pose estimation in the future scene of interest. **E\({}^{3}\)Pose with 3D motion vector prediction (E\({}^{3}\)Pose-MV)**: instead of using the attention-based LSTM to predict the 3D poses, this baseline uses a 3D motion vector method to predict the 3D poses and incorporates them into E\({}^{3}\)Pose.

**Evaluation Metrics**. We evaluate E\({}^{3}\)Pose in terms of the power consumption and the 3D pose estimation accuracy. **Power consumption**: we utilize the INA3221 power monitor (Kumar et al., 2017) embedded in TX2 and Xavier to read the power consumption at runtime, and use the average power consumption per scene as our evaluation metric. The power consumption directly translates to the system battery lifetime. **3D pose estimation accuracy**: for the Shelf dataset, we use the Percentage of Correctly estimated Parts (PCP) as the metric to evaluate the accuracy of the estimated 3D poses, enabling a direct comparison with existing works (Kumar et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2020; Wang et al., 2020). For the Panoptic dataset, since existing works do not share a common evaluation protocol, we extend the Average Precision (\(AP_{K}\)) metric (Wang et al., 2019) to the multi-person 3D pose estimation problem; it is defined as the percentage of estimated 3D poses whose MPJPE is smaller than \(K\) millimeters (a sketch of both metrics is given after the comparison tables below).

### Performance of 3D Pose Estimation Accuracy

**Estimation Accuracy**. We first compare E\({}^{3}\)Pose with existing methods on both the Shelf dataset and the Panoptic dataset in Table 1 and Table 2. We point out that these existing methods use _all_ 5 cameras to perform 3D pose estimation for each scene, while E\({}^{3}\)Pose adaptively selects only 3 cameras. Although this is not a fair comparison for E\({}^{3}\)Pose, we can see that E\({}^{3}\)Pose achieves an estimation accuracy comparable to the state-of-the-art solutions. Among these methods, (Kumar et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019; Wang et al., 2020) study 3D pose estimation from a pure computer vision perspective without considering system design. The work in (Kumar et al., 2017) proposes an edge-assisted multi-camera system similar to ours but utilizes all cameras to perform 3D pose estimation. In fact, the baseline SA can be considered a simplified version of (Kumar et al., 2017). The difference is that (Kumar et al., 2017) additionally feeds the 3D pose estimation result from the edge server back to the individual cameras to improve the 2D pose estimation performance in the next time slot. However, this closed-loop design prohibits pipelined processing and can result in a reduced frame rate when the server processing speed is low or the wireless conditions are bad. E\({}^{3}\)Pose as well as the SA baseline use an open-loop design where the cameras do not need to wait for the 3D pose results from the edge server to start processing the next frame.

In Table 1 and Table 2, we also compare with the considered baselines on the Shelf dataset and the Panoptic dataset, respectively. The results show that E\({}^{3}\)Pose outperforms the baselines by 2.8% to 13.1% in terms of PCP on Shelf and by 19.7% to 41.14% in terms of MPJPE on Panoptic. We explain the results in more detail next. **SA**: although the edge server receives all the 2D poses from the smart cameras, errors in the 2D estimation results with severe occlusions can bring down the accuracy of 3D pose estimation.
This reconfirms our findings in Section 2.2. Also note that SA achieves a slightly lower PCP accuracy than (Kumar et al., 2017), as expected, since the feedback mechanism is not used. **RS**: since cameras are randomly selected to perform 2D pose estimation, cameras with low view qualities (i.e., large occlusion) can be selected and their poor 2D pose estimation results can be included in the 3D pose synthesis. **ID**: because cameras make decisions by themselves without any central coordination, it is possible that too few or even zero cameras decide to perform 2D pose estimation in some time slots, thereby reducing the 3D estimation accuracy. **E\({}^{3}\)Pose-MV**: instead of using an attention-based LSTM, E\({}^{3}\)Pose-MV uses the 3D motion vectors of human joints to predict the future 3D poses. However, the transient motion trajectory is not effective in predicting the 3D poses after a relatively long period of time (e.g., several time slots beyond the current scene) since the movement of the human joints can be highly variable. This leads to poor occlusion prediction results and hence reduced 3D pose estimation accuracy. We will show more 3D pose estimation performance results shortly.

**Memory Usage**. We also measure the memory usage of SA and E\({}^{3}\)Pose using the monitoring tools on the NVIDIA devices. Figure 11 shows that, as expected, E\({}^{3}\)Pose incurs a memory usage similar to SA (E\({}^{3}\)Pose being just slightly lower), because even cameras in the PSM still keep the DNN in memory for quick mode switching. Figure 11 further shows that the memory overhead slightly increases with the number of persons in the scene, since more instances are created for 2D pose estimation.

Figure 11. Memory usage under different numbers of persons in the scene.

Table 1. Performance comparison on the Shelf dataset with existing methods and the baselines: Percentage of Correct Parts (PCP) (%).

| Method | Actor 1 | Actor 2 | Actor 3 | Avg |
| --- | --- | --- | --- | --- |
| CVPR2014 (Kumar et al., 2017) | 66.1 | 65.0 | 83.2 | 71.4 |
| ECCV2014 (Kumar et al., 2017) | 75.0 | 67.0 | 86.0 | 76.0 |
| CVPR2019 (Kumar et al., 2017) | 98.8 | 94.1 | 97.8 | 96.9 |
| CVPR2020 (Wang et al., 2018) | 99.6 | 93.2 | 97.4 | 96.7 |
| CVPR2021 (Wang et al., 2018) | 99.3 | **96.5** | 98.0 | **97.9** |
| RSS2021 (Kumar et al., 2017) | 99.3 | 95.7 | 97.3 | 97.4 |
| SA | 98.7 | 86.7 | 97.7 | 94.4 |
| RS | 95.8 | 83.2 | 95.5 | 91.5 |
| ID | 89.1 | 78.6 | 85.6 | 84.4 |
| E\({}^{3}\)Pose-MV | 97.2 | 84.5 | 96.9 | 92.9 |
| E\({}^{3}\)Pose | **99.9** | 91.9 | **99.4** | 97.1 |

Table 2. Performance comparison on the Panoptic dataset with the baselines: Average Precision (\(AP_{K}\)) (%).

| Method | \(AP_{25}\) | \(AP_{50}\) | \(AP_{100}\) | \(AP_{150}\) |
| --- | --- | --- | --- | --- |
| ECCV2020 (Kumar et al., 2017) | 83.5 | 98.3 | 99.7 | 99.9 |
| CVPR2021 (Wang et al., 2020) | **92.1** | **98.9** | 99.8 | 99.8 |
| SA | 86.3 | 93.3 | 95.5 | 96.8 |
| RS | 66.3 | 83.7 | 91.3 | 91.3 |
| ID | 40.6 | 69.3 | 86.2 | 90.3 |
| E\({}^{3}\)Pose-MV | 76.7 | 94.0 | 98.0 | 98.4 |
| E\({}^{3}\)Pose | 90.5 | 98.6 | **99.8** | **100.0** |
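For clarity, the two accuracy metrics can be sketched as follows (PCP additionally requires per-limb correctness checks, which we omit here):

```python
# Sketch of the accuracy metrics used above: MPJPE is the mean Euclidean
# distance over joints between an estimated pose and its ground truth, and
# AP_K is the fraction of estimated poses whose MPJPE falls below K mm.
import numpy as np

def mpjpe(pred, gt):
    """pred, gt: (J, 3) joint coordinates in millimeters."""
    return float(np.linalg.norm(pred - gt, axis=1).mean())

def ap_k(preds, gts, k_mm):
    errs = [mpjpe(p, g) for p, g in zip(preds, gts)]
    return 100.0 * np.mean([e < k_mm for e in errs])
```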
**Power Consumption.** We analyze the power consumption of the Xavier and TX2 devices separately due to their different hardware and system architectures. In Figure 12, we show the power consumption of Xavier and TX2 on the two datasets and the live video, respectively. The power consumption here includes the power consumed by the GPU, CPU, WiFi and DDR, but does not include SOC and Others, whose changes are negligible between SA and E\({}^{3}\)Pose. As can be seen, E\({}^{3}\)Pose saves 20.66% to 31.21% of the power on Xavier and 17.72% to 28.67% on TX2. Figure 13 shows the breakdown of the power consumption of the different system components of Xavier and TX2 on the Shelf dataset. As Figure 13(a) shows, E\({}^{3}\)Pose uses less energy than SA in CPU+GPU and others (DDR and WiFi) on Xavier. Figure 13(b) shows a more detailed power consumption breakdown for TX2. As can be seen, most of the power saving by E\({}^{3}\)Pose comes from GPU processing, memory usage and WiFi data transmission. This is again attributed to fewer cameras performing 2D pose estimation (which involves heavy GPU computation) and sending the results to the edge server (via WiFi). We note that the CPU power consumption does not change much between SA and E\({}^{3}\)Pose. This is because, in our current design of E\({}^{3}\)Pose, even if a camera is in the PSM, it still continuously captures video frames (or loads the frames from the dataset), which requires CPU processing. Nevertheless, we can easily extend E\({}^{3}\)Pose to allow cameras to enter a more aggressive PSM in which an unselected camera does not even capture/load the video frames. With this aggressive PSM mode, our experiments show that the CPU power consumption can be further reduced by 17.64% compared with SA.

Figure 12. Power consumption of Xavier and TX2 on three different datasets.

Figure 13. Power consumption breakdown on Xavier and TX2.

**Demo using Live Videos.** Figure 14 demonstrates an example multi-human 3D pose estimation result on live videos captured by the Logitech C270 HD Webcams connected to the NVIDIA devices in a lab environment. The estimated 3D poses are also re-projected onto the 2D images to offer a visual evaluation. The re-projected 2D skeletons closely fit the actual persons in the images, indicating that the 3D and 2D poses are reliably estimated, even with occlusions by other people.

Figure 14. Example of E\({}^{3}\)Pose on live videos from cameras.

### Performance of Occlusion Prediction

In this set of experiments, we focus on the occlusion prediction accuracy as it plays a key role in the camera selection. To this end, we consider the IoU between the predicted 2D bounding boxes (obtained by projecting the predicted 3D poses onto the camera views) and the actual estimated 2D bounding boxes (obtained by 2D pose estimation on the image). Figure 15(a) shows the IoU values for E\({}^{3}\)Pose and E\({}^{3}\)Pose-MV. In E\({}^{3}\)Pose-MV, we use the 3D poses in two consecutive previous time slots to calculate the 3D motion vector for each human joint, and then use it to predict the human pose in future time slots. However, since E\({}^{3}\)Pose-MV only considers the current motion of the human joints, its occlusion prediction accuracy decreases dramatically with a larger time advance \(\tau\).

Figure 15. Performance of the occlusion prediction. (a) Occlusion prediction accuracy in IoU vs. the number of consecutive time slots \(\tau\). (b) 3D pose estimation accuracy in PCP and 3D prediction latency vs. the time advance \(\tau\).

Figure 16. Impact of the selected number of cameras \(C\). (The test dataset is Shelf.)

Figure 17. Impact of the energy constraint \(E_{n}\). (The test dataset is Shelf.)
On the contrary, the adopted attention-based LSTM can track the fine-grained potential motion of the humans and hence achieves a higher occlusion prediction accuracy than the motion vector-based prediction, while being less sensitive to the time advance \(\tau\). In Figure 15(b), we show that a reduced occlusion prediction accuracy does translate into a lower PCP of the 3D pose estimation, due to selecting suboptimal subsets of cameras. However, again, the decrease is small and not very sensitive to the time advance \(\tau\). Figure 15(b) also shows that a larger \(\tau\) increases the prediction latency, since the workload of the attention-based LSTM is increased.

### Impact of Camera Number Threshold

Although we decided to use 3 cameras in our testbed according to the results of the motivational experiments in Section 2.2, we also try other values for the number of selected cameras. Figure 16 shows the results obtained by varying \(C\) from 2 to 5 while fixing \(E_{n}=0.8\). Note that SA is the special case where \(C=5\). In Figure 16, the overall system power consumption increases with \(C\). This is intuitive: as more cameras are selected to execute 2D pose estimation and send the results to the edge server, more energy is consumed. However, there is no monotonic relationship between \(C\) and the accuracy of 3D pose estimation, and setting \(C=3\) achieves the best result. Using more cameras in 3D pose estimation is not necessarily beneficial for accuracy if cameras with poor-quality views are included.

### Impact of Energy Constraint

In this subsection, we show the impact of the energy constraint on the performance of E\({}^{3}\)Pose. In the camera scheduling problem (5), for a fixed \(C\), there is a trade-off between the accuracy of 3D pose estimation and the power consumption of the individual cameras. When the cameras' energy capacity \(E_{n},\forall n\), is large, it is easier for the camera scheduler to select the cameras with good-quality views without worrying much about creating processing hotspots. As a result, a higher 3D pose estimation accuracy can be achieved at the cost of less balanced power consumption patterns across the cameras. Conversely, when \(E_{n}\) is small, the camera scheduler must be cautious about not creating processing hotspots and using up the battery of certain cameras too soon. Therefore, the power consumption is more balanced across the cameras but the 3D pose estimation accuracy is lower. Figure 17 illustrates this phenomenon by varying \(E_{n}\) from 0.6 to 1.0 while fixing \(C=3\). We can see that both the standard deviation of the power consumption across the cameras and the PCP increase with \(E_{n}\).

## 6. Related Work

### 2D Pose Estimation

2D human pose estimation aims to automatically locate human body joints from images or videos. Deep neural network (DNN)-based 2D human pose estimation has received significant attention recently due to its high accuracy [(30; 40; 41; 42; 43)]. However, these DNN-based 2D human pose estimation methods cannot be directly applied to resource-constrained mobile devices because they use computation-intensive deep neural network models. To overcome this limitation, some researchers focus on accelerating DNN-based 2D human pose estimation on mobile devices.
MobiPose [(28)] investigates human pose estimation on smartphone SoCs with three novel techniques: a motion-vector-based method for fast localization of the human poses across frames, a mobile-friendly DNN model with low latency and sufficient accuracy, and an efficient parallel DNN engine. BlazePose [(44)] and PoseNet [(45)] use light-weight DNN models to enable real-time 2D human pose estimation on mobile devices. In this paper, E\({}^{3}\)Pose uses the DNN architecture in [(30)] with the significantly more lightweight ResNet18 as the backbone feature extractor to obtain 2D human poses. Moreover, E\({}^{3}\)Pose employs TensorRT, a DNN accelerator, to efficiently run the 2D human pose estimation on memory-limited mobile devices in real time. Note that E\({}^{3}\)Pose is not limited to the DNN architecture proposed in [(30)]; any other on-device 2D human pose estimation method can also be plugged into E\({}^{3}\)Pose.

### 3D Pose Estimation

Depending on the number of input cameras, 3D human pose estimation methods are categorized into single-view-based methods [(46; 47; 48; 5; 6; 7)] and multi-view-based methods [(17; 18; 19; 20; 8; 9; 10)]. Due to the difficulty of multi-human 3D pose estimation in a monocular view, most of the single-view approaches are developed to construct a single person's 3D pose [(5; 6; 7)], where the predicted pose does not include the absolute 3D coordinates of the human joints in the environment. Although much progress has been made on multi-human 3D pose estimation in a single view [(46; 47; 48)], there is still a large deviation when applying these techniques in practical surveillance scenarios, in particular because motion blur and occlusions occur in the images. To retrieve the absolute location and handle occlusions, studies of multi-view 3D pose estimation have attracted more attention recently; multi-view approaches can be applied in various applications, such as sports analysis, video surveillance, animation, and healthcare [(49)]. Most existing approaches [(20; 19)] for single-person 3D pose estimation are developed based on the 3D Pictorial Structure model (3DPS), which cannot be directly used in multi-person pose estimation due to the lack of cross-view matching of the 2D poses. Most state-of-the-art multi-human 3D pose estimation methods [(17; 18; 19; 8; 9; 10)] match the 2D pose estimation results from cross-view cameras, and fuse the matched 2D poses into 3D human poses. However, these recent methods focus more on accuracy than efficiency, and they do not consider how to deploy these methods in a real-world system with real-time constraints. Recently, an edge-assisted 3D pose estimation system [(12)] used distributed smart cameras for 2D pose estimation: the 2D human poses are streamed over a network to a central edge server, where data association, cross-view matching, multi-view triangulation and post-processing are performed to fuse the 2D human poses into 3D human skeletons. However, this work uses all the available cameras to perform 3D pose estimation without considering the energy constraints of the system.

## 7. Conclusion

In this paper, we proposed a novel energy-efficient edge-assisted multi-camera system for real-time multi-human 3D pose estimation. We advocated an adaptive camera selection scheme, which achieves the benefits of both extending the battery lifetime of the system and improving the 3D pose estimation accuracy, at the cost of a slightly larger number of deployed cameras.
As smart cameras become ever cheaper, our proposed system provides an affordable and flexible solution for real-time 3D human pose estimation without requiring expensive specialized devices.
2310.06467
Advances in Kth nearest-neighbour clutter removal
We consider the problem of feature detection in the presence of clutter in spatial point processes. Classification methods have been developed in previous studies. Among these, Byers and Raftery (1998) models the observed Kth nearest neighbour distances as a mixture distribution and classifies the clutter and feature points accordingly. In this paper, we enhance this approach in two ways. First, we propose an automatic procedure for selecting the number of nearest neighbours to consider in the classification method by means of segmented regression models. Secondly, with the aim of applying the procedure multiple times to get a "better" end result, we propose a stopping criterion that minimizes the overall entropy measure of cluster separation between clutter and feature points. The proposed procedures are suitable for a feature with clutter modelled as two superimposed Poisson processes on any space, including linear networks. We present simulations and two case studies of environmental data to illustrate the method.
Nicoletta D'Angelo
2023-10-10T09:39:33Z
http://arxiv.org/abs/2310.06467v1
# Advances in Kth nearest-neighbour clutter removal ###### Abstract We consider the problem of feature detection in the presence of clutter in spatial point processes. Classification methods have been developed in previous studies. Among these, Byers and Raftery (1998) models the observed Kth nearest neighbour distances as a mixture distribution and classifies the _clutter_ and _feature_ points accordingly. In this paper, we enhance this approach in two ways. First, we propose an automatic procedure for selecting the number of nearest neighbours to consider in the classification method by means of segmented regression models. Secondly, with the aim of applying the procedure multiple times to get a "better" end result, we propose a stopping criterion that minimizes the overall entropy measure of cluster separation between clutter and feature points. The proposed procedures are suitable for a feature with clutter modelled as two superimposed Poisson processes on any space, including linear networks. We present simulations and two case studies of environmental data to illustrate the method. **Keywords:** Changepoints; Clutter; Entropy measure; Feature; Spatial point processes ## 1 Introduction Point processes are defined as random collections of points within a measurable space. They have found widespread utility in describing a diverse range of naturally occurring phenomena across various fields. These applications include epidemiology, ecology, forestry, mining, hydrology, astronomy, and meteorology, among others (Cox and Isham, 1980; Ripley, 2005; Daley et al., 2003; Moller and Waagepetersen, 2003; Schoenberg and Tranbarger, 2008; Tranbarger Freier and Schoenberg, 2010). In spatial point processes, each point denotes the location of a specific object or event, such as a tree or a sighting of a species (Ripley, 2005; Cressie, 2015; Diggle et al., 1976). The aim is typically to learn about the mechanism that generates these events (Moller and Waagepetersen, 2003; Diggle et al., 1976; Illian et al., 2008). The first step is usually to learn about the first-order characteristics of the process, studying the relationship of the points with the underlying environmental variables that describe the observed heterogeneity. When the purpose of the analysis is to describe the possible interaction among points, that is, whether the given data exhibit spatial inhibition or aggregation, the second-order properties of the process are analysed. One of the main interests of spatial point pattern analysis is identifying features surrounded by clutter. The conventional terminology is that a _feature_ is a point of the pattern or process of interest, and _clutter_ (also called _noise_) consists of extraneous points that are not proper to the pattern of interest. For instance, an image of a surface minefield taken from a reconnaissance aircraft can be processed to obtain a list of objects, some of which may be mines and others any other type of object (Allard and Fraley, 1997; Byers and Raftery, 1998). For spatial point processes, this problem has been addressed in different ways, denoted either by _feature detection_ or _clutter removal_. Allard and Fraley (1997) developed a method to find the maximum likelihood solution using Voronoi polygons. Dasgupta and Raftery (1998) used model-based clustering to extend the methodology proposed by Banfield and Raftery (1993). 
While these methods are based on some limiting assumptions, Byers and Raftery (1998) adopted a different approach in which they estimated and removed the clutter without making any assumptions about the shape or number of features. More recently, Gonzalez et al. (2021) considers the local contributions of the pair correlation function as functional data and describes two classification procedures to separate features from clutter points. Among these, Byers and Raftery (1998)'s approach represents a simple and intuitive method for estimating regions of different point densities in a point process, with the very useful feature of being potentially easy to use in higher dimensions. Their solution uses \(K\)th nearest-neighbour distances of points in the process to classify them as clutter or otherwise. Such distances are modelled as a mixture distribution, the parameters of which are estimated by a simple EM algorithm. However, as pointed out by the authors, the value of \(K\) to be used must be specified by the user, and though they gave some guidelines, this area could benefit from further investigation. Moreover, they highlight another extension which shows promise, that is, the possibility of applying the procedure multiple times to get "better" end results. This would treat the estimated feature as a new dataset and apply the same method to it. Given the above, this paper aims at enhancing the approach of Byers and Raftery (1998) in two ways. First, we propose a procedure to automatically select the number of nearest neighbours \(K\) to consider in the classification algorithm by means of segmented regression models. Secondly, we consider the further extension of applying the procedure multiple times. In this context, a stopping criterion is needed, and we propose such a criterion based on an entropy measure of cluster separation. All the analyses are carried out with the statistical software R (R Core Team, 2023) and are available from the author. The structure of the paper is as follows. Section 2 presents the preliminaries, including Byers and Raftery (1998)'s method for feature detection and the basics of segmented regression models. Section 3 introduces the proposed methodologies: the selection of the nearest neighbour to consider, through segmented regression, and the stopping criterion to apply when the procedure is run iteratively. Section 4 shows a simulation study, and Section 5 shows two case studies on environmental data. Finally, Section 6 presents the conclusions. ## 2 Preliminaries ### Kth nearest neighbour clutter removal Let \(u\) be a point location in the two-dimensional plane and \(D_{K}\) be the distance to its \(K\)th nearest neighbour. If \(D_{K}\) is greater than a spatial range \(r_{u}\), then there are at most \(K-1\) points, that is, \(0,1,\ldots,K-1\) points, at a distance less than \(r_{u}\). For all \(u\in W\), with \(W\) being the spatial window, and \(x\in[0,\infty)\), the \(K\)th nearest neighbour distribution approximation is given by \[\mathbb{P}(D_{K}\geq x)=\sum_{k=0}^{K-1}\frac{e^{-\lambda\pi x^{2}}(\lambda\pi x ^{2})^{k}}{k!}=1-F_{D_{K}}(x),\] where \(\mathbb{P}(D_{K}\geq x)\) is the probability that the \(K\)th nearest neighbour of \(u\) falls outside the disk \(b(u,x)\) of radius \(x\) centred at \(u\). If the \(K\)th nearest neighbour point of \(u\) is outside \(b(u,x)\), it is also outside \(b(u,r_{u})\) for any \(r_{u}\leq x\). 
Accordingly, the density \(f_{D_{K}}(x)\) can be found as \[f_{D_{K}}(x)=\frac{e^{-\lambda\pi x^{2}}2(\lambda\pi)^{K}x^{2K-1}}{(K-1)!}, \tag{1}\] and therefore \(Y\sim\Gamma(K,\lambda\pi)\), with \(Y=(D_{K})^{2}\). Having a closed form and the properties of the Gamma distribution, maximum likelihood estimation of the rate from the observed values of \(D_{K}\) is straightforward. Indeed, the maximum likelihood estimate of \(\lambda\) is \[\hat{\lambda}=\frac{nK}{\pi\sum_{i=1}^{n}d_{i}^{2}},\] where \(d_{i}\) is the \(i\)th observed \(K\)th nearest neighbour distance. We assume two types of processes to be classified through a mixture of the corresponding \(K\)th nearest neighbour distances coming from the clutter and the feature, which are two superimposed Poisson processes. Therefore, based on equation (1), we assume that \[D_{K}\sim p\Gamma^{1/2}(K,\lambda_{1}\pi)+(1-p)\Gamma^{1/2}(K,\lambda_{2}\pi),\] where \(\lambda_{1}\) and \(\lambda_{2}\) are the intensities of the two homogeneous Poisson point processes (_clutter_ and _feature_) and \(p\) is the mixing proportion that characterizes the postulated distribution of \(D_{K}\). A graphical example is given in Figure 1. In particular, the top panels of Figure 1 display a simulated homogeneous Poisson process with 200 expected points, together with the distances from each point of the pattern to its 10th nearest neighbour. The histogram of the distances shows a single mode around the value 1.5. Then, the bottom panels of Figure 1 show what we assume in equation (2.1), that is, a pattern obtained by the superposition of the previously simulated Poisson process on the \([0,10]\times[0,10]\) square (what we shall call _clutter_) and another Poisson process (what we shall call _feature_), with 100 expected points, on the unit square. As expected, the computed distances show an evident bimodality, ascribable to the different distances of clutter and feature points to their 10th nearest neighbours. The underlying assumption is that the new mode around the value 0.25 is attributable to the points of the _feature_. **Fig. 1**: _Top panels_: Simulated homogeneous Poisson process and its distances from the 10th nearest neighbour; _Bottom panels_: Simulated clutter Poisson process with a feature Poisson pattern superimposed, and their distances from the 10th nearest neighbour. The parameters \(\lambda_{1},\lambda_{2}\) and \(p\) associated with the mixture are estimated using an EM algorithm (Dempster et al., 1977), wherein we use the closed form of the Gamma distribution in the expectation step. Let \(\delta_{i}\in\{0,1\}\) be the classification indicator for each data point, where \(\delta_{i}=1\) if the \(i\)th point belongs to the feature and \(\delta_{i}=0\) otherwise. Thus, each data point has an observation \(d_{i}\) of \(D_{K}\) and an unknown \(\delta_{i}\). 
Hence, the \(\mathbb{E}\) step of the algorithm consists of \[\mathbb{E}[\hat{\delta}_{i}^{(t+1)}]=\frac{\hat{p}^{(t)}f_{D_{K}}(d_{i};\hat{ \lambda}_{1}^{(t)})}{\hat{p}^{(t)}f_{D_{K}}(d_{i};\hat{\lambda}_{1}^{(t)})+(1- \hat{p}^{(t)})f_{D_{K}}(d_{i};\hat{\lambda}_{2}^{(t)})},\] and the maximization \(M\) step consists of \[\hat{\lambda}_{1}^{(t+1)}=\frac{K\sum_{i=1}^{n}\hat{\delta}_{i}^{(t+1)}}{\pi \sum_{i=1}^{n}d_{i}^{2}\hat{\delta}_{i}^{(t+1)}},\quad\hat{\lambda}_{2}^{(t+1) }=\frac{K\sum_{i=1}^{n}(1-\hat{\delta}_{i}^{(t+1)})}{\pi\sum_{i=1}^{n}d_{i}^{2 }(1-\hat{\delta}_{i}^{(t+1)})}\] and \[\hat{p}^{(t+1)}=\frac{\sum_{i=1}^{n}\hat{\delta}_{i}^{(t+1)}}{n}.\] An intuitive classification criterion assigns each point to the mixture component under which its distance has the higher density. Since we are mainly interested in identifying the feature points in this classification approach, we do not consider edge effects, because feature points, in practice, are predominantly far from the edges. Additionally, for large \(n\), the convergence of the EM algorithm is good, in the sense that it reaches an approximately acceptable solution quickly and in few iterations. The following steps implement the classification procedure (a short code sketch of Steps 1-4 is given below): 1. Choose a value of \(K\). 2. Compute the \(K\)th nearest-neighbour distances for each point in the point pattern. 3. Apply the EM algorithm for estimating \(\lambda_{1}\), \(\lambda_{2}\), and \(p\). 4. Classify the points according to whether they have a higher density under the feature or clutter component of the mixture. 5. Repeat Steps 1-4 iteratively as desired. ### Segmented regression models Segmented, or broken-line, models are regression models where the relationships between the response and one or more explanatory variables are piecewise linear and, as such, represented by two or more straight lines connected at unknown points. These models are a common tool in many fields, including epidemiology, occupational medicine, toxicology and ecology, where it is usually of interest to assess threshold values at which the effect of a covariate changes. The main advantage of this approach is the easy interpretation given by its two components, i.e. changepoints and slopes. The segmented linear regression is expressed as \[g(E[Y|x_{i},z_{i}])=\alpha+z_{i}^{T}\theta+\beta x_{i}+\sum_{m=1}^{M_{0}}\delta_{ m}(x_{i}-\psi_{m})_{+} \tag{2}\] where \(g\) is the link function, \(x_{i}\) is the broken-line covariate, and \(z_{i}\) is a covariate vector whose relationship with the response variable is not of the broken-line type. We denote by \(M_{0}\) the true number of changepoints and by \(\psi_{m}\) the \(M_{0}\) locations of the changepoints in the observed phenomenon. These \(M_{0}\) locations are selected among all the possible values in the range of \(x\). The term \((x_{i}-\psi_{m})_{+}\) is defined as \((x_{i}-\psi_{m})I(x_{i}>\psi_{m})\), where \(I(\cdot)\) denotes the indicator function. The parameter estimates \(\boldsymbol{\theta}\) represent the non-broken-line effects of \(z_{i}\), \(\beta\) represents the slope for \(x_{i}<\psi_{1}\), while \(\boldsymbol{\delta}\) is the vector of the differences in slopes. The parameters to be estimated usually are: the number of changepoints \(M_{0}\); their locations \(\psi_{m}\); and the broken-line effects, represented by \(\beta\) and \(\boldsymbol{\delta}\). For the estimation procedure, we refer to Muggeo (2003). 
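For concreteness, Steps 1-4 above can be sketched in a few lines of code. The fragment below is a minimal Python translation of the mixture EM of Section 2.1; since the analyses in this paper are carried out in R, this sketch, including all function and variable names, is purely illustrative and not the reference implementation. It works on \(y_{i}=d_{i}^{2}\), which is Gamma-distributed under each component; the Jacobian of the square-root transform cancels in the responsibilities, so the \(\mathbb{E}\) step can be evaluated directly on \(y_{i}\).

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import gamma

def knn_distances(points, K):
    """Step 2: distance from each point to its Kth nearest neighbour."""
    tree = cKDTree(points)
    dist, _ = tree.query(points, k=K + 1)  # column 0 is the point itself
    return dist[:, K]

def em_classify(d, K, n_iter=500, tol=1e-10):
    """Step 3: EM for D_K ~ p * Gamma^{1/2}(K, lam1*pi) + (1-p) * Gamma^{1/2}(K, lam2*pi).

    Returns the feature probabilities delta and the estimates (p, lam1, lam2).
    """
    y = d ** 2  # y ~ Gamma(shape=K, rate=lam*pi) under each component
    delta = (d < np.median(d)).astype(float)  # crude start: short distances = feature
    for _ in range(n_iter):
        # M step: closed-form Gamma-rate estimates weighted by the responsibilities
        lam1 = K * delta.sum() / (np.pi * (y * delta).sum())
        lam2 = K * (1.0 - delta).sum() / (np.pi * (y * (1.0 - delta)).sum())
        p = delta.mean()
        # E step: responsibility of the feature component
        f1 = gamma.pdf(y, a=K, scale=1.0 / (lam1 * np.pi))
        f2 = gamma.pdf(y, a=K, scale=1.0 / (lam2 * np.pi))
        new_delta = p * f1 / (p * f1 + (1.0 - p) * f2)
        if np.max(np.abs(new_delta - delta)) < tol:
            delta = new_delta
            break
        delta = new_delta
    return delta, p, lam1, lam2
```

Points with \(\hat{\delta}_{i}>1/2\) are then classified as feature points, in agreement with Step 4.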
Returning to the segmented model (2), in this paper we focus on the sole objective of estimating the location of a unique changepoint \(\psi\), with \(M_{0}\) fixed at 1 and no further covariates \(z_{i}\). ## 3 Proposed approaches This section is devoted to the enhancements of the EM algorithm for the classification of _clutter_ and _feature_. Section 3.1 solves the problem of Step 1 of the algorithm by suggesting an approach to select \(K\) automatically. Section 3.2 illustrates a stopping criterion to solve the iterative problem of Step 5. By means of the entropy measure of cluster separation employed in Section 3.1, we provide a simple and intuitive way to decide that the current iteration is enough to separate clutter and feature correctly. ### Selecting K through changepoint detection The development of the method of Byers and Raftery (1998) assumes that a proper value of \(K\) has been chosen beforehand. The natural way to choose the suitable \(K\)th neighbour is by analysing several increasing values of \(K\) and then selecting the \(K\) after which no improvement is found. In the literature, there are several methodological proposals for this purpose; in this work, we use an entropy-type measure of separation introduced in Celeux and Soromenho (1996), given by \[S=-\sum_{i=1}^{n}\delta_{i}\log_{2}(\delta_{i}),\] where \(\delta_{i}\) are the probabilities of being in the first component of the mixture in equation (2.1), which is the feature. As stated by Byers and Raftery (1998), plotting the entropies sequentially and looking for a levelling-off changepoint in the graph is an easy way to choose \(K\). An example of this procedure is shown in Figure 2, where the classification entropies for values of \(K\) up to 35 are plotted (_right panel_) for a simulated point pattern (_left panel_). However, such a graphical assessment is not formalized and, therefore, not generalizable and reproducible. Therefore, our first proposal consists of estimating the optimal \(K\) by fitting a segmented regression model as \[\mathbb{E}\left[Y|x_{i}\right]=\alpha+\beta x_{i}+\delta(x_{i}-\psi)I(x_{i}>\psi),\] where the interest lies in estimating a unique changepoint \(\psi\), after which the slope \(\beta+\delta\) is constrained to be equal to zero. As depicted in Figure 2, the observed response variable is the entropy level, modelled as a function of the number of nearest neighbours. We implemented this automatic option using the function segmented of the R package segmented (Muggeo, 2008). In this case, the fitting of the segmented model leads to \(\hat{K}=13\). ### Stopping criterion for the iterative procedure Let us consider again the simulated pattern of Figure 2 (left panel). We run the EM algorithm iteratively to see if we can get better results compared to running the algorithm only once. Figure 3 shows the output of the EM procedure run iteratively up to 4 times. Figure 2: _Left panel_: Simulated clutter Poisson process with a feature Poisson pattern superimposed; _Right panel_: Entropy values of the simulated pattern. The black line represents the observed entropies, and the dotted line represents the estimated segmented model. The vertical line indicates the estimated changepoint of \(\hat{K}=13\). Note that we also let the algorithm automatically select \(K\) at each step, and the set of estimated nearest neighbours at each iteration is equal to \(\hat{K}=\{13,19,24,34\}\). As evident from Figure 3, the first iteration looks sufficient to spot the majority of the true feature points. 
To corroborate this statement, Table 1 contains the true-positive rate (TPR), false-positive rate (FPR), and accuracy (ACC), respectively defined as \[TPR=\frac{\text{true positives}}{\text{positives}},\quad FPR=\frac{\text{false positives}}{\text{negatives}},\quad ACC=\frac{\text{true positives and negatives}}{\text{positives and negatives}}.\] We, of course, wish to have TPR and ACC close to 1 and FPR close to 0. Figure 3: Points of the simulated pattern classified through the EM algorithm up to four iterations. Blue denotes the _clutter_/_noise_ points, and pink denotes the _feature_ points. These results confirm that one iteration is sufficient to classify points into clutter and features correctly. However, in real-life applications, such classification rates cannot be computed. Therefore, our proposed stopping criterion to automatically select the number of iterations to run is formalized as follows. Consider a measure of the overall entropy of a single iteration. Let us denote by \(K_{set}\) the set of possible \(K\) values investigated. Then, for the \(J\)th iteration, regardless of whether \(K\) is estimated or fixed, we compute the entropy measure in equation (3.1) for each \(K\in K_{set}\). We denote the entropy measure obtained considering the \(K\)th nearest neighbour by \(S_{K}\). Then, the _overall measure of entropy_\(S_{J}\) of the \(J\)th iteration is just given by the sum of all the entropies computed for the set of \(K\) values, namely \[S_{J}=\sum_{K_{set}}S_{K}.\] Note that \(K_{set}\) is not indexed by \(J\) as we assume the same set for each iteration. The EM algorithm then stops at iteration \(J\) whenever \(S_{J+1}>S_{J}\), that is, whenever the overall measure of entropy of the next iteration exceeds that of the current one. Figure 4 gives a graphical representation and justification of the idea underlying this criterion. \begin{table} \begin{tabular}{c|c c c} \hline \hline Iteration & TPR & FPR & ACC \\ \hline 1 & 0.982 & 0.349 & 0.849 \\ 2 & 0.746 & 0.240 & 0.752 \\ 3 & 0.539 & 0.146 & 0.666 \\ 4 & 0.415 & 0.125 & 0.601 \\ \hline \hline \end{tabular} \end{table} Table 1: True-positive rate (TPR), false-positive rate (FPR), and accuracy (ACC) resulting from the iterative application of the EM algorithm to the simulated point pattern of Figure 2. The figure shows that the first iteration provides the lowest overall entropy value. The results in Table 2 confirm this, showing that \(S_{2}>S_{1}\). Basically, the algorithm stops at the first iteration (\(\hat{J}=1\)) because the total entropy \(S_{J}\) increases, rather than decreases, at the following iteration. Consider now another example where the clutter points are simulated in the unit square, and the feature points are simulated in a \([0.25,0.5]\times[0.25,0.5]\) window with a different intensity. \begin{table} \begin{tabular}{c|c} \hline Iteration & \(S_{J}\) \\ \hline **1** & **115** \\ 2 & 1813 \\ 3 & 1904 \\ 4 & 1345 \\ \hline \end{tabular} \end{table} Table 2: Overall measures of entropy \(S_{J}\) for the 4 iterations. Figure 4: Entropy values for all the investigated iterations. Figure 5 shows the points of this simulated pattern classified through the EM algorithm up to four iterations. 
Knowing the sub-window where the feature points have been simulated, we expect the stopping criterion to select the second iteration as the final one, since in the first iteration points outside the \([0.25,0.5]\times[0.25,0.5]\) window are also classified as feature points. Indeed, Figure 6 and Table 3 confirm this expectation, indicating the second iteration as the one providing the value of \(S_{J}\) after which the entropy tends to increase again. In other words, \(S_{2}<S_{1}\), but \(S_{3}>S_{2}\), therefore \(\hat{J}=2\). Figure 5: Points of the simulated pattern classified through the EM algorithm up to four iterations. Blue denotes the _clutter_/_noise_ points, and pink denotes the _feature_ points. ## 4 Simulation study This section aims to study the proposed method's performance in terms of classification rates, considering different scenarios concerning both the generating processes and the ratio between the number of clutter and feature points generated. To this end, we simulate under several such scenarios to obtain a comprehensive picture of the behaviour of the proposed algorithm in different settings. \begin{table} \begin{tabular}{c|c} \hline Iteration & \(S_{J}\) \\ \hline 1 & 274 \\ **2** & **38** \\ 3 & 274 \\ 4 & 242 \\ \hline \end{tabular} \end{table} Table 3: Overall measures of entropy \(S_{J}\) for the 4 iterations. Figure 6: Entropy values for all the investigated iterations. The simulation setup is as follows. We simulate 200 patterns from clutter Poisson point processes with \(\mathbb{E}[n_{c}]\) expected points. The feature point patterns, with \(\mathbb{E}[n_{f}]\) expected points, are simulated from the following processes: 1. Poisson cluster process with intensity \(\kappa=7.5\) of the Poisson process of cluster centres in the window \(W_{c}=[0,1]\). Each cluster consists of \(u=20\) points in a disc of radius \(0.2\); 2. Poisson cluster process with intensity \(\kappa=15\) of the Poisson process of cluster centres in the window \(W_{c}=[0,1]\). Each cluster consists of \(u=10\) points in a disc of radius \(0.2\); 3. Poisson process in the sub-window \(W_{c}=[0,0.5]\) with \(150\) expected points; 4. Poisson process in the sub-window \(W_{c}=[0.25,0.5]\) with \(20\) expected points. Figure 7: Patterns simulated from the considered processes. Blue denotes the _clutter_/_noise_ points, and pink denotes the _feature_ points. Examples of the simulated patterns are depicted in Figure 7. We show the results of the proposed procedure in terms of true-positive rate (TPR), false-positive rate (FPR), and accuracy (ACC), averaged over the simulated point patterns: Table 4 reports the results obtained by fixing \(K=\{10,20,30\}\) nearest neighbours and applying the algorithm iteratively up to 3 iterations, while Table 5 reports the corresponding results when \(K\) is estimated at each iteration by our proposed procedure. 
\begin{table} \begin{tabular}{c c c c|c c c|c c c|c c c} \hline & & & & \multicolumn{3}{c|}{iter 1: \(K\)} & \multicolumn{3}{c|}{iter 2: \(K\)} & \multicolumn{3}{c}{iter 3: \(K\)} \\ \cline{5-13} Scenario & \(\mathbb{E}[n_{c}]\) & \(\mathbb{E}[n_{f}]\) & Rate & 10 & 20 & 30 & 10 & 20 & 30 & 10 & 20 & 30 \\ \hline Poisson cluster [1] & 300 & 150 & TPR & 0.75 & 0.79 & 0.79 & 0.66 & 0.65 & 0.62 & 0.53 & 0.52 & 0.46 \\ & & & FPR & 0.53 & 0.58 & 0.61 & 0.42 & 0.42 & 0.39 & 0.30 & 0.29 & 0.26 \\ & & & ACC & 0.56 & 0.54 & 0.52 & 0.61 & 0.61 & 0.61 & 0.64 & 0.65 & 0.65 \\ \hline Poisson cluster [2] & 300 & 150 & TPR & 0.7 & 0.75 & 0.75 & 0.60 & 0.61 & 0.59 & 0.47 & 0.46 & 0.43 \\ & & & FPR & 0.6 & 0.66 & 0.67 & 0.48 & 0.48 & 0.47 & 0.35 & 0.33 & 0.31 \\ & & & ACC & 0.5 & 0.48 & 0.47 & 0.55 & 0.54 & 0.55 & 0.59 & 0.60 & 0.60 \\ \hline Poisson [3] & 300 & 150 & TPR & 0.93 & 0.94 & 0.94 & 0.91 & 0.89 & 0.82 & 0.69 & 0.66 & 0.55 \\ & & & FPR & 0.34 & 0.32 & 0.32 & 0.28 & 0.26 & 0.23 & 0.19 & 0.17 & 0.14 \\ & & & ACC & 0.75 & 0.77 & 0.77 & 0.78 & 0.79 & 0.79 & 0.77 & 0.78 & 0.76 \\ \hline Poisson [4] & 300 & 20 & TPR & 0.98 & 0.96 & 0.91 & 1.00 & 0.99 & 0.97 & 1.00 & 1.00 & 0.97 \\ & & & FPR & 0.58 & 0.42 & 0.29 & 0.63 & 0.42 & 0.25 & 0.64 & 0.39 & 0.22 \\ & & & ACC & 0.46 & 0.60 & 0.72 & 0.40 & 0.60 & 0.76 & 0.40 & 0.63 & 0.79 \\ \hline \end{tabular} \end{table} Table 4: Classification rates averaged over 200 simulated point patterns generated on the unit square with \(\mathbb{E}[n_{c}]\) and \(\mathbb{E}[n_{f}]\) expected points for clutter and feature, with \(K\) fixed at 10, 20, and 30. \begin{table} \begin{tabular}{c c c c|c c c} \hline & & & & \multicolumn{3}{c}{\(\hat{K}\)} \\ \cline{5-7} Scenario & \(\mathbb{E}[n_{c}]\) & \(\mathbb{E}[n_{f}]\) & Rate & iter 1 & iter 2 & iter 3 \\ \hline Poisson cluster [1] & 300 & 150 & TPR & 0.80 & 0.65 & 0.52 \\ & & & FPR & 0.65 & 0.42 & 0.31 \\ & & & ACC & 0.52 & 0.60 & 0.64 \\ \hline Poisson cluster [2] & 300 & 150 & TPR & 0.77 & 0.63 & 0.49 \\ & & & FPR & 0.69 & 0.51 & 0.36 \\ & & & ACC & 0.46 & 0.53 & 0.59 \\ \hline Poisson [3] & 300 & 150 & TPR & 0.94 & 0.90 & 0.70 \\ & & & FPR & 0.32 & 0.27 & 0.18 \\ & & & ACC & 0.77 & 0.79 & 0.78 \\ \hline Poisson [4] & 300 & 20 & TPR & 1.00 & 0.99 & 0.98 \\ & & & FPR & 0.65 & 0.44 & 0.27 \\ & & & ACC & 0.39 & 0.59 & 0.74 \\ \hline \end{tabular} \end{table} Table 5: Classification rates averaged over 200 simulated point patterns generated on the unit square with \(\mathbb{E}[n_{c}]\) and \(\mathbb{E}[n_{f}]\) expected points for clutter and feature, with \(K\) estimated at each iteration by the proposed procedure (\(\hat{K}\)). Iterating with a fixed \(K\) does not improve the classification rates much, while it does with the estimated \(\hat{K}\). In particular, the TPR decreases with further iterations for the less clustered scenarios (1-3), indicating that a single iteration is sufficient in such cases. The clearest gains appear in the ACC, which increases notably when \(K\) is estimated rather than fixed; this holds in every considered scenario. As for the effect of additional iterations, Scenario 4 exhibits the greatest improvement, even when \(K\) is fixed; the improvement is even larger with \(\hat{K}\). In conclusion, the ACC results in favour of \(\hat{K}\), together with the other classification rates being comparable to those obtained with fixed \(K\), support the use of the proposed automatic procedure for selecting the number of nearest neighbours in the clutter removal procedure. 
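Before turning to the case studies, the two proposals of Section 3 can be combined into a single operational sketch, again in Python and reusing `knn_distances` and `em_classify` from the fragment in Section 2 (as before, all names are illustrative assumptions of ours, not the paper's implementation). The segmented fit of Section 3.1 is approximated here by a plain least-squares grid search over candidate changepoints, with the slope constrained to zero after the changepoint, whereas the paper itself uses the R package segmented for this step; the stopping rule is the entropy criterion of Section 3.2.

```python
import numpy as np

def entropy(delta, eps=1e-12):
    """Entropy-type measure of separation: S = -sum_i delta_i * log2(delta_i)."""
    d = np.clip(delta, eps, 1.0)
    return -np.sum(d * np.log2(d))

def select_K(K_values, S_values):
    """Levelling-off changepoint of the entropy curve: fit E[S|K] = a + b*min(K, psi),
    a broken line whose slope is zero after psi, for each candidate psi, and keep
    the psi with the smallest residual sum of squares."""
    K = np.asarray(K_values, dtype=float)
    S = np.asarray(S_values, dtype=float)
    best_rss, best_psi = np.inf, K[0]
    for psi in K[1:-1]:
        x = np.minimum(K, psi)
        X = np.column_stack([np.ones_like(x), x])
        coef, *_ = np.linalg.lstsq(X, S, rcond=None)
        rss = float(np.sum((S - X @ coef) ** 2))
        if rss < best_rss:
            best_rss, best_psi = rss, psi
    return int(best_psi)

def iterative_clutter_removal(points, K_set, max_iter=10):
    """Apply the classification repeatedly to the current feature set and stop
    at iteration J as soon as the overall entropy satisfies S_{J+1} > S_J."""
    feature, prev_S = points, np.inf
    for _ in range(max_iter):
        deltas = {K: em_classify(knn_distances(feature, K), K)[0] for K in K_set}
        S_per_K = [entropy(deltas[K]) for K in K_set]
        S_J = sum(S_per_K)          # overall measure of entropy of this iteration
        if S_J > prev_S:            # entropy increased: keep the previous result
            break
        K_hat = select_K(list(K_set), S_per_K)
        feature = feature[deltas[K_hat] > 0.5]   # retain the estimated feature points
        prev_S = S_J
    return feature
```

On an (n, 2) array of point coordinates, `iterative_clutter_removal` returns the points retained as features at the iteration \(\hat{J}\) selected by the criterion.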
## 5 Case studies ### Murchison gold data The Murchison geological survey data shown in Figure 8 record the spatial locations of gold deposits and associated geological features in the Murchison area of Western Australia. Figure 8: Murchison gold data: in grey, the locations of geological faults, and in black, the locations of gold deposits. They are extracted from a regional survey (scale 1:500,000) of the Murchison area carried out by the Geological Survey of Western Australia (Watkins and Hickman, 1990). The point pattern recorded is the known locations of gold deposits, and they come with the known or inferred locations of geological faults. The study region is contained in a \(330\times 400\) kilometer rectangle. At this scale, gold deposits are point-like, i.e. their spatial extent is negligible. Gold deposits are strongly associated with greenstone bedrock and faults, but the geology is three-dimensional, and the survey data are a two-dimensional projection. The survey may not have detected all existing faults because they are usually not observed directly; they are observed in magnetic field surveys or geologically inferred from discontinuities in the rock sequence. These data were analysed in Foxall and Baddeley (2002); Brown et al. (2002) and Groves et al. (2000); Knox-Robinson and Groves (1997). The main aim is usually to predict the intensity of the point pattern of gold deposits from the more easily observable fault pattern. We apply the EM procedure iteratively, which stops at the second iteration thanks to the proposed stopping criterion. Note that the nearest neighbours selected at each iteration are 26 and 7. The points classified as features clearly identify an underlying fault. **Fig. 9**: Output of the proposed iterative procedure up to 2 iterations. Blue denotes the _clutter_/_noise_ points, and pink denotes the _feature_ points. ### Detecting seismic faults Dasgupta and Raftery (1998) considered the problem of detecting seismic faults based on an earthquake catalogue. The idea is that earthquake epicentres occur along seismically active faults and are measured with some error. So, over time, observed earthquake epicentres should be clustered along such faults. Dasgupta and Raftery (1998) considered an earthquake catalogue recorded over a 40,000 km\({}^{2}\) region of the central coast ranges in California from 1962 to 1981 (McKenzie et al., 1982). An advantage of looking at this region is that the known fault structure is well documented. Dasgupta and Raftery (1998) selected a classification with seven clusters (six non-noise clusters and one noise cluster) because the BIC attains a local maximum there and the successive differences in the BIC values are small thereafter. They found that the classification obtained using six (non-noise) clusters corresponds well with the available documentation of faults in the region of interest. One or two clusters do not correspond to any of the documented faults. An application of 5th NN clutter removal produced the results reported in Byers and Raftery (1998). One key difference is the isolated cluster in the bottom right that NN methods pick up but that the connected component part of Allard and Fraley's method leaves out. This cluster is treated as one end of a linear cluster of earthquakes in the analysis of Dasgupta and Raftery (1998). They end up filling in the sparse part between it and other clusters with clutter to produce the linear form that they search for. 
It would seem that the MClust-EM method is more suited to finding features such as faults that are supposed to be roughly linear, but the differences exposed here show that less-structured methods do have contributions to make in structured situations. We analyse the same catalogue of Northern California earthquakes of magnitude at least 2.5, available from [https://ncedc.org/ncedc/catalog-search.html](https://ncedc.org/ncedc/catalog-search.html). We proceed to run the proposed iterative procedure, which stops at the first iteration. The number of nearest neighbours selected is \(K=19\). Figure 10 displays the detected feature points, indicating the major underlying San Andreas Fault. Figure 10: Output of the proposed iterative procedure applied to the analysed earthquake data. Blue denotes the _clutter_/_noise_ points, and pink denotes the _feature_ points. ## 6 Conclusions In this paper, we have addressed the problem of selecting the \(K\)th nearest neighbour in the clutter removal procedure for spatial point processes, as well as the problem of finding a suitable stopping criterion when applying the algorithm iteratively to get better results. The methods proposed in this paper build upon the existing classification method of Byers and Raftery (1998), which models the Kth nearest neighbour distances of an observed point pattern made by the superimposition of clutter and feature points by means of a mixture distribution. The contributions of this paper are twofold. Firstly, we introduced an automated method for determining the optimal number of nearest neighbours, utilizing segmented regression models. This enhancement aimed to formalize this selection and to refine the classification process overall, making it completely automatic and, therefore, reproducible. Secondly, with the aim of improving the classification results, we explored the iterative application of the classification procedure. We did so by introducing a stopping criterion that minimises the overall entropy measure of cluster separation between clutter and feature points at each iteration and stops whenever no further improvement is obtained. Through simulations and real-world case studies involving environmental data, we demonstrated the efficacy of our proposed procedures, showcasing their utility in practical applications. Performing similarly to the benchmark methodology in terms of accuracy, our proposed selection method represents a convenient automatic procedure to apply in real data applications when the best number of nearest neighbours to consider is unknown. These enhancements not only provide more accurate feature detection but also offer a systematic and automated approach for refining the classification process, thereby enhancing the overall reliability and applicability of the method in various spatial contexts. Note that these methodological improvements are applicable to all those scenarios where features are superimposed on clutter and, therefore, modelled as two overlapping Poisson processes, including the contexts of point processes on linear networks and of spatio-temporal point processes. For this reason, future work will adapt the proposed procedure to these more complex contexts. Finally, another promising extension worth investigating in the future is the alteration of the EM algorithm to search for \(r>2\) groups, each with a different rate. Byers and Raftery (1998) state that such a scenario's performance is in line with that of the two-rate case. 
This could be useful when each group corresponds to a set of features with a different density, e.g. seismic faults with different earthquake frequencies. ## Funding This work has been supported by the Targeted Research Funds 2023 (FFR 2023) of the University of Palermo (Italy) and by the PNRR project, grant agreement No PE0000018 - "GRINS - Growing Resilient, INclusive and Sustainable".
2302.09545
Well-posedness and scattering for a 2D inhomogeneous NLS with Aharonov-Bohm magnetic potential
We consider the magnetic nonlinear inhomogeneous Schr\"odinger equation $$i\partial_t u -\left(-i\nabla+\frac{\alpha}{|x|^2}(-x_2,x_1)\right)^2 u =\pm|x|^{-\varrho}|u|^{p-1}u,\quad (t,x)\in \mathbb{R}\times \mathbb{R}^2,$$ where $\alpha\in\mathbb{R}\setminus\mathbb{Z},\,\varrho>0,\,p>1$. We prove a dichotomy of global existence and scattering versus blow-up of energy solutions under the ground state threshold in the inter-critical regime. The scattering is obtained by using the new approach of Dodson-Murphy (A new proof of scattering below the ground state for the 3D radial focusing cubic NLS, {Proc. Am. Math. Soc.} (2017)). This method is based on Tao's scattering criteria and Morawetz estimates. The novelty here is twofold: we investigate the case $\varrho\alpha\neq0$ and we consider general energy initial data (not necessarily radially symmetric).
Mohamed Majdoub, Tarek Saanouni
2023-02-19T11:33:22Z
http://arxiv.org/abs/2302.09545v3
# Well-posedness and scattering for a 2D inhomogeneous NLS with Aharonov-Bohm magnetic potential ###### Abstract We consider the magnetic nonlinear inhomogeneous Schrodinger equation \[i\partial_{t}u-\left(-i\nabla+\frac{\alpha}{|x|^{2}}(-x_{2},x_{1})\right)^{2} u=\pm|x|^{-\varrho}|u|^{p-1}u,\quad(t,x)\in\mathbb{R}\times\mathbb{R}^{2},\] where \(\alpha\in\mathbb{R}\setminus\mathbb{Z},\,\varrho>0,\,p>1\). We prove a dichotomy of global existence and scattering versus blow-up of energy solutions under the ground state threshold in the inter-critical regime. The scattering is obtained by using the new approach of Dodson-Murphy (A new proof of scattering below the ground state for the 3D radial focusing cubic NLS, Proc. Am. Math. Soc. (2017)). This method is based on Tao's scattering criteria and Morawetz estimates. The novelty here is twofold: we investigate the case \(\varrho\alpha\neq 0\) and we consider general energy initial data (not necessarily radially symmetric). The particular case \(\alpha=0\), known as INLS, has been widely investigated in recent years. Moreover, the particular case \(\varrho=0\), which gives the homogeneous regime, was considered recently by X. Gao and C. Xu (Scattering theory for NLS with inverse-square potential in 2D, J. Math. Anal. Appl. (2020)), where the scattering is proved for spherically symmetric data. In the radial framework, the above problem translates to the INLS with inverse-square potential, which has been widely investigated in space dimensions three and higher. The Hardy inequality \(\||x|^{-1}f\|_{L^{2}(\mathbb{R}^{N})}\leq\frac{2}{N-2}\|\nabla f\|_{L^{2}( \mathbb{R}^{N})}\), which gives the norm equivalence \(\|f\|_{H^{1}}\simeq\|f\|_{H^{1}}+\||x|^{-1}f\|_{L^{2}}\), fails in two space dimensions. Thus, it is not clear how to treat the NLS with inverse-square potential in \(H^{1}\) in two space dimensions. This article seems to be the first one dealing with the NLS with Aharonov-Bohm magnetic potential in the inhomogeneous regime, namely \(\varrho\neq 0\). Key words and phrases: Aharonov-Bohm magnetic potential, Scattering, Morawetz estimates, Virial identities, Gagliardo-Nirenberg inequality, blow-up. ## 1. Introduction In this paper, we study the Cauchy problem for the inhomogeneous nonlinear Schrodinger equation with an Aharonov-Bohm magnetic potential, \[i\partial_{t}u-\left(-i\nabla+\frac{\alpha}{|x|^{2}}(-x_{2},x_{1})\right)^{2}u=\kappa|x|^{-\varrho}|u|^{p-1}u,\quad(t,x)\in\mathbb{R}\times\mathbb{R}^{2}, \tag{1.1}\] where the wave function is \(u:=u(t,x)\in\mathbb{C}\), \(t\in\mathbb{R}\) denotes the time variable, \(x:=(x_{1},x_{2})\in\mathbb{R}^{2}\) is the space variable, \(p>1\) is the exponent of the source term, \(\varrho>0\) gives a singular inhomogeneous term in the non-linearity, \(\alpha\in\mathbb{R}\setminus\mathbb{Z}\) gives an Aharonov-Bohm potential and \(\kappa=\pm 1\). Moreover, \(\kappa=1\) stands for the defocusing case while \(\kappa=-1\) corresponds to the focusing regime. The linear counterpart of (1.1) is the so-called electromagnetic Schrodinger equation \[i\partial_{t}u=\Big{(}-i\nabla+\frac{\mathbf{A}(\frac{x}{|x|})}{|x|}\Big{)}^{ 2}u+\frac{\mathbf{a}(\frac{x}{|x|})}{|x|^{2}}u,\] where \(\mathbf{a}\in W^{1,\infty}(\mathbf{S}^{1},\mathbb{R})\), \(\mathbf{S}^{1}\) denotes the unit circle, and \(\mathbf{A}\in W^{1,\infty}(\mathbf{S}^{1},\mathbb{R}^{2})\) is a transversal vector field, namely \[\mathbf{A}(\omega)\cdot\omega=0,\quad\forall\ \omega\in\mathbf{S}^{1}.\] The magnetic potential (1-form) \(\frac{\mathbf{A}(\frac{x}{|x|})}{|x|}\) is connected with the associated magnetic tensor (2-form) \(\mathbf{B}\) by the exterior derivative \(\mathbf{B}=d\frac{\mathbf{A}(\frac{x}{|x|})}{|x|}\). The magnetic tensor is compatible since the Maxwell equation \(d\mathbf{B}=0\) means that \(\mathbf{B}\) is a closed form. The Aharonov-Bohm potential gives a magnetic field associated to thin solenoids: if the radius of the solenoid tends to zero while the flux through it remains constant, then the particle is subject to a \(\delta\)-type magnetic field, the so-called Aharonov-Bohm field. The Aharonov-Bohm (AB) effect [2] lies at the interface of gauge theories and quantum mechanics. In its best known form, the AB effect predicts a shift in the interference pattern of the quantum mechanical double-slit experiment which has a magnetic flux carrying solenoid placed between the slits. If a solenoid with a magnetic field \(\mathbf{B}=\nabla\times\mathbf{A}\) (where \(\mathbf{A}\) is the electromagnetic vector potential) is placed between the two slits of a double-slit experiment, it shifts the phase of the wave-function of the electrons going through the slits and following some path to the screen [29, 24]. We refer to [5] for a rigorous study of the magnetic field in quantum mechanics and to [15, 31, 32] for other physical aspects, with many references therein. The vector potential \[\mathbf{A}_{\alpha}(x):=\frac{\alpha}{|x|^{2}}x^{\perp},\quad x^{\perp}:=(-x_ {2},x_{1}),\quad x\in\mathbb{R}^{2}\setminus\{0\}, \tag{1.2}\] generates the Aharonov-Bohm magnetic field. We know from [1] that the operator \[(-i\nabla+\mathbf{A}_{\alpha})^{2}\quad\text{on}\quad C_{0}^{\infty}(\mathbb{R }^{2}\setminus\{0\}) \tag{1.3}\] is not essentially self-adjoint and admits infinitely many self-adjoint extensions. Here we choose to work with the Friedrichs extension of (1.3) denoted by \(\mathcal{K}_{\alpha}\). 
To be more precise, we introduce the space \(\dot{H}_{\alpha}^{1}\) as the closure of \(C_{0}^{\infty}(\mathbb{R}^{2}\setminus\{0\})\) with respect to the norm \[\big{\|}\left(\nabla+i\mathbf{A}_{\alpha}\right)u\big{\|}_{L^{2}(\mathbb{R}^{2})}.\] We also define the associated inhomogeneous space \(H^{1}_{\alpha}(\mathbb{R}^{2}):=L^{2}(\mathbb{R}^{2})\cap\dot{H}^{1}_{\alpha}( \mathbb{R}^{2})\). The quadratic form on \(H^{1}_{\alpha}(\mathbb{R}^{2})\) given by \[u\longmapsto\big{\|}\left(\nabla+i\mathbf{A}_{\alpha}\right)u\big{\|}_{L^{2}( \mathbb{R}^{2})}^{2}\] is closed and generates a unique non-negative self-adjoint operator \(\mathcal{K}_{\alpha}\) on \(L^{2}(\mathbb{R}^{2})\) with domain \[\mathcal{D}=\left\{f\in H^{1}_{\alpha}(\mathbb{R}^{2});\ \ \mathcal{K}_{\alpha}f \in L^{2}(\mathbb{R}^{2})\right\}.\] Moreover, the unitary group \(\mathrm{e}^{it\mathcal{K}_{\alpha}}\) extends to a group of isometries on the dual \(\mathcal{D}^{*}\) of \(\mathcal{D}\). Therefore, for every \(u_{0}\in L^{2}(\mathbb{R}^{2})\), the unique solution to (2.1) reads \[u(t,x):=\mathrm{e}^{it\mathcal{K}_{\alpha}}\,u_{0}(x)\in C(\mathbb{R};L^{2}( \mathbb{R}^{2}))\cap C^{1}(\mathbb{R};\mathcal{D}^{*}).\] Note that the operator \(\mathcal{K}_{\alpha}\) acts on functions as follows \[\mathcal{K}_{\alpha}u:=-\Delta u+\frac{\alpha^{2}}{|x|^{2}}u-2i\frac{\alpha}{ |x|^{2}}x^{\perp}\cdot\nabla u=-\Delta u+\frac{\alpha^{2}}{|x|^{2}}u,\] where the latter equality is valid in the spherically symmetric framework. From [22, Remark 2.1, p. 3891] and [22, (4.1), p. 3895], we know that \(\mathcal{K}_{\alpha}\) and \(\mathcal{K}_{\alpha+m}\) are unitarily equivalent for any \(m\in\mathbb{Z}\). Hence, we may assume without loss of generality that \(0<|\alpha|\leq\frac{1}{2}\). The following Hardy inequality was obtained in [27, Theorem 3]: \[\left(d(\alpha,\mathbb{Z})\right)^{2}\int_{\mathbb{R}^{2}}\,\frac{|f(x)|^{2}}{|x|^{2}}\,dx\leq\int_{\mathbb{R}^{2}}\,|\nabla_{\alpha}f(x)|^{2}\,dx,\ \ \ \alpha\in\mathbb{R}\setminus\mathbb{Z}, \tag{1.4}\] where \[\nabla_{\alpha}:=\nabla+i\mathbf{A}_{\alpha}=\nabla+i\frac{\alpha}{|x|^{2}}x^ {\perp}. \tag{1.5}\] From (1.5) and (1.4), we obtain the following Sobolev embedding \[H^{1}_{\alpha}(\mathbb{R}^{2})\hookrightarrow H^{1}(\mathbb{R}^{2}) \hookrightarrow L^{r}(\mathbb{R}^{2}),\ \ 2\leq r<\infty,\ \ \alpha\in\mathbb{R}\setminus\mathbb{Z}. \tag{1.6}\] The dispersive and Strichartz estimates are fundamental tools in studying the linear and nonlinear dynamics for dispersive equations. In our setting, we refer to [17, 33]. See also [18, Theorem 2.3, p. 91] for a precise statement of the dispersive estimate for the Aharonov-Bohm potential. It is now quite classical that the dispersive estimate implies the Strichartz one by means of the \(TT^{*}\) argument of Keel-Tao [25]. See, among many, [33, 22] and the references therein. For the case of the Schrodinger operator with inverse-square potential, we refer to [28]. The Strichartz estimates for the Aharonov-Bohm potential were successfully used to obtain well-posedness and scattering for the homogeneous case, that is (1.1) with \(\varrho=0\). In [34], the authors prove the scattering in the defocusing regime for radially symmetric initial data under the supplementary conditions \(p>3\) and \(2\alpha^{2}>1\). The scattering in the focusing regime was obtained in [21] for radially symmetric initial data under the ground state threshold. Now, we turn back to (1.1). 
The main purpose of this work is to extend and somehow improve the results in [21, 9] to the case of a singular weight in the nonlinearity, that is \(\varrho>0\). Since the inverse-square potential \(|x|^{-2}\) has the same scaling as the Laplace operator, the non-linear Schrodinger equation (1.1) satisfies the scaling invariance \[0<\mu\longmapsto u_{\mu}(t,x):=\mu^{\frac{2-\varrho}{p-1}}u(\mu^{2}t,\mu x).\] The identity \(\|u_{\mu}(t)\|_{\dot{H}^{s}}=\mu^{s-(1-\frac{2-\varrho}{p-1})}\|u(\mu^{2}t)\|_ {\dot{H}^{s}}\) shows that there is exactly one homogeneous Sobolev norm stable under the above dilation, namely the one with exponent \(s_{c}:=1-\frac{2-\varrho}{p-1}\), called the critical Sobolev index. The energy-critical case corresponds to \(s_{c}=1\), or \(p=\infty\). This case is related to the energy conservation law \[E[u(t)]:=\int_{\mathbb{R}^{2}}|\nabla_{\alpha}u(t)|^{2}\,dx+\kappa\frac{2}{p +1}\int_{\mathbb{R}^{2}}|x|^{-\varrho}|u(t)|^{p+1}\,dx=E[u_{0}].\] (Energy) The mass-critical one corresponds to \(s_{c}=0\), or \(p=p_{c}:=3-\varrho\), which is related to the mass conservation law \[M[u(t)]:=\int_{\mathbb{R}^{2}}|u(t,x)|^{2}\,dx=M[u_{0}].\] (Mass) It is worth mentioning that \(s_{c}<1\) provided that \(\varrho<2\). This means that (1.1) is energy sub-critical for any \(p>1\). In the sequel we will focus on the inter-critical regime \(0<s_{c}<1\), that is \(3-\varrho<p<\infty\), and define the positive real number \[\lambda_{c}:=\frac{1}{s_{c}}-1=\frac{p_{c}-1}{p-p_{c}}=\frac{2-\varrho}{p+ \varrho-3}. \tag{1.7}\] Here and hereafter, one denotes for simplicity the Lebesgue norms \[\|\cdot\|_{r}:=\|\cdot\|_{L^{r}(\mathbb{R}^{2})}\quad\text{and}\quad\|\cdot\| :=\|\cdot\|_{2}.\] Define also the quantities \[\mathcal{P}[u] := \int_{\mathbb{R}^{2}}|x|^{-\varrho}|u|^{p+1}\,dx, \tag{1.8}\] \[\mathcal{Q}[u] := \|\nabla_{\alpha}\,u\|^{2}-\frac{B}{p+1}\mathcal{P}[u]=\|\nabla _{\alpha}\,u\|^{2}-\frac{p-1+\varrho}{p+1}\mathcal{P}[u]. \tag{1.9}\] Denote the scale-invariant quantities \[\mathcal{EM}[u] :=\Big{(}\frac{E[u]}{E[\phi]}\Big{)}\Big{(}\frac{M[u]}{M[\phi]} \Big{)}^{\lambda_{c}}, \tag{1.10}\] \[\mathcal{GM}[u] :=\Big{(}\frac{\|\nabla u\|}{\|\nabla\,\phi\|}\Big{)}\Big{(}\frac{ M[u]}{M[\phi]}\Big{)}^{\frac{\lambda_{c}}{2}},\] (1.11) \[\mathcal{PM}[u] :=\Big{(}\frac{\mathcal{P}[u]}{\mathcal{P}[\phi]}\Big{)}\Big{(} \frac{M[u]}{M[\phi]}\Big{)}^{\lambda_{c}}, \tag{1.12}\] where \(\phi\) is a ground state solution to (2.12). From now on, one hides the variable \(t\) for simplicity, displaying it only when necessary. Our main contribution reads as follows. **Theorem 1.1**.: _Let \(0<\varrho<1\), \(3-\varrho<p<\infty\), \(\alpha\in\mathbb{R}\setminus\mathbb{Z}\) and \(u_{0}\in H^{1}_{\alpha}\). Let \(\phi\) be a ground state solution to (2.12) and \(u\in C([0,T^{*}),H^{1}_{\alpha})\) be the maximal solution of (1.1) given by Theorem 3.1 below._ 1. _Suppose that_ \[\sup_{t\in[0,T^{*})}\mathcal{PM}[u(t)]<1.\] (1.13) _Then,_ \(u\) _is global and scatters in_ \(H^{1}_{\alpha}\)_._ 2. _Suppose that_ \[\sup_{t\in[0,T^{*})}\mathcal{Q}[u(t)]<0.\] (1.14) _Then_ \[\sup_{t\in[0,T^{*})}\|\nabla u(t)\|=\infty.\] (1.15) In view of the results stated in the above theorem, some comments arise, and we enumerate them in what follows. 1. The assumption \(\alpha\in\mathbb{R}\setminus\mathbb{Z}\) enables us to use the Hardy estimate (1.4). 2. By Proposition 2.4, it follows that the energy is well-defined for \(0<\varrho<2\). However, one needs the restriction \(0<\varrho<1\) in the local theory. 
This is due to the method, which is based on a fixed point argument and Strichartz estimates. Moreover, in order to control the source term, one decomposes the integrals on the unit ball of \(\mathbb{R}^{2}\) and its complement. In a paper in progress, the authors try to improve the range of the inhomogeneous term exponent \(\varrho\) by use of Lorentz spaces in the spirit of [3]. 3. The scattering under the ground state threshold, in the spirit of the pioneering work [26], is a consequence of the above result. This is given in Corollary 1.2 below. 4. The two above criteria for scattering and blow-up are expressed in terms of non-conserved quantities in the spirit of [12]. This makes them more difficult to verify. But the assumptions (1.13) and (1.14) are weaker than the classical ones, namely (1.16)-(1.17) and (1.16)-(1.18), which are expressed in terms of the mass and the energy and are thus simpler to check. 5. The scattering is obtained by using the new approach of Dodson-Murphy [14]. This method is based on Tao's scattering criteria [30] and Morawetz estimates. 6. Thanks to the identity \(\mathcal{Q}[u]=\frac{B}{2}E[u]-\frac{B-2}{2}\|\nabla_{\alpha}u\|^{2}\), the above blow-up result holds for negative energy. 7. In the homogeneous case \(\varrho=0\), the scattering under the ground state threshold with radial data was obtained in [21]. 8. Our results extend and improve the ones in [21, 34]. 9. It is expected that the energy-critical non-linearity in (1.1) should be of exponential type. This fact was confirmed and extensively studied in the last decade for \(\alpha=0\), that is, the 2D-NLS with exponential non-linearity. See [6, 7, 10, 13, 23] and the references therein. For the inverse-square potential and for both NLS and Klein-Gordon equations, see [11]. 10. In a paper in progress, the authors treat the problem (1.1) with a non-linearity of exponential type. As a consequence of the above result, one has the next dichotomy of global/non-global existence of energy solutions under the ground state threshold. **Corollary 1.2**.: _Take the assumptions of Theorem 1.1 and suppose further that_ \[\mathcal{EM}[u_{0}]<1. \tag{1.16}\] _Then,_ * _the solution of (_1.1_) is global and scatters if_ \[\mathcal{GM}[u_{0}]<1.\] (1.17) * _the solution blows up in finite or infinite time in the sense of (_1.15_) if_ \[\mathcal{GM}[u_{0}]>1.\] (1.18) The rest of this paper is organized as follows. The next section contains the main results and some standard estimates needed in the sequel. Section 3 develops a local theory in the energy space. In Section 4, one proves the main result of this note about two criteria of scattering versus blow-up of energy solutions. The last section proves a dichotomy of global existence and scattering versus blow-up of solutions under the ground state threshold. Finally, for \(a\in\mathbb{R}\), one denotes by \(a^{+}\) a real number close to \(a\) such that \(a^{+}>a\) and by \(a^{-}\) a real number close to \(a\) such that \(a^{-}<a\). ## 2. Useful tools and auxiliary results For future convenience, we recall some known and useful tools which will play an important role in the proof of our main results. First, let us collect some standard estimates related to the linear electromagnetic Schrodinger equation: \[\begin{cases}i\partial_{t}u-\mathcal{K}_{\alpha}u=F(t,x),\\ u(t=0,x)=u_{0}(x).\end{cases} \tag{2.1}\] The following dispersive estimate can be found in [16, 17]. 
**Lemma 2.1**.: _Let \(\alpha\in\mathbb{R}\setminus\mathbb{Z}\), \(t\in\mathbb{R}\setminus\{0\}\) and \(2\leq r\leq\infty\). Then,_ \[\left\|e^{it\mathcal{K}_{\alpha}}\right\|_{L^{r^{\prime}}\to L^{r}} \lesssim|t|^{\frac{2}{r}-1}. \tag{2.2}\] To state the Strichartz estimates, we need the following definition of admissible pairs. **Definition 2.2**.: _A pair \((q,r)\) is said to be admissible if_ \[\frac{1}{q}+\frac{1}{r}=\frac{1}{2},\quad q,r\geq 2,\quad(q,r)\neq(2,\infty). \tag{2.3}\] Let \(\Gamma\) be the set of all admissible pairs. For \(T>0\) and \(\Omega\subset\mathbb{R}^{2}\) a measurable set, define \[\|u\|_{S(\Omega,T)} = \sup_{(q,r)\in\Gamma}\|u\|_{L^{q}(0,T;L^{r}(\Omega))},\] \[\|u\|_{S^{\prime}(\Omega,T)} = \inf_{(q,r)\in\Gamma}\|u\|_{L^{q^{\prime}}(0,T;L^{r^{\prime}}( \Omega))}.\] For \(\Omega=\mathbb{R}^{2}\), we simply write \[\|u\|_{S(T)}=\|u\|_{S(\mathbb{R}^{2},T)},\quad\|u\|_{S^{\prime}(T)}=\|u\|_{S^{ \prime}(\mathbb{R}^{2},T)}.\] Thanks to the above dispersive estimate (2.2) and an argument of Keel-Tao [25], one obtains some Strichartz estimates as stated below. **Proposition 2.3** ([33, 22]).: _Let \(\alpha\in\mathbb{R}\setminus\mathbb{Z}\), \(T>0\) and \(u\) the solution of (2.1). Then_ \[\|u\|_{S(T)}\lesssim\|u_{0}\|_{L^{2}}+\|F\|_{S^{\prime}(T)}. \tag{2.4}\] Employing \([\nabla_{\alpha},\mathcal{K}_{\alpha}]=0\) together with (2.4), we infer that \[\|\nabla_{\alpha}u\|_{S(T)}\lesssim\|u_{0}\|_{\dot{H}^{1}_{\alpha} }+\|\nabla_{\alpha}F\|_{S^{\prime}(T)}, \tag{2.5}\] \[\|u\|_{L^{\infty}_{T}(H^{1}_{\alpha})}\lesssim\|u_{0}\|_{H^{1}_{ \alpha}}+\|\langle\nabla_{\alpha}\rangle F\|_{S^{\prime}(T)}, \tag{2.6}\] where \[\langle\nabla_{\alpha}\rangle=\sqrt{1+|\nabla_{\alpha}|^{2}}. \tag{2.7}\] We also recall the following local-in-time Strichartz estimate (see, for instance, [8]) \[\left\|\int_{a}^{b}\,\mathrm{e}^{i(t-\tau)\mathcal{K}_{\alpha}}g(\cdot,\tau)d \tau\right\|_{S(\mathbb{R})}\lesssim\|g\|_{S^{\prime}(\mathbb{R})}. \tag{2.8}\] The following Gagliardo-Nirenberg inequality will be of interest in the proofs of our main results. **Proposition 2.4**.: _Let \(\alpha\in\mathbb{R}\setminus\mathbb{Z}\), \(1<p<\infty\), \(0<\varrho<2\). Then the following sharp Gagliardo-Nirenberg inequality holds_ \[\int_{\mathbb{R}^{2}}|x|^{-\varrho}|f(x)|^{p+1}\,dx\leq\mathbb{K}_{opt}\|f\|^{ A}\|f\|_{\dot{H}^{1}_{\alpha}}^{B},\quad f\in H^{1}_{\alpha}(\mathbb{R}^{2}), \tag{2.9}\] _where \(A\) and \(B\) are given by_ \[B:=p-1+\varrho\quad\text{and}\quad A:=1+p-B. \tag{2.10}\] _Moreover, the sharp constant \(\mathbb{K}_{opt}\) is given by_ \[\mathbb{K}_{opt}=\frac{p+1}{A}\Big{(}\frac{A}{B}\Big{)}^{\frac{B}{2}}\|\phi\|^ {-(p-1)}, \tag{2.11}\] _where \(\phi\) is a ground state solution to_ \[\mathcal{K}_{\alpha}\phi+\phi=|x|^{-\varrho}|\phi|^{p-1}\phi,\quad 0\neq \phi\in H^{1}_{\alpha}(\mathbb{R}^{2}). \tag{2.12}\] Before getting to the proof of Proposition 2.4, we prove the following compact Sobolev embedding. **Lemma 2.5**.: _Let \(p>1\), \(0<\varrho<2\) and \(\alpha\in\mathbb{R}\setminus\mathbb{Z}\). Then we have the compact Sobolev embedding_ \[H^{1}_{\alpha}(\mathbb{R}^{2})\hookrightarrow\hookrightarrow L^{p+1}(|x|^{- \varrho}\,dx). \tag{2.13}\] Proof.: Suppose that \(f_{n}\rightharpoonup 0\) in \(H^{1}_{\alpha}\) and let \(R>\frac{1}{2-\varrho}\). 
It follows from Holder's inequality that \[\int_{\mathbb{R}^{2}}|x|^{-\varrho}|f_{n}|^{p+1}\,dx = \int_{|x|<R}|x|^{-\varrho}|f_{n}|^{p+1}\,dx+\int_{|x|>R}|x|^{- \varrho}|f_{n}|^{p+1}\,dx,\] \[\leq \||x|^{-\varrho}\|_{L^{\frac{2}{\varrho+\frac{1}{R}}}(|x|<R)} \|f_{n}\|_{L^{\frac{2(1+p)}{2-\varrho-\frac{1}{R}}}(|x|<R)}^{p+1}+R^{- \varrho}\|f_{n}\|_{L^{p+1}}^{p+1},\] \[\leq C_{R}\|f_{n}\|_{L^{\frac{2(1+p)}{2-\varrho-\frac{1}{R}}}(|x|<R)}^{p+1}+CR^{-\varrho}.\] Owing to \(\frac{2(1+p)}{2-\varrho-\frac{1}{R}}>2\) and using the Rellich-Kondrachov compactness theorem, we easily conclude the proof of Lemma 2.5. We turn now to the proof of Proposition 2.4. Proof of Proposition 2.4.: Define \[J(u):=\frac{\|u\|^{A}\|u\|_{\dot{H}^{1}_{\alpha}}^{B}}{\mathcal{P}[u]}, \tag{2.14}\] and consider the minimization problem \[\frac{1}{\mathbb{K}_{opt}}=\inf_{0\neq u\in H^{1}_{\alpha}}\,J(u). \tag{2.15}\] Let \((v_{n})\) be a minimizing sequence for (2.15), that is, \(v_{n}\in H^{1}_{\alpha}\) and \[\frac{1}{\mathbb{K}_{opt}}=\lim_{n}J(v_{n}).\] Pick \[\mu_{n}:=\frac{\|v_{n}\|}{\|v_{n}\|_{\dot{H}^{1}_{\alpha}}},\quad\lambda_{n}:= \frac{1}{\|v_{n}\|_{\dot{H}^{1}_{\alpha}}}, \tag{2.16}\] and define \[\psi_{n}(x)=\lambda_{n}v_{n}(\mu_{n}x). \tag{2.17}\] One can easily verify that \[J(\psi_{n})=J(v_{n}).\] Hence \[\|\psi_{n}\|=\|\psi_{n}\|_{\dot{H}^{1}_{\alpha}}=1\quad\text{and}\quad\frac{1} {\mathbb{K}_{opt}}=\lim_{n}\,J(\psi_{n}). \tag{2.18}\] Then, up to a sub-sequence extraction and owing to (2.13), there exists \(\psi\in H^{1}_{\alpha}\) such that \(\psi_{n}\rightharpoonup\psi\) in \(H^{1}_{\alpha}\) and \(\psi_{n}\to\psi\) in \(L^{p+1}(|x|^{-\varrho}\,dx)\). Consequently, \[J(\psi_{n})=\frac{1}{\mathcal{P}[\psi_{n}]}\to\frac{1}{\mathcal{P}[\psi]} \quad\text{as}\quad n\to\infty.\] Using the lower semi-continuity of the \(H^{1}_{\alpha}\)-norm, one gets \[\max\{\|\psi\|,\|\psi\|_{\dot{H}^{1}_{\alpha}}\}\leq 1.\] If \(\|\psi\|\,\|\psi\|_{\dot{H}^{1}_{\alpha}}<1\), then \(J(\psi)<\frac{1}{\mathbb{K}_{opt}}\), which contradicts (2.15); this implies that \[\|\psi\|=\|\psi\|_{\dot{H}^{1}_{\alpha}}=1.\] It follows that \(\psi_{n}\to\psi\quad\text{in}\quad H^{1}_{\alpha}\) and \[\frac{1}{\mathbb{K}_{opt}}=J(\psi)=\frac{1}{\mathcal{P}[\psi]}.\] Note that the minimizer \(\psi\) satisfies the Euler-Lagrange equation \[\partial_{\varepsilon}J(\psi+\varepsilon\eta)_{|\varepsilon=0}=0,\quad\forall \eta\in C^{\infty}_{0}(\mathbb{R}^{2}).\] Using \(\operatorname{div}\left(\frac{x^{\perp}}{|x|^{2}}\right)=0\), one can see that \(\psi\) solves \[(p-1+\varrho)\mathcal{K}_{\alpha}\psi+(2-\varrho)\psi-\beta(1+p)|x|^{-\varrho }|\psi|^{p-1}\psi=0,\quad\text{where}\quad\beta:=\frac{1}{\mathbb{K}_{opt}}. \tag{2.19}\] Let us pick \[\lambda:=\left((\frac{A}{B})^{-\frac{\varrho}{2}}\frac{A}{\beta(1+p)}\right) ^{\frac{1}{p-1}},\quad\mu:=\left(\frac{A}{B}\right)^{\frac{1}{2}},\] and re-scale the function \(\psi\) as \(\psi(x)=\lambda\phi(\mu x)\). A straightforward computation yields \[\mathcal{K}_{\alpha}\phi+\phi-|x|^{-\varrho}|\phi|^{p-1}\phi=0.\] Moreover, since \(\|\psi\|=1=\lambda\mu^{-1}\|\phi\|\), one gets \[\frac{1}{\mathbb{K}_{opt}}=\frac{A}{p+1}\Big{(}\frac{A}{B}\Big{)}^{-\frac{B} {2}}\|\phi\|^{p-1}.\] This ends the proof of Proposition 2.4. ## 3. Local Theory Our aim in this section is to investigate the local well-posedness of (1.1) in the energy space \(H^{1}_{\alpha}\). We have the following: **Theorem 3.1**.: _Let \(\alpha\in\mathbb{R}\setminus\mathbb{Z}\), \(0<\varrho<1\), \(p>1\) and \(u_{0}\in H^{1}_{\alpha}(\mathbb{R}^{2})\). 
Then there exists \(T=T(\|u_{0}\|_{H^{1}_{\alpha}})>0\) and a unique solution \(u\) to (1.1) with_ \[u,\ \nabla_{\alpha}u\in C(0,T;L^{2})\cap\left(\bigcap_{(q,r)\in\Gamma}L^{q}(0,T; L^{r})\right). \tag{3.1}\] Before getting into the proof of Theorem 3.1, we need some technical lemmas. **Lemma 3.2**.: _Let \(0<\varrho<1\) and \(1<p<\infty\). There exist \(2<r<\infty\), \(0<\gamma<\frac{2}{\varrho}\) and \(\frac{2}{p-1}<\nu<\infty\) such that_ \[\frac{1}{r}=\frac{1}{2}-\frac{1}{\gamma}-\frac{1}{\nu}. \tag{3.2}\] Proof.: Since \(0<\varrho<1\) there exists \(\varepsilon\in(0,1)\) small enough such that \[\frac{1-\varrho}{2}-\frac{p-1}{2}\varepsilon>0.\] Choose \(2<r<\infty\) satisfying \[\frac{1}{r}<\frac{1-\varrho}{2}-\frac{p-1}{2}\varepsilon,\] and define \[\nu=\frac{2}{(p-1)\varepsilon},\ \ \frac{1}{\gamma}=\frac{1}{2}-\frac{1}{\nu}- \frac{1}{r}.\] It follows that \(\frac{2}{p-1}<\nu<\infty\) and \[\frac{1}{\gamma}>\frac{1}{2}-\frac{1}{\nu}-\frac{1-\varrho}{2}+\frac{1}{\nu} =\frac{\varrho}{2}.\] This finishes the proof of Lemma 3.2. **Lemma 3.3**.: _Let \(0<\varrho<1\) and \(1<p<\infty\). There exist \(2<r<\infty\), \(0<\gamma<\frac{2}{\varrho+1}\) and \(2p<\nu<\infty\) such that_ \[1=\frac{1}{r}+\frac{1}{\gamma}+\frac{p}{\nu}. \tag{3.3}\] Proof.: Since \(0<\varrho<1\) there exists \(\varepsilon\in(0,1)\) small enough such that \[\frac{1-\varrho}{2}-\frac{\varepsilon}{2}>0.\] Choose \(2<r<\infty\) satisfying \[\frac{1}{r}<\frac{1-\varrho}{2}-\frac{\varepsilon}{2},\] and define \[\nu=\frac{2p}{\varepsilon},\ \ \frac{1}{\gamma}=1-\frac{1}{r}-\frac{p}{\nu}.\] It follows that \(2p<\nu<\infty\) and \[\frac{1}{\gamma}>1-\frac{1-\varrho}{2}+\frac{\varepsilon}{2}-\frac{ \varepsilon}{2}=\frac{1+\varrho}{2}.\] This finishes the proof of Lemma 3.3. **Lemma 3.4**.: _Let \(1<p<\infty\) and \((q_{0},r_{0})=\left(\frac{2(p+1)}{p-1},p+1\right)\). Then \((q_{0},r_{0})\in\Gamma\) and for any \(T>0\),_ \[\|\|u|^{p-1}v\|_{L^{q^{\prime}_{0}}_{T}(L^{r^{\prime}_{0}})}\lesssim T^{\frac{ 2}{p+1}}\,\|\langle\nabla_{\alpha}\rangle u\|_{L^{\infty}_{T}(L^{2})}^{p-1}\, \|v\|_{L^{q_{0}}_{T}(L^{r_{0}})}. \tag{3.4}\] Proof.: We use Holder's inequality, the Sobolev embeddin (1.6) and the equality \(\frac{1}{r^{\prime}_{0}}=\frac{1}{r_{0}}+\frac{p-1}{r_{0}}\) and obtain that \[\||u|^{p-1}v\|_{L^{q^{\prime}_{0}}_{T}(L^{r^{\prime}_{0}})} \leq \||u|^{p-1}\|_{L^{\frac{p+1}{2}}_{T}(L^{\frac{r_{0}}{p-1}})}\,\|v \|_{L^{q_{0}}_{T}(L^{r_{0}})},\] \[\leq \|u\|_{L^{\frac{p^{2}-1}{2}}_{T}(L^{r_{0}})}^{p-1}\,\|v\|_{L^{q_{ 0}}_{T}(L^{r_{0}})},\] \[\lesssim \|\langle\nabla_{\alpha}\rangle u\|_{L^{\frac{p^{2}-1}{2}}_{T}( L^{2})}^{p-1}\,\|v\|_{L^{q_{0}}_{T}(L^{r_{0}})},\] \[\lesssim T^{\frac{2}{p+1}}\,\|\langle\nabla_{\alpha}\rangle u\|_{L^{ \infty}_{T}(L^{2})}^{p-1}\,\|v\|_{L^{q_{0}}_{T}(L^{r_{0}})}.\] This gives (3.4) as desired. **Lemma 3.5**.: _Let \(0<\varrho<1\), \(1<p<\infty\), \(T>0\) and \(\alpha\in\mathbb{R}\setminus\mathbb{Z}\). Then there exist \(a,b>0\) depending only on \(p\) and \(\varrho\) such that_ \[\||x|^{-\varrho}|u|^{p-1}v\|_{S^{\prime}(T)}\lesssim\left(T^{a}+T^{b}\right)\, \|\langle\nabla_{\alpha}\rangle u\|_{S(T)}^{p-1}\,\|v\|_{S(T)}, \tag{3.5}\] _and_ \[\|\nabla_{\alpha}\left(|x|^{-\varrho}|u|^{p-1}u\right)\|_{S^{\prime}(T)} \lesssim\left(T^{a}+T^{b}\right)\,\|\langle\nabla_{\alpha}\rangle u\|_{S(T)}^{ p}. 
\tag{3.6}\] Proof.: We will making use of the elementary observation: \[|x|^{-\varrho}\in L^{\gamma}(\mathbf{B})\ \ \text{if}\ \ \frac{2}{\gamma}>\varrho\ \ \text{and}\ \ \ |x|^{-\varrho}\in L^{\gamma}(\mathbf{B}^{c})\ \ \text{if}\ \ \frac{2}{\gamma}<\varrho, \tag{3.7}\] where \(\mathbf{B}=\{x\in\mathbb{R}^{2};\ \ |x|<1\}\) is the unit ball in \(\mathbb{R}^{2}\). First, let us prove the estimate (3.5). Write \[\||x|^{-\varrho}|u|^{p-1}v\|_{S^{\prime}(T)}\leq\mathbf{I}_{1}+\mathbf{I}_{2},\] where \[\mathbf{I}_{1} = \||x|^{-\varrho}|u|^{p-1}v\|_{S^{\prime}(\mathbf{B},T)},\] \[\mathbf{I}_{2} = \||x|^{-\varrho}|u|^{p-1}v\|_{S^{\prime}(\mathbf{B}^{c},T)}.\] Let \(r,\gamma,\nu\) be as in Lemma 3.2 and \(q\) such that \((q,r)\in\Gamma\). By Holder's inequality, (3.7) and (1.6), \[\mathbf{I}_{1} \leq \||x|^{-\varrho}\|_{L^{\gamma}(\mathbf{B})}\,\||u|^{p-1}\|_{L^{q^ {\prime}}_{T}(L^{\nu})}\,\|v\|_{L^{\infty}_{T}(L^{2})},\] \[\lesssim \|u\|_{L^{(p^{-1})q^{\prime}}_{T}(L^{\nu(p^{-1})})}^{p-1}\,\|v\|_ {L^{\infty}_{T}(L^{2})},\] \[\lesssim T^{\frac{1}{q^{\prime}}}\,\|u\|_{L^{\infty}_{T}(H^{1}_{A})}^{p-1 }\,\|v\|_{L^{\infty}_{T}(L^{2})},\] \[\lesssim T^{\frac{1}{q^{\prime}}}\,\|\langle\nabla_{\alpha}\rangle u\|_{S (T)}^{p-1}\,\|v\|_{S(T)}.\] The estimate of \(\mathbf{I}_{2}\) easily follows from Lemma 3.4. Indeed, let \((q_{0},r_{0})\) be as in Lemma 3.4. Then \[\mathbf{I}_{2} \lesssim \||u|^{p-1}v\|_{L^{q^{\prime}_{0}}_{T}(L^{r^{\prime}_{0}})},\] \[\lesssim T^{\frac{2}{p+1}}\,\|\langle\nabla_{\alpha}\rangle u\|_{L^{ \infty}_{T}(L^{2})}^{p-1}\,\|v\|_{L^{q_{0}}_{T}(L^{r_{0}})},\] \[\lesssim T^{\frac{2}{p+1}}\,\|\langle\nabla_{\alpha}\rangle u\|_{S(T)}^{p -1}\,\|v\|_{S(T)}.\] This finishes the proof of (3.5). We turn now to (3.6). Clearly \[\|\nabla_{\alpha}\left(|x|^{-\varrho}|u|^{p-1}u\right)\|_{S^{\prime}(T)}\leq \mathbf{J}_{1}+\mathbf{J}_{2},\] where \[\mathbf{J}_{1} = \|\nabla_{\alpha}\left(|x|^{-\varrho}|u|^{p-1}u\right)\|_{S^{ \prime}(\mathbf{B},T)},\] \[\mathbf{J}_{1} = \|\nabla_{\alpha}\left(|x|^{-\varrho}|u|^{p-1}u\right)\|_{S^{ \prime}(\mathbf{B}^{c},T)}.\] Using the fact that \(|\nabla_{\alpha}\,f|\lesssim|\nabla\,f|+\frac{|f|}{|x|}\), we get \[\mathbf{J}_{1} \lesssim \||x|^{-\varrho}|u|^{p-1}\nabla\,u\|_{S^{\prime}(\mathbf{B},T)}+ \||x|^{-\varrho-1}|u|^{p-1}u\|_{S^{\prime}(\mathbf{B},T)},\] \[\mathbf{J}_{2} \lesssim \||x|^{-\varrho}|u|^{p-1}\nabla\,u\|_{S^{\prime}(\mathbf{B}^{c},T)}+\||x|^{-\varrho-1}|u|^{p-1}u\|_{S^{\prime}(\mathbf{B}^{c},T)}.\] Arguing as for \(\mathbf{I}_{1}\) and owing to \(\|\nabla\,f\|\lesssim\|\nabla_{\alpha}\,f\|\), we infer that \[\||x|^{-\varrho}|u|^{p-1}\nabla\,u\|_{S^{\prime}(\mathbf{B},T)}\lesssim T^{a} \,\|\langle\nabla_{\alpha}\rangle u\|_{S(T)}^{p-1}\,\|\nabla_{\alpha}\,u\|_{S (T)}\lesssim T^{a}\,\|\langle\nabla_{\alpha}\rangle u\|_{S(T)}^{p},\] for some positive constant \(a\). Next we bound \(\||x|^{-\varrho-1}|u|^{p-1}u\|_{S^{\prime}(\mathbf{B},T)}\). Let \(r,\gamma,\nu\) be as in Lemma 3.3 and \(q\) such that \((q,r)\in\Gamma\). 
By Holder's inequality, (3.7) and (1.6), we get \[\||x|^{-\varrho-1}|u|^{p-1}u\|_{S^{\prime}(\mathbf{B},T)} \leq \||x|^{-\varrho-1}\|_{L^{\gamma}(\mathbf{B})}\,\|u\|_{L^{(p-1)q^ {\prime}}_{T}(L^{\nu})}^{p-1}\,\|u\|_{L^{\infty}_{T}(L^{\nu})},\] \[\lesssim \|\langle\nabla_{\alpha}\rangle u\|_{L^{(p-1)q^{\prime}}_{T}(L^ {2})}^{p-1}\,\|\langle\nabla\rangle u\|_{L^{\infty}_{T}(L^{2})},\] \[\lesssim T^{\frac{1}{q^{\prime}}}\,\|\langle\nabla_{\alpha}\rangle u\|_{S (T)}^{p}.\] Therefore \(\mathbf{J}_{1}\lesssim T^{a}\left\|\langle\nabla_{\alpha}\rangle u\right\|_{S(T)}^{p}\). We turn now to the term \(\mathbf{J}_{2}\). Arguing as for \(\mathbf{I}_{2}\), we obtain that \[\||x|^{-\varrho}|u|^{p-1}\nabla\,u\|_{S^{\prime}(\mathbf{B}^{c},T)} \lesssim T^{\frac{2}{p+1}}\left\|\langle\nabla_{\alpha}\rangle u\right\|_{ S(T)}^{p-1}\left\|\nabla u\right\|_{S(T)},\] \[\||x|^{-\varrho-1}|u|^{p-1}u\|_{S^{\prime}(\mathbf{B}^{c},T)} \lesssim T^{\frac{2}{p+1}}\left\|\langle\nabla_{\alpha}\rangle u\right\|_{ S(T)}^{p-1}\left\|u\right\|_{S(T)}.\] It follows that \(\mathbf{J}_{2}\lesssim T^{\frac{2}{p+1}}\left\|\langle\nabla_{\alpha}\rangle u \right\|_{S(T)}^{p}\). This finishes the proof of (3.6). Having at hand the above technical results, we are now able to prove Theorem 3.1. Proof of Theorem 3.1.: Thanks to the Duhamel formula, solutions of (1.1) are fixed points of the integral functional \[\Phi(u)(t):=e^{it\mathcal{K}_{\alpha}}u_{0}-i\int_{0}^{t}e^{i(t-s)\mathcal{K }_{\alpha}}[|x|^{-\varrho}|u(s)|^{p-1}u(s)]\,ds. \tag{3.8}\] For \(R,T>0\) to be chosen later, let \[\mathbf{X}(T,R)=\bigg{\{}\,u\in C_{T}(H_{\alpha}^{1})\,\,\,\text{s.t.}\,\,\,u,\nabla_{\alpha}u\in\bigcap_{(q,r)\in\Gamma}L_{T}^{q}(L^{r})\,\,\,\text{and} \,\,\,\|u\|_{S(T)}\leq R\,\bigg{\}},\] endowed with the distance \(d(u,v)=\|u-v\|_{S(T)}\). Clearly \((\mathbf{X}(T,R),d)\) is a complete metric space. Applying Strichartz estimates (2.4) and (2.5), we get \[\|\Phi(u)\|_{S(T)} \lesssim \|u_{0}\|_{L^{2}}+\||x|^{-\varrho}|u|^{p-1}u\|_{S^{\prime}(T)},\] \[\|\nabla_{\alpha}\Phi(u)\|_{S(T)} \lesssim \|u_{0}\|_{\dot{H}_{\alpha}^{1}}+\|\nabla_{\alpha}\left(|x|^{- \varrho}|u|^{p-1}u\right)\|_{S^{\prime}(T)},\] \[\|\Phi(u)-\Phi(v)\|_{S(T)} \lesssim \||x|^{-\varrho}\left(|u|^{p-1}u-|v|^{p-1}v\right)\|_{S^{\prime} (T)}.\] Employing Lemma 3.5 yields \[\|\Phi(u)\|_{S(T)} \leq C\|u_{0}\|_{L^{2}}+C\left(T^{a}+T^{b}\right)\,\|u\|_{S(T)}^{p}, \tag{3.9}\] \[\|\nabla_{\alpha}\Phi(u)\|_{S(T)} \leq C\|u_{0}\|_{\dot{H}_{\alpha}^{1}}+C\left(T^{a}+T^{b}\right)\,\| \langle\nabla_{\alpha}\rangle u\|_{S(T)}^{p},\] (3.10) \[\|\Phi(u)-\Phi(v)\|_{S(T)} \leq C\left(T^{a}+T^{b}\right)\,\left(\|u\|_{S(T)}^{p-1}+\|v\|_{S(T) }^{p-1}\right)\,\|u-v\|_{S(T)}, \tag{3.11}\] for some positive constant \(C\). This shows that \[\Phi\left(C_{T}(H_{\alpha}^{1})\bigcap_{(q,r)\in\Gamma}L_{T}^{q}(W_{\alpha}^{ 1,r})\right)\subset C_{T}(H_{\alpha}^{1})\bigcap_{(q,r)\in\Gamma}L_{T}^{q}(W_ {\alpha}^{1,r}).\] Let \(R=2C\|u_{0}\|_{H_{\alpha}^{1}}\). For \(u,v\in\mathbf{X}(T,R)\), we have from (3.9) and (3.11), \[\|\Phi(u)\|_{S(T)} \leq \frac{R}{2}+C\left(T^{a}+T^{b}\right)R^{p}, \tag{3.12}\] \[\|\Phi(u)-\Phi(v)\|_{S(T)} \leq 2C\left(T^{a}+T^{b}\right)R^{p-1}\|u-v\|_{S(T)}. \tag{3.13}\] Choosing \(T>0\) such that \(2C\left(T^{a}+T^{b}\right)R^{p-1}<1\), we conclude the proof by a classical fixed point argument. ## 4. Proof of Theorem 1.1 Let us prepare the proof of the scattering. 
Here and hereafter, we denote by \(B(R):=\{x\in\mathbb{R}^{2};\ |x|\leq R\}\) the ball of \(\mathbb{R}^{2}\) centered at the origin and with radius \(R>0\) and \(B^{c}(R)\) its complementary in \(\mathbb{R}^{2}\). Also, for \(0<R_{1}<R_{2}\), one denotes by \(C(R_{1},R_{2}):=\{x\in\mathbb{R}^{2};\ R_{1}\leq|x|\leq R_{2}\}\) the annulus of \(\mathbb{R}^{2}\). Let \(\psi\in C_{0}^{\infty}(\mathbb{R}^{2})\) be a radial bump function such that \[\psi=1\quad\text{on}\quad B\left(\frac{1}{2}\right),\quad\psi=0\quad\text{on} \quad B^{c}(1)\quad\text{and}\quad 0\leq\psi\leq 1. \tag{4.1}\] For \(R>0\), define \[\psi_{R}(x)=\psi\left(\frac{|x|}{R}\right). \tag{4.2}\] ### Variational Analysis Recall that \(\phi\) stands for a radially symmetric decreasing solution to (2.12). The following inequality will be useful in obtaining a coercivity result. **Lemma 4.1**.: _Let \(u\in H^{1}_{\alpha}(\mathbb{R}^{2})\). Then,_ \[\mathcal{P}[u]\leq\frac{p+1}{B}\bigg{(}\frac{M[u]^{\lambda_{c}}\mathcal{P}[u] }{M[\phi]^{\lambda_{c}}\mathcal{P}[\phi]}\bigg{)}^{\frac{B-2}{B}}\ \|u\|_{\dot{H}^{1}_{\alpha}}^{2}, \tag{4.3}\] _where \(B\) is given by (2.10) and \(\lambda_{c}\) is given by (1.7)._ Proof.: Thanks to Pohozaev identities, one has \[\mathcal{P}[\phi]=\frac{p+1}{A}\,M[\phi]=\frac{p+1}{B}\Big{(}\|\nabla\phi\|^{2 }+\alpha^{2}\||x|^{-1}\phi\|^{2}\Big{)}. \tag{4.4}\] Indeed, multiplying (2.12) with \(\bar{\phi}\) and integrating, it follows that \[\int_{\mathbb{R}^{2}}\mathcal{K}_{\alpha}(\phi)\bar{\phi}\,dx+\|\phi\|^{2}= \mathcal{P}[\phi].\] Moreover, since \(\operatorname{div}\Big{(}\frac{x^{\perp}}{|x|^{2}}\Big{)}=0\), an integration by parts gives \[\int_{\mathbb{R}^{2}}\mathcal{K}_{\alpha}(\phi)\bar{\phi}\,dx = \int_{\mathbb{R}^{2}}\Big{(}-\Delta\phi-2i\frac{\alpha}{|x|^{2}} x^{\perp}\cdot\nabla\phi+\frac{\alpha^{2}}{|x|^{2}}\Big{)}\bar{\phi}\,dx\] \[= \|\nabla\phi\|^{2}-2i\alpha\int_{\mathbb{R}^{2}}\frac{x^{\perp}}{ |x|^{2}}\cdot\nabla\phi\,\bar{\phi}\,dx+\alpha^{2}\left\|\frac{\phi}{|x|} \right\|^{2}\] \[= \|\nabla\phi\|^{2}+2\alpha\Re\left(i\int_{\mathbb{R}^{2}}\frac{x ^{\perp}}{|x|^{2}}\cdot\overline{\nabla\phi}\phi\,dx\right)+\alpha^{2}\left\| \frac{\phi}{|x|}\right\|^{2}\] \[= \|\nabla_{\alpha}\phi\|^{2}.\] Thus, \[\|\nabla_{\alpha}\phi\|^{2}+\|\phi\|^{2}=\mathcal{P}[\phi].\] Define the action \[S(\phi):=\|\nabla_{\alpha}\phi\|^{2}+\|\phi\|^{2}-\frac{2}{p+1}\mathcal{P}[\phi].\] Since \(S^{\prime}(\phi)=0\), one gets \(\partial_{\lambda}\Big{(}S(\phi^{\lambda}_{\alpha,\beta})\Big{)}_{\lambda=1}=0\), where \(\phi^{\lambda}_{\alpha,\beta}(x):=\lambda^{\alpha}(\phi(\lambda^{\beta}x)\). 
A straightforward computation gives \[\|\nabla_{\alpha}\phi^{\lambda}_{\alpha,\beta}\|^{2}=\lambda^{2\alpha}\Big{(} \|\nabla\phi\|^{2}+\||x|^{-1}u\|^{2}\Big{)},\] \[\|\phi^{\lambda}_{\alpha,\beta}\|=\lambda^{\alpha-\beta}\|\phi\|,\] \[\mathcal{P}[\phi^{\lambda}_{\alpha,\beta}]=\lambda^{\alpha(1+p)+\beta(-2+ \varrho)}\mathcal{P}[\phi].\] Therefore \[\partial_{\lambda}\Big{(}S(\phi^{\lambda}_{\alpha,\beta})\Big{)}_ {|\lambda=1} = 2\alpha\|\nabla_{\alpha}\phi\|^{2}+2(\alpha-\beta)\|u\|^{2}-2 \frac{\alpha(1+p)+\beta(-2+\varrho)}{p+1}\mathcal{P}[\phi].\] By taking \(\alpha=\beta=1\), we obtain that \[\|\nabla_{\alpha}\phi\|^{2}=\frac{B}{p+1}\mathcal{P}[\phi].\] This leads to \[\|\phi\|^{2}=(1-\frac{B}{p+1})\mathcal{P}[\phi]=\frac{A}{p+1}\mathcal{P}[\phi].\] Using the Gagliardo-Nirenberg inequality (2.9), the expression of \(\mathtt{K}_{opt}\) given by (2.11), the pohozaev identities (4.4) and the identities \((p-1)i_{c}=B-2\) and \(\lambda_{c}(B-2)=A\), one writes \[[\mathcal{P}[u]]^{\frac{B}{2}} \leq \mathtt{K}_{opt}\left(\|u\|^{2\lambda_{c}}\mathcal{P}[u]\right)^{ \frac{B}{2}-1}\,\|u\|^{B}_{\dot{H}^{1}_{\alpha}}\] \[\leq \frac{p+1}{A}\left(\frac{A}{B}\right)^{\frac{B}{2}}\,\|\phi\|^{-( p-1)}\left(M[u]^{\lambda_{c}}\mathcal{P}[u]\right)^{\frac{B}{2}-1}\|u\|^{B}_{\dot{H} ^{1}_{\alpha}}\] \[\leq \frac{p+1}{A}\left(\frac{A}{B}\right)^{\frac{B}{2}}M[\phi]^{\frac {A-(p-1)}{2}}[\mathcal{P}[\phi]]^{\frac{B}{2}-1}\bigg{(}\frac{M[u]^{\lambda_{ c}}\mathcal{P}[u]}{M[\phi]^{\lambda_{c}}\mathcal{P}[\phi]}\bigg{)}^{\frac{B}{2}-1} \|u\|^{B}_{\dot{H}^{1}_{\alpha}}\] \[\leq \left(\frac{A}{B}\frac{\mathcal{P}[\phi]}{M[\phi]}\right)^{\frac{ B}{2}}\left(\frac{M[u]^{\lambda_{c}}\mathcal{P}[u]}{M[\phi]^{\lambda_{c}} \mathcal{P}[\phi]}\right)^{\frac{B}{2}-1}\|u\|^{B}_{\dot{H}^{1}_{\alpha}}\] \[\leq \left(\frac{M[u]^{\lambda_{c}}\mathcal{P}[u]}{M[\phi]^{\lambda_{c} }\mathcal{P}[\phi]}\right)^{\frac{B}{2}-1}\bigg{(}\frac{p+1}{B}\|u\|^{2}_{ \dot{H}^{1}_{\alpha}}\bigg{)}^{\frac{B}{2}}.\] This leads to (4.3) as desired. As a consequence of the above lemma, we obtain the following coercivity result. **Corollary 4.2**.: _Let \(u\in H^{1}_{\alpha}(\mathbb{R}^{2})\) and \(\varepsilon\in(0,1)\) satisfying_ \[\mathcal{P}[u][M[u]]^{\lambda_{c}}\leq(1-\varepsilon)\mathcal{P}[\phi][M[\phi ]]^{\lambda_{c}}. \tag{4.5}\] _Then,_ \[\mathcal{P}[u]\leq\frac{p+1}{B}\,\,(1-\varepsilon)^{\frac{B-2}{B}}\,\|u\|^{2} _{\dot{H}^{1}_{\alpha}}\leq\frac{p+1}{B}\,\,\|u\|^{2}_{\dot{H}^{1}_{\alpha}}, \tag{4.6}\] _and_ \[\|u\|^{2}_{\dot{H}^{1}_{\alpha}}-\frac{B}{p+1}\mathcal{P}[u]\geq c(\varepsilon,B) \;\|u\|^{2}_{\dot{H}^{1}_{\alpha}}, \tag{4.7}\] _where \(c(\varepsilon,B):=1-(1-\varepsilon)^{\frac{B-2}{B}}>0\) since \(B>2\). Moreover, for \(\varepsilon\) small enough, we have_ \[E[u]\geq\frac{B-2}{B}\,\|u\|^{2}_{\dot{H}^{1}_{\alpha}}. \tag{4.8}\] Proof.: Inequality (4.6) follows immediately from (4.3) and (4.5). To prove (4.7) we use the first inequality in (4.6) and the fact that \(B>2\) and \(\varepsilon\in(0,1)\). Finally, using the first inequality in (4.6), we infer \[E[u] = \|u\|^{2}_{\dot{H}^{1}_{\alpha}}-\frac{2}{p+1}\mathcal{P}[u]\] \[\geq \bigg{(}1-\frac{2}{B}\Big{(}1-\varepsilon\Big{)}^{\frac{B-2}{B}} \bigg{)}\|u\|^{2}_{\dot{H}^{1}_{\alpha}}.\] Since \(1-\frac{2}{B}\Big{(}1-\varepsilon\Big{)}^{\frac{B-2}{B}}\to 1-\frac{2}{B}\) as \(\varepsilon\to 0\) and \(B>2\), we get (4.8). 
**Remark 4.3**.: _Since_ \[\mathcal{P}[\psi_{R}\,u]\leq\mathcal{P}[u]\quad\text{and}\quad M[\psi_{R}\,u ]\leq M[u],\;\;\;\forall\;\;\;R>0,\] _inequalities (4.6)-(4.7) remain true for \(\psi_{R}\,u\) instead of \(u\). Namely, we have_ \[\mathcal{P}[\psi_{R}\,u]\leq\frac{p+1}{B}\;\|\psi_{R}u\|^{2}_{\dot{H}^{1}_{ \alpha}}, \tag{4.9}\] _and_ \[\|\psi_{R}u\|^{2}_{\dot{H}^{1}_{\alpha}}-\frac{B}{p+1}\mathcal{P}[\psi_{R}\,u ]\geq c(\varepsilon,B)\;\|\psi_{R}u\|^{2}_{\dot{H}^{1}_{\alpha}}. \tag{4.10}\] **Remark 4.4**.: _The solution is global by (4.8)._ ### Morawetz estimate Let \(b=b(x)=b(|x|):\mathbb{R}^{2}\to\mathbb{R}\) be a radial smooth function sufficiently decaying at infinity and \(u\in C([0,T^{*});H^{1}_{\alpha})\) be the maximal solution of (1.1). One denotes the virial potential \[V_{b}:t\mapsto\int_{\mathbb{R}^{2}}b(x)|u(t,x)|^{2}\,dx. \tag{4.11}\] Here and hereafter, subscripts denote partial derivatives and repeated indexes are summed and \(\partial_{r}\) is the radial derivative. Now, taking into account [19, Theorem 1.2, p. 252] and [20, Theorem 3.1, p. 9] for \(V:=-|x|^{-\varrho}|u|^{p-1}\), one has \[V_{b}^{\prime\prime}(t) = -\int_{\mathbb{R}^{2}}\Delta^{2}b|u|^{2}\,dx+4\int_{\mathbb{R}^{2 }}\nabla_{\alpha}uD^{2}b\overline{\nabla_{\alpha}u}\,dx\] \[+ 4\Im\Big{(}\int_{\mathbb{R}^{2}}ub^{\prime}\mathbf{B}\frac{x}{|x |}\cdot\overline{\nabla_{\alpha}u}\,dx\Big{)}-2\int_{\mathbb{R}^{2}}\left( \nabla b\cdot\nabla V\right)|u|^{2}\,dx.\] Here, \[\mathbf{B}(x)\in\mathcal{M}_{2\times 2}(\mathbb{R}),\quad\mathbf{B}_{ij}:= \partial_{x_{j}}\mathbf{A}^{i}_{\alpha}-\partial_{x_{i}}\mathbf{A}^{j}_{ \alpha},\] where \({\bf A}_{\alpha}\) is given by (1.2). A direct calculus gives \({\bf B}=0\) and so with integration by parts \[V_{b}^{\prime\prime}(t) = -\int_{\mathbb{R}^{2}}\Delta^{2}b|u|^{2}\,dx+4\int_{\mathbb{R}^{2} }\nabla_{\alpha}uD^{2}b\overline{\nabla_{\alpha}u}\,dx\] \[+ 2\int_{\mathbb{R}^{2}}\nabla b\cdot\nabla(|x|^{-\varrho}|u|^{p-1 })|u|^{2}\,dx\] \[= -\int_{\mathbb{R}^{2}}\Delta^{2}b|u|^{2}\,dx+4\int_{\mathbb{R}^{2 }}\nabla_{\alpha}uD^{2}b\overline{\nabla_{\alpha}u}\,dx\] \[- 2\int_{\mathbb{R}^{2}}\Delta b|x|^{-\varrho}|u|^{1+p}\,dx-2\int _{\mathbb{R}^{2}}\nabla b\cdot\nabla(|u|^{2})|x|^{-\varrho}|u|^{p-1}\,dx.\] Also, with integration by parts \[V_{b}^{\prime\prime}(t) = -\int_{\mathbb{R}^{2}}\Delta^{2}b|u|^{2}\,dx+4\int_{\mathbb{R}^{2 }}\nabla_{\alpha}uD^{2}b\overline{\nabla_{\alpha}u}\,dx \tag{4.12}\] \[- 2\int_{\mathbb{R}^{2}}\Delta b|x|^{-\varrho}|u|^{1+p}\,dx-\frac {4}{1+p}\int_{\mathbb{R}^{2}}\nabla b\cdot\nabla(|u|^{1+p})|x|^{-\varrho}\,dx\] \[= -\int_{\mathbb{R}^{2}}\Delta^{2}b|u|^{2}\,dx+4\int_{\mathbb{R}^{2 }}\nabla_{\alpha}uD^{2}b\overline{\nabla_{\alpha}u}\,dx\] \[- \frac{2(p-1)}{1+p}\int_{\mathbb{R}^{2}}\Delta b|x|^{-\varrho}|u| ^{1+p}\,dx+\frac{4}{1+p}\int_{\mathbb{R}^{2}}\nabla b\cdot\nabla(|x|^{-\varrho })|u|^{1+p}\,dx.\] Consider, for \(R>0\), a smooth radial real-valued function \(f:=f_{R}\) such that \[f(r):=\left\{\begin{array}{ll}r^{2},\quad\mbox{if}\quad 0\leq r\leq R,\\ 3Rr,\quad\mbox{if}\quad r>2R,\end{array}\right. \tag{4.13}\] such that on the annulus \(C(R,2R)\) one has \[\min\{\partial_{r}f,\partial_{r}^{2}f\}\geq 0,\quad|\partial^{\gamma}f|\lesssim R |x|^{1-|\gamma|}. \tag{4.14}\] Under these conditions, the matrix \((f_{jk})\) is non-negative. 
Moreover, by the radial identity \[\frac{\partial^{2}}{\partial x_{l}\partial x_{k}}:=\partial_{l} \partial_{k}=\Big{(}\frac{\delta_{lk}}{r}-\frac{x_{l}x_{k}}{r^{3}}\Big{)} \partial_{r}+\frac{x_{l}x_{k}}{r^{2}}\partial_{r}^{2}, \tag{4.15}\] one gets \[\left\{\begin{array}{ll}f_{jk}=2\delta_{j}^{k},\quad\Delta f=4, \quad\Delta^{2}f=0,\quad 0\leq r\leq R,\\ f_{jk}=\frac{3R}{r}[\delta_{j}^{k}-\frac{x_{j}x_{k}}{r^{2}}],\quad\Delta f= \frac{3R}{r},\quad\Delta^{2}f=\frac{3R}{r^{3}},\quad r>2R.\end{array}\right.\] Denote by \(V_{R}:=V_{f_{R}}\) and \(\,\not\!\nabla_{\alpha}:=\nabla_{\alpha}-\frac{x\cdot\nabla_{\alpha}}{|x|^{2}}x\) the angular gradient. Thus, \[V_{R}^{\prime\prime}(t) =8\int_{B(R)}\left(|\nabla_{\alpha}u|^{2}-\frac{B}{p+1}|x|^{- \varrho}|u|^{p+1}\right)dx\] \[+\int_{B^{c}(2R)}\left(\frac{12R}{|x|}|\not\!\nabla_{\alpha}u|^{2 }-3R\frac{|u|^{2}}{|x|^{3}}-\frac{6R(p-1+2\varrho)}{p+1}|x|^{-\varrho-1}|u|^{p +1}\right)dx\] \[+\int_{C(R,2R)}\Big{(}-\Delta^{2}f|u|^{2}+4\nabla_{\alpha}uD^{2}f \overline{\nabla_{\alpha}u}-\frac{2(p-1)}{p+1}\Delta f|x|^{-\varrho}|u|^{p+1} \Big{)}dx\] \[+\int_{C(R,2R)}\Big{(}\frac{4}{p+1}\nabla f\cdot\nabla(|x|^{- \varrho})|u|^{p+1}\Big{)}\,dx.\] It follows that \[V_{R}^{\prime\prime}(t) \geq 8\int_{B(R)}\left(|\nabla_{\alpha}u|^{2}-\frac{B}{p+1}|x|^{- \varrho}|u|^{p+1}\right)dx\] \[-\int_{B^{c}(2R)}\Big{(}3R\frac{|u|^{2}}{|x|^{3}}+\frac{6R(p-1+2 \varrho)}{p+1}|x|^{-\varrho-1}|u|^{p+1}\Big{)}\,dx\] \[+\int_{C(R,2R)}\Big{(}-\Delta^{2}f|u|^{2}+4\nabla_{\alpha}uD^{2}f \overline{\nabla_{\alpha}u}-\frac{2(p-1)}{p+1}\Delta f|x|^{-\varrho}|u|^{p+1 }\Big{)}dx\] \[+\int_{C(R,2R)}\Big{(}\frac{4}{p+1}\nabla f\cdot\nabla(|x|^{- \varrho})|u|^{p+1}\Big{)}\,dx.\] Moreover, by (4.15) and (4.14), one writes \[\int_{C(R,2R)}\nabla_{\alpha}uD^{2}f\overline{\nabla_{\alpha}u} \,dx = \int_{C(R,2R)}(\nabla_{\alpha}u)_{l}\Big{[}\Big{(}\frac{\delta_{ lk}}{r}-\frac{x_{l}x_{k}}{r^{3}}\Big{)}f^{\prime}+\frac{x_{l}x_{k}}{r^{2}}f^{ \prime\prime}\Big{]}(\overline{\nabla_{\alpha}u})_{k}\,dx\] \[= \int_{C(R,2R)}\Big{[}|\not\!\nabla_{\alpha}u|^{2}\frac{f^{\prime }}{|x|}+|x\cdot\nabla_{\alpha}u|^{2}\frac{f^{\prime\prime}}{|x|^{2}}\Big{]}\,dx\] \[\geq 0.\] Hence, by (4.14) and Sobolev embeddings, one gets \[V_{R}^{\prime\prime}(t)\gtrsim\int_{B(R)}\Big{(}|\nabla_{\alpha}u|^{2}-\frac{B }{p+1}|x|^{-\varrho}|u|^{p+1}\Big{)}\,dx-\frac{\|u\|^{2}}{R^{2}}-\frac{\|u\|^{ p+1}_{H^{1}}}{R^{\varrho}}.\] Owing to (4.8) and (4.10), we infer that \[\frac{1}{R^{2}}+\frac{1}{R^{\varrho}}+V_{R}^{\prime\prime}(t) \gtrsim\int_{B(R)}\left(|\nabla_{\alpha}u|^{2}-\frac{B}{p+1}|x|^{- \varrho}|u|^{p+1}\right)dx\] \[\gtrsim\|\psi_{R}u\|^{2}_{H^{1}_{\alpha}}\] \[\gtrsim\int_{\mathbb{R}^{2}}|x|^{-\varrho}|\psi_{R}u|^{p+1}\,dx\] \[\gtrsim\int_{B(R)}|x|^{-\varrho}|u|^{p+1}\,dx. \tag{4.16}\] As a consequence, we have: **Proposition 4.5**.: _Let \(T>0\), \(0<\varrho<1\) and \(u\in C_{T}(H^{1}_{\alpha})\) be a solution of (1.1). Then_ \[\int_{0}^{T}\,\int_{\mathbb{R}^{2}}|x|^{-\varrho}|u(t,x)|^{p+1}\,dx\,dt\lesssim T ^{\frac{1}{1+\varrho}}. 
\tag{4.17}\] Proof.: By (4.16) together with Sobolev embedding, one has \[\int_{\mathbb{R}^{2}}|x|^{-\varrho}|u(t,x)|^{p+1}\,dx = \int_{B(R)}|x|^{-\varrho}|u(t,x)|^{p+1}\,dx+\int_{B(R)^{c}}|x|^{- \varrho}|u(t,x)|^{p+1}\,dx\] \[\lesssim \frac{1}{R^{2}}+\frac{1}{R^{\varrho}}+V_{R}^{\prime\prime}(t)+ \frac{1}{R^{\varrho}}\int_{\mathbb{R}^{2}}|u(t,x)|^{p+1}\,dx\] \[\lesssim \frac{1}{R^{2}}+\frac{1}{R^{\varrho}}+V_{R}^{\prime\prime}(t).\] This gives \[\int_{0}^{T}\,\int_{\mathbb{R}^{2}}|x|^{-\varrho}|u(t,x)|^{p+1} \,dx\,dt \lesssim \frac{T}{R^{2}}+\frac{T}{R^{\varrho}}+V_{R}^{\prime}(T)-V_{R}^{ \prime}(0)\] \[\lesssim \frac{T}{R^{\varrho}}+R.\] Taking \(R=T^{\frac{1}{1+\varrho}}\), yields (4.17). From (4.17), one ca see that there exist \(t_{n},R_{n}\to\infty\) such that \[\lim_{n\to\infty}\int_{B(R_{n})}|x|^{-\varrho}|u(t_{n},x)|^{p+1}\,dx=0. \tag{4.18}\] Indeed, using (4.17), one has \[\frac{2}{T}\int_{T/2}^{T}\int_{\mathbb{R}^{2}}|x|^{-\varrho}|u(t,x)|^{p+1}\,dx \,dt\lesssim T^{-\frac{\varrho}{1+\varrho}}.\] We conclude the proof by using the mean value Theorem. ### Scattering Criterion In this section we give a scattering criterion as stated below. **Proposition 4.6**.: _Under the same assumptions as in Theorem 1.1, let \(u\in C(\mathbb{R},H^{1}_{\alpha})\) be a global solution to (1.1) satisfying_ \[0<\sup_{t\geq 0}\|u(t)\|_{H^{1}_{\alpha}}:=E<\infty. \tag{4.19}\] _Then, there exist \(R,\varepsilon>0\) depending on \(E,p,\varrho\) such that if_ \[\liminf_{t\to\infty}\int_{|x|<R}|u(t,x)|^{2}\,dx<\varepsilon^{2}, \tag{4.20}\] _then, \(u\) scatters for positive time._ The key of the proof of the scattering criterion is the next result. **Proposition 4.7**.: _Let the assumptions of Proposition 4.6 be fulfilled. Then, for any \(\varepsilon>0\), there exist \(T,\mu>0\) satisfying_ \[\|e^{i(t-T)\mathcal{K}_{\alpha}}u(T)\|_{L^{\frac{8(p-1)}{3(2-\varrho)}}((T, \infty),L^{\frac{8(p-1)}{2-\varrho}})}\lesssim\varepsilon^{\mu}.\] **Remark 4.8**.: _Note that \(\left(\frac{8(p-1)}{3(2-\varrho)},\frac{8(p-1)}{2-\varrho}\right)=\left( \frac{8}{3(1-s_{c})},\frac{8}{1-s_{c}}\right)\) where \(s_{c}=1-\frac{2-\varrho}{p-1}\) is the critical Sobolev index._ Proof of Proposition 4.7.: Take \(0<\varepsilon<<1\), \(R(\varepsilon)>>1\) to be fixed later and \(I=[0,T]\) a time slab. By the integral formula (3.8), \[e^{i(t-T)\mathcal{K}_{\alpha}}u(T) = e^{it\mathcal{K}_{\alpha}}u_{0}+i\int_{0}^{T}e^{i(t-\tau) \mathcal{K}_{\alpha}}[|x|^{-\varrho}|u|^{p-1}u]\,d\tau\] \[= e^{it\mathcal{K}_{\alpha}}u_{0}+i\Big{(}\int_{0}^{T-\varepsilon ^{-\beta}}+\int_{T-\varepsilon^{-\beta}}^{T}\Big{)}e^{i(t-\tau)\mathcal{K}_{ \alpha}}[|x|^{-\varrho}|u|^{p-1}u]\,d\tau\] \[:= e^{it\mathcal{K}_{\alpha}}u_{0}+i\Big{(}\int_{I_{1}}+\int_{I_{2 }}\Big{)}e^{i(t-\tau)\mathcal{K}_{\alpha}}[|x|^{-\varrho}|u|^{p-1}u]\,d\tau\] \[:= e^{it\mathcal{K}_{\alpha}}u_{0}+F_{1}+F_{2}.\] * **The linear term**. Employing the dispersive estimate (2.2) together with Sobolev embedding and \(p>p_{c}=3-\varrho\), we get \[\|e^{it\mathcal{K}_{\alpha}}u_{0}\|_{L^{\frac{8(p-1)}{3(2-\varrho )}}((T,\infty),L^{\frac{8(p-1)}{2-\varrho}})} \lesssim \|\frac{1}{t^{1-\frac{2-\varrho}{4(p-1)}}}\|_{L^{\frac{8(p-1)}{3( 2-\varrho)}}(T,\infty)}\|u_{0}\|_{L^{\infty}(H^{1})}\] \[\lesssim \left(\int_{T}^{\infty}\frac{dt}{t^{\frac{2}{3}[\frac{4(p-1)}{2- \varrho}-1]}}\right)^{\frac{3(2-\varrho)}{8(p-1)}}.\] Note that \(t\longmapsto t^{\frac{2}{3}[1-\frac{4(p-1)}{2-\varrho}]}\in L^{1}(1,\infty)\) since \(p>3-\varrho>1+\frac{5}{8}(2-\varrho)\). 
Thus, one may choose \(T_{0}>\varepsilon^{-\beta}>0\), where \(\beta>0\), such that \[\|e^{it\mathcal{K}_{\alpha}}u_{0}\|_{L^{\frac{8(p-1)}{3(2-\varrho)}}((T_{0}, \infty),L^{\frac{8(p-1)}{2-\varrho}})}\leq\varepsilon^{2}\] * **The term \(F_{2}\)**. By the assumption (4.20), one has for \(T>\varepsilon^{-\beta}\) large enough, \[\int_{\mathbb{R}^{2}}\psi_{R}(x)|u(T,x)|^{2}\,dx<\varepsilon^{2}.\] Moreover, a computation with the use of (1.1) gives \[\left|\frac{d}{dt}\int_{\mathbb{R}^{2}}\psi_{R}(x)|u(t,x)|^{2}\,dx\right| = 2\left|\int_{\mathbb{R}^{2}}\psi_{R}(x)\Re[\dot{u}(t,x)\bar{u}(t,x )]\,dx\right|\] \[= 2\left|\int_{\mathbb{R}^{2}}\psi_{R}(x)\Im[\Delta u(t,x)\bar{u}( t,x)]\,dx\right|\] \[= 2\left|\int_{\mathbb{R}^{2}}\nabla\psi_{R}(x)\Im[\nabla u(t,x) \bar{u}(t,x)]\,dx\right|\] \[\lesssim \frac{1}{\sqrt{R}}.\] Then, for any \(T-\varepsilon^{-\beta}\leq t\leq T\) and \(R>\varepsilon^{-2(2+\beta)}\), one gets \[\|\psi_{R}u(t)\|\leq\left(\int_{\mathbb{R}^{2}}\psi_{R}(x)|u(T,x)|^{2}\,dx+C \frac{T-t}{\sqrt{R}}\right)^{\frac{1}{2}}\leq C\varepsilon.\] This gives \[\|\psi_{R}u\|_{L^{\infty}([T-\varepsilon^{-\beta},T],L^{2})}\leq C\varepsilon.\] Moreover, by Fatou's lemma, \[\|u\|_{L^{\infty}([T-\varepsilon^{-\beta},T],L^{2})}\leq\liminf_{R\to \infty}\|\psi_{R}u\|_{L^{\infty}([T-\varepsilon^{-\beta},T],L^{2})}\leq C\varepsilon.\] Thus, by Holder's inequality and Strichartz estimate (2.8), one writes for \(\mathcal{N}:=|x|^{-\varrho}|u|^{p-1}u\) and \((q,r)=\big{(}2(1+\varepsilon),2\frac{1+\varepsilon}{\varepsilon}\big{)}\in\Gamma\), \[\|F_{2}\|_{L^{\frac{8(p-1)}{3(2-\varrho)}}((T,\infty),L^{\frac{8( p-1)}{2-\varrho}})} \lesssim \|F_{2}\|_{L^{\frac{8}{8}}((T,\infty),L^{8})}^{\frac{2-\varrho}{ p-1}}\|F_{2}\|_{L^{\infty}((T,\infty),H^{1}_{\alpha})}^{\frac{p+\varrho-3}{p-1}}\] \[\lesssim \|\mathcal{N}\|_{L^{q^{\prime}}(I_{2},L^{r^{\prime}})}+\|\nabla \mathcal{N}\|_{L^{q^{\prime}}(I_{2},L^{r^{\prime}})}+\|\|x|^{-1}\mathcal{N}\| _{L^{q^{\prime}}(I_{2},L^{r^{\prime}})}\] \[\lesssim (I)+(II)+(III).\] For the first term, we have \[(I) = \||x|^{-\varrho}|u|^{p-1}u\|_{L^{q^{\prime}}(I_{2},L^{r^{\prime}})}\] \[\leq \|u\|_{L^{\infty}(I_{2},L^{2})}\||x|^{-\varrho}|u|^{p-1}\|_{L^{q^ {\prime}}(I_{2},L^{1/(\frac{1}{2}-\frac{1}{p})})}\] \[\leq \|u\|_{L^{\infty}(I_{2},L^{2})}\Big{(}\||x|^{-\varrho}\|_{L^{a_{ 1}}(\mathbf{B})}\||u\|_{L^{b_{1}}}^{p-1}\|_{L^{q^{\prime}}(I_{2})}+\||x|^{- \varrho}\|_{L^{a_{2}}(\mathbf{B}^{c})}\||u\|_{L^{b_{2}}}^{p-1}\|_{L^{q^{\prime }}(I_{2})}\Big{)}\] \[\leq c\varepsilon\Big{(}\|u\|_{L^{(p-1)q^{\prime}}(I_{2},L^{b_{1}})}^{ p-1}+\|u\|_{L^{(p-1)q^{\prime}}(I_{2},L^{b_{2}})}^{p-1}\Big{)}.\] Here \[\left\{\begin{array}{l}\frac{1}{2}-\frac{1}{r}=\frac{1}{2(1+\varepsilon)}= \frac{1}{a_{1}}+\frac{p-1}{b_{1}}=\frac{1}{a_{2}}+\frac{p-1}{b_{2}};\\ a_{1}<\frac{2}{\varrho}<a_{2}.\end{array}\right.\] Hence, \[\frac{1}{2(1+\varepsilon)}-\frac{p-1}{b_{1}}=\frac{1}{a_{1}}>\frac{\varrho} {2}>\frac{1}{a_{2}}=\frac{1}{2(1+\varepsilon)}-\frac{p-1}{b_{2}},\] which is possible if \[0<\frac{1}{b_{1}}<\frac{1}{p-1}\Big{(}\frac{1}{2(1+\varepsilon)}-\frac{\varrho}{2 }\Big{)}<\frac{1}{b_{2}}\leq\frac{1}{2}.\] The above condition translate to \(p>2-\varrho\) which is trivially satisfied when \(0<\varrho<1\). 
Since \(p>p_{c}=3-\varrho\), we have by Sobolev embedding \[(I) \leq c\varepsilon\Big{(}\|u\|_{L^{(p-1)q^{\prime}}(I_{2},L^{b_{1}})}^{p-1}+ \|u\|_{L^{(p-1)q^{\prime}}(I_{2},L^{b_{2}})}^{p-1}\Big{)}\] \[\leq c\varepsilon^{1-\frac{\beta}{q^{\prime}(p-1)}}\Big{(}\|u\|_{L^{ \infty}(I_{2},L^{(\frac{2(p-1)(1+\varepsilon)}{1-\varrho(1+\varepsilon)})^{+} })}^{p-1}+\|u\|_{L^{\infty}(I_{2},L^{(\frac{2(p-1)(1+\varepsilon)}{1-\varrho( 1+\varepsilon)})^{-}})}^{p-1}\Big{)}\] \[\leq c\varepsilon^{1-\frac{\beta}{q^{\prime}(p-1)}}\Big{(}\|u\|_{L^{ \infty}(I_{2},L^{2})}^{\lambda^{+}(p-1)}\|u\|_{L^{\infty}(I_{2},L^{A^{+}})}^{ (1-\lambda^{+})(p-1)}+\|u\|_{L^{\infty}(I_{2},L^{2})}^{\lambda^{-}(p-1)}\|u\|_ {L^{\infty}(I_{2},L^{A^{-}})}^{(1-\lambda^{-})(p-1)}\Big{)}\] \[\leq c\varepsilon^{1-\frac{2\beta(1+\varepsilon)}{(p-1)(1+2\varepsilon )}+(\frac{1-\varrho(1+\varepsilon)}{(p-1)(1+\varepsilon)})^{-}}.\] Here, one uses an interpolations \(L^{(\frac{2(p-1)(1+\varepsilon)}{1-\varrho(1+\varepsilon)})^{\pm}}\hookrightarrow L ^{2}\cap L^{A^{\pm}}\), with the Sobolev injection \(L^{A^{\pm}}\hookrightarrow H^{1}\). Moreover, \(\lambda^{\pm}\rightarrow(\frac{1-\varrho(1+\varepsilon)}{(p-1)(1+\varepsilon )})^{\mp}\) when \(A^{\pm}\rightarrow\infty\). For the second term \[(II) = \|\nabla\Big{(}|x|^{-\varrho}|u|^{p-1}u\Big{)}\|_{L^{q^{\prime}}( I_{2},L^{r^{\prime}})}\] \[\lesssim \||x|^{-\varrho}|u|^{p-1}\nabla u\|_{L^{q^{\prime}}(I_{2},L^{r^{ \prime}})}+\||x|^{-(1+\varrho)}|u|^{p-1}u\|_{L^{q^{\prime}}(I_{2},L^{r^{\prime }})}\] \[:= (II)_{1}+(II)_{2}.\] Moreover, using the previous calculus, we get \[(II)_{1} = \||x|^{-\varrho}|u|^{p-1}\nabla u\|_{L^{q^{\prime}}(I_{2},L^{r^{ \prime}})}\] \[\leq \|\nabla u\|_{L^{\infty}(I_{2},L^{2})}\||x|^{-\varrho}|u|^{p-1} \|_{L^{q^{\prime}}(I_{2},L^{1/(\frac{1}{2}-\frac{1}{p})})}\] \[\leq c\varepsilon^{-\frac{2\beta(1+\varepsilon)}{(p-1)(1+2\varepsilon )}+(\frac{1-\varrho(1+\varepsilon)}{(p-1)(1+\varepsilon)})^{-}}.\] In the same manner as above, we get \[(II)_{2} = \||x|^{-(1+\varrho)}|u|^{p-1}u\|_{L^{q^{\prime}}(I_{2},L^{r^{ \prime}})}\] \[\leq \||x|^{-(1+\varrho)}\|_{L^{c_{1}}({\bf B})}\||u\|_{L^{q^{\prime} }(I_{2})}^{p}+\||x|^{-(1+\varrho)}\|_{L^{c_{2}}({\bf B}^{c})}\||u\|_{L^{q^{ \prime}}(I_{2})}^{p}\Big{)}\] \[\leq c\Big{(}\|u\|_{L^{pq^{\prime}}(I_{2},L^{d_{1}})}^{p}+\|u\|_{L^{pq^ {\prime}}(I_{2},L^{d_{2}})}^{p}\Big{)}.\] Here \[\left\{\begin{array}{l}1-\frac{1}{r}=1-\frac{\varepsilon}{2(1+\varepsilon)}= \frac{1}{c_{1}}+\frac{p}{d_{1}}=\frac{1}{c_{2}}+\frac{p}{d_{2}};\\ c_{1}<\frac{N}{\varrho}<c_{2}.\end{array}\right.\] Thus, \[1-\frac{\varepsilon}{2(1+\varepsilon)}-\frac{p}{d_{1}}=\frac{1}{c_{1}}>\frac {\varrho}{N}>\frac{1}{c_{2}}=1-\frac{\varepsilon}{2(1+\varepsilon)}-\frac{p}{ d_{2}},\] which is possible if \[0<\frac{1}{d_{1}}<\frac{1}{p}\Big{(}1-\frac{\varepsilon}{2(1+\varepsilon)}-\frac {\varrho}{N}\Big{)}<\frac{1}{d_{2}}\leq\frac{1}{2}.\] The above condition can be written as \(p>2-\varrho\) which is obviously satisfied since \(0<\varrho<1\). 
Therefore, \[(II)_{2} \leq c\Big{(}\|u\|_{L^{p\varrho^{\prime}}(I_{2},L^{\frac{p}{(1- \frac{\varepsilon}{2(1+\varepsilon)}-\frac{\varrho}{N})})^{+}}}^{p}\,+\|u\|_{L ^{p\varrho^{\prime}}(I_{2},L^{\frac{p}{(1-\frac{\varepsilon}{2(1+\varepsilon)}- \frac{\varrho}{N})})^{-}}}^{p}\Big{)}\] \[\leq c\varepsilon^{-\frac{\beta}{\varrho^{\prime}p}}\Big{(}\|u\|_{L ^{\infty}(I_{2},L^{\frac{p}{(1-\frac{\varepsilon}{2(1+\varepsilon)}-\frac{ \varrho}{N})})^{+}}}^{p}\,+\|u\|_{L^{\infty}(I_{2},L^{\frac{p}{(1-\frac{ \varrho}{2(1+\varepsilon)}-\frac{\varrho}{N})})^{-}}}^{p}\Big{)}\] \[\leq c\varepsilon^{-\frac{\beta}{\varrho^{\prime}p}}\Big{(}\|u\|_{L ^{\infty}(I_{2},L^{2})}^{p\mu^{+}}\|u\|_{L^{\infty}(I_{2},L^{C+})}^{p\mu^{-}} +\|u\|_{L^{\infty}(I_{2},L^{2})}^{p\mu^{-}}\|u\|_{L^{\infty}(I_{2},L^{C-})}^{ p\mu^{-}}\Big{)}\] \[\leq c\varepsilon^{-\frac{\beta}{\varrho^{\prime}p}}\Big{(}\|u\|_{L ^{\infty}(I_{2},L^{2})}^{p\mu^{+}}+\|u\|_{L^{\infty}(I_{2},L^{2})}^{p\mu^{-}}\Big{)}\] \[\leq c\varepsilon^{-\frac{\beta}{\varrho^{\prime}p}+p\mu^{-}}\] \[\leq c\varepsilon^{-\frac{\beta}{\varrho^{\prime}p}(1-\frac{1}{2(1+ \varepsilon)})+(2-\frac{\varepsilon}{1+\varepsilon}-\varrho)^{-}}.\] The last term is estimated similarly as \((II)_{2}\). Regrouping the above estimates, one can see that there is \(\gamma>0\) such that \[\|F_{2}\|_{L^{\frac{8(p-1)}{3(2-\varrho)}}((T,\infty),L^{\frac{8( p-1)}{2-\varrho}})}\] \[\lesssim \Big{(}(I)+(II)+(III)\Big{)}^{\frac{2-\varrho}{2(p-1)}}\] \[\lesssim \Big{(}\varepsilon^{1-\frac{2\beta(1+\varepsilon)}{(p-1)(1+2 \varepsilon)}+(\frac{1-\varrho(1+\varepsilon)}{(p-1)(1+\varepsilon)})^{-}}+ \varepsilon^{-\frac{2\beta(1+\varepsilon)}{(p-1)(1+2\varepsilon)}+(\frac{1- \varrho(1+\varepsilon)}{(p-1)(1+\varepsilon)})^{-}}+\varepsilon^{-\frac{ \beta}{p}(1-\frac{1}{2(1+\varepsilon)})+(2-\frac{\varepsilon}{1+\varepsilon} -\varrho)^{-}}\Big{)}^{\frac{2-\varrho}{2(p-1)}}\] \[\lesssim \varepsilon^{\gamma}.\] * **The term** \(F_{1}\). 
Using the dispersive estimate (2.2) and Holder's inequality, one writes \[\|F_{1}\|_{\infty} \lesssim \int_{I_{1}}|t-s|^{-1}\||x|^{-\varrho}|u|^{p-1}u\|_{1}\,ds\] \[\lesssim \int_{I_{1}}|t-s|^{-1}\Big{(}\||x|^{-\varrho}|u|^{p-1}u\|_{L^{1} ({\bf B})}+\||x|^{-\varrho}|u|^{p-1}u\|_{L^{1}({\bf B}^{c})}\Big{)}\,ds\] \[\lesssim \int_{I_{1}}|t-s|^{-1}\Big{(}\||[|x|^{-\frac{\varrho}{p+1}}|u|]^ {p}|x|^{-\frac{\varrho}{p+1}}\|_{L^{1}({\bf B})}+\|u\|_{H^{1}}^{p}\Big{)}\,ds.\] By Holder's inequality and the fact that \(0<\varrho<2\), one has \[\||[x|^{-\frac{\varrho}{p+1}}|u|]^{p}|x|^{-\frac{\varrho}{p+1}} \|_{L^{1}({\bf B})} \leq \||x|^{-\frac{\varrho}{p+1}}\|_{L^{p+1}({\bf B})}\||[|x|^{-\frac{ \varrho}{p+1}}|u|]^{p}\|_{\frac{p+1}{p}}\] \[\leq \||x|^{-\varrho}\|_{L^{1}({\bf B})}^{\frac{1}{p+1}}\||x|^{- \varrho}|u|^{p+1}\|_{1}^{\frac{p}{p+1}}\] \[\lesssim \||x|^{-\varrho}|u|^{p+1}\|_{1}^{\frac{p}{p+1}}.\] Hence, using (2.9) and (4.19), we infer that \[\|F_{1}(t)\|_{\infty} \lesssim \int_{I_{1}}|t-s|^{-1}\|u\|_{H^{1}_{\alpha}}^{p}\,ds\] \[\lesssim E^{p}\,\int_{I_{1}}|t-s|^{-1}\,ds\] \[\lesssim E^{p}\,T\,\varepsilon^{\beta}.\] Now, using an interpolation inequality via Strichartz estimates, one gets \[\|F_{1}\|_{L^{\frac{8(p-1)}{8(2-\varrho)}}((T,\infty),L^{\frac{8( p-1)}{2-\varrho}})} \leq \|F_{1}\|_{L^{\frac{8}{8}}((T,\infty),L^{8})}^{\frac{2-\varrho}{ p-1}}\|F_{1}\|_{L^{\infty}((T,\infty),L^{\infty})}^{1-\frac{2-\varrho}{p-1}}\] \[\lesssim \|e^{i(\cdot-(T-\varepsilon^{-\beta}))\mathcal{K}_{\alpha}}u(T- \varepsilon^{-\beta})-e^{i\cdot\mathcal{K}_{\alpha}}u_{0}\|_{L^{\frac{8}{8}}( (T,\infty),L^{8})}^{\frac{2-\varrho}{p-1}}\Big{(}E^{p}T\varepsilon^{\beta} \Big{)}^{\frac{p-p_{c}}{p-1}}\] \[\lesssim \Big{(}E^{p}T\varepsilon^{\beta}\Big{)}^{\frac{p-p_{c}}{p-1}},\] where \(p_{c}=3-\varrho\). The proof of Proposition 4.7 is ended if one picks \(0<T<\varepsilon^{-\frac{\beta}{2}}\). Now, one can easily proves Proposition 4.6. First, \(u\in L^{\frac{8(p-1)}{3(2-\varrho)}}((T,\infty),L^{\frac{8(p-1)}{2-\varrho}})\cap L ^{\infty}(\mathbb{R},H^{1}_{\alpha})\). By interpolation together with the local theory, one gets \[u\in L^{\frac{8(p-1)}{3(2-\varrho)}}\left(\mathbb{R};\ \ L^{\frac{8(p-1)}{4(p-1)-3(2- \varrho)}}\right),\quad\left(\frac{8(p-1)}{3(2-\varrho)},\frac{8(p-1)}{4(p-1) -3(2-\varrho)}\right)\in\Gamma.\] The scattering follows with standard arguments. ### Proof of the scattering in Theorem 1.1 Take \(R,\varepsilon>0\) given by Proposition 4.6 and \(t_{n},R_{n}\to\infty\) given by (4.18). Letting \(n>>1\) such that \(R_{n}>R\), one gets by Holder's inequality \[\int_{|x|\leq R}|u(t_{n},x)|^{2}\,dx = R^{\frac{2\varrho}{p+1}}\int_{|x|\leq R}|x|^{-\frac{2\varrho}{p+ 1}}|u(t_{n},x)|^{2}\,dx\] \[\leq R^{\frac{2\varrho}{p+1}}|B(R)|^{\frac{p-1}{p+1}}\ \||x|^{-\frac{2\varrho}{p+1}}|u(t_{n},x)|^{2}\|_{L^{\frac{p+1}{2}}(|x|\leq R _{n})}\] \[\leq R^{\frac{2(\varrho+p-1)}{p+1}}\left(\int_{|x|\leq R_{n}}|x|^{- \varrho}|u(t_{n},x)|^{p+1}\,dx\right)^{\frac{2}{p+1}}\] \[\lesssim \varepsilon^{2}.\] Hence, the scattering of energy global solutions to the focusing problem (1.1) follows from Proposition 4.6. 
### Proof of the blow-up in Theorem 1.1 Let us pick \(b_{R}(x):=R^{2}b(\frac{r}{R})\), \(R>0\), where \(b\in C_{0}^{\infty}([0,\infty))\) satisfies \[b:r\mapsto\left\{\begin{array}{ll}\frac{r^{2}}{2},&\mbox{if}\quad r\leq 1, \\ 0,&\mbox{if}\quad r\geq 2,\end{array}\right.\quad\mbox{and}\quad b^{\prime\prime} \leq 1.\] It follows that \[b_{R}^{\prime\prime}\leq 1,\quad b_{R}^{\prime}(r)\leq r\quad\mbox{and}\quad \Delta b_{R}\leq 2.\] Using the spherically symmetric property (4.15), by Cauchy-Schwarz's inequality via the properties of \(b\), one gets \[\int_{\mathbb{R}^{2}}\nabla_{\alpha}uD^{2}b_{R}\overline{\nabla_{ \alpha}u}\,dx = \int_{\mathbb{R}^{2}}(\nabla_{\alpha}u)_{l}\Big{[}\Big{(}\frac{ \delta_{lk}}{r}-\frac{x_{l}x_{k}}{r^{3}}\Big{)}b_{R}^{\prime}+\frac{x_{l}x_{k} }{r^{2}}b_{R}^{\prime\prime}\Big{]}(\overline{\nabla_{\alpha}u})_{k}\,dx\] \[= \int_{\mathbb{R}^{2}}|\nabla_{\alpha}u|^{2}\frac{b_{R}^{\prime}} {r}\,dx+\int_{\mathbb{R}^{2}}|x\cdot\nabla_{\alpha}u|^{2}(\frac{b_{R}^{\prime \prime}}{r^{2}}-\frac{b_{R}^{\prime}}{r^{3}})\,dx\] \[\leq \int_{\mathbb{R}^{2}}|\nabla_{\alpha}u|^{2}\frac{b_{R}^{\prime}} {r}\,dx+\int_{\mathbb{R}^{2}}(\frac{|x\cdot\nabla_{\alpha}u|}{r})^{2}(1-\frac{ b_{R}^{\prime}}{r})\,dx\] \[\leq \int_{\mathbb{R}^{2}}|\nabla_{\alpha}u|^{2}\frac{b_{R}^{\prime}} {r}\,dx+\int_{\mathbb{R}^{2}}|\nabla_{\alpha}u|^{2}(1-\frac{b_{R}^{\prime}}{r} )\,dx\] \[= \int_{\mathbb{R}^{2}}|\nabla_{\alpha}u|^{2}\,dx.\] Let \(V_{R}:=V_{b_{R}}\) be given by (4.11). By (4.12) and using the above estimates together with the standard Gagliardo-Nirenberg inequality, we infer that \[V_{R}^{\prime\prime}(t) \leq -\int_{\mathbb{R}^{2}}\Delta^{2}b_{R}|u|^{2}\,dx+4\int_{\mathbb{R }^{2}}|\nabla_{\alpha}u|^{2}\,dx \tag{4.21}\] \[- \frac{2(p-1)}{p+1}\int_{\mathbb{R}^{2}}\Delta b_{R}|x|^{-\varrho} |u|^{p+1}\,dx+\frac{4}{p+1}\int_{\mathbb{R}^{2}}\nabla b_{R}\cdot\nabla(|x|^{- \varrho})|u|^{p+1}\,dx\] \[\leq \frac{c}{R^{2}}\|u\|^{2}+4\int_{\mathbb{R}^{2}}|\nabla_{\alpha}u| ^{2}\,dx-\frac{4B}{p+1}\int_{\mathbb{R}^{2}}|x|^{-\varrho}|u|^{p+1}\,dx+c\int _{\{|x|>R\}}|x|^{-\varrho}|u|^{p+1}\,dx\] \[\lesssim \mathcal{Q}[u(t)]+R^{-2}+R^{-\varrho}\|\nabla u(t)\|^{2(p-1)}.\] Then the conclusion (1.15) easily follows from (1.14). Indeed, suppose that \(\sup\limits_{0\leq t<T^{*}}\|\nabla u(t)\|<\infty\). Then by (1.14) and (4.21) it follows that \(T^{*}<\infty\). Hence, \(\|\nabla u(t)\|\to\infty\) as \(t\to T^{*}\). This obviously contradicts the fact that \(\sup\limits_{0\leq t<T^{*}}\|\nabla u(t)\|<\infty\) and the proof is completed. ## 5. Proof of Corollary 1.2 Recall that \(\phi\) stands for a ground state solution to (2.12). ### Proof of the scattering part in Corollary 1.2 This part follows by Theorem 1.1 with the next result. Indeed, the classical scattering condition below the ground state threshold is stronger than (1.13). **Lemma 5.1**.: _Suppose that assumptions (1.16) and (1.17) hold. Then, there exists \(\varepsilon>0\) such that (4.5) is satisfied._ Proof.: Take the real function \(g:t\mapsto t^{2}-\frac{2\mathbb{K}_{opt}}{p+1}t^{B}\) and compute using the identity \(A+2\lambda_{c}=B\lambda_{c}\), \[E[u][M[u]]^{\lambda_{c}} \geq \|u\|_{\dot{H}^{1}_{\alpha}}^{2}\|u\|^{2\lambda_{c}}-\frac{2 \mathbb{K}_{opt}}{p+1}\|u\|^{A+2\lambda_{c}}\|u\|_{\dot{H}^{1}_{\alpha}}^{B}\] \[= g(\|u\|_{\dot{H}^{1}_{\alpha}}\|u\|^{\lambda_{c}}),\] where the optimal constant \(\mathbb{K}_{opt}\) is given by (2.9). 
Now, with Pohozaev identities and the conservation laws, one has for some \(0<\varepsilon<1\), \[g(\|u\|_{\dot{H}^{1}_{\alpha}}\|u\|^{\lambda_{c}}) \leq E[u][M[u]]^{\lambda_{c}}\] \[< (1-\varepsilon)E[\phi][M[\phi]]^{\lambda_{c}}\] \[= (1-\varepsilon)g(\|\phi\|_{\dot{H}^{1}_{\alpha}}\|\phi\|^{ \lambda_{c}}).\] Thus, with time continuity, (1.17) is invariant under the flow (1.1) and \(T^{*}=\infty\). Moreover, by Pohozaev identities, one writes \[E[\phi][M[\phi]]^{\lambda_{c}}=\frac{B-2}{B}\Big{(}\|\phi\|_{\dot{H}^{1}_{ \alpha}}\|\phi\|^{\lambda_{c}}\Big{)}^{2}=\frac{\mathbb{K}_{opt}(B-2)}{p+1} \Big{(}\|\phi\|_{\dot{H}^{1}_{\alpha}}\|\phi\|^{\lambda_{c}}\Big{)}^{B}\] and so \[1-\varepsilon\geq\frac{B}{B-2}\Big{(}\frac{\|u\|_{\dot{H}^{1}_{\alpha}}\|u\|^ {\lambda_{c}}}{\|\phi\|_{\dot{H}^{1}_{\alpha}}\|\phi\|^{\lambda_{c}}}\Big{)}^ {2}-\frac{2}{B-2}\Big{(}\frac{\|u\|_{\dot{H}^{1}_{\alpha}}\|u\|^{\lambda_{c}} }{\|\phi\|_{\dot{H}^{1}_{\alpha}}\|\phi\|^{\lambda_{c}}}\Big{)}^{B}.\] Following the variations of \(t\mapsto\frac{B}{B-2}t^{2}-\frac{2}{B-2}t^{B}\) via the assumption (1.17) and a continuity argument, there is a real number denoted also by \(0<\varepsilon<1\), such that \[\|u(t)\|_{\dot{H}^{1}_{\alpha}}\|u(t)\|^{\lambda_{c}}\leq(1-\varepsilon)\| \phi\|_{\dot{H}^{1}_{\alpha}}\|\phi\|^{\lambda_{c}}\quad\text{on}\quad\mathbb{R}.\] Now, by the last line and Pohozaev identities, for some real number denoted also by \(0<\varepsilon<1\), \[\mathcal{P}[u][M[u]]^{\lambda_{c}} \leq \mathbb{K}_{opt}\|u\|_{\dot{H}^{1}_{\alpha}}^{B}\|u\|^{A+2\lambda_ {c}}\] \[\leq \mathbb{K}_{opt}(1-\varepsilon)(\|\phi\|_{\dot{H}^{1}_{\alpha}}\| \phi\|^{\lambda_{c}})^{B}\] \[\leq (1-\varepsilon)\frac{p+1}{B}(\|\phi\|_{\dot{H}^{1}_{\alpha}}\| \phi\|^{\lambda_{c}})^{2}\] \[\leq (1-\varepsilon)\mathcal{P}[\phi]M[\phi]^{\lambda_{c}}.\] This finishes the proof. ### Proof of the blow-up part in Corollary 1.2 Assume that (1.16) and (1.18) are satisfied. Let us prove that \(\mathcal{GM}[u(t)]>1\) on \([0,T^{*})\) where \(\mathcal{GM}[u(t)]\) is given by (1.11). **Lemma 5.2**.: _The conditions (1.16) and (1.18) are stable under the flow of (1.1)._ Proof.: Taking into account Proposition 2.4, one writes \[M[\phi]^{\lambda_{c}}E[\phi] > (1+\varepsilon)\|u\|^{2\lambda_{c}}\Big{(}\|u\|_{\dot{H}^{1}_{ \alpha}}^{2}-\frac{2\mathrm{K}_{opt}}{p+1}\mathcal{P}[u]\Big{)}\] \[> (1+\varepsilon)\|u\|^{2\lambda_{c}}\Big{(}\|u\|_{\dot{H}^{1}_{ \alpha}}^{2}-\frac{2\mathrm{K}_{opt}}{p+1}\|u\|^{A}\|u\|_{\dot{H}^{1}_{\alpha} }^{B}\Big{)}\] \[:= (1+\varepsilon)f(\|u\|^{2\lambda_{c}}\|u\|_{\dot{H}^{1}_{\alpha} }).\] Compute also by Proposition 2.4, \[f(\|\phi\|^{2\lambda_{c}}\|\phi\|_{\dot{H}^{1}_{\alpha}}) = \|\phi\|^{2\lambda_{c}}\Big{(}\|\phi\|_{\dot{H}^{1}_{\alpha}}^{2} -\frac{2\mathrm{K}_{opt}}{p+1}\|\phi\|^{A}\|\phi\|_{\dot{H}^{1}_{\alpha}}^{B}\Big{)}\] \[= \|\phi\|^{2\lambda_{c}}\Big{(}\|\phi\|_{\dot{H}^{1}_{\alpha}}^{2} -\frac{2}{p+1}\mathcal{P}[\phi]\Big{)}\] \[= M[\phi]^{\lambda_{c}}E[\phi].\] Thus, \(f(\|\phi\|^{2\lambda_{c}}\|\phi\|_{\dot{H}^{1}_{\alpha}})>(1+\varepsilon)f(\|u \|^{2\lambda_{c}}\|u\|_{\dot{H}^{1}_{\alpha}})\). The proof follows by a continuity argument. By Pohozaev's identity, we get \(B\,E[\phi]=(B-2)\|\phi\|_{\dot{H}^{1}_{\alpha}}^{2}\). 
Therefore \[\mathcal{Q}[u][M[u]]^{\lambda_{c}} = \Big{(}\|u\|_{\dot{H}^{1}_{\alpha}}^{2}-\frac{B}{p+1}\mathcal{P} [u]\Big{)}[M[u]]^{\lambda_{c}}\] \[= \frac{B}{2}E[u][M[u]]^{\lambda_{c}}-(\frac{B}{2}-1)\|u\|_{\dot{H} ^{1}_{\alpha}}^{2}[M[u]]^{\lambda_{c}}\] \[\leq \frac{B}{2}(1-\varepsilon)E[\phi][M[\phi]]^{\lambda_{c}}-(\frac{ B}{2}-1)\|\phi\|_{\dot{H}^{1}_{\alpha}}^{2}[M[\phi]]^{\lambda_{c}}\] \[\leq -\varepsilon\|\phi\|_{\dot{H}^{1}_{\alpha}}^{2}[M[\phi]]^{\lambda_ {c}}.\] The proof follows by the use of Theorem 1.1.
2302.06676
Netflix and Forget: Efficient and Exact Machine Unlearning from Bi-linear Recommendations
People break up, miscarry, and lose loved ones. Their online streaming and shopping recommendations, however, do not necessarily update, and may serve as unhappy reminders of their loss. When users want to renege on their past actions, they expect the recommender platforms to erase selective data at the model level. Ideally, given any specified user history, the recommender can unwind or "forget", as if the record was not part of training. To that end, this paper focuses on simple but widely deployed bi-linear models for recommendations based on matrix completion. Without incurring the cost of re-training, and without degrading the model unnecessarily, we develop Unlearn-ALS by making a few key modifications to the fine-tuning procedure under Alternating Least Squares optimisation, thus applicable to any bi-linear models regardless of the training procedure. We show that Unlearn-ALS is consistent with retraining without \emph{any} model degradation and exhibits rapid convergence, making it suitable for a large class of existing recommenders.
Mimee Xu, Jiankai Sun, Xin Yang, Kevin Yao, Chong Wang
2023-02-13T20:27:45Z
http://arxiv.org/abs/2302.06676v1
# Netflix and Forget: ###### Abstract People break up, miscarry, and lose loved ones. Their online streaming and shopping recommendations, however, do not necessarily update, and may serve as unhappy reminders of their loss. When users want to renege on their past actions, they expect the recommender platforms to erase selective data at the model level. Ideally, given any specified user history, the recommender can unwind or "forget", as if the record was not part of training. To that end, this paper focuses on simple but widely deployed bi-linear models for recommendations based on matrix completion. Without incurring the cost of re-training, and without degrading the model unnecessarily, we develop Unlearn-ALS by making a few key modifications to the fine-tuning procedure under Alternating Least Squares optimisation, thus applicable to any bi-linear models regardless of the training procedure. We show that Unlearn-ALS is consistent with retraining without _any_ model degradation and exhibits rapid convergence, making it suitable for a large class of existing recommenders. Machine Learning, Recommender, Back-Augmented Learning, ICML ## 1 Introduction Break-ups, pregnancy losses, and bereevments are particularly painful in the age of ubiquitous machine learning systems. Per General Data Protection Regulation (GDPR), an individual may request their personal data erased under the right to erasure, or "Right To Be Forgotten" (Council of European Union, 2018; Voigt and Von dem Bussche, 2017). Suppose a Netflix user watches a Korean drama with their significant other but breaks up mid-season. They are subsequently bombarded with new episode alerts and recommended shows with the same actors and art styles, potentially causing distress. To move on, the user may wish that Netflix recommenders expunge some of their watch history. The platform could accommodate deletion, not only in user history but also in subsequent recommendations. Ideally, this deletion is both swift and seamless. An incomplete "under-deletion" likely persists the underlying concepts learned from the deleted records, preventing the user from cultivating a new path forward due to "echo-chamber" style feedback (Chaney et al., 2018; Jiang et al., 2019; Mansoury et al., 2020). Yet, an "over-deletion" may needlessly degrade model utility; for instance, a callous reset can cause degeneracy where trendy items are conspicuously missing. Fortunately, many deployed recommendation systems are uncomplicated: assuming low-rank user and item features, they solve a matrix completion problem. For building industrial recommenders, a widely-used optimization is Alternating Least Squares (ALS) which is fast-convergent (Hu et al., 2008; Koren et al., 2009; Takacs et al., 2011; He et al., 2016). We focus on data erasure from these practical systems. Despite the simplicity of bi-linear recommendation models, few works explored performing unlearning from them post-training. Thus, the first step of our exploration examines when linear models of few parameters indeed fail to memorize samples, or are otherwise "robust" against a small number of random deletions, summarized in Section 4. For arbitrary deletion, we develop Unlearn-ALS, which modifies the intermediate confidence matrix used in ALS to achieve fast forgetting. Mathematically, Unlearn-ALS is equivalent to minimizing the loss of the model on the remaining data by retraining with ALS, making it a method for _exact_ deletion. Section 5 presents its analyses and results. 
We further ask, is our work done? In a similar setting, industrial recommendations and systems trained with differential privacy do leak training data with specific users (Calandrino et al., 2011; Rahman et al., 2018), underscoring the importance of empirical evaluation. While our theory works well in the random deletion setting, in practice, however, there may still be privacy risks with respect to the deleted data, in the model after performing deletion. In completion, we develop a membership inference variant to evaluate the privacy risks of our unlearning procedure. Our contributions(1) We clarify that practical bi-linear recommendation models can have privacy risks from memorizing training data. (2) We propose Untrain-ALS, a crafty and fast heuristic that unlearns a bi-linear model, and makes no compromise to recommendation accuracy. (3) We devise an empirical test using de-noised membership inference, which is more sensitive to bi-linear models' memorization. ## 2 Related Works Machine unlearningis an emerging field motivated by performance and computation trade-offs to implement the Right to Be Forgotten on machine learning models (Grau, 2006). When a user seeks to retract data used in training, the derived model ought to update with respect to the change. Unlearning thus trades off computation, accuracy, and privacy, and is often compared with retraining (Neel et al., 2021; Ginart et al., 2019; Bourtoule et al., 2021; Golatkar et al., 2020). Unlearning recommendation systemsis concurrently explored by Li et al. (2022) and Chen et al. (2022), which target unlearning for industrial scale recommendations built through collaborative filtering. Sharding and user clustering are key to their methods, which we do not consider. Instead, our work complements the line of work through a much simpler unlearning algorithm that applies to all bi-linear models with minimal architectural change. Differentially-private recommendationsMcSherry and Mironov (2009); Liu et al. (2015) may be naturally compliant towards the Right to Be Forgotten by reducing the risk related to the model output revealing information about the inclusion of certain data. However these methods would need to anticipate to a certain extent the likelihood of deletion, and build that into training. Evaluations against privacy risksif no privacy risk is shown, it would mean that no computation needs to be expended on unlearning. Membership Inference is a popular method that measures training data memorization by a model. Typical membership inference uses a collection of samples that are not in the training data, feed them to the model, and take the outputs as the baseline negative training set. The positive training set is the data that the model has seen in the training set. Other membership inference methods have been developed, usually requiring access to the model or the training procedure more metrics (Chen et al., 2021). The central idea is to make the empirical attack model more powerful. Recently, Carlini et al. (2018) took a different approach. They developed a very effective empirical evaluation would be applicable to any model after it has been trained. For large scale language models, feature injection can test if a data point had been deleted (Izzo et al., 2021). This negative dataset is manufactured "poison" to the training procedure. The intuition is that if the model is prone to memorization, it would be able to reproduce the exact random string that was injected in the training set. 
The membership inference variant thus focuses on engineering a better dataset, thus making it more effective at uncovering memorization. While powerful, it requires internal access to model training. Differential PrivacySimilar to a well-behaving matrix completion solution's inherent privacy (Section 4), some models may be less prone to memorizing individual data points. As a result, they are less at risk for membership attacks after deletion requests. By definition, pure differentially private models are robust to deletion, as each individual data point's membership should not be inferrable (Dwork and Lei, 2009). Yet, not all models trained with differential privacy are robust. In practice, assumptions on the independences between data points do not hold and the number of deletion requests may not be known ahead of training; additionally, businesses often opt for approximations, since pure differential privacy poses degradation on model utility. As a result, Rahman et al. (2018) finds that models trained to be differentially private are yet vulnerable. ## 3 Preliminaries **Matrix Completion**. We assume a base collaborative filtering model based on matrix factorization, learned through a user-item ratings matrix \(P\) as in MovieLens (Bennett et al., 2007). The downstream recommendation for each user is \begin{table} \begin{tabular}{l c c} \hline \hline Model & Datasets & Baseline For \\ \hline \(\mathcal{M}_{\text{undeleted}}\) & \(\mathcal{D}_{\text{obs}}\) & Performance \\ \(\mathcal{M}_{\text{retrain}}\) & \(\mathcal{D}_{\text{remain}}\) & Privacy Loss \\ \(\mathcal{M}_{\text{untrain}}\) & \(\mathcal{D}_{\text{obs}},\mathcal{D}_{\text{removal}}\) & (Our Method) \\ \hline \hline \end{tabular} \end{table} Table 1: Notations for Untrain-ALS comparisons. Figure 1: A user suffers a sudden breakup, and requests the recommendation owner Netflix to erase selective watch histories. given based on the ranking of items (Koren et al., 2009). Assume matrix \(M\) where \(m_{ij}:=M[i][j]\) denotes the _ground truth_ preference of user \(i\) with respect to item \(j\). The entries of \(P\) are assumed sampled from \(M\); if the interaction is not observed, \(p_{ij}=0\). In matrix factorization, \(M\) can be recovered through a low rank multiplication, \[M=XY^{\mathsf{T}}. \tag{1}\] where \(X\) depicts user features over all users, and \(Y\) is the underlying item factors e.g. movie features. Given only \(P\), we aim to recover \(X,Y\). **Alternating Least Squares (ALS)**. Unless otherwise mentioned, we simulate training (and re-training) with AlternatingLeastSquares (ALS), a widely deployed heuristic by Hu et al. (2008); Takacs et al. (2011) outlined in Algorithm 1. ALS is exceedingly simple and parallelizable; despite having little theoretic guarantee it converges fast empirically for recommendation data (Koren et al., 2009; Jain et al., 2013; Uschmajew, 2012). For given ratings matrix \(P\) and desirable rank \(k\), we learn the model parameters \(\hat{\theta}=\{\hat{X},\hat{Y}\}\). The regularized matrix completion with parameter \(\lambda\) also associates each entry with a confidence score \(c_{ui}\). Using \(\mathcal{D}_{\mathrm{obs}}=\{(u,i)\}\) to denote the coordinates of \(M\) that contain explicit observations1, the loss function \(L_{\mathrm{ALS}}(X,Y)\) is written as Footnote 1: \(m_{ui}\neq 0\:\forall(u,i)\in\mathcal{D}_{\mathrm{obs}}\) \[\sum_{(u,i)\in\mathcal{D}_{\mathrm{obs}}}c_{ui}(p_{ui}-x_{u}^{ \mathsf{T}}y_{i})^{2}+\lambda(\sum_{u}||x_{u}||^{2}+\sum_{i}||y_{i}||^{2}). 
\tag{2}\] Algorithm 1 makes a non-convex optimization convex at each of the alternating minimizations. To tackle implicit feedback, a confidence matrix \(C\) is constructed as a soft copy of the ratings, where \(c_{ui}:=1+\alpha p_{ui}\) for \(\alpha\in\mathbf{R}^{+}\): if the ratings were high, the confidence is high, and if the ratings are missing, the confidence is low. \(C\) is then used throughout the iterations instead of \(P\). Though we treat ALS as the baseline ground truth for training (and re-training), our unlearning algorithm, UntrainALS, applies to any bi-linear model. See Appendix B for experiment parameters. **Additional assumptions**. The removal set, \(\mathcal{D}_{\mathrm{removal}}\), is uniformly sampled from \(\mathcal{D}_{\mathrm{removal}}\) without replacement, and it cannot be known prior to training. Our theoretical analysis replies on uniform sampling. Further, the coordinates in \(\mathcal{D}_{\mathrm{obs}}\) are assumed to be i.i.d., to ensure that models trained without access to the deleted data are statistically independent from the removal set. Lastly, \(|\mathcal{D}_{\mathrm{obs}}|\gg|\mathcal{D}_{\mathrm{removal}}|\) to simulate occasional deletion requests. **Re-training as privacy baseline**. As our goal is to neither over- nor under-delete, the ideal removal of \(P[m][n]\) is to train another model with new preference matrix \(P^{\prime}\) where \(P^{\prime}[m][n]=0;P^{\prime}[i][j]=P[i][j]\:\mathrm{otherwise}\). The retrained model will thus treat the removed samples as simply missing data, as Hu et al. (2008)'s _implicit_ feedback, ensuring privacy requirements. Additionally, we are only concerned with cases where \(P_{mn}\neq 0\) so that the deletion is meaningful. **Empirical evaluations.** In our setup, after unlearning procedure, the removed data should be indistinguishable from unobserved data. In Membership Inference (MI), the trained model's outputs can be exploited to judge whether a data sample was part of the training data. Typically, an MI classifier \(\sigma(\mathcal{M}):(x)\rightarrow\{0,1\}\) is a binary logistic regressor. Our MI training set is constructed with positive data of actual training samples' outputs, and negative data of removed training samples' outputs. Nonetheless, a robust unlearning does not require an associated low MI accuracy. Instead, we are concerned with _increased_ confidence in membership attack caused by the unlearning procedure. **Vulnerability**. Fixing the training procedure, the retrained model and the trained model can be seen as a function of their observed ratings matrix. Let \(\mathrm{MI}(\cdot):(\theta,\mathcal{D}_{\mathrm{removal}},\mathcal{D}_{ \mathrm{remain}})\rightarrow[0,1]\), which refers to the membership inference accuracy on a particular model given the removal set and the remaining set. Because all the evaluations fix the datasets between retraining and untraining, we simply write \(\mathrm{MI}(\mathrm{untrain})\) to refer to membership inference accuracy with untraining. Typically, \(\mathrm{MI}\) is directly used as a vulnerability measure. As we compare against re-training from scratch, the _additional_ vulnerability caused by the choosing untraining over retraining is written as \(\mathrm{MI}(\mathrm{untrain})-\mathrm{MI}(\mathrm{retrain})\). In Section 6.3.1, we propose instead to use \(\mathrm{MI}(\mathrm{unlearn})-\mathrm{MI}(\mathrm{train})-\mathrm{MI}( \mathrm{undeleted})\) under fixed data splits, to denoise the effect of the base undeleted model. 
## 4 Intuitions on default privacy against random data removal in matrix completion

A model is private by default when it does not need to change under the removal of some training data. When user-requested deletions are drawn uniformly from user data, two factors indicate "inherent" robustness of a matrix completion model: 1. having high test accuracies (or trained with calibration) and 2. the low-rank nature of the data. We comb through these arguments, extended in Appendix D, and present why empirical methods are still needed.

**Arguments for inherent privacy**. First, implicit feedback datasets mark missing data as \(0\) in \(P\). The model training treats unobserved entries as zero, including \(\mathcal{D}_{\mathrm{test}}\). Second, user requests and held-out data are sampled identically: for \(x_{r}\in\mathcal{D}_{\mathrm{removal}},x_{t}\in\mathcal{D}_{\mathrm{test}}\), both \(x_{r},x_{t}\overset{i.i.d.}{\sim}\mathrm{U}(\mathcal{D}_{\mathrm{obs}})\). Naturally \(\mathcal{M}_{\mathrm{undeleted}}\) cannot distinguish between a sample drawn from one set or the other based on query output as in Carlini et al. (2022)'s "membership inference security game", i.e., \(\mathbb{E}[L_{x\sim\mathcal{D}_{\mathrm{removal}}}(x)]=\mathbb{E}[L_{x\sim\mathcal{D}_{\mathrm{test}}}(x)]\). Moreover, empirical recommenders can be highly accurate without memorization. At the optimal AUC in Figure 2, the base model predicts well even with large removal fractions. Further, in linear models, per Kearns (1995) and Blum et al. (1999), appropriate calibration in training results in in-domain generalization, so we expect low prediction losses on missing data for both \(\mathcal{M}_{\mathrm{retrained}}\) and \(\mathcal{M}_{\mathrm{undeleted}}\). Lastly, the impact of using a small number of parameters in matrix completion is extended in Appendix D. Rehashing the key claim in Recht (2011), we show that the exact solutions to matrix completion are inherently robust to randomly sampled deletions under data coherence assumptions.

**Real-world ALS training breaks theoretic assumptions**. Model training typically employs regularization (Equation 2), and is early-stopped at the best fit (Algorithm 1), not run to completion. Matrix coherence of real-world data, as Recht (2011) requires, is not testable in practice. Lastly, the decompositions learned using ALS can be non-unique (and not even equivalent up to a rotation) (Jain et al., 2013), so the removal samples may be especially vulnerable with respect to the currently deployed model, thus requiring manual deletion.

Nevertheless, average membership attack accuracies may be especially low against a matrix completion model. An attacker aims to discriminate between the predictions influenced by \(\mathcal{D}_{\mathrm{removal}}\) and \(\mathcal{D}_{\mathrm{remain}}\) after seeing some samples from each (Carlini et al., 2022). Varying data splits, a well-calibrated \(\mathcal{M}\) has similar expected losses across splits. Because optimizing for area-under-curve (AUC) is used both in 1. thresholding the membership inference model on the removal data and 2. selecting the optimal model, we have \(\mathbb{P}_{(u,i)\sim\mathcal{D}_{\mathrm{removal}}}(p_{ui}=1)\approx\mathbb{P}_{(u,i)\sim\mathcal{D}_{\mathrm{obs}}}(p_{ui}=1)=\mathrm{AUC}\). For both \(\mathcal{M}_{\mathrm{retrain}}\) and \(\mathcal{M}_{\mathrm{undeleted}}\), lower validation loss hinders attacker accuracy, making the _difference_ of the attacks, i.e.,
the privacy improvements from re-training, numerically small; we discuss this further in Section 6.3.2, where we discard the averaging across data splits for more effective membership inference.

``` 0: \(P,\alpha,\lambda,\hat{X},\hat{Y},C_{0},\mathcal{D}_{\mathrm{removal}}\) \(X,Y\leftarrow\hat{X},\hat{Y}\) {from Algorithm 1} for all \((u,i)\in\mathcal{D}_{\mathrm{removal}}\) do \(p_{ui}\gets 0,c_{ui}\gets 0\) {delete and block} endfor while model does not converge do for all \(u\) do \(x_{u}\leftarrow(Y^{\intercal}C^{u}Y+\lambda I)^{-1}Y^{\intercal}C^{u}P^{u}\) endfor for all \(i\) do \(y_{i}\leftarrow(X^{\intercal}C^{i}X+\lambda I)^{-1}X^{\intercal}C^{i}P^{i}\) endfor endwhile ```
**Algorithm 2** Untrain-ALS

## 5 Exact Deletion with Untrain-ALS

### Untraining Alternating Least Squares

Our unlearning strategy, Untrain-ALS, outlined in Algorithm 2, makes slight modifications to the fast ALS heuristic used in training implicit feedback recommendations.

1. **Pre-train**. Use the resulting user and item features \(X_{0}\), \(Y_{0}\) in Algorithm 1 to initialize ALS.
2. **Deleting preferences**. Set \(p_{ui}=0\) for each deleted item-user interaction \(i,u\), a common practice for fine-tuning.
3. **Blocking confidence on removed data**. Set \(c_{ui}\gets 0\) for each deleted item-user interaction \(i,u\) at all subsequent iterations. Crucially, this prevents further influence of the deleted data, thus allowing the model to refit quickly to the remaining data. Optionally, use the adjusted inverse.

Figure 2: **Baseline: re-training dynamics.** When training from scratch, model AUC on the same test set (1% of MovieLens-1M) across 6 different fractions of removal. Dotted: the optimal number of training iterations. Results checkpointed at every 10 iterations.

### Untrain loss = retrain loss, functionally

Recall that the holy grail of unlearning is to approximate retraining. Under these modifications to \(p_{ui}\) and \(c_{ui}\), the Untrain-ALS objective is functionally equivalent to that of re-training.

**Theorem 5.1**.: \(L_{\mathrm{UntrainALS}}=L_{\mathrm{retrain}}\) _on \(\mathcal{D}_{\mathrm{obs}},\mathcal{D}_{\mathrm{removal}}\)._

Proof Sketch.: \[L_{\mathrm{UntrainALS}}(\mathcal{D}_{\mathrm{obs}},\mathcal{D}_{\mathrm{rm}})=\\ \sum_{(u,i)\in\mathcal{D}_{\mathrm{obs}}}f_{c}(c_{ui})(p_{ui}-x_{u}^{\intercal}y_{i})^{2}+\lambda(\sum_{u}||x_{u}||^{2}+\sum_{i}||y_{i}||^{2})\] where \(f_{c}(\cdot)\) transforms the confidence score.
Using the Kronecker delta \(\delta\) for set membership, our algorithm has \[f_{c}(c_{ui}) =\delta_{(u,i)\in(\mathcal{D}_{\mathrm{obs}}\setminus\mathcal{D}_{\mathrm{rm}})}c_{ui}\] \[=(1-\delta_{(u,i)\in\mathcal{D}_{\mathrm{rm}}})c_{ui}=c_{ui}-\delta_{(u,i)\in\mathcal{D}_{\mathrm{rm}}}c_{ui}.\] Expanding the untraining loss along \(\mathcal{D}_{\mathrm{remain}}\) and \(\mathcal{D}_{\mathrm{removal}}\), \[L_{\mathrm{UntrainALS}} =\lambda(\sum_{u\in\mathcal{D}_{\mathrm{obs}}}||x_{u}||^{2}+\sum_{i\in\mathcal{D}_{\mathrm{obs}}}||y_{i}||^{2})\] \[+\sum_{(u,i)\in\mathcal{D}_{\mathrm{remain}}}c_{ui}(p_{ui}-x_{u}^{\intercal}y_{i})^{2}\] \[+\sum_{u,i\in\mathcal{D}_{\mathrm{removal}}}(0)(p_{ui}-x_{u}^{\intercal}y_{i})^{2}\] Because deletion requests are random and \(|\mathcal{D}_{\mathrm{removal}}|\ll|\mathcal{D}_{\mathrm{obs}}|\), the set of contributing \(u,i\) is not expected to change, therefore \[L_{\mathrm{UntrainALS}} =\lambda(\sum_{u\in\mathcal{D}_{\mathrm{remain}}}||x_{u}||^{2}+\sum_{i\in\mathcal{D}_{\mathrm{remain}}}||y_{i}||^{2})\] \[+\sum_{(u,i)\in\mathcal{D}_{\mathrm{remain}}}c_{ui}(p_{ui}-x_{u}^{\intercal}y_{i})^{2}\] \[=L_{\mathrm{ALS}}(\mathcal{D}_{\mathrm{remain}})\quad\text{(Appendix A in full).}\]

_Remark 5.2_.: It may appear that with such strong results, our work is over. Yet two real-world issues prevent us from claiming any untrained model is the same as any retrained model: 1. empirically, the models are trained with early stopping: the number of epochs to train is determined by minimal loss; and 2. matrix factorization solutions via ALS are not unique. For empirical privacy, some of the potential solutions may be _more private_ than others. Therefore, it is crucial to complement with empirical privacy measures.

### Untrain Runtime \(\leq\) Training Runtime, Per Pass

Clearly, every pass of Untrain-ALS has the same runtime as a pass of ALS (Algorithms 1 and 2). Untrain-ALS benefits from convergence analyses of ALS itself (Uschmajew, 2012). Because the loss of the pre-trained model is already near-minimal, using Untrain-ALS would be much faster than running ALS from scratch. Section 6.2 verifies empirically that Untrain-ALS takes fewer passes than re-training.

Speedups. Every default pass of ALS requires inverting a large matrix. Though fast implementations use conjugate gradient (CG) to approximate inverses (Takacs et al., 2011), we note a faster alternative for exactly computing the matrix inverse in Untrain-ALS, where the original inverse is already available. Adjusting for \(c_{ui}\gets 0\) is equivalent to changing a single entry in the diagonal matrix \(C^{u}\). This subtraction of a one-entry matrix is the perturbation of concern. The resulting confidence matrix under un-training, \(\widetilde{C^{u}}\), is very close to the original confidence matrix, where

\[\widetilde{C^{u}}:=C^{u}-(\mathrm{diag}[0,\cdots,c_{ui},\cdots,0]). \tag{3}\]

Consider a special case of Woodbury's identity (Woodbury, 1950) where only a rank-one term is subtracted: by Sherman and Morrison (1950)'s subtraction case, for a matrix \(A\), \((A-uv^{\intercal})^{-1}=A^{-1}+A^{-1}u(1-v^{\intercal}A^{-1}u)^{-1}v^{\intercal}A^{-1}\). Let \(A:=Y^{\intercal}C^{u}Y+\lambda I\) and take \(u=c_{ui}y_{i}\), \(v=y_{i}\). With \(q:=c_{ui}\,y_{i}^{\intercal}A^{-1}y_{i}\), the adjusted inverse becomes

\[(\widetilde{A})^{-1}=A^{-1}+\frac{c_{ui}}{1-q}\,A^{-1}y_{i}y_{i}^{\intercal}A^{-1}. \tag{4}\]

Overall Per-Pass Runtime. Without the inverse adjustment, \(O(|\mathcal{D}_{\mathrm{obs}}|k^{2}+nk^{3})\) is the per-pass runtime of both ALS and Untrain-ALS.
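As a sanity check on Equation 4, a minimal NumPy sketch of the rank-one inverse adjustment (function and variable names are ours):

```python
import numpy as np

def downdate_inverse(A_inv, y_i, c_ui):
    """Inverse of (A - c_ui * y_i y_i^T) from A^{-1} alone,
    via Sherman-Morrison: O(k^2) instead of a fresh O(k^3) inversion."""
    Ay = A_inv @ y_i                    # A^{-1} y_i
    q = c_ui * (y_i @ Ay)               # q = c_ui * y_i^T A^{-1} y_i
    return A_inv + (c_ui / (1.0 - q)) * np.outer(Ay, Ay)

# Quick check against direct inversion on a small synthetic example.
rng = np.random.default_rng(0)
k = 8
Y = rng.standard_normal((50, k))
A = Y.T @ Y + 0.1 * np.eye(k)           # stands in for Y^T C^u Y + lambda*I
y_i, c_ui = Y[3], 2.0
assert np.allclose(downdate_inverse(np.linalg.inv(A), y_i, c_ui),
                   np.linalg.inv(A - c_ui * np.outer(y_i, y_i)))
```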
With the inverse adjustment, every user feature is computed to complete one step of ALS: \(x_{u}\leftarrow(Y^{\intercal}C^{u}Y+\lambda I)^{-1}Y^{\intercal}C^{u}p(u)\). In ALS, the inverse of \(A\) is computed in \(O(k^{3})\), and using CG speeds it up to \(O(k^{2}p)\) where \(p\) is the number of CG iterations. Assuming \(A^{-1}\) has been computed in the pretraining step, we can view the adjustment as a perturbation on \(A\), which we propagate to its inverse. This allows for a runtime of \(O(k^{2})\) per user or item per iteration, or \(O(|\mathcal{D}_{\mathrm{obs}}|k^{2})\) per Untrain-ALS pass for a single deletion.

## 6 Numerical Results

Extensive numerical simulations verify the conclusions of our method, and we empirically demonstrate the efficiency of Untrain-ALS using MovieLens data (Bennett et al., 2007). Appendix B states the experimental parameters.

### Experimental Goals and Setup

To investigate the practical implications of using Untrain-ALS, we examine three aspects:

1. The accuracy of Untrain-ALS in preventing model degradation. In our case, we show that \(\mathcal{M}_{\mathrm{untrained}}\) performs no worse than \(\mathcal{M}_{\mathrm{retrained}}\) models that result from retraining from scratch on the remaining data.
2. The runtime in practice; in our case, it suffices to show that unlearning takes fewer iterations than retraining.
3. The privacy implications of Untrain-ALS. In our case, unlearning should reduce the privacy risks of the undeleted model through a reduction in MI accuracy.

Note that empirical privacy evaluations should (i.) reliably uncover vulnerabilities in \(\mathcal{M}_{\mathrm{undeleted}}\), and (ii.) be able to differentiate between \(\mathcal{M}_{\mathrm{retrain}}\), which does not observe the offending data, and \(\mathcal{M}_{\mathrm{undeleted}}\).

For data \(P\), we use the MovieLens datasets (Bennett et al., 2007; Harper and Konstan, 2015). On larger models, membership inference suffers severe sensitivity issues; therefore, we illustrate on MovieLens-100k and MovieLens-1M. We run parallelized Alternating Least Squares with conjugate gradient speedup (Hu et al., 2008; Takacs et al., 2011) as the baseline; without our inverse adjustments, runtime is apparent through comparing the iteration counts between models. Additional setup parameters are outlined in Appendix B.

### Untrain-ALS: No Degradation, Fast Convergence

Over a wide range of removal fractions, Figure 4 shows fast untraining dynamics, which result in highly performant models. In MovieLens-1M, for removal fractions under 60%, Untrain-ALS typically converges by \(10\) iterations, while retraining takes \(40\) to \(70\) ALS passes. Because Untrain-ALS closely follows the well-tested ALS, even if untraining is left unchecked as in Figure 4, there is no degradation to the model compared to training from scratch. Per the theoretical analysis in Section 5.1, the Untrain-ALS objective is consistent with retraining without the removed data. In summary, Untrain-ALS breaks the usual expectation that fast unlearning necessarily degrades model performance.

### Membership Inference (MI) from data queries

Recall \(\mathrm{MI}(\mathcal{M}):(\theta_{\mathcal{M}},\mathcal{D}_{\mathrm{removal}},\mathcal{D}_{\mathrm{remain}})\rightarrow[0,1]\). When evaluating data leakage across datasets and model configurations, the measure \(\mathrm{MI}(\mathcal{M})\) is _average-case_ (Section 3). We first motivate MI-based evaluations with a null result, discuss the specific risks of MI for evaluating memorization vulnerabilities, and offer an improved metric.
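For concreteness, a minimal sketch of such an average-case MI measurement (a simplification of ours; here the attack features are just the model's per-entry predicted preference scores \(x_{u}^{\intercal}y_{i}\), and the function name is hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def mi_accuracy(scores_member, scores_removed, seed=0):
    """Binary logistic regressor distinguishing member vs. removed samples
    from per-entry model outputs; returns held-out accuracy and AUC."""
    X = np.concatenate([scores_member, scores_removed]).reshape(-1, 1)
    y = np.concatenate([np.ones(len(scores_member)),
                        np.zeros(len(scores_removed))])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    clf = LogisticRegression().fit(X_tr, y_tr)
    return (clf.score(X_te, y_te),
            roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```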
Concurrently, Carlini et al. (2022) state that a more robust metric should be used, though they are motivated by deep learning models.

Average-case MI is insufficient. In Figure 5, after random data is removed, the vulnerability of retraining is comparable to \(10\) passes of our proposed unlearning method. While this looks like perfect unlearning, observe that even with changes in training iterations (from underfit to convergent), there is no change in vulnerability. An appealing conclusion is that underfit bi-linear models are always robust to deletion, as though no work needs to be done in unlearning so long as we stop early. _We do not believe that the unlearning algorithm is absolutely private_.

#### 6.3.1 Sensitivity issue of MI in practice

We investigate empirical attacks based on average-case membership inference against the unlearned model. As alluded to in Section 4, matrix completion-based models have only modest privacy risk to begin with; in implicit feedback datasets, extensive model validation can mitigate the risks against random data deletion. Meanwhile, membership inference attacks (Shokri et al., 2017) are especially powerful when the input data carries a lot of information; in matrix completion, without some advanced user fingerprinting, the model output itself is all the information. Concretely, 3 challenges arise in this pursuit of practical evaluation:

1. In the real world, the initial pre-trained recommenders tend to perform well, even after a large portion of the data is zeroed out (Figure 2). Uniformly removing data is akin to sampling another held-out set; the initial model likely predicts the missing items just as well (Section 4). Moreover, corrupting the base model for better privacy is not an option in industrial deployments.
2. Using only the predicted value, \(\mathcal{M}_{\mathrm{untrain}}\) does not distinguish between the removed and remaining samples, so there is no significant MI accuracy change. (Only at certain ratios for certain epochs can we induce a 2% difference.) _MI is highly susceptible to small noise._
3. Varying train-test splits, the base model \(\mathcal{M}_{\mathrm{undeleted}}\) has different membership attack vulnerabilities built in, due to ALS not having a fixed unique solution; different training trajectories find different decompositions as solutions to the same matrix completion problem. Some of those models are inherently more defensible than others. This adds noise to the already small numerical measurement. _We tackle this directly._

Recall that our privacy model views re-training as ground truth. To study the vulnerability of unlearning is to study the _additional_ vulnerability compared with re-training. Let \(\mathcal{IV}\) denote the _intrinsic vulnerability_ associated with the learning strategy. We are concerned with whether Untrain-ALS presents more or less intrinsic risk compared with retraining. Assuming that the training and un-training procedures have similar intrinsic vulnerability, \(\mathcal{IV}_{\mathrm{ALS}}\approx\mathcal{IV}_{\mathrm{re-train}}\). An estimator for \(\mathcal{IV}_{\mathrm{Untrain-ALS}}\) is thus the difference between the empirical membership inference measures:

\[\mathcal{IV}_{\mathrm{Untrain-ALS}}=\mathrm{MI}(\mathrm{untrain})-\mathrm{MI}(\mathrm{retrain}) \tag{5}\]

_Remark 6.1_.: Because retraining is assumed to be statistically independent from the removed data, being able to infer properties of the removed data from the re-trained model, e.g., due to data duplication, is not an essential vulnerability.
If an empirical measurement shows that the untrained model has membership vulnerability, it is a tolerable amount of privacy risk under our setup. However, this measurement of intrinsic untraining vulnerability shows that, at the best fit, untraining and retraining are extremely close. This numerical difference is so small that the measurement appears dominated by noise and yields inconclusive results, as shown in Figure 5. When averaged across runs, the overlap of untraining and retraining is further obscured.

#### 6.3.2 Modifying Average-Case Membership Inference

Identifying model noise as a cause, let \(\mathcal{IV}^{\prime}\) be our modified intrinsic vulnerability measure, applied not only to the same \(\{M,\mathcal{D}_{\mathrm{obs}},\mathcal{D}_{\mathrm{removal}}\}\), but also under an identical train-test split. The splits greatly impact the model, as we see in Section 6.3.1 that intrinsic vulnerability to deletion is closely related to model AUC. Using ALS and Untrain-ALS to retrain and unlearn after data removal, we make three accuracy measurements: \(\mathrm{MI}(\mathrm{untrain})\), \(\mathrm{MI}(\mathrm{retrain})\), and \(\mathrm{MI}(\mathrm{undeleted})\). Even though our privacy model does not directly concern the base model, the inclusion of \(\mathrm{MI}(\mathrm{undeleted})\) serves to _denoise_ the influence of model splits on our numerical accuracy differences. We have

\[\mathcal{IV}^{\prime}_{\mathrm{Untrain}}= \mathrm{MI}(\mathcal{M}_{\mathrm{untrain}}) \tag{6}\] \[- \mathrm{MI}(\mathcal{M}_{\mathrm{retrain}})-\mathrm{MI}(\mathcal{M}_{\mathrm{undeleted}}).\]

For the same model, Equation 6 appears off by a constant from Equation 5. However, as a measurement, the subtraction for each run improves numerical stability, and reduces noise when averaged over multiple runs. In Figures 6 and 7, the vulnerability \(\mathcal{IV}\) is measured as the membership inference accuracy minus the membership inference accuracy associated with \(\mathcal{M}_{\mathrm{undeleted}}\), for the same split under MovieLens-100K. The removal fraction is set at every 5% of the data (even though we are empirically only concerned with small fractions). The untraining procedure involves first training the base model for the selected number of iterations.

Improvements. \(\mathcal{IV}^{\prime}\) significantly mitigates the sensitivity issues, as we now see a change with respect to model training iterations. In Figure 6, as training iterations get larger, the inherent vulnerability is greater. In Figure 7, as untraining continues, there is a decrease in vulnerability. Neither phenomenon is salient when measured under \(\mathcal{IV}\).

Limitations. Nonetheless, our denoising efforts only have a clear effect at small scale over a specific removal range. The range relevant to user-requested deletion is, however, still not very sensitive. As illustrated in Figure 8, at a larger scale, this metric suffers a sensitivity loss.

## 7 Discussion, Limitations, and Impacts

We propose using Untrain-ALS to perform machine unlearning on bi-linear recommendations, a setting which is simultaneously widely deployed in the real world and under-studied in machine unlearning. The method is fast to converge, and can unlearn exactly without degrading the model. However, empirically, models learned with regularized matrix completion are not unique; thus unlearning and re-training may exhibit small differences in privacy.
To find them, we employ empirical membership inference attacks, and adapt the vanilla version to denoise the impact of data splits, successfully revealing trends in vulnerability that were previously obscured. We see three trends emerging from the empirical results:

1. Untrain-ALS is clearly fast and powerful, with no degradation in model performance, unlike most unlearning methods (Sekhari et al., 2021).
2. Untrain-ALS is not the same as re-training, but it closely tracks re-training in most privacy measures, provided that it is trained to the best fit.
3. Relying on membership inference classifications alone to measure unlearning thus leads to potentially outstanding privacy risks.

We join prior calls in urging the unlearning community to re-think empirical evaluations for unlearning to meet practical privacy needs (Truex et al., 2019; Chen et al., 2021; Jayaraman et al., 2020).

**Limitations**. Our work is limited by the choice of models. Though our method applies to all bi-linear models, not all recommendation systems are implemented with dot products.

**Societal impact**. We place pressure on platforms that train on user data to give users real-time options to remove the influence of their training data. Despite research progress, however, real-world systems have yet to catch on (Villaronga et al., 2018). When users opt to remove their past records' influence on recommendations, existing implementations tend to fall under two categories: complete expunging of their recommendations, in which all of a user's historic interactions are zeroed out, such as Netflix's reset, or a vague removal of learnt concepts, such as Facebook's Ad preferences. While many services offer granular control over which of their historic actions the platform collects, they do not promise that the deletion necessarily impacts downstream systems that learn from such data. Ostensibly, two factors prevent machine unlearning from being deployed: 1. lacking legal recognition for the associated privacy risks, as GDPR-style deletion hinges on whether automated systems leak private data for the general public (Villaronga et al., 2018). For that, our work adds to the rigor of discovery: empirical evaluation needs revisiting. 2. industrial-scale computation expenditure on pre-trained machine learning models is massive, and there has not yet been a compelling demonstration that industrial-scale recommendation models can be efficiently unlearned without hurting the bottom line. Our work on Untrain-ALS proposes efficient and exact unlearning. Upon sequential deletion requests, the unlearned model will not perform worse than the retrained model. When no trade-off is made, the hope is that both policy and industry can agree to facilitate user privacy.

**Malicious use**. Our selective forgetting techniques can be applied to sinister areas where forgetting is a form of censorship. The right to be forgotten also faces similar criticism outside of the legal realms; after all, even the forgetting procedure in "Eternal Sunshine of the Spotless Mind" may be troublesome because it lets the recipient "live a lie" (Grau, 2006).

## 8 Conclusion

In practice, matrix completion-based models are not guaranteed to be inherently private. To unlearn, we develop Untrain-ALS, with sound theory and efficient implementations. The Untrain-ALS objective aligns exactly with that of re-training, so developers will see no degradation in model performance caused by choosing unlearning over re-training.
To make our solution practical, we provide a numerical speedup for scaling to large systems, and a denoised vulnerability measure that improves membership inference sensitivity.
2308.13546
Functional Graph Contrastive Learning of Hyperscanning EEG Reveals Emotional Contagion Evoked by Stereotype-Based Stressors
This study delves into the intricacies of emotional contagion and its impact on performance within dyadic interactions. Specifically, it focuses on the context of stereotype-based stress (SBS) during collaborative problem-solving tasks among female pairs. Through an exploration of emotional contagion, this study seeks to unveil its underlying mechanisms and effects. Leveraging EEG-based hyperscanning technology, we introduced an innovative approach known as the functional Graph Contrastive Learning (fGCL), which extracts subject-invariant representations of neural activity patterns from feedback trials. These representations are further subjected to analysis using the Dynamic Graph Classification (DGC) model, aimed at dissecting the process of emotional contagion along three independent temporal stages. The results underscore the substantial role of emotional contagion in shaping the trajectories of participants' performance during collaborative tasks in the presence of SBS conditions. Overall, our research contributes invaluable insights into the neural underpinnings of emotional contagion, thereby enriching our comprehension of the complexities underlying social interactions and emotional dynamics.
Jingyun Huang, Rachel C. Amey, Mengting Liu, Chad E. Forbes
2023-08-22T09:04:14Z
http://arxiv.org/abs/2308.13546v2
Functional Graph Contrastive Learning of Hyperscanning EEG Reveals Emotional Contagion Evoked by Stereotype-Based Stressors

###### Abstract

This study delves into the intricacies of emotional contagion and its impact on performance within dyadic interactions. Specifically, it focuses on the context of stereotype-based stress (SBS) during collaborative problem-solving tasks among female pairs. Through an exploration of emotional contagion, this study seeks to unveil its underlying mechanisms and effects. Leveraging EEG-based hyperscanning technology, we introduced an innovative approach known as the functional Graph Contrastive Learning (fGCL), which extracts subject-invariant representations of neural activity patterns from feedback trials. These representations are further subjected to analysis using the Dynamic Graph Classification (DGC) model, aimed at dissecting the process of emotional contagion along three independent temporal stages. The results underscore the substantial role of emotional contagion in shaping the trajectories of participants' performance during collaborative tasks in the presence of SBS conditions. Overall, our research contributes invaluable insights into the neural underpinnings of emotional contagion, thereby enriching our comprehension of the complexities underlying social interactions and emotional dynamics.

Emotional Contagion · Graph Contrastive Learning · Stereotype-Based Stressor · Graph Classification · Graph Representation Learning

## 1 Introduction

Emotional contagion refers to the sharing of emotional states between individuals, and it has been observed in both animal and human models that the infectivity of negative emotions is much greater than that of positive emotions [1]. Negative emotional contagion has a powerful effect on our relationships - family, friends, teams, etc. - and can lead, for example, to depressive behavior in healthy people who live with depressed individuals. It is therefore urgent to understand the mechanism of emotional contagion, especially negative emotional contagion. Emotional contagion has long been regarded as reflecting a mimicry-based process, in which mimicry of emotional expressions and its consequent feedback function are assumed; it can be evoked by higher-order social processes or by a simple emotion-to-action response, as well as by the primary mimicry-based process [2]. At present, emotional contagion studies mostly adopt behavioral analysis and questionnaires, which are often affected by subjects' subjective factors. They have mainly focused on behavioral experiments, such as analysing people's posts containing emotional information to extract affective evidence [3], using the Positive And Negative Affect Schedule (PANAS) scale to measure positive and negative emotions quantitatively [4], and mathematically simulating emotional contagion in crowd evacuation [5]. Although behavioral analysis and questionnaires can provide valuable insights into emotional contagion, they have limitations in terms of capturing the neural mechanisms, timing, and subtleties of this phenomenon. To overcome these limitations, researchers have turned to EEG-based hyperscanning, a technology that records electroencephalographic (EEG) data from multiple participants simultaneously. This approach complements traditional behavioral analysis and questionnaires by providing a more direct and precise real-time examination of the underlying brain activities associated with emotional contagion.
EEG-based hyperscanning technology has proven effective in capturing brain states during affective communication. For instance, when an individual experiences specific emotions like sadness, joy, or fear, their brain activity may influence the brain activity of others they interact with, thus bearing implications for emotional contagion [6]. However, the significant inter-subject variability of emotion-related EEG signals poses a great challenge for cross-individual emotional representation extraction [7]. In most cases, the accuracy of intra-subject emotion classification is higher than that of inter-subject classification with the same classifier [8; 9]. This limitation in the generalization of emotion classifiers may be attributed to individual differences in EEG-based emotional representations, influenced by factors such as personality, dispositional affect, and genotype [10]. Furthermore, individual variance in patterns of brain connectivity reveals that inter-subject contrast plays a significant role in cognitive analysis [11]. These previous findings suggest that EEG emotional representations should not be extracted without considering individual differences.

In this study, we investigated whether women who experienced social identity threat could transmit their stress to women who were not under threat, a process known as Stereotype-Based Stress (SBS) contagion, and examined how this collective stress affected women in a dyadic performance context. As previous research revealed, SBS contexts typically engender a variety of behavioral and physiological SBS responses, including sustained vACC activation, unique neural network configurations, and enhanced connectivity between regions integral for emotion (dACC, vACC, and mPFC) and saliency networks (IPL, insula, and STS); collectively, these could provide evidence for increased emotional processing and awareness of negatively arousing or stressful information [12; 13; 14; 15]. We sought to understand: if threatened women can transmit their stress to otherwise non-threatened partners, does it hurt or benefit the woman directly under threat, and to what extent can this come at a cost to their otherwise non-threatened partners? To this end, we designed an experimental paradigm of emotional contagion using discussion and learning scenarios, and sought to understand the physiological mechanism of emotional contagion using EEG-based hyperscanning technology combined with a data-driven approach -- functional Graph Contrastive Learning (fGCL) -- which extracts subject-invariant emotional representations while preserving Functional Connectivity (FC) information, with a downstream analysis to infer and explore the process of emotional contagion from its outcomes. The formulation of fGCL is grounded on the assumption that the neural activities of subjects are in a similar state when they receive the same segment of emotional stimuli (i.e., the displayed CORRECT or WRONG responses on the screen). Based on this fundamental idea, we aim to learn subject-invariant representations of EEG signals in the embedding space underlying similar mental activities. Specifically, fGCL mainly consists of two components, i.e., a spectral-based graph convolutional network (spectral GCN) encoder and a two-layer multi-layer perceptron (MLP). It maximizes the similarities of the representations in response to identical emotional stimuli while minimizing the similarities between signals corresponding to different stimuli.
In the downstream analysis, a classifier -- Dynamic Graph Classification (DGC) -- utilizes the graph embeddings extracted by the trained encoder as input to identify brain states in response to emotional stimuli after emotional contagion within dyads. Since the representations are extracted based on semantically meaningful settings, they are expected to be informative and generalizable in the downstream analysis. The method presents three essential characteristics:

* The presented model can extract EEG signal representations with inter-individual commonality and remove individual differences, which more effectively summarizes the internal neural activity pattern.
* It is a deep learning model that is more effective than traditional statistical analysis and behavioral analysis methods for investigating the emotional contagion mechanism.
* The graph-data-structure-based analysis is more aligned with the functional brain network structure and thus yields more intuitive and effective results.

## 2 Method

### Graph Construction and Analysis Procedure

Individual differences exist in inter-subject functional connectivity, a statistical dependency that quantifies the connection strengths between brain regions of interest (ROIs) [16]; our aim is to preserve the FC information for more effective emotional analysis. To this end, we adopt graphs, which are naturally suitable for modeling brain topology. In this approach, we project ROIs onto the nodes of a graph and connect the nodes with weighted edges; this overcomes the limitation of the traditional 2D grid-like structure, where models might fail to explore and exploit the complex FC [17]. Given a dataset \(\{(\mathcal{G}_{i}^{j},y_{i})\}_{i=1}^{N}\) with N individuals, where \(y_{i}\in\{0,1\}\) denotes the label of the \(i\)-th graph, \(\mathcal{G}_{i}^{j}=\{\mathcal{V}_{i}^{j},\mathcal{E}_{i}^{j}\}\) is the \(j\)-th view among the augmented views (see Section 3.7). The Pearson correlations for each ROI are calculated as node features \(X_{i}^{j}\in\mathcal{R}^{ROIs\times ROIs}\), where \(x_{n}=X[n,:]^{T}\) is the ROIs-dimensional (the number of ROIs in a graph) node feature of node \(v_{n}\in\mathcal{V}\) containing FC information, and the partial correlations between ROIs are used as edge features \(\mathcal{H}_{i}^{j}\in\mathcal{R}^{ROIs\times ROIs}\), where \(h_{nn^{\prime}}=\mathcal{H}[n,n^{\prime}]^{T}\) is the edge feature of edge \(e_{nn^{\prime}}\in\mathcal{E}\).

As illustrated in Figure 1, the functional Graph Contrastive Learning (fGCL) encoder learns to extract the embedding of each graph as a subject-invariant representation (i.e., it maximizes the representational similarity of EEG signals belonging to similar scenarios and minimizes that of others). Leveraging this encoder, we construct a population graph with the collection of subject-invariant embeddings of EEG signals as nodes. Given this, the Dynamic Graph Classification (DGC) classifier is trained to perform the classification task on three stages (early, middle and late). The results can be further utilized for emotional contagion analysis (see Section 4.3).

#### 2.1.1 The functional Graph Contrastive Learning (fGCL) Encoder

The graph contrastive learning encoder \(f\) takes N-paired graphs (a minibatch) \(\{\mathcal{G}_{i}^{A}|i=1,..,N\}\) and \(\{\mathcal{G}_{j}^{B}|j=1,..,N\}\) as inputs to generate subject-invariant embeddings \(\{z_{i}^{A}|i=1,..,N\}\) and \(\{z_{j}^{B}|j=1,..,N\}\) of the graphs for each type of trial (correct or wrong feedback in our experiment).
It adopts a spectral graph convolutional network consisting of 2 Chebyshev spectral graph convolution layers [18], each followed by a TopK pooling layer [19], as the model backbone to capture representations and retain important nodes during iterative aggregation; global average pooling and global max pooling layers are used to capture the global information, and a final two-layer multi-layer perceptron (MLP) is used to output the graph embedding. Similar to the SimCLR framework [20] with the InfoNCE loss, we adopt the contrastive loss function for the anchor embedding \(z_{i}^{A}\) defined as:

\[L(z_{i}^{A})=-log(\frac{exp(sim(z_{i}^{A},z_{i}^{B})/\tau)}{\sum_{j=1}^{N}I_{j\neq i}exp(sim(z_{i}^{A},z_{j}^{A})/\tau)+\sum_{j=1}^{N}exp(sim(z_{i}^{A},z_{j}^{B})/\tau)}) \tag{1}\]

Figure 1: The workflow of our approach. (A) Experiment setting. Subjects in one dyad answer mathematical questions while their EEG signals are recorded simultaneously. (B) Given a minibatch of graph data representing subjects in the same dyad, the fGCL encoder is able to extract subject-invariant graph embeddings. (C) The split graph embeddings are then fed into the DGC classifier to produce classification evaluation results at the three stages respectively. (D) Downstream analysis. Data are divided into early, middle, and late stages according to the scenario of the tasks to perform further emotional contagion analysis.

where \(I_{j\neq i}\in\{0,1\}\) is an indicator, which is set to 1 when \(j\neq i\), and \(\tau\) is a temperature factor to adjust the attraction strength. Overall, this loss function increases the attraction of the positive pair (\(z_{i}^{A}\) and \(z_{i}^{B}\)) and decreases the attraction of the negative pairs (\(z_{i}^{A}\) and others) in the embedding space. The similarity between two embeddings is computed by

\[sim(z_{i}^{A},z_{j}^{B})=\frac{z_{i}^{A}\cdot z_{j}^{B}}{||z_{i}^{A}||||z_{j}^{B}||} \tag{2}\]

Eventually, the accumulated loss of the minibatch is computed by

\[L=\sum_{i=1}^{N}L(z_{i}^{A})+\sum_{i=1}^{N}L(z_{i}^{B}) \tag{3}\]

The spectral convolution blocks of fGCL comprise the Chebyshev spectral graph convolutional operator (i.e., the ChebConv layer), which is defined by:

\[X^{\prime}=\sum_{k=1}^{K}\mathcal{Z}^{(k)}(X)\cdot\theta^{(k)} \tag{4}\]

where K is the Chebyshev filter size, and \(\mathcal{Z}^{(k)}(X)\) is computed recursively by \(\mathcal{Z}^{(1)}=X\), \(\mathcal{Z}^{(2)}=\tilde{L}\cdot X\), all the way to \(\mathcal{Z}^{(k)}=2\cdot\tilde{L}\cdot\mathcal{Z}^{(k-1)}-\mathcal{Z}^{(k-2)}\). Here \(\tilde{L}\) denotes the scaled and normalized Laplacian \(\frac{2L}{\lambda_{max}}-I\), where \(\lambda_{max}\) is the largest eigenvalue of \(L\), and \(\theta^{(k)}\) are learnable parameters. To prevent overfitting, each ChebConv layer is followed by a TopK pooling layer, which downsamples graphs and reduces their dimensionality while retaining the most relevant nodes (i.e., the top K nodes), leading the model to focus on meaningful information. Afterwards, the graph embeddings optimized by graph contrastive learning are input to the downstream classifier to perform the graph classification task.

#### 2.1.2 Downstream Analysis on the Outcome of Emotional Contagion

After the graph contrastive learning phase, subject differences have been eliminated in the embedding space; that is, the representation captures the neural activity patterns common to the group.
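For concreteness, a minimal sketch of the encoder and contrastive loss of Section 2.1.1, assuming PyTorch Geometric; the layer widths, embedding dimension, and class names are our own illustrative choices:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import (ChebConv, TopKPooling,
                                global_mean_pool, global_max_pool)

class FGCLEncoder(torch.nn.Module):
    """Two ChebConv + TopK pooling blocks, global pooling, and a 2-layer MLP."""
    def __init__(self, in_dim=68, hid=64, out_dim=128, K=4, ratio=0.5):
        super().__init__()
        self.conv1, self.pool1 = ChebConv(in_dim, hid, K), TopKPooling(hid, ratio=ratio)
        self.conv2, self.pool2 = ChebConv(hid, hid, K), TopKPooling(hid, ratio=ratio)
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(2 * hid, hid), torch.nn.ReLU(), torch.nn.Linear(hid, out_dim))

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x, edge_index, _, batch, _, _ = self.pool1(x, edge_index, batch=batch)
        x = F.relu(self.conv2(x, edge_index))
        x, edge_index, _, batch, _, _ = self.pool2(x, edge_index, batch=batch)
        g = torch.cat([global_mean_pool(x, batch), global_max_pool(x, batch)], dim=1)
        return self.mlp(g)

def info_nce(zA, zB, tau=0.5):
    """Eq. (1) with anchors in view A; Eq. (3) adds the symmetric B-anchored term."""
    zA, zB = F.normalize(zA, dim=1), F.normalize(zB, dim=1)   # cosine sim, Eq. (2)
    n = zA.size(0)
    sim_aa, sim_ab = zA @ zA.t() / tau, zA @ zB.t() / tau
    pos = sim_ab.diag()                                       # sim(z_i^A, z_i^B)
    off = sim_aa[~torch.eye(n, dtype=torch.bool, device=zA.device)].view(n, n - 1)
    denom = torch.logsumexp(torch.cat([off, sim_ab], dim=1), dim=1)
    return (denom - pos).mean()
```

The total minibatch loss of Eq. (3) is then the symmetrized sum, e.g. `info_nce(zA, zB) + info_nce(zB, zA)`.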
To fully utilize these aligned representations, we adopt the Dynamic Graph Classification (DGC) model to perform the brain emotional contagion state analysis; trained on these representations, the classifier can better classify the group commonality. Initially, the DGC takes a population graph with isolated nodes as the input; by iteratively projecting node features to a new feature space, it is able to construct new edges based on the top-K connection strengths, defined as:

\[v_{i}^{P\prime}=\sum_{m\in\mathcal{E}_{(i,.)}^{P}}\phi(v_{i}^{P}\,||\,v_{m}^{P}-v_{i}^{P}),\quad\mathcal{E}^{P}=\text{KNN}(\mathcal{V}^{P})_{\text{topK}} \tag{5}\]

The classification tasks are mainly carried out according to the early, middle and late stages of the problem-solving tasks. (1) At the early stage, we hypothesized that there was no significant difference in the brain activity of DMT-PST dyads, that is, the emotional contagion effect had not occurred, so the differentiation between DMT and PST was low. (2) At the middle stage, we assume that the DMT member in the DMT-PST dyad begins to induce the emotional contagion effect, gradually transferring the negative emotions caused by pressure to the PST member, so the differentiation between DMT and PST will gradually increase. (3) At the late stage, we assume that the emotional contagion effect of the DMT-PST dyad has terminated, and the negative emotion of DMT has been transferred to PST. Therefore, there is a significant difference in brain activity between DMT and PST, so the differentiation between them reaches its highest. In general, if the degree of distinguishability is described with time as the independent variable, then according to our hypothesis, the distinguishability within the dyad will show a positively correlated trend. For the control-group PST-PST dyads, we assume that no significant discrepancy exists between subjects, and thus the distinguishability displays a stable trend.

Figure 2: The illustration of the construction of positive pairs and negative pairs in graph contrastive learning. In a minibatch, embedding \(z_{1}^{A}\) forms a positive pair with \(z_{1}^{B}\); the other embeddings form negative pairs with embedding \(z_{1}^{A}\).

The algorithm for training the fGCL encoder and the downstream DGC classifier is summarized in Algorithm 1.
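As an illustration of the dynamic edge reconstruction in Eq. (5), a minimal sketch assuming PyTorch Geometric's DynamicEdgeConv, which rebuilds a kNN graph in feature space at every layer and aggregates \(\phi(v_{i}\,||\,v_{m}-v_{i})\) over the resulting neighborhoods; dimensions and layer counts are our own assumptions, and the torch-cluster package is required for the kNN step:

```python
import torch
from torch_geometric.nn import DynamicEdgeConv  # needs torch-cluster installed

class DGCSketch(torch.nn.Module):
    """Population-graph classifier: nodes are fGCL trial embeddings;
    edges are re-derived from the current features via kNN at each layer (Eq. 5)."""
    def __init__(self, in_dim=128, hid=64, n_classes=2, k=10):
        super().__init__()
        phi1 = torch.nn.Sequential(torch.nn.Linear(2 * in_dim, hid), torch.nn.ReLU())
        phi2 = torch.nn.Sequential(torch.nn.Linear(2 * hid, hid), torch.nn.ReLU())
        self.conv1 = DynamicEdgeConv(phi1, k=k, aggr='add')  # sum of phi(v_i || v_m - v_i)
        self.conv2 = DynamicEdgeConv(phi2, k=k, aggr='add')
        self.head = torch.nn.Linear(hid, n_classes)

    def forward(self, z):
        h = self.conv1(z)    # edges: top-k nearest neighbors in embedding space
        h = self.conv2(h)    # edges recomputed from the projected features
        return self.head(h)  # per-node (per-trial) logits
```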
``` 0: Collection of N batched double view data \(\mathcal{B}\), the learning rate \(\alpha_{l}\) of fGCL, the learning rate \(\alpha_{g}\) of DGC, batch size \(N\), temperature \(T\), ratio \(r\), kernel size \(K\), maximum number of training epochs of the fGCL \(MaxEpoch_{f}\), maximum number of training epochs of the DGC \(MaxEpoch_{d}\); 1: Initialize the parameters of ChebConv and TopKPooling blocks in fGCL with \(K\) and \(r\) and set \(Epochs_{f}\gets 1\), \(Epochs_{d}\gets 1\); 2:while not converge and \(Epochs_{f}\leq MaxEpoch_{f}\)do 3:for batch \(B_{b}=\{\{(\mathcal{G}_{1}^{1},y_{1}^{1}),(\mathcal{G}_{2}^{1},y_{2}^{1})\},....,\{\{(\mathcal{G}_{1}^{N},y_{1}^{N}),(\mathcal{G}_{2}^{N},y_{2}^{N})\}\} \}\in\mathcal{B}\)do 4: Obtain graph embeddings \(\{z_{i}^{b}|i=1,2,...,2N\}\) by two ChebConv and TopKPooling blocks and a two-layer MLP; 5: Calculate graph contrastive loss by (1) - (3); 6:\(\theta_{f}\leftarrow\theta_{f}-\alpha_{l}\frac{\partial L}{\partial\theta_{f}}\) 7:\(\alpha_{l}\gets F_{l}(\alpha_{l})\) where \(F_{l}(\cdot)\) is the multi step learning rate scheduler; 8:endfor 9:\(Epochs_{f}\gets Epochs_{f}+1\); 10:endwhile 11: Construct a population graph \(\mathcal{G}_{p}\) on \(\mathcal{B}\) with isolated nodes using trained \(fGCL_{\theta_{f}}\); 12:while not converge and \(Epochs_{d}\leq MaxEpoch_{d}\)do 13: Dynamically reconstruct population graph \(\mathcal{G}_{p}\) using (5); 14: Calculate focal loss by (6) - (7); 15:\(\theta_{d}\leftarrow\theta_{d}-\alpha_{g}\frac{\partial L}{\partial\theta_{d}}\); 16:\(Epochs_{d}\gets Epochs_{d}+1\); 17:\(\alpha_{g}\gets F_{g}(\alpha_{g})\) where \(F_{g}(\cdot)\) is the step learning rate scheduler; 18:endwhile 19:Output: fGCL encoder parameters \(\theta_{f}\) and DGC classifier parameters \(\theta_{d}\); ``` **Algorithm 1**The Training Algorithm of fGCL+DGC -- Learning Graph Embedding & Classification ## 3 Experiment ### Participants Eighteen white female students who granted written consent participated in the study for payment. Participants were recruited for the study if they expressed knowledge of the stereotype that men are better at math than women. Specifically, all participants responded with a three or lower to the following question during a pre-study screening: "Regardless of what you think, what is the stereotype that people have about women's and men's math ability" (1= Men are better than women; 7= Women are better than men). Participants were paired into nine dyads. One participant was excluded from EEG analyses due to a lack of valid trials. Thus one (PST) dyad was removed from all dyadic analyses. ### Procedure Upon arrival to the lab, partners of each dyad met for the first time while signing consent forms; they were then prepared for EEG recording. Each member of the dyad was seated in their own soundproof chamber in front of a computer screen and iPad tablet. Dyads were randomly assigned to either an SBS/diagnostic math test condition (DMT, n=14 dyads) or a control/problem-solving task condition (PST, n=15 dyads). In the DMT condition, one participant (referred to as the "actor") was exposed to SBS by being told they would complete tasks that were diagnostic of their math intelligence. They also completed demographic questions that included a gender query, had pre-recorded instructions read aloud to them in a male voice through headphones, and were prepped for EEG recording by at least one male experimenter. 
In contrast, the DMT actor's interaction partner (referred to as the "partner") and all participants in the PST condition were informed that they would be completing tasks that would inform researchers about the different types of problem-solving techniques they prefer [21; 22], completed demographic questions that excluded the gender query, had prerecorded instructions read aloud to them by a female voice through headphones, and were set up by female experimenters. Thus, DMT partners and both participants in the PST condition were always placed in stereotype neutral/stress-free contexts. That is, only the condition of the actor varied across dyad conditions. After an initial set of instructions, participants were connected via webcam on their iPad tablet in order to facilitate face-to-face communication during the interactive math task (described below). Participants were able to see one another through the duration of the interactive math task. When the interactive math task was completed, participants answered a series of questionnaires alone (iPads were removed from the EEG chambers), were debriefed, and were compensated for their participation with cash or course credit. ### Interactive Math Task Actors and partners simultaneously completed a 100 problem math task consisting of standard multiplication and division problems (e.g., 10x20=) that they solved both alone and together. Initial pilot tests confirmed that the problems selected varied in degree of difficulty (easy, medium, and hard), ensuring all participants would solve problems correctly and incorrectly, thus exposing them to both positive and negative performance feedback. Actors and partners were first presented with the same math problem to solve alone for 16 seconds. During this solo time, participants were given three answer choices below each problem (A, B, or C), with the answer to each problem randomly presented in one of the three answer positions. Participants mentally completed all problems without scratch paper and made all answer selections via a button box placed in their laps. This solo answer was used for all performance outcomes in our analyses. After participants entered their solo answer, they were prompted with a screen that said, "Please discuss the answer to the problem with your partner." At this time, participants were given 20 seconds to discuss their answer with their partners. Participants were then given five seconds to change or confirm their answer to the math problem they just solved alone. After submitting their final response, participants received feedback for two seconds that indicated whether their final answer was correct or incorrect (presented as the words _CORRECT_ or _WRONG_ written in black on a white screen). ### EEG Recording Consistent with Forbes's research [23], continuous EEG activity was recorded from each member of the dyad using an ActiveTwo head cap and the ActiveTwo Biosemi system (BioSemi, Amsterdam, Netherlands). Recordings were collected from 64 Ag-AgCl scalp electrodes and from bilateral mastoids. Two electrodes were placed next to each other 1 cm below the right eye to record eye-blink responses. A ground electrode was established by BioSemi's common Mode Sense active electrode and Driven Right Leg passive electrode. EEG activity was digitized with ActiView software (BioSemi) and sampled at 2048 Hz. Data was downsampled post-acquisition and analyzed at 512 Hz. 
In our data-driven analysis, we only consider the feedback trials (i.e., _CORRECT_ or _WRONG_ responses) to better capture elicited emotional patterns when participants are evoked by stimuli. ### Data Preprocessing For feedback analyses, the EEG signal was epoched and stimulus-locked from 500ms pre-feedback presentation to 2000ms post-feedback presentation. EEG artifacts were removed via FASTER (Fully Automated Statistical Thresholding for EEG artifact Rejection) [24], an automated approach to cleaning EEG data that is based on multiple iterations of independent component and statistical thresholding analyses. Specifically, raw EEG data was initially filtered through a band-pass FIR filter between 0.3 and 55 Hz. The EEG channels with significant unusual variance (absolute z score larger than 3 standard deviations from the average), mean correlations with other channels, and Hurst exponents were removed and interpolated from neighboring electrodes using a spherical spline interpolation function. EEG signals were then epoched, and baseline corrected. Epochs with significant unusual amplitude range, variance, and channel deviation were removed. The remaining epochs were then transformed through ICA. Independent components with significant unusual correlations with EOG channels, spatial kurtosis, slope in the filter band, Hurst exponent, and median gradient were subtracted and the EEG signal was reconstructed using the remaining independent components. Finally, EEG channels within single epochs with significant unusual variance, median gradient, amplitude range, and channel deviation were removed and interpolated from neighboring electrodes within the same epochs. ### Source Reconstruction All a priori sources used in network connectivity analyses were identified and calculated via forward and inverse models utilized by MNE-python [25; 26]. The forward model solutions for all source locations located on the cortical sheet were computed using a 3-layered boundary element model [27], constrained by the default average template of anatomical MNI MRI. Cortical surfaces extracted with FreeSurfer were sub-sampled to approximately 10,240 equally spaced vertices on each hemisphere. The noise covariance matrix for each individual was estimated from the pre-stimulus EEG recordings after preprocessing. The forward solution, noise covariance, and source covariance matrices were used to calculate the dynamic statistical parametric mapping (dSPM) estimated inverse operator [28; 29]. The inverse computation was done using a loose orientation constraint (loose = 0.2, depth = 0.8) [30]. dSPM inverse operators have been reported to help characterize distortions in cortical and subcortical regions and improve the bias accuracy of neural generators in deeper structures, e.g., the insula [31] by using depth weighting and a noise normalization approach. The cortical surface was divided into 68 anatomical regions (i.e., sources) of interest (ROIs; 34 in each hemisphere) based on the Desikan-Killiany atlas [32] and signal within a seed voxel of each region was used to calculate the power within sources and phase locking (connectivity) between sources. All graph theory calculation descriptions are located in the supplementary materials. ### Implementation Details After the data preprocessing, 18 female individuals formed 9 pairs consisting of 7 DMT-PST dyads and 2 PST-PST dyads. 
We then applied a sliding window technique to all basic views (the initial multi-ROI EEG time series without augmentation), denoted as \(\mathcal{X}\), which have 768 time points and 68 ROIs. The width and step size of the sliding window were set to 300 and 50, respectively, resulting in 10 augmented views (68 * 300) for each basic view (68 * 768). This augmentation increased the size of our dataset to 17,650 response graphs, where subjects received either _WRONG_ or _CORRECT_ feedback, and allows the model to learn more general patterns. The dataset was split into training, testing, and validation sets in a 7:2:1 ratio.

In the graph contrastive learning procedure, within an N-sized minibatch, given a graph collection \(\{\mathcal{G}_{i}^{j}|j=1,2,3,...,K\}\) representing the \(i\)-th mathematical question, where \(j\) is an index denoting the augmented views and \(i=1,2,....,N\), we enumerate all possible pairs within these graphs to form positive pairs. Within a dyad \(D_{m}\) consisting of subjects \(D_{m}^{A}\) and \(D_{m}^{B}\), the K augmented views \(\{\mathcal{G}_{i}^{j}|j=1,2,3,...,K\}\) of one of the members (e.g. \(D_{m}^{A}\)) represent the same mathematical question and exhibit similar neural activity patterns. Given one view \(\mathcal{G}_{i}^{j}\) within these K augmented views, the views \(\{\mathcal{G}_{i}^{k}|k=1,2,3,...,K,k\neq j\}\) form positive pairs with it, and views representing other mathematical questions (i.e., \(\{\mathcal{G}_{r}^{n}|n=1,2,3,...,K,r\neq i\}\)) form negative pairs. Additionally, when two subjects encounter the same mathematical question, one augmented view of subject \(D_{m}^{A}\) in the m-th dyad should form positive pairs with the other K augmented views of subject \(D_{m}^{B}\) in the same dyad. This process results in a total of 2K * (2K - 1) / 2 positive pairs, which is 190 in our experiments. We trained the model for 700 epochs with early stopping and adopted a multi-step learning rate scheduler and the Adam optimizer. The initial learning rate, the weight decay, the temperature, and the batch size were empirically set to 0.001, 0.02, 0.5, and 68, respectively. The filter size in the Chebyshev spectral graph convolutional operator was set to 4, and the ratio of the TopK pooling layer was set to 0.5. After obtaining the encoder \(f_{\theta}\), graph features can be extracted as an embedding \(z_{i}=f_{\theta}(\mathcal{G}_{i})\in R^{d}\) in the \(d\)-dimensional embedding space, initially forming isolated nodes in the population graph.

In the feedback type classification procedure, we use the raw dataset (without sliding window augmentation) to train the DGC classifier for 100 epochs. We adopted the focal loss, Eqs. (6)-(7), for the classifier to mitigate the effects of imbalanced data, and set \(\alpha\) and \(\gamma\) to 0.5 and 2 empirically.

\[FL(p_{t})=-\alpha_{t}(1-p_{t})^{\gamma}log(p_{t}) \tag{6}\]

\[p_{t}=\begin{cases}p&\text{if }y=1\\ 1-p&\text{otherwise},\end{cases} \tag{7}\]

All methods are implemented in PyTorch and trained on an NVIDIA GeForce RTX 3090 GPU.
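A minimal sketch of the focal loss in Eqs. (6)-(7) as we configure it (\(\alpha=0.5\), \(\gamma=2\)); the function name and the use of raw logits are our own choices:

```python
import torch

def focal_loss(logits, targets, alpha=0.5, gamma=2.0):
    """Binary focal loss: down-weights easy examples via (1 - p_t)^gamma."""
    p = torch.sigmoid(logits)                              # P(y = 1)
    p_t = torch.where(targets == 1, p, 1 - p)              # Eq. (7)
    alpha_t = torch.where(targets == 1,
                          torch.full_like(p, alpha),
                          torch.full_like(p, 1 - alpha))
    return (-alpha_t * (1 - p_t).pow(gamma)                # Eq. (6)
            * torch.log(p_t.clamp_min(1e-8))).mean()
```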
Conversely, the feature attraction on raw features displays no significant differences, indicating identical similarity between positive and negative pairs.

### Impact of Emotional Contagion on Performance Over Time

Initial examinations of performance revealed a condition by time interaction, Wald \(\chi^{2}(2)\) = 13.72, p = .001. Specifically, DMT partners' performance decreased over time (B = -.005 (SE = .002); Wald \(\chi^{2}(1)\) = 7.45, (95% Wald LL CI = -.009; UL CI = -.002), p = .006); for every one-unit increase in time (trial number), the log odds of getting the question correct decreased by .005 units. In contrast, DMT actors and PST dyad members exhibited no over-time changes. Their performance remained stable (p's > .11). Moreover, simple contrasts between conditions revealed performance differences between DMT actors and partners over the course of the task. At the beginning of the task, DMT partners had a higher probability of getting a question correct in comparison to the DMT actor (p=.002), and PST dyad members (p=.034). At the end of the task, DMT partners had a lower probability of getting a question correct in comparison to the actor (p=.004) and were largely comparable to PST dyad members (p=.089). DMT actors did not differ from PST dyad members (p=.35). Thus, at the end of the task, DMT partners underperformed in comparison to the actors. These findings support the possibility that DMT actors benefited from dyadic interactions at the expense of their partners.

Figure 4: The probability density function of attractions on graph contrastive features and raw features.

Figure 3: The feature attraction of graph contrastive features and raw features.

### Contagion Stage Analysis

In the analysis of the contagion stage, we divided the basic view dataset into training, testing, and validation sets in a 7:2:1 proportion. Subsequently, we extracted subject-invariant embeddings using the trained graph contrastive encoder for the classification task. Table 1 presents the results of ablation experiments conducted on various models, including KNN, SVM, MLP, and DGC, aimed at classifying feedback types within the basic view dataset. Notably, we employed embeddings extracted by the fGCL encoder as inputs for the KNN, SVM, and MLP models. The experimental results demonstrate that our model exhibits superior performance in the task of feedback type classification while maintaining a balance between sensitivity and specificity. However, it is worth noting that the absence of the fGCL encoder leads to a noticeable degradation in the performance of the DGC model.

#### 4.3.1 Individual-level Classification

We further examined our hypothesis of whether emotional contagion occurs over the period of sustained interpersonal interaction. To this end, we divided the trial sequence evenly into three stages: the early stage (\(0\sim 1/3\)), the middle stage (\(1/3\sim 2/3\)), and the late stage (\(2/3\sim 1\)), and evaluated them separately with our model. The results in Table 2 reveal that the classification performance for DMT actors in DMT-PST dyads is consistently better than that for DMT partners in DMT-PST dyads in Table 3; the accuracy and F1 score of DMT actors in DMT-PST dyads show an evident increasing trend, especially from the early stage to the middle stage, and the late stage is superior to the other stages. In contrast, DMT partners in DMT-PST dyads displayed a relatively smaller increase in accuracy and F1 score.
This may additionally reflect that emotional contagion evolves cumulatively over sustained interpersonal communication, which in turn affects the neurological representation when encountering negative events, resulting in the poor classification performance in recognizing the neural response patterns of DMT partners in DMT-PST dyads. On the contrary, DMT actors in DMT-PST dyads release negative emotions by transmitting them to their partners, which gradually breaks the disordered response pattern and improves classification performance. Our classifier was evaluated using leave-dyad-out cross-validation; the results for the DMT and PST members of DMT-PST dyads are shown in Table 2 and Table 3, respectively, which shed light on the aforementioned hypothesis. Regarding DMT actors inside DMT-PST dyads, our model achieved a classification accuracy of 60.66 \(\pm\) 0.14\% at the early stage; this accuracy then increased to 69.33 \(\pm\) 0.25\% at the middle stage of problem solving and rose to 71.33 \(\pm\) 0.17\% at the late stage. Conversely, the results for DMT partners within DMT-PST dyads displayed a distinct pattern. The accuracy increased from 59.38 \(\pm\) 0.25\% in the early stage to 64.21 \(\pm\) 0.15\% in the middle stage, with marginal change observed in the late stage, yielding an accuracy of 64.76 \(\pm\) 0.12\%.

\begin{table} \begin{tabular}{l c c c c c} \hline **Models** & **ACC** & **AUC** & **F1 Score** & **SEN** & **SPEC** \\ \hline _fGCL + KNN_ & 63.80 \(\pm\) 0.15\% & 66.07 \(\pm\) 0.55\% & 62.93 \(\pm\) 0.15\% & 58.63 \(\pm\) 0.43\% & 63.96 \(\pm\) 0.05\% \\ _fGCL + SVM_ & 59.11 \(\pm\) 0.22\% & 63.19 \(\pm\) 0.39\% & 52.32 \(\pm\) 0.25\% & 51.21 \(\pm\) 0.53\% & 55.96 \(\pm\) 0.45\% \\ _fGCL + MLP_ & 58.11 \(\pm\) 0.32\% & 63.19 \(\pm\) 0.39\% & 58.61 \(\pm\) 0.25\% & 54.38 \(\pm\) 0.53\% & 58.13 \(\pm\) 0.18\% \\ _DGC w/o encoder_ & 61.11 \(\pm\) 0.12\% & 63.19 \(\pm\) 0.39\% & 58.61 \(\pm\) 0.25\% & 57.00 \(\pm\) 0.53\% & 64.96 \(\pm\) 0.45\% \\ **fGCL + DGC (ours)** & **65.12 \(\pm\) 0.45\%** & **69.79 \(\pm\) 0.35\%** & **64.90 \(\pm\) 0.06\%** & **64.71 \(\pm\) 0.35\%** & **65.53 \(\pm\) 0.05\%** \\ \hline \end{tabular} \end{table} Table 1: The whole-brain-level feedback type classification results over the entire period.

\begin{table} \begin{tabular}{l c c c c c} \hline **Periods** & **ACC** & **AUC** & **F1 Score** & **SEN** & **SPEC** \\ \hline _Entire_ & 68.28 \(\pm\) 0.05\% & 70.49 \(\pm\) 0.05\% & 70.50 \(\pm\) 0.06\% & 66.67 \(\pm\) 0.08\% & 67.63 \(\pm\) 0.20\% \\ _Early_ & 60.66 \(\pm\) 0.14\% & 64.13 \(\pm\) 0.06\% & 64.21 \(\pm\) 0.17\% & 58.85 \(\pm\) 0.29\% & 65.62 \(\pm\) 0.25\% \\ _Middle_ & 69.33 \(\pm\) 0.25\% & 64.35 \(\pm\) 0.27\% & 68.23 \(\pm\) 0.31\% & 64.51 \(\pm\) 0.35\% & 64.52 \(\pm\) 0.34\% \\ _Late_ & 71.33 \(\pm\) 0.17\% & 73.31 \(\pm\) 0.06\% & 72.64 \(\pm\) 0.22\% & 72.83 \(\pm\) 0.28\% & 67.50 \(\pm\) 0.31\% \\ \hline \end{tabular} \end{table} Table 2: The classification result of DMT in DMT-PST dyad at the early, middle, and late stages, with entire-period testing.
\begin{table} \begin{tabular}{l c c c c c} \hline **Periods** & **ACC** & **AUC** & **F1 Score** & **SEN** & **SPEC** \\ \hline _Entire_ & 63.33 \(\pm\) 0.05\% & 67.87 \(\pm\) 0.06\% & 67.20 \(\pm\) 0.08\% & 66.67 \(\pm\) 0.10\% & 63.01 \(\pm\) 0.13\% \\ _Early_ & 59.38 \(\pm\) 0.25\% & 62.12 \(\pm\) 0.13\% & 64.96 \(\pm\) 0.35\% & 60.00 \(\pm\) 0.46\% & 67.14 \(\pm\) 1.17\% \\ _Middle_ & 64.21 \(\pm\) 0.15\% & 63.19 \(\pm\) 0.13\% & 68.59 \(\pm\) 0.26\% & 68.20 \(\pm\) 0.37\% & 64.96 \(\pm\) 1.45\% \\ _Late_ & 64.76 \(\pm\) 0.12\% & 68.16 \(\pm\) 0.08\% & 65.18 \(\pm\) 0.14\% & 67.70 \(\pm\) 0.18\% & 65.12 \(\pm\) 0.21\% \\ \hline \end{tabular} \end{table} Table 3: The classification result of PST in DMT-PST dyad at the early, middle, and late stages, with entire-period testing.

Analyzing this pattern, it becomes apparent that in the early stage of problem-solving, no substantial difference in brain activity existed between DMT and PST pair members, indicating that the emotional contagion effect was relatively dormant. As the problem-solving process advanced to the middle stage, the emotional contagion effect commenced within DMT-PST pairs, with DMT actors gradually transmitting the negative emotions stemming from pressure to DMT partners. As a result, the differentiation between the two groups demonstrated an increasing trend. In the final stage of problem-solving, the emotional contagion effect within DMT-PST pairs subsided, DMT actors having transferred their negative emotions to DMT partners. Consequently, the discernible distinction in brain activity between DMT actors and DMT partners became significant, accentuating the differentiation in neural patterns between the two. Besides, the results for PST-PST dyads in Table 4 and Table 5 display no significant changes in accuracy, which might suggest that emotional contagion does not appear in PST-PST dyads and thus the neural pattern differentiation at all stages remains vague.

Overall, the findings put forth the notion that DMT actors and those within the PST dyad condition were processing performance feedback in a comparable fashion, particularly when considering the overall span of the task. This alignment is noteworthy, given that DMT actors underwent initial exposure to SBS. Conversely, discernible distinctions emerged in the processing patterns of DMT partners, especially during the middle and later phases. These outcomes lend credence to the idea that SBS contagion unfolds progressively over time. Notably, DMT partners appear to play a role in buffering the negative emotions of DMT actors.

## 5 Discussion

Overall, our findings suggest that SBS-based emotional contagion can occur within female dyads in problem-solving contexts and has different consequences on performance for each member of the dyad. While working together on a math task, DMT partners performed worse over time, whereas DMT actors (i.e., those under SBS) performed better, comparably to dyads working in SBS-neutral contexts. Importantly, DMT partners showed evidence of "catching" this initial stress response from the threatened actor. This, in turn, had direct ramifications for DMT partners' performance. By contrast, these relationships were not evident within PST dyad members interacting in SBS-neutral contexts.

### Inference of performance decrement in dyadic interaction

Results provide further insight into the dynamic relationship between two individuals performing in domains where their common identity is devalued.
Although it seems conceivable that the performance of both DMT actors and partners would suffer when solving problems together in a negatively stereotyped domain, results provide further support for the notion that non-threatened partners help buffer initially threatened actors from the deleterious consequences of SBS over time, at their own expense. These findings are consistent with past work showing that the presence of a female role model or competent female partner alleviates performance decrements otherwise typically evident in stereotype-threatening contexts [33; 34; 35]. Results expand upon this work in several ways, most notably by demonstrating that the transference of an individual's stress response onto their partners may be one important factor in buffering women from SBS during dyadic problem-solving interactions, particularly during initial stages of the interaction. Conversely, like past work demonstrating that increased emotional processing of feedback in SBS contexts has a negative impact on individuals' performance when alone [22; 23], findings from this study demonstrate that this effect extends to partners in a dyadic interaction, providing a potential mechanism for underperformance effects among these individuals in group problem-solving contexts moving forward.

\begin{table} \begin{tabular}{l c c c c c} \hline **Periods** & **ACC** & **AUC** & **F1 Score** & **SEN** & **SPEC** \\ \hline _Entire_ & 75.03 \(\pm\) 0.01\% & 85.54 \(\pm\) 0.09\% & 82.48 \(\pm\) 0.01\% & 72.06 \(\pm\) 0.01\% & 68.09 \(\pm\) 0.05\% \\ _Early_ & 76.67 \(\pm\) 0.05\% & 80.62 \(\pm\) 0.25\% & 79.17 \(\pm\) 0.15\% & 67.31 \(\pm\) 0.03\% & 63.96 \(\pm\) 0.05\% \\ _Middle_ & 75.11 \(\pm\) 3.12\% & 84.19 \(\pm\) 1.39\% & 84.35 \(\pm\) 0.13\% & 75.61 \(\pm\) 0.25\% & 74.96 \(\pm\) 0.45\% \\ _Late_ & 74.12 \(\pm\) 1.45\% & 83.79 \(\pm\) 0.35\% & 83.21 \(\pm\) 0.11\% & 64.71 \(\pm\) 0.35\% & 63.53 \(\pm\) 0.05\% \\ \hline \end{tabular} \end{table} Table 4: The classification result of PST1 in PST-PST dyad at the early, middle, and late stages, with entire-period testing.

\begin{table} \begin{tabular}{l c c c c c} \hline **Periods** & **ACC** & **AUC** & **F1 Score** & **SEN** & **SPEC** \\ \hline _Entire_ & 59.83 \(\pm\) 0.03\% & 66.38 \(\pm\) 0.03\% & 69.13 \(\pm\) 0.02\% & 58.46 \(\pm\) 0.05\% & 63.96 \(\pm\) 0.05\% \\ _Early_ & 57.17 \(\pm\) 0.31\% & 61.69 \(\pm\) 0.21\% & 63.37 \(\pm\) 0.22\% & 56.19 \(\pm\) 0.43\% & 62.96 \(\pm\) 0.05\% \\ _Middle_ & 56.31 \(\pm\) 0.19\% & 60.12 \(\pm\) 0.14\% & 59.37 \(\pm\) 0.13\% & 53.19 \(\pm\) 0.21\% & 55.36 \(\pm\) 0.12\% \\ _Late_ & 55.97 \(\pm\) 0.13\% & 59.12 \(\pm\) 0.13\% & 58.13 \(\pm\) 0.12\% & 55.39 \(\pm\) 0.13\% & 51.96 \(\pm\) 0.03\% \\ \hline \end{tabular} \end{table} Table 5: The classification result of PST2 in PST-PST dyad at the early, middle, and late stages, with entire-period testing.

### Efficiency in timescale and Implications for Contagion Hypotheses

Present results provide novel insight into how contagion manifests on the order of milliseconds, a much more rapid timescale than previously assumed, and how it affects performance accordingly, using neuroscience methodologies. Moreover, the design of the present study provides a novel yet realistic platform to examine emotion contagion phenomena via EEG or fMRI methodology in future studies. By using iPads, it was possible to capture simultaneous EEG activity in a controlled manner while still allowing participants to have real-time face-to-face interaction.
This design also provides implications for contagion hypotheses specific to the mimicry and proximity literature. Because participants only communicated through an iPad webcam, they were only able to view their partner's face and hear their voice through the webcam during the interaction. This suggests that vocal patterns and facial expressions may have played an integral role in facilitating contagion effects [36, 37]. Findings provide a more nuanced understanding of the contagion process while also providing a better understanding of a heretofore largely unexamined question in the literature: how social identity threats and SBS manifest in dyadic interactions to have paradoxical effects on performance. More importantly, findings provide further insight into the many ways the gender gap in STEM domains can be perpetuated but also one day nullified.

### Graph-based Approach for Subject-Invariant Emotional Representation Extraction

Graph structures naturally align with the brain's topology, allowing for the effective modeling of anatomical regions of interest while preserving functional connectivity (FC) information. The proposed fGCL model harnesses semantically meaningful information, i.e., graphs corresponding to the same mathematical problem, to construct both positive and negative pairs. This approach significantly mitigates inter-subject variability in EEG data and performed best in our evaluation. The subject-invariant representations extracted from FC graphs are well aligned within the common embedding space. Furthermore, the fGCL employs a spectral graph network capable of convolving across the entire node set with FC connections, integrating valuable cognitive information [38].

### Limitations and Future Works

Regarding the limitations of the study, it is important to note that, given EEG data and the spatial limitations associated with the methodology, conclusions based on precise brain locations should always be interpreted with caution. Results should be replicated and expanded upon in future fMRI studies, although given the temporal constraints of fMRI methodologies with respect to the findings in this study (i.e., these effects may occur on the order of milliseconds), this approach could be problematic as well. Besides, the proposed fGCL encoder was validated with EEG data from young female students (mean age = 25.18 years) and optimized with a semantic auxiliary task. As is well known, age plays an important role in emotion processing, and emotion patterns may thus differ across age ranges [39]; further studies should be conducted to cover different age ranges for a more generalized encoder model.

## 6 Conclusion

In this study, we addressed a critical issue in previous emotional contagion research: the neglect of subject-level differences. We adopt a self-supervised learning approach to eliminate subject differences by increasing the attraction of positive pairs and reducing the attraction of negative pairs, which aligns the neurophysiologically meaningful representations of EEG signals in the embedding space and thereby removes subject-variant information. Based on this substrate, we then employ a dynamic graph classification model to analyze the process of emotional contagion. The results suggest that DMT actors and those in PST dyads processed feedback similarly, but distinctions emerged in DMT partners' processing patterns, supporting the idea of gradual SBS contagion.
DMT partners appeared to buffer the negative emotions of DMT actors.

## Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

## Author Contributions

**J.H.:** Conceptualization, Methodology, Formal Analysis, Software, Validation, Writing - Original Draft, Writing - Review & Editing, Visualization; **R.A.:** Data Collection; **M.L.:** Conceptualization, Writing - Review & Editing; **C.F.:** Funding Acquisition.

## Acknowledgments

All aspects of this study and article were supported by National Science Foundation grant #1535414 awarded to Chad E. Forbes.

## Supplemental Data

None.

## Data Availability Statement

Our raw EEG data will be made available on request from any qualified investigator who provides a practicable proposal, or for the purpose of replicating procedures and results presented in the current study.
2310.18881
Efficient separate quantification of state preparation errors and measurement errors on quantum computers and their mitigation
Current noisy quantum computers have multiple types of errors, which can occur in the state preparation, measurement/readout, and gate operation, as well as intrinsic decoherence and relaxation. Partly motivated by the booming of intermediate-scale quantum processors, measurement and gate errors have been recently extensively studied, and several methods of mitigating them have been proposed and formulated in software packages (e.g., in IBM Qiskit). Despite this, the state preparation error and the procedure to quantify it have not yet been standardized, as state preparation and measurement errors are usually considered not directly separable. Inspired by a recent work of Laflamme, Lin, and Mor [Phys. Rev. A 106, 012439 (2022)], we propose a simple and resource-efficient approach to quantify separately the state preparation and readout error rates. With these two errors separately quantified, we also propose methods to mitigate them separately, especially mitigating state preparation errors with linear (with the number of qubits) complexity. As a result of the separate mitigation, we show that the fidelity of the outcome can be improved by an order of magnitude compared to the standard measurement error mitigation scheme. We also show that the quantification and mitigation scheme is resilient against gate noise and can be immediately applied to current noisy quantum computers. To demonstrate this, we present results from cloud experiments on IBM's superconducting quantum computers. The results indicate that the state preparation error rate is also an important metric for qubit metrology that can be efficiently obtained.
Hongye Yu, Tzu-Chieh Wei
2023-10-29T02:51:06Z
http://arxiv.org/abs/2310.18881v1
# Efficient separate quantification of state preparation errors and measurement errors on quantum computers and their mitigation

###### Abstract

Current noisy quantum computers have multiple types of errors, which can occur in the state preparation, measurement/readout, and gate operation, as well as intrinsic decoherence and relaxation. Partly motivated by the booming of intermediate-scale quantum processors, measurement and gate errors have been recently extensively studied, and several methods of mitigating them have been proposed and formulated in software packages (e.g., in IBM Qiskit). Despite this, the state preparation error and the procedure to quantify it have not yet been standardized, as state preparation and measurement errors are usually considered not directly separable. Inspired by a recent work of Laflamme, Lin, and Mor [Phys. Rev. A **106**, 012439 (2022)], we propose a simple and resource-efficient approach to quantify separately the state preparation and readout error rates. With these two errors separately quantified, we also propose methods to mitigate them separately, especially mitigating state preparation errors with linear (with the number of qubits) complexity. As a result of the separate mitigation, we show that the fidelity of the outcome can be improved by an order of magnitude compared to the standard measurement error mitigation scheme. We also show that the quantification and mitigation scheme is resilient against gate noise and can be immediately applied to current noisy quantum computers. To demonstrate this, we present results from cloud experiments on IBM's superconducting quantum computers. The results indicate that the state preparation error rate is also an important metric for qubit metrology that can be efficiently obtained.

## I Introduction

There has been dramatic growth in multi-qubit quantum processors; some possess well over one hundred qubits. Nevertheless, they are characterized as noisy intermediate-scale quantum (NISQ) devices [1], as it is currently not yet feasible to implement quantum error correction on these devices to reduce logical error rates and prolong coherent computation. Hence, a short-term goal is to develop error mitigation schemes to enhance the performance of NISQ devices. Errors and noise arise in different stages of quantum computation: state preparation, gate operation, and readout/measurement. In the past few years, substantial efforts have been spent on gate error mitigation [2; 3; 4; 5; 6; 7], as gate operation is the bulk of quantum computation. However, measurement errors also affect the computation outcome. To mitigate the measurement error, several methods have been proposed [8; 9; 10; 11; 12] to mitigate the final readout statistics, and they have improved the estimation of physical observables from quantum computers. State preparation is perhaps the most overlooked of the three error types. At present, there does not seem to be an efficient and practical approach to even quantifying state preparation and readout errors separately. Usually, the state preparation error is combined with the measurement error as the state preparation and measurement (SPAM) error. Moreover, current readout mitigation schemes on superconducting qubits, such as in the Qiskit software [13], are still contaminated by the state preparation error, although its rate is typically lower than that of the readout error. The goal of this work is two-fold. First, we propose a simple scheme that allows separate quantification of state preparation and readout errors.
This issue was recently discussed by Laflamme, Lin, and Mor [14] employing algorithmic cooling [15; 16; 17]. Our approach extends theirs but dramatically simplifies the procedure. Both schemes rely on coupling the original qubit to other qubits; ours requires only one additional qubit to characterize the errors in state preparation and readout separately. Secondly, with separate quantification, the readout mitigation matrix, in principle, is free of the state preparation error and can be directly applied to mitigate the readout error, just like the current standard practice. Moreover, we propose a novel approach to mitigate the state preparation error. Even though we focus our attention and implementation on superconducting qubits, we envision that our method can be applied to other systems as well. We also remark that we assume single-qubit gates, in particular, the X gate, are of high fidelity and that zero-noise extrapolation (ZNE) techniques can mitigate the CNOT gate error [2; 3; 4; 5; 6; 7] if needed. Another approach to characterize gates and SPAM errors with very few assumptions is gate set tomography [18]; however, it requires many gate sequences. The remainder of the paper is organized as follows. In Sec. II, we present a simple scheme using an ancilla to characterize the target qubit's state preparation and readout errors separately. In Sec. III, we discuss how the characterized state preparation and readout errors can be mitigated. In Sec. IV, we compare our proposed mitigation scheme with the standard readout mitigation (which does not characterize the state preparation error) and illustrate the difference in mitigated outcomes for simple circuits. In Sec. V, we present results from experiments performed on cloud IBM quantum computers to demonstrate our theoretical proposal. We make concluding remarks in Sec. VI. Furthermore, in Appendix A, we show that the algorithmic cooling idea of Laflamme, Lin, and Mor can also lead to the separate characterization of state preparation and readout errors despite the fact that the equations to be solved are nonlinear and more complicated. In Appendix B, we present the zero-noise extrapolation of CNOT errors when characterizing the state preparation and readout errors.

## II Separate characterization of state preparation and readout errors

A single-qubit readout can be characterized by a two-element POVM \(\{M_{0},M_{1}\}\), so that a one-qubit state \(\rho\) will be measured to be '0' with probability \(\mathrm{Tr}(\rho M_{0})\) and '1' with probability \(\mathrm{Tr}(\rho M_{1})\); see, e.g., Ref. [8], which also demonstrated that on superconducting qubits of IBM's devices, the readout errors are dominantly classical flips, i.e., the POVM elements are well approximated by the following forms, \[M_{0}=\begin{pmatrix}1-\delta_{M}^{0}&0\\ 0&\delta_{M}^{1}\end{pmatrix},\;\;M_{1}=\begin{pmatrix}\delta_{M}^{0}&0\\ 0&1-\delta_{M}^{1}\end{pmatrix}. \tag{1}\] We note that the POVM for the general single-qubit readout can be more general than the above form, but it can be enforced into the above form by twirling with a random Pauli Z operator (with 50% probability) before the measurement [8; 14]. In Ref. [14], a measurement-based algorithmic cooling was developed to reduce the state preparation error, where the error ruins the preparation of a perfect \(|0\rangle\), i.e., \[(1-\delta_{\mathrm{SP}})|0\rangle\langle 0|+\delta_{\mathrm{SP}}|1\rangle \langle 1|, \tag{2}\] and the cooling was achieved by coupling to ancillary qubits via CNOTs.
There, symmetric readout errors were considered, i.e., \(\delta_{M}^{0}=\delta_{M}^{1}=:\delta_{\mathrm{M}}\), and a procedure employing multiple ancillary qubits was proposed to separately characterize a target qubit's \(\delta_{\mathrm{SP}}\) and \(\delta_{\mathrm{M}}\) (in the limit of many ancillas), which comprise the total SPAM error \(\delta_{\mathrm{SPAM}}=\delta_{\mathrm{SP}}+\delta_{\mathrm{M}}-2\delta_{\mathrm{SP}}\delta_{\mathrm{M}}\), giving the measurable probability of reading out '1'. In this section, we propose a simplified procedure using one ancilla to extract separately the state preparation error \(\delta_{\mathrm{SP}}\) and the readout errors \(\delta_{M}^{0}\) and \(\delta_{M}^{1}\). First, we note that a coherent state preparation error, e.g., \(\sqrt{1-\delta_{\mathrm{SP}}}|0\rangle+\sqrt{\delta_{\mathrm{SP}}}e^{i\phi}|1\rangle\), can be turned into the above incoherent state preparation error (2) by twirling with a Pauli Z operator randomly with 50% probability [14]. (We note that the coherent error can be probed using quantum state tomography once the readout errors have been characterized and mitigated; see Sec. V.2.) We will not assume symmetric readout errors but actively employ the twirling to ensure the form of the incoherent state preparation error. The parameter \(\delta_{M}^{0}\) is the probability that the ideal '0' state will be read as '1', and \(\delta_{M}^{1}\) is the probability that the ideal '1' will be read as '0'. Similar values are reported in the device properties of all IBM quantum computers, except that the reported values contain the state preparation error. Can the true readout errors be obtained experimentally and separately from the state preparation error? Indeed, Ref. [14] describes an approach by algorithmic cooling to reduce the state preparation error, and in the limit that it becomes infinitesimally small, only the readout errors remain. However, the reduction to zero state preparation error is not practical, as the CNOT gates (which were assumed to be error-free in Ref. [14]) required to perform the measurement-based algorithmic cooling also introduce errors in practice. Our procedure below solves all these problems and can practically improve the device property characterization, separating the state preparation and readout errors.

### Access \(\delta_{\mathrm{SP}}\) via an ancillary qubit

Here we present an efficient way to estimate the state preparation error \(\delta_{\mathrm{SP}}^{\mathrm{t}}\) for the target qubit \(q_{t}\), which requires only one additional qubit \(q_{a}\), as shown in Fig. 1. The SPAM errors of both qubits can be easily measured. To measure the probability \(\delta_{\mathrm{SPAM}}^{0}\) that a qubit is prepared in \(|0\rangle\) but measured in \(|1\rangle\), one only needs to measure the initialized qubit directly; to measure the probability \(\delta_{\mathrm{SPAM}}^{1}\) that the qubit is prepared in \(|1\rangle\) but measured in \(|0\rangle\), one needs to first initialize the qubit in \(|0\rangle\), apply an X gate, and then measure it. The measured error \(\delta_{\mathrm{SPAM}}\) is a combination of both the state preparation error \(\delta_{\mathrm{SP}}\) and the measurement errors \(\delta_{M}\)'s, and, without assuming symmetric readout errors, we have \[\delta_{\mathrm{SPAM}}^{0}=(1-\delta_{M}^{0}-\delta_{M}^{1})\delta_{\mathrm{SP}}+\delta_{M}^{0}, \tag{3}\] \[\delta_{\mathrm{SPAM}}^{1}=(1-\delta_{M}^{0}-\delta_{M}^{1})\delta_{\mathrm{SP}}+\delta_{M}^{1}.
\tag{4}\] After adding a noiseless CNOT gate (Fig. 1), the SPAM error of \(q_{t}\) does not change, whereas the ancilla qubit readout error \(\tilde{\delta}_{\text{SPAM}}^{a,0}\) becomes (the derivation is straightforward but somewhat tedious) \[\tilde{\delta}_{\text{SPAM}}^{a,0}=(1-\delta_{\text{SPAM}}^{a,0}-\delta_{\text{SPAM}}^{a,1})\delta_{\text{SP}}^{t}+\delta_{\text{SPAM}}^{a,0}, \tag{5}\] \[\tilde{\delta}_{\text{SPAM}}^{a,1}=(1-\delta_{\text{SPAM}}^{a,0}-\delta_{\text{SPAM}}^{a,1})\delta_{\text{SP}}^{t}+\delta_{\text{SPAM}}^{a,1}. \tag{6}\] By using either of these two equations and the measured values of \(\delta_{\text{SPAM}}^{a,0/1}\), one can solve for \(\delta_{\text{SP}}^{t}\). Combined with the measured SPAM errors from \(q_{t}\), one can further obtain the values of the measurement errors \(\delta_{M}^{0/1}\) for qubit \(q_{t}\). We note that solving for the \(\delta_{M}\)'s and \(\delta_{\text{SP}}\) involves only linear equations. One can also extend the algorithmic cooling of Laflamme, Lin, and Mor in the two-qubit setting to solve for (the symmetric) \(\delta_{M}\) and \(\delta_{\text{SP}}\), but this will involve nonlinear equations; see Appendix A.

Figure 1: Circuits for measuring \(\delta_{\mathrm{SP}}\) with the initial state prepared to be: (top) \(|00\rangle\) and (bottom) \(|01\rangle\). Here we assume the X gate is noiseless and can be considered as a part of state preparation.

## III Complete mitigation for measurement error and state-preparation error

Measurement mitigation is an essential tool to extract information from noisy quantum devices. The measurement error with only bit flips can be characterized by the error assignment matrix. In the single-qubit case, the matrix is simply \[A_{M}=\begin{pmatrix}1-\delta_{M}^{0}&\delta_{M}^{1}\\ \delta_{M}^{0}&1-\delta_{M}^{1}\end{pmatrix}. \tag{7}\] Note that we assume there are only bit-flip errors, or else X and Y errors have been removed by the Z randomization procedure mentioned above. The effect of the measurement error on the outcome probability distribution \(\hat{\mathbf{P}}\) from ideal measurement is then \[\mathbf{P}_{\text{noisy}}=A_{M}\hat{\mathbf{P}}, \tag{8}\] where \(A_{M}\) can be the general assignment matrix characterizing the \(n\)-qubit measurement error. Suppose we know the exact form of \(A_{M}\); we can exactly mitigate the measurement error by \[\hat{\mathbf{P}}=A_{M}^{-1}\mathbf{P}_{\text{noisy}}. \tag{9}\] However, due to the state preparation error, the matrix \(A_{M}\) is usually hard to obtain exactly. In practice, one usually lumps together the state preparation and readout errors as the SPAM error \(\delta_{\text{SPAM}}\). The corresponding assignment matrix \(A_{\text{SPAM}}\) is obtained by naive calibration circuits, such as in Qiskit [13], where the \(2^{n}\) computational-basis states are prepared and then measured. Subsequently, the inverse of \(A_{\text{SPAM}}\), instead of \(A_{M}\), is applied for the mitigation in practice, \[\tilde{\mathbf{P}}=A_{\text{SPAM}}^{-1}\mathbf{P}_{\text{noisy}}=A_{\text{SPAM}}^{-1}A_{M}\hat{\mathbf{P}}. \tag{10}\] Since \(A_{\text{SPAM}}^{-1}A_{M}\) is generally not the identity, the mitigated result \(\tilde{\mathbf{P}}\) differs from the ideal distribution \(\hat{\mathbf{P}}\). Suppose the assignment matrix for the state-preparation error on the initial state is \(A_{\text{SP}}\). In the single-qubit case, it is \[A_{\text{SP}}=\begin{pmatrix}1-\delta_{\text{SP}}&\delta_{\text{SP}}\\ \delta_{\text{SP}}&1-\delta_{\text{SP}}\end{pmatrix}.
\tag{11}\] What the calibration circuits measure is effectively the successive application of \(A_{\text{SP}}\) and \(A_{M}\) on the initial state, which gives \[A_{\text{SPAM}}=A_{M}A_{\text{SP}}. \tag{12}\] Thus, the result mitigated by \(A_{\text{SPAM}}\) is \[\tilde{\mathbf{P}}=A_{\text{SP}}^{-1}\hat{\mathbf{P}}. \tag{13}\] Note that the distribution \(\hat{\mathbf{P}}\) contains the state preparation error, which takes place at the beginning of the circuit and, in general, cannot be mitigated by applying the inverse at the end of the circuit. Thus, the mitigation by \(A_{\text{SPAM}}^{-1}\) usually over-mitigates the outcome and can give rise to unphysical probability distributions (in addition to those caused by statistical fluctuations), such as negative probabilities. Nevertheless, the negative probability problem can be dealt with by using the nearest physical probability distribution instead [19], after applying the mitigation matrix \(A_{\text{SPAM}}^{-1}\). While fully characterizing those error matrices requires exponentially many circuits, if we assume all the errors are local, the error assignment matrices can be decomposed into a direct product of the single-qubit matrices \(\{A^{(i)}\}\), \[A_{M}=A_{M}^{(1)}\otimes A_{M}^{(2)}\cdots A_{M}^{(n)}, \tag{14}\] \[A_{\text{SP}}=A_{\text{SP}}^{(1)}\otimes A_{\text{SP}}^{(2)}\cdots A_{\text{SP}}^{(n)}, \tag{15}\] \[A_{\text{SPAM}}=A_{\text{SPAM}}^{(1)}\otimes A_{\text{SPAM}}^{(2)}\cdots A_{\text{SPAM}}^{(n)}, \tag{16}\] where \[A_{\text{SPAM}}^{(i)}=\begin{pmatrix}1-\delta_{\text{SPAM}}^{i,0}&\delta_{\text{SPAM}}^{i,1}\\ \delta_{\text{SPAM}}^{i,0}&1-\delta_{\text{SPAM}}^{i,1}\end{pmatrix}. \tag{17}\] The above conditions approximately hold on many current superconducting qubit devices [20], where uncorrelated errors dominate and correlated errors among qubits are usually small. It is easy to check that \(A_{\text{SPAM}}^{(i)}=A_{M}^{(i)}A_{\text{SP}}^{(i)}\). Thus, one can measure the \(\delta_{\text{SPAM}}\)'s locally and then combine them to get the full \(A_{\text{SPAM}}\) matrix, which takes only \(O(n)\) circuits. Nevertheless, the state preparation error usually does not commute with the circuit gates and, in principle, cannot be automatically combined with the measurement errors at the end of the circuit. Thus, we need a new technique to mitigate the state preparation error and the measurement error separately.

### Measurement error mitigation

In this section, we present a mitigation scheme for the measurement error similar to Eq. (9). Here, we assume that all the errors are uncorrelated and that Eqs. (14), (15), and (16) hold. We first measure the SPAM error \(\delta_{\text{SPAM}}\) for each qubit, then use the methods in Sec. II.1 to obtain the state preparation errors \(\delta_{\text{SP}}\)'s and measurement errors \(\delta_{M}\)'s. After getting the single-qubit measurement errors \(\delta_{M}^{i,0}\) and \(\delta_{M}^{i,1}\) for each qubit, we construct the full measurement error assignment matrix \(A_{M}\) according to Eq. (7) and Eq. (14). The measurement error is mitigated by applying the inverse of \(A_{M}\), \[\hat{\mathbf{P}}=A_{M}^{-1}\mathbf{P}_{\text{noisy}}. \tag{18}\] Ideally, after such a readout error mitigation process, the remaining errors come from state preparation and gate operations.
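To make this characterization-and-mitigation step concrete, here is a minimal NumPy sketch of ours (not the authors' code; all function names are illustrative, and uncorrelated errors as in Eqs. (14)-(17) are assumed). It inverts the linear relations of Eqs. (3)-(6) for one target qubit and then applies Eq. (18).

```python
import numpy as np
from functools import reduce

def solve_sp_and_m(spam0, spam1, a_spam0, a_spam1, a_spam0_cnot):
    """Invert Eqs. (3)-(6) for one target qubit.

    spam0, spam1:  measured SPAM rates of the target (Eqs. (3)-(4));
    a_spam0/1:     measured SPAM rates of the ancilla without the CNOT;
    a_spam0_cnot:  ancilla rate measured with the CNOT added (Eq. (5)).
    Returns (delta_SP, delta_M^0, delta_M^1) of the target qubit."""
    d_sp = (a_spam0_cnot - a_spam0) / (1.0 - a_spam0 - a_spam1)  # Eq. (5)
    s = (spam0 + spam1 - 2 * d_sp) / (1.0 - 2 * d_sp)            # s = dM0 + dM1
    return d_sp, spam0 - (1.0 - s) * d_sp, spam1 - (1.0 - s) * d_sp

def mitigate_readout(p_noisy, deltas_m):
    """Eq. (18): apply A_M^{-1}, with A_M the Kronecker product of the
    single-qubit assignment matrices of Eq. (7); the Kronecker order
    must match the bit ordering of p_noisy."""
    mats = [np.array([[1 - d0, d1], [d0, 1 - d1]]) for d0, d1 in deltas_m]
    return np.linalg.solve(reduce(np.kron, mats), p_noisy)

# Sanity check with the simulator values used in Sec. IV
# (delta_SP = 0.05, delta_M^0 = 0.04, delta_M^1 = 0.06):
sp, m0, m1 = 0.05, 0.04, 0.06
s0 = (1 - m0 - m1) * sp + m0          # Eq. (3)
s1 = (1 - m0 - m1) * sp + m1          # Eq. (4)
t0 = (1 - s0 - s1) * sp + s0          # Eq. (5), identical ancilla, ideal CNOT
print(solve_sp_and_m(s0, s1, s0, s1, t0))  # -> (0.05, 0.04, 0.06)
```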
### State preparation error mitigation

The state preparation error affects the outcome at the beginning of the circuit, so the output probability distribution \(\mathbf{P}\) is a combination of all possible initial assignments with the corresponding probabilities, \[\mathbf{P}(\delta_{\text{SP}}^{(1)},\delta_{\text{SP}}^{(2)},...)=\sum_{\{s_{i}\}\in\{0,1\}^{n}}\left(\prod_{j=1}^{n}q_{\text{SP}}^{(j)}(s_{j},0)\right)\hat{\mathbf{P}}_{s_{1},s_{2}...s_{n}}, \tag{19}\] where \(\hat{\mathbf{P}}_{s_{1},s_{2}...s_{n}}\) is the \(\delta_{\text{SP}}\)-free output distribution of the quantum circuit with the same gates and measurements but the different initial qubit assignment \(|s_{1}s_{2}...s_{n}\rangle\), and \[q_{\text{SP}}^{(j)}(s,s^{\prime})=\begin{cases}1-\delta_{\text{SP}}^{(j)},&s=s^{\prime}\\ \delta_{\text{SP}}^{(j)},&s\neq s^{\prime}\end{cases} \tag{20}\] and \(\delta_{\text{SP}}^{(j)}\) is the state preparation error of the \(j\)-th qubit. Note that the desired probability distribution \[\mathbf{P}(0,0,...)=\hat{\mathbf{P}}_{0,0...} \tag{21}\] can, in principle, be obtained by solving \(2^{n}\) linear equations with regard to the \(2^{n}\) variables \(\hat{\mathbf{P}}_{s_{1},s_{2}...s_{n}}\), \[\mathbf{P}^{s^{\prime}_{1},s^{\prime}_{2},...,s^{\prime}_{n}}=\sum_{\{s_{i}\}\in\{0,1\}^{n}}\left(\prod_{j=1}^{n}q_{\text{SP}}^{(j)}(s_{j},s^{\prime}_{j})\right)\hat{\mathbf{P}}_{s_{1},s_{2}...s_{n}}, \tag{22}\] where \(\mathbf{P}^{s^{\prime}_{1},s^{\prime}_{2},...,s^{\prime}_{n}}\) is the noisy experimental output distribution with the same gates and measurement but the different erroneous initialization \(|s^{\prime}_{1},s^{\prime}_{2},...s^{\prime}_{n}\rangle\). For a small number of qubits, one can solve for \(\hat{\mathbf{P}}\) exactly. However, for a large number of qubits, exactly solving \(\hat{\mathbf{P}}_{0,0...}\) requires exponentially many circuits and is not practical in experiments. Here, we present an approximate approach that takes a linear number of circuits to achieve \(O(\delta_{\text{SP}}^{2})\) accuracy. To do this, we first expand the distribution \(\mathbf{P}\) near the \(\delta_{\text{SP}}^{(i)}\)'s, \[\mathbf{P}(x_{1},x_{2},...)=\mathbf{P}(\delta_{\text{SP}}^{(1)},\delta_{\text{SP}}^{(2)},...)+\sum_{i=1}^{n}\frac{\partial\mathbf{P}(\delta_{\text{SP}}^{(1)},...)}{\partial x_{i}}(x_{i}-\delta_{\text{SP}}^{(i)})+... \tag{23}\] If we assume the \(\delta_{\text{SP}}^{(i)}\)'s are small and approximate the above equation to the order of \(O(\delta_{\text{SP}})\), we can neglect higher-order terms and get the estimate of \(\mathbf{P}(0,0,...)\), \[\mathbf{P}(0,0,...)=\mathbf{P}(\delta_{\text{SP}}^{(1)},\delta_{\text{SP}}^{(2)},...)-\sum_{i=1}^{n}\frac{\partial\mathbf{P}(\delta_{\text{SP}}^{(1)},...)}{\partial x_{i}}\delta_{\text{SP}}^{(i)}, \tag{24}\] where \(\mathbf{P}(\delta_{\text{SP}}^{(1)},\delta_{\text{SP}}^{(2)},...)\) is the experimental result, and the derivatives \(\frac{\partial\mathbf{P}(\delta_{\text{SP}}^{(1)},...)}{\partial x_{i}}\) can be obtained from one additional distribution \(\mathbf{P}_{i}\) from the same circuit except with one additional initial X gate on the \(i\)-th qubit. This can be seen by rewriting Eq.
(19) into \[\mathbf{P}(x_{1},\delta_{\text{SP}}^{(2)},...)=(1-x_{1})\tilde{\mathbf{P}}_{0,X}+x_{1}\tilde{\mathbf{P}}_{1,X}, \tag{25}\] where \[\tilde{\mathbf{P}}_{0,X}=\sum_{\{s_{2},s_{3},...\}\in\{0,1\}^{n-1}}\left(\prod_{j=2}^{n}q_{\text{SP}}^{(j)}(s_{j},0)\right)\hat{\mathbf{P}}_{0,s_{2}...s_{n}},\] \[\tilde{\mathbf{P}}_{1,X}=\sum_{\{s_{2},s_{3},...\}\in\{0,1\}^{n-1}}\left(\prod_{j=2}^{n}q_{\text{SP}}^{(j)}(s_{j},0)\right)\hat{\mathbf{P}}_{1,s_{2}...s_{n}}.\] The original distribution \(\mathbf{P}_{0}\) and the additional distribution \(\mathbf{P}_{1}\) (the distribution from the same circuit with an additional initial X gate on the first qubit) can be viewed as the result of substituting \(x_{1}=\delta_{\text{SP}}^{(1)}\) and \(x_{1}=1-\delta_{\text{SP}}^{(1)}\), respectively, in Eq. (25), \[\mathbf{P}_{0}=\mathbf{P}(\delta_{\text{SP}}^{(1)},...)=(1-\delta_{\text{SP}}^{(1)})\tilde{\mathbf{P}}_{0,X}+\delta_{\text{SP}}^{(1)}\tilde{\mathbf{P}}_{1,X},\] \[\mathbf{P}_{1}=\mathbf{P}(1-\delta_{\text{SP}}^{(1)},...)=\delta_{\text{SP}}^{(1)}\tilde{\mathbf{P}}_{0,X}+(1-\delta_{\text{SP}}^{(1)})\tilde{\mathbf{P}}_{1,X}.\] Thus, we obtain the derivative with respect to \(x_{1}\), \[\frac{\partial\mathbf{P}(\delta_{\text{SP}}^{(1)},...)}{\partial x_{1}}=\tilde{\mathbf{P}}_{1,X}-\tilde{\mathbf{P}}_{0,X}=\frac{\mathbf{P}_{1}-\mathbf{P}_{0}}{1-2\delta_{\text{SP}}^{(1)}}. \tag{26}\] The other derivatives w.r.t. the \(x_{i}\)'s can be obtained in a similar way. Substituting them into Eq. (24), we arrive at the equation for the first-order state preparation error mitigation \[\hat{\mathbf{P}}=\mathbf{P}_{0}+\sum_{i=1}^{n}\frac{\delta_{\text{SP}}^{(i)}}{1-2\delta_{\text{SP}}^{(i)}}(\mathbf{P}_{0}-\mathbf{P}_{i}). \tag{27}\] Note that the state preparation error mitigation (27) is linear among different distributions \(\mathbf{P}\)'s and that the measurement mitigation (18) is a linear transformation within each distribution \(\mathbf{P}\), so swapping the order of measurement error mitigation and state preparation error mitigation here does not change the final results. We remark that one can achieve \(r\)-th-order accuracy mitigation at the cost of \(O(n^{r})\) additional circuits with a similar procedure, where \(n\) is the qubit number, and we do not explicitly write down the derivation.
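As an illustration of Eq. (27), here is a short NumPy sketch of ours (not the authors' code; names and array shapes are assumptions): it combines the raw distribution with the \(n\) X-flipped calibration distributions.

```python
import numpy as np

def sp_mitigate_first_order(P0, P_flipped, deltas_sp):
    """First-order state-preparation mitigation of Eq. (27):
    P_hat = P0 + sum_i d_i / (1 - 2 d_i) * (P0 - P_i), where P_i is the
    output distribution of the same circuit with one extra initial X
    gate on the i-th qubit and d_i is that qubit's delta_SP."""
    P0 = np.asarray(P0, dtype=float)
    P_hat = P0.copy()
    for P_i, d in zip(P_flipped, deltas_sp):
        P_hat += d / (1.0 - 2.0 * d) * (P0 - np.asarray(P_i, dtype=float))
    return P_hat  # may contain small negative entries; project as in Ref. [19]
```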
## IV Comparison between two mitigation schemes

In current standard practice on superconducting qubits, readout mitigation is carried out by first measuring the mitigation assignment matrix \(A_{\text{SPAM}}\), obtained by measuring the distribution in the computational basis when states are prepared in all different computational states. Then, the inverse of the assignment matrix is applied to the actual measurement distribution. However, the preparation of computational states incurs errors, and these are not separately characterized and are thus lumped together in the mitigation matrix. We call this the standard readout mitigation (SRM) and refer to ours as state preparation and readout mitigation (SPRM).

### Summary of the two mitigation schemes

First, we summarize the procedure of standard readout mitigation (SRM) as follows:

1. Execute the quantum circuit \(U\) on qubits \(\{q_{i}\}\) and get the noisy raw probability distribution \(P_{\text{raw}}\);
2. Run the standard SPAM error calibration circuits on qubits \(\{q_{i}\}\) and get the SPAM error assignment matrix \(A_{\text{SPAM}}\);
3. Apply the inverse of \(A_{\text{SPAM}}\) to the raw distribution and get the mitigated quasi-probability distribution \(\tilde{P}=A_{\text{SPAM}}^{-1}P_{\text{raw}}\);
4. Find the nearest probability distribution \(\hat{P}\) from \(\tilde{P}\) in case of the existence of negative probabilities in \(\tilde{P}\).

Then, the output \(\hat{P}\) is the desired mitigated probability distribution.

Next, we summarize our proposed procedure of state preparation and readout mitigation (SPRM) as follows:

1. Execute the quantum circuit \(U\) on qubits \(\{q_{i}\}\) and get the noisy raw probability distribution \(P_{\rm raw}\);
2. Run the calibration circuits in Sec. II.1 on each qubit pair and separately get the state preparation errors \(\{\delta_{\rm SP}^{(i)}\}\) and the readout error assignment matrix \(A_{M}\);
3. Add an initial X gate on \(q_{i}\) and then apply the circuit \(U\) for each qubit to get the calibration distributions \(\{P_{i}\}\);
4. Apply the inverse of \(A_{M}\) to the raw distribution \(P_{\rm raw}\) and the calibration distributions \(\{P_{i}\}\), and get the mitigated quasi-probability distributions \(\tilde{P}_{M}=A_{M}^{-1}P_{\rm raw}\) and \(\{\tilde{P}_{i}=A_{M}^{-1}P_{i}\}\);
5. Use Eq. (27) to get the further mitigated quasi-probability distribution \(\tilde{P}_{\rm M,SP}\) from \(\tilde{P}_{M}\), \(\{\tilde{P}_{i}\}\), and \(\{\delta_{\rm SP}^{(i)}\}\);
6. Find the nearest probability distribution \(\hat{P}\) from \(\tilde{P}_{\rm M,SP}\) in case of the existence of negative probabilities in \(\tilde{P}_{\rm M,SP}\).

Then, the output \(\hat{P}\) is the desired mitigated probability distribution.

Figure 3: The 2-qubit benchmark circuit for comparing the two mitigation schemes SRM and SPRM.

Figure 4: The 4-qubit benchmark circuit for comparing the two mitigation schemes SRM and SPRM.

### Results from simulator

Here, we compare mitigating the state preparation error and measurement error separately and together for a simple circuit shown in Fig. 3. (A four-qubit example circuit that will be tested later is shown in Fig. 4.) The \(R_{y}\) gate is defined as \(R_{y}(\theta)=\exp(-i\theta Y/2)\) and the ideal output state is \[\left|\tilde{\psi}\right\rangle=\cos^{2}\frac{\theta}{2}\left|00\right\rangle+\sin^{2}\frac{\theta}{2}\left|01\right\rangle+\sin\frac{\theta}{2}\cos\frac{\theta}{2}\left|10\right\rangle+\sin\frac{\theta}{2}\cos\frac{\theta}{2}\left|11\right\rangle. \tag{28}\] Since the measurement can only give probability distributions instead of wave functions, we define the overlap (or fidelity) between two probability distributions [21] \[F(\mathbf{P},\mathbf{P}^{\prime})=\left(\sum_{i=1}^{2^{n}}\sqrt{p_{i}p_{i}^{\prime}}\right)^{2}, \tag{29}\] where \(p_{i}\) is the probability in the distribution \(\mathbf{P}\) for the state \(\left|i\right\rangle\). The above definition is related to the Hellinger distance [22] \(H(\mathbf{P},\mathbf{P}^{\prime})\) between the two distributions via \(\sqrt{F(\mathbf{P},\mathbf{P}^{\prime})}=1-H^{2}(\mathbf{P},\mathbf{P}^{\prime})\). The fidelity of an output distribution \(\mathbf{P}_{\theta}\) is then defined as the overlap with the ideal probability distribution \(\hat{\mathbf{P}}\), \[f(\theta)=F(\mathbf{P}_{\theta},\hat{\mathbf{P}}), \tag{30}\] where \(\hat{\mathbf{P}}=(\cos^{4}\frac{\theta}{2},\sin^{4}\frac{\theta}{2},\frac{1}{4}\sin^{2}\theta,\frac{1}{4}\sin^{2}\theta)\). We use a simulator with a state preparation error \(\delta_{\rm SP}=0.05\) and \(\delta_{M}^{0}=0.04,\delta_{M}^{1}=0.06\) for both qubits.
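Before turning to the analytic expressions, here is a minimal sketch of the fidelity metric just defined (our illustration, not the authors' code):

```python
import numpy as np

def fidelity(P, Q):
    """Probability-distribution fidelity of Eq. (29):
    F(P, Q) = (sum_i sqrt(p_i * q_i))^2, so that sqrt(F) = 1 - H^2
    with H the Hellinger distance."""
    return float(np.sum(np.sqrt(np.asarray(P) * np.asarray(Q))) ** 2)

theta = np.pi / 5
# Ideal distribution P_hat given below Eq. (30) for the circuit of Fig. 3.
P_ideal = np.array([np.cos(theta / 2) ** 4, np.sin(theta / 2) ** 4,
                    np.sin(theta) ** 2 / 4, np.sin(theta) ** 2 / 4])
print(fidelity(P_ideal, P_ideal))  # -> 1.0
```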
The theoretical calculation is straightforward but gives rise to messy expressions for both mitigation schemes, except for \(\theta=0\), at which we have \[f_{\rm SRM}(0)=1-\frac{\delta_{\rm SP}^{(1)}}{2}, \tag{31}\] \[f_{\rm SPRM}(0)=1-\frac{4}{3}\delta_{\rm SP}^{(1)}\delta_{\rm SP}^{(2)}, \tag{32}\] and, at \(\theta=\pi/2\), we have \[f_{\rm SRM}\left(\frac{\pi}{2}\right)=f_{\rm SPRM}\left(\frac{\pi}{2}\right)=1. \tag{33}\] Therefore, we expect that, near \(\theta=0\), the two mitigation schemes achieve fidelity with the ideal distribution at different orders, with the SPRM scheme faring better. The simulation results for both the two-qubit and four-qubit circuits are shown in Fig. 2, where SPRM has better fidelity and less fluctuation in a large proportion of the parameter region in both cases.

## V Experimental demonstration on superconducting quantum computers

In this section, we experimentally implement the \(\delta_{\rm SP}\)-characterizing circuits and obtain the estimated values of \(\delta_{\rm SP}\) on superconducting quantum computers of IBM's cloud platform. Then, we perform quantum state tomography (with readout mitigation) to confirm that the \(\delta_{\rm SP}\) values we obtain match the tomography results. Finally, we compare the two mitigation schemes on a 4-qubit circuit and demonstrate that the SPRM obtains better probability fidelity than the SRM.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline target qubit (ancilla qubit) & \(q_{0}(q_{1})\) & \(q_{1}(q_{2})\) & \(q_{2}(q_{1})\) & \(q_{3}(q_{5})\) & \(q_{4}(q_{5})\) & \(q_{5}(q_{6})\) & \(q_{6}(q_{5})\) \\ \hline \hline \(\delta_{\rm SPAM}^{0}\) & 0.0108(4) & 0.0134(5) & 0.0086(4) & 0.0087(4) & 0.0120(4) & 0.0162(5) & 0.0086(4) \\ \(\delta_{\rm SPAM}^{1}\) & 0.0514(9) & 0.0394(8) & 0.085(1) & 0.0380(8) & 0.0457(8) & 0.102(1) & 0.0319(7) \\ \hline \hline \(\delta_{M}^{0}\) & 0.0005(8) & 0.0037(8) & 0.0018(9) & 0.0020(8) & 0.0039(8) & 0.010(1) & 0.0021(7) \\ \(\delta_{M}^{1}\) & 0.0411(8) & 0.0297(8) & 0.0780(9) & 0.0312(8) & 0.0376(8) & 0.096(1) & 0.0253(7) \\ \hline \(\delta_{\rm SP}\) & 0.011(1) & 0.0101(4) & 0.0074(3) & 0.0070(5) & 0.0085(7) & 0.0069(3) & 0.0067(4) \\ \hline \end{tabular} \end{table} Table 1: The state preparation error and measurement error for all the qubits in ibm_nairobi, characterized using the methods described in the text. The qubit connectivity of ibm_nairobi is shown on the right. All of the circuits are repeated \(6\times 10^{5}\) times.

### Characterization of \(\delta_{\rm SP}\) with noisy gates

We implement the circuit shown in Fig. 1 on IBM's superconducting quantum computers, where the gate noise becomes the major error source in the circuits. Ideally, the SPAM error \(\delta_{\rm SPAM}^{\rm t}\) measured in the circuit of Fig. 1 should be exactly the same as \(\delta_{\rm SPAM}^{\rm t,raw}\), which is measured directly without the CNOT gate. However, adding the CNOT gate can bring additional gate errors, and the measured SPAM error rate \(\delta_{\rm SPAM}^{t,\rm noisy}\) can be larger than \(\delta_{\rm SPAM}^{t,\rm raw}\). Suppose the noise of the CNOT can be described by a noise channel \(\Lambda\), which we assume is a direct product of two Pauli-like single-qubit channels \[\Lambda(\rho)=\sum_{a\in\{I,X,Y,Z\}^{2}}p_{a}P_{a}\rho P_{a}.
\tag{34}\] By adding a random Pauli Z operator with 50% probability for each qubit before the CNOT gate, we ensure that the state-preparation error produces effectively \(\rho=(1-\delta_{\rm SP})|0\rangle\langle 0|+\delta_{\rm SP}|1\rangle\langle 1|\), on which the Pauli Z channel has no effect but other Pauli channels act effectively like the bit-flip error, whose error assignment matrix can be combined with the measurement error \[A_{\rm M}^{\rm noisy}=A_{M}A_{\rm CNOT}. \tag{35}\] The SPAM error directly measured from circuits in Fig. 1 becomes \[A_{\rm SPAM}^{\rm noisy}=A_{M}^{\rm noisy}A_{\rm SP}. \tag{36}\] If we assume the noise effect of CNOT gates acts symmetrically on control and target qubits, the \(A_{M}^{\rm noisy}\) matrix will not change when swapping the control qubit and target qubit of the CNOT gate. Thus, one can use the SPAM error with gate noise to calculate \(\delta_{\rm SP}\) \[\tilde{\delta}_{\rm SPAM}^{a,0,{\rm noisy}}=(1-\delta_{\rm SPAM}^ {a,0,{\rm noisy}}-\delta_{\rm SPAM}^{a,1,{\rm noisy}})\delta_{\rm SP}^{t}+ \delta_{\rm SPAM}^{a,0,{\rm noisy}}, \tag{37}\] \[\tilde{\delta}_{\rm SPAM}^{a,1,{\rm noisy}}=(1-\delta_{\rm SPAM}^ {a,0,{\rm noisy}}-\delta_{\rm SPAM}^{a,1,{\rm noisy}})\delta_{\rm SP}^{t}+ \delta_{\rm SPAM}^{a,1,{\rm noisy}}, \tag{38}\] where \(\delta_{\rm SPAM}^{a,{\rm noisy}}\) is obtained by setting \(q_{a}\) as the control qubit of CNOT and \(\tilde{\delta}_{\rm SPAM}^{a,\rm noisy}\) is obtained by setting \(q_{t}\) as the control qubit. Then we use the SPAM error without the gate noise to calculate \(\delta_{M}\) \[\delta_{\rm SPAM}^{0,{\rm raw}}=(1-\delta_{M}^{0}-\delta_{M}^{1} )\delta_{\rm SP}+\delta_{M}^{0}, \tag{39}\] \[\delta_{\rm SPAM}^{1,{\rm raw}}=(1-\delta_{M}^{0}-\delta_{M}^{1} )\delta_{\rm SP}+\delta_{M}^{1}. \tag{40}\] We experimentally demonstrate the above method on the qubit \(q_{12}\) of the backend ibmq_mumbai. With two different ancilla qubits \(q_{10}\) and \(q_{15}\), we obtain the state preparation error \(\delta_{\rm SP}^{(12)}=0.042(2)\) (using \(q_{10}\) as the ancilla) and \(\delta_{\rm SP}^{(12)}=0.041(3)\) (using \(q_{15}\) as the ancilla), respectively. We also perform zero-noise extrapolation for the CNOT gate for the same qubit and obtain similar results (see Appendix B). We use the above method to characterize all of the qubits in the ibm_nairobi, and the results are shown in Table 1. We find that the state preparation error, if not separated from the readout error, is lumped in the SPAM error that is used in the current standard approach for readout mitigation. Figure 5: Experimental results, expected exact results, and mitigated results (using both SRM and SPRM) for the 4-qubit circuit, shown in Fig. 4, with \(\theta=\pi/5\). The outcome labels on the horizontal axis are in the order \([q_{4},q_{3},q_{2},q_{1}]\). The results are obtained from ibmq_mumbai (whose machine layout is shown above the histogram), with the initial layout of our experiment being the four qubits [21,18,15,12] (highlighted in green). The circuits for calculating \(\{\delta_{\rm SP}\}\) are repeated \(6\times 10^{5}\) times, and the target circuits are repeated \(10^{5}\) times. ### One-qubit state tomography To fully characterize the state-preparation noise channel before the measurement, we also perform state tomography for \(|0\rangle\) state and \(|1\rangle\) state after obtaining the \(\delta_{\mathrm{SP}}\) and \(\delta_{M}\)'s. We perform the experiment on \(q_{12}\) of ibmq_mumbai and get \(\delta_{\mathrm{SP}}=0.018(1)\). 
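For reference, such single-qubit tomography amounts to linear inversion from the three (readout-mitigated) Pauli expectation values; here is a minimal sketch of ours (the numbers below are chosen to be consistent with Eq. (41) for illustration, not separate data):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rho_from_pauli(ex, ey, ez):
    """Linear-inversion single-qubit tomography:
    rho = (I + <X> X + <Y> Y + <Z> Z) / 2."""
    return (np.eye(2) + ex * X + ey * Y + ez * Z) / 2

# Expectations consistent with Eq. (41): <Z> = 0.983 - 0.017 = 0.966 and
# rho_01 = (<X> - i <Y>) / 2 = -0.003 - 0.016i.
print(rho_from_pauli(-0.006, 0.032, 0.966))
```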
After the measurement mitigation with regard to the \(\delta_{M}\)'s, the tomography results for preparing \(|0\rangle\) and \(|1\rangle\) are \[\rho_{0}=\begin{pmatrix}0.983(2)&-0.003(2)-0.016(2)i\\ -0.003(2)+0.016(2)i&0.017(2)\end{pmatrix}, \tag{41}\] \[\rho_{1}=\begin{pmatrix}0.018(2)&0.002(2)+0.019(2)i\\ 0.002(2)-0.019(2)i&0.982(2)\end{pmatrix}, \tag{42}\] where the diagonal part is consistent with the \(\delta_{\mathrm{SP}}\) obtained from our quantification scheme.

### Experimental comparison between two mitigation schemes

To demonstrate the direct application to circuit runs on current NISQ devices, we use the circuit shown in Fig. 4 to benchmark both mitigation schemes. We choose \(\theta=\pi/5\). We run the circuit on ibmq_mumbai with the initial qubit layout [21, 18, 15, 12]. Using the technique shown in Sec. V.1, we obtained the state preparation errors \(\{\delta_{\mathrm{SP}}\}=\{0.007(3),0.007(2),0.006(1),0.029(1)\}\). Then, we perform the SRM and SPRM, respectively. The raw and mitigated probability results are shown in Fig. 5. The SRM gives 99.6(4)% probability fidelity, while the SPRM gives 99.9(2)% probability fidelity. The noiseless simulation indicates a 0.4% improvement in fidelity, close to the experimental results.

## VI Conclusion

We have proposed a resource-efficient approach to separately quantify the state preparation and readout error rates. With these two errors separately quantified, we have also described efficient methods to mitigate them, especially mitigating state preparation errors with linear complexity. In addition to the theoretical analysis, we have presented results from cloud experiments on IBM's superconducting quantum computers, and the results match our theoretical predictions well. In this work, we have assumed high fidelity of X gates, which is usually the case in real quantum devices. We also proposed an efficient way to obtain \(\delta_{\mathrm{SP}}\) in the presence of CNOT gate noise, which, in the experiment, gives consistent results compared to those obtained by the ZNE. Thus, we believe our methods can be easily extended beyond superconducting quantum computers to other physical systems. The experimental results show practical applications of our quantification and mitigation scheme for \(\delta_{\mathrm{SP}}\) on current noisy quantum computers. Although the improvement in fidelity is limited due to the relatively small \(\delta_{\mathrm{SP}}\) compared to the \(\delta_{M}\) on IBM's superconducting quantum computers, the improvement would be much more evident when \(\delta_{M}\) is reduced to a scale similar to \(\delta_{\mathrm{SP}}\) in the future. We advocate that in reporting the properties of quantum bits, the state preparation error rate should also be provided as an important metric (at least the portion in the incoherent mixture of \(|1\rangle\langle 1|\) when preparing \(|0\rangle\); in principle, quantum state tomography can be used for further characterization).

###### Acknowledgements.

We thank Yusheng Zhao for many useful discussions. This work was partly supported by the National Science Foundation under Grant No. PHY 2310614, in particular, on the part of the state preparation error's characterization and mitigation, and by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage (C2QA) under Contract No. DE-SC0012704, in particular, on the part of the algorithmic cooling.
This research also used resources from the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725, and the Brookhaven National Laboratory-operated IBM-Q Hub. The results presented in this work do not reflect the views of IBM and its employees.

## Appendix A Extension of the Laflamme-Lin-Mor algorithmic-cooling scheme for separate quantification of state preparation and readout errors

In this Appendix, we assume for simplicity that the readout error is symmetric (i.e., \(\delta_{\mathrm{M}}^{0}=\delta_{\mathrm{M}}^{1}\)), or else we can use the symmetrized readout protocol. We will use subscripts to label the two qubits 1 and 2. First, we use 1 as the control and 2 as the target and go through the procedure of post-selecting '0' for qubit 2. This reduces the state preparation error of qubit 1 to \(\tilde{\delta}_{\mathrm{SP,1}}=\delta_{\mathrm{SP,1}}f_{12}\), where (see Ref. [14]) \[f_{12}\equiv\frac{2(\delta_{\mathrm{SP,1}}+\delta_{\mathrm{M,1}}-2\delta_{\mathrm{SP,1}}\delta_{\mathrm{M,1}})}{1+(1-2\delta_{\mathrm{SP,1}})(1-2\delta_{\mathrm{M,1}})(1-2\delta_{\mathrm{SP,2}})}, \tag{43}\] and the subscripts are used to denote which qubits. The total measured SPAM error (after post-selecting qubit 2) is \(\tilde{\delta}_{\mathrm{SPAM,1}}=\tilde{\delta}_{\mathrm{SP,1}}+\delta_{\mathrm{M,1}}\). Next, we use 2 as the control and 1 as the target and go through the procedure of post-selecting '0' for qubit 1. This reduces the state preparation error of qubit 2 to \(\tilde{\delta}_{\text{SP},2}=\delta_{\text{SP},2}f_{21}\), where
## Appendix B Comparison for calculating \(\delta_{\text{SP}}\) with and without CNOT gate noise extrapolation

In this appendix, we compare the estimation of \(\delta_{\text{SP}}\) from noisy-CNOT results (37) and from mitigated-CNOT results. To mitigate the CNOT gate noise, we use the zero-noise extrapolation technique [5], where the CNOT gates are repeated \(m=1,3,5,...\) times to obtain results at different noise scales. We then use a linear fit to extrapolate them to the zero-noise (\(m=0\)) point and obtain the mitigated data. The extrapolation results for calculating \(\delta_{\text{SP}}^{(12)}\) via the ancilla qubit \(q_{10}\) of ibmq_mumbai are shown in Fig. 6. From the mitigated results, we get \(\delta_{\text{SP}}^{(12)}=0.041(4)\), which is very close to the result from the unmitigated data, \(\delta_{\text{SP}}^{(12)}=0.042(2)\) (shown in Sec. V.1).
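For concreteness, the linear zero-noise extrapolation step can be sketched as follows (the repetition counts follow the text above, but the sample error values here are placeholders, not the data of Fig. 6):

```python
import numpy as np

# CNOT repetition counts used for noise scaling
m = np.array([1, 3, 5])
# Hypothetical delta_SP^(12) estimates at each noise scale (placeholder values)
delta_sp = np.array([0.042, 0.046, 0.050])

# Fit delta(m) = a*m + b; the intercept b is the zero-noise (m = 0) estimate
a, b = np.polyfit(m, delta_sp, deg=1)
print(f"zero-noise extrapolated delta_SP: {b:.4f}")
```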
2303.11023
Mittag-Leffler Dichotomy and Roughness of Conformable Fractional Differential Equations
The solutions of traditional fractional differential equations neither satisfy the group property nor generate dynamical systems, so the study of their hyperbolicity is still blank. Relying on the newly proposed conformable fractional derivative, we investigate the dichotomy of conformable fractional equations, including the structure of solutions of linear systems, Mittag-Leffler dichotomy and stability, roughness, and nonuniform dichotomy.
Baishun Wang, Jun Zhou
2023-03-20T11:01:02Z
http://arxiv.org/abs/2303.11023v1
# Mittag-Leffler Dichotomy and Roughness of Conformable Fractional Differential Equations

###### Abstract

The solutions of traditional fractional differential equations neither satisfy the group property nor generate dynamical systems, so the study of their hyperbolicity is still blank. Relying on the newly proposed conformable fractional derivative, we investigate the dichotomy of conformable fractional equations, including the structure of solutions of linear systems, Mittag-Leffler dichotomy and stability, roughness, and nonuniform dichotomy.

**Keywords:** Conformable fractional differential equations; Mittag-Leffler dichotomy; Roughness; Nonuniform dichotomy; Stability

**AMS (2020) subject classification:** 34D09; 34A08; 37D25

## 1 Introduction

The well-known dichotomy concept for various hyperbolic systems, e.g., the ODE \[\dot{x}=A(t)x,\quad t\in J, \tag{1.1}\] where \(J\subset\mathbb{R}\) is an interval, asserts that there exist a projection matrix \(P\), a fundamental matrix \(X(t)\) of (1.1), and positive constants \(K_{i}\) and \(\beta_{i}\)\((i=1,2)\) such that for all \(t,s\in J\), \[\begin{split}\|X(t)PX^{-1}(s)\|&\leq K_{1}e^{-\beta_{1}(t-s)},\quad t\geq s,\\ \|X(t)(I-P)X^{-1}(s)\|&\leq K_{2}e^{-\beta_{2}(s-t)},\quad s\geq t.\end{split} \tag{1.2}\] Correspondingly, the roughness of a dichotomy refers to its persistence under small linear perturbations, i.e., the perturbed system \[\dot{x}=(A(t)+B(t))x,\quad t\in J\] still admits dichotomy behaviour of the form (1.2), with small variations of \(P\), \(X(t)\), \(K_{i}\) and \(\beta_{i}\) (\(i=1,2\)). According to the asymptotic rate, there are diverse dichotomies, e.g., the classical exponential dichotomy (1.2) ([16]), the \((h,k)\)-dichotomy ([30]), the polynomial dichotomy ([8]), etc. Dichotomies and their corresponding properties are core issues in the field of dynamical systems; they can be traced back to the papers of Perron ([33]) and Li ([26]) on the conditional stability of linear differential and difference equations, respectively, and were gradually formalized, developed and summarized in [28, 29, 16]. In recent decades, many studies have been devoted to exploring existence criteria for exponential dichotomy (see Hale ([22]), Chow and Leiva ([13]), Sasu ([38]), Barreira and Valls ([7]), Battelli and Palmer ([10]) and the references therein). The roughness mentioned above has also received wide attention; it was first demonstrated by Massera and Schaffer ([28]) under the hypothesis that the matrix \(A(t)\) is bounded. Schaffer ([39]) subsequently eliminated the boundedness assumption. Coppel ([15]) gave a general elementary proof of roughness when the matrix \(A(t)\) commutes with the projection \(P\). In 1978 Coppel ([16, pp.28-33]) exhibited a simpler proof via the so-called _projected integral inequalities_ introduced by Hale ([22, pp.110-111]). Later, Naulin and Pinto ([31]) improved the admissible size of the perturbation \(B(t)\) in Coppel's result [16, pp.34-35], again without assuming boundedness of \(A(t)\). Popescu ([36]) further generalized the results of [16] and [31] to infinite-dimensional Banach spaces. Thereafter, the notion of nonuniform exponential dichotomy, roughly speaking the dichotomy formula (1.2) with extra nonuniform constants in the exponents, was proposed by Barreira and Valls ([7]), where its roughness was also studied.
In 2013 Zhou, Lu and Zhang ([48]) studied the roughness of tempered exponential dichotomy for random difference equations in Banach spaces lacking the so-called _Multiplicative Ergodic Theorem_. Moreover, plenty of works on the roughness of exponential dichotomy can be found in [12, 13, 16, 31, 36, 7] for continuous dynamical systems, in [34, 38, 48, 49] for discrete dynamical systems, and in the references therein. In addition, the corresponding admissibility problem for dichotomy, i.e., admissible pairs of function spaces for the solutions \(x\) and the inhomogeneous perturbations \(f\), was investigated extensively in [29, 38, 49, 6, 19] and elsewhere. Although the research on dichotomy has involved ODEs ([28, 29, 22, 16, 30, 31, 12, 36, 7, 8, 10, 19]), difference equations ([26, 38, 49, 48]), functional differential equations ([32]), random systems ([2, 20, 14, 27]), skew-product semiflows ([13, 34]), etc., until now there has been no result on dichotomy for fractional differential equations (FDEs for short).

The fractional derivative originated in a letter from L'Hospital to Leibniz discussing the meaning of a derivative of order one half. Since then, because they approximate practical models associated with memory and hereditary phenomena better than ODEs and PDEs do, FDEs have been steadily developed in physics and chemistry ([47, 44]), biology and medicine ([18, 41]), engineering and control theory ([3, 35]), and economics and psychology ([11, 40]), especially in recent decades (see the monographs [35, 25, 21, 50, 23, 17]). Traditional definitions of the fractional derivative and integral, such as those of Riemann-Liouville, Caputo and Grunwald-Letnikov ([35, 25]), admit no product rule or chain rule, so the solutions of FDEs neither fulfil the group property nor generate dynamical systems. There is a vast body of work on the well-posedness ([50]), stability ([25]), Laplace transform methods and optimal control ([35]), variational methods, attractors and numerical solutions ([21]) and chaos ([5]) of FDEs, but the study of the hyperbolicity of FDEs remains blank. It was not until 2014 that Khalil et al. ([24]) introduced a new definition of the fractional derivative, the so-called _conformable fractional derivative_, which satisfies almost all the characteristics of the integer-order derivative. Thus, the solutions of CFDEs (short for conformable fractional differential equations) can also generate dynamical systems, which makes it possible to consider the hyperbolic behaviors of FDEs. Later, Abdeljawad ([1]) completed the definition of the left and right conformable fractional derivatives and the variation of constants formula for CFDEs, and solved CFDEs via the Laplace transform. In 2017 Souahi et al. ([42]) employed the Lyapunov direct method to present the stability, asymptotic stability and exponential stability of CFDEs. In 2019 Khan et al. ([43]) further verified the generalized definition of the conformable derivative, its semigroup and linearity properties, and the existence and uniqueness of solutions for CFDEs. In the same year, Balci et al. ([4]) displayed the Neimark-Sacker bifurcation and chaotic behavior of a tumor-immune system modelled by a CFDE. In 2020 Xie et al. ([46]) presented an exact solution and a difference scheme for a gray model with conformable derivative. Recently, Wu et al. ([45]) revealed the Hyers-Ulam stability of a conformable fractional model. In this paper we attempt to establish the theory of dichotomy for CFDEs.
In order to generalize the hyperbolicity of ODEs to CFDEs, we first modify the definitions of the conformable fractional derivative and integral and of a Mittag-Leffler-type function originating from [24, 1]. Subsequently, we derive the well-posedness of solutions, a conformable integral inequality and the variation of constants formula for CFDEs, as well as the structure of solutions and operator semigroups for linear CFDEs. These fundamental theories are established in Section 2. In Section 3 we provide the definitions of the so-called _Mittag-Leffler stability and dichotomy_ for CFDEs, whose asymptotic rate is a Mittag-Leffler-type function. These notions include the classical exponential stability and dichotomy ([16]) of ODEs with integer-order derivative as special cases. Meanwhile, we develop conformable fractional integral versions of the projected inequalities to prove the existence of the Mittag-Leffler dichotomy and of the corresponding invariant manifolds. In Section 4 we discuss the roughness of the Mittag-Leffler dichotomy in \(\mathbb{R}_{+}\). In Section 5 we additionally study nonuniform Mittag-Leffler stability and dichotomy and their roughness in \(\mathbb{R}_{+}\). Our results extend the works of Hale ([22]), Coppel ([16]), and Barreira and Valls ([7]) to CFDEs.

## 2 Linear CFDEs

Throughout this paper, we define the following function sets: \[I(\mathbb{R},\mathbb{R}) := \{\varphi:\mathbb{R}\to\mathbb{R}\,|\,\varphi\text{ is a nondecreasing function}\},\] \[C(\mathbb{R},\mathbb{R}) := \{\varphi:\mathbb{R}\to\mathbb{R}\,|\,\varphi\text{ is a continuous function}\},\] \[C_{b}(\mathbb{R},\mathbb{R}) := \{\varphi:\mathbb{R}\to\mathbb{R}\,|\,\varphi\text{ is a continuous and bounded function}\},\] \[C_{I}(\mathbb{R},\mathbb{R}) := \{\varphi:\mathbb{R}\to\mathbb{R}\,|\,\varphi\text{ is a continuous and nondecreasing function}\}.\] Further, fix constants \(\alpha\in(0,1]\) and \(t_{*},t_{0},t^{*}\) satisfying \(t_{*}<t_{0}<t^{*}\), a function \(f:(t_{*},t^{*})\to\mathbb{R}\), and the norms \[\|x(t)\| := \sum_{i=1}^{n}|x_{i}(t)|,\quad x:[t_{0},+\infty)\to\mathbb{R}^{n},\] \[\|A(t)\| := \max\{\sum_{i=1}^{n}|a_{i1}(t)|,\sum_{i=1}^{n}|a_{i2}(t)|,...,\sum_{i=1}^{n}|a_{in}(t)|\},\quad A:\mathbb{R}\to\mathbb{R}^{n\times n}.\]

In this section, we focus on the qualitative properties of linear CFDEs and their perturbations. Analogously to linear ODEs, there also exist fundamental solutions for linear CFDEs. Consider the nonautonomous linear CFDE \[\mathcal{T}^{\alpha}x=A(t)x,\quad(t,x)\in\mathbb{R}^{n+1}, \tag{2.1}\] where the matrix function \(A\in C(\mathbb{R},\mathbb{R}^{n\times n})\). Our primary purpose is to establish its well-posedness, i.e., the existence and uniqueness of solutions, their continuous dependence on initial data, and the continuation of solutions. Before this, as a preliminary, we modify the definition and some properties of the conformable fractional derivative and integral introduced by Khalil, Horani, Yousef and Sababheh ([24]) and Abdeljawad ([1]), so as to make them more natural.

**Definition 2.1**: _The \(\alpha\)-conformable fractional derivative of \(f\) is defined as_ \[\mathcal{T}^{\alpha}f(t):=\lim_{\varepsilon\to 0}\frac{f(t+\varepsilon|t|^{1-\alpha})-f(t)}{\varepsilon},\quad t\in(t_{*},t^{*}). \tag{2.2}\]
_In particular, if \(\lim_{t\to 0}\mathcal{T}^{\alpha}f(t)\) exists, then_ \[\mathcal{T}^{\alpha}f(0):=\lim_{t\to 0}\mathcal{T}^{\alpha}f(t).\] _The function \(f\) is called \(\alpha\)-conformable differentiable if \(\mathcal{T}^{\alpha}f(t)\) exists._

Our Definition 2.1 extends the one in [24] to the case \(t\leq 0\). Unlike the definition in [1], the same formula (2.2) applies for both \(t\leq 0\) and \(t\geq 0\). Further, we can deduce the following relations between the conformable fractional derivative and the Newton-Leibniz derivative, and between the conformable fractional integral and the Riemann integral.

**Proposition 2.1**: _The \(\alpha\)-conformable fractional derivative of \(f\) can be represented as_ \[{\cal T}^{\alpha}f(t)=|t|^{1-\alpha}f^{\prime}(t),\quad t\in(t_{*},t^{*}).\]

This proposition implies that the conformable fractional derivative and the Newton-Leibniz derivative have the same sign.

**Proposition 2.2**: _The \(\alpha\)-conformable fractional integral of \(f\) is given by_ \[{\cal I}^{\alpha}_{t_{0}}f(t)=\int_{t_{0}}^{t}|s|^{\alpha-1}f(s)ds,\quad t\in(t_{*},t^{*}).\]

Next, we present some properties of the conformable fractional derivative and integral.

**Proposition 2.3**: _Let the real functions \(f\) and \(g\) be \(\alpha\)-conformable differentiable in \((t_{*},t^{*})\); then the following properties hold:_

**(A1)**: \({\cal T}^{\alpha}{\cal I}^{\alpha}_{t_{0}}f(t)=f(t)\)_,_ \({\cal I}^{\alpha}_{t_{0}}{\cal T}^{\alpha}f(t)=f(t)-f(t_{0})\)_;_

**(A2)**: \({\cal T}^{\alpha}(af+bg)=a{\cal T}^{\alpha}f+b{\cal T}^{\alpha}g\)_, for all_ \(a,b\in\mathbb{R}\)_;_

**(A3)**: \({\cal T}^{\alpha}(fg)=f{\cal T}^{\alpha}g+g{\cal T}^{\alpha}f\)_,_ \({\cal T}^{\alpha}\big{(}\frac{f}{g}\big{)}=\frac{g{\cal T}^{\alpha}f-f{\cal T}^{\alpha}g}{g^{2}}\)_;_

**(A4)**: \({\cal T}^{\alpha}f\circ g=|g|^{\alpha-1}{\cal T}^{\alpha}f(g){\cal T}^{\alpha}g\)_, if_ \(g:(t_{*},t^{*})\to(t_{*},t^{*})\)_._

In particular, the conformable fractional derivatives of some elementary functions are as follows: (1) \({\cal T}^{\alpha}1=0\), (2) \({\cal T}^{\alpha}e^{ct}=c|t|^{1-\alpha}e^{ct},\ \forall c\in\mathbb{R}\), (3) \({\cal T}^{\alpha}t^{p}=pt^{p-\alpha},\ \forall t\geq 0,p\in\mathbb{R}\), (4) \({\cal T}^{\alpha}\frac{t^{\alpha}}{\alpha}=1,\ \forall t\geq 0\), (5) \({\cal T}^{\alpha}\sin(bt)=b|t|^{1-\alpha}\cos(bt),\ \forall b\in\mathbb{R}\), (6) \({\cal T}^{\alpha}\cos(bt)=-b|t|^{1-\alpha}\sin(bt),\ \forall b\in\mathbb{R}\).

**Proposition 2.4**: _If the function \(f(t)\) is \(\alpha\)-conformable differentiable at \(t=\hat{t}\), then \(f\) is continuous at \(\hat{t}\)._

**Proof.** For any \(\hat{t}\in(t_{*},t^{*})\), since \({\cal T}^{\alpha}f(\hat{t})\) exists, we have \[\lim_{\varepsilon\to 0}[f(\hat{t}+\varepsilon|\hat{t}|^{1-\alpha})-f(\hat{t})] =\lim_{\varepsilon\to 0}\frac{f(\hat{t}+\varepsilon|\hat{t}|^{1-\alpha})-f(\hat{t})}{\varepsilon}\cdot\varepsilon ={\cal T}^{\alpha}f(\hat{t})\lim_{\varepsilon\to 0}\varepsilon=0,\] where \(\lim_{\varepsilon\to 0}\varepsilon|\hat{t}|^{1-\alpha}=0\). This yields the continuity of \(f\) at \(\hat{t}\), and the proposition is proved. \(\quad\Box\)
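As a quick numerical sanity check of Propositions 2.1 and 2.2 and of property **(A1)** (a minimal sketch of ours; the test functions, evaluation points and step size are arbitrary choices, not from the paper):

```python
import numpy as np
from scipy.integrate import quad

ALPHA, T = 0.5, 2.0

def conformable_derivative(f, t, alpha, eps=1e-7):
    # Definition 2.1: T^alpha f(t) = lim_{eps->0} [f(t + eps*|t|^(1-alpha)) - f(t)] / eps
    return (f(t + eps * abs(t) ** (1 - alpha)) - f(t)) / eps

# Proposition 2.1 on f(t) = t^3: T^alpha t^p = p * t^(p-alpha) for t >= 0 (entry (3))
print(conformable_derivative(lambda s: s ** 3, T, ALPHA), 3 * T ** (3 - ALPHA))

# Property (A1): I^alpha_{t0} T^alpha f = f(t) - f(t0), using the integral of Proposition 2.2
t0 = 1.0
f = np.sin
integral, _ = quad(lambda s: abs(s) ** (ALPHA - 1) *
                   conformable_derivative(f, s, ALPHA), t0, T)
print(integral, f(T) - f(t0))
```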
The following special function and fractional integral inequality will both be useful throughout this paper.

**Definition 2.2**: _The following special function is called a Mittag-Leffler-type function:_ \[E_{\alpha}(\lambda,t):=\left\{\begin{array}{ll}\exp\Big{(}\lambda\frac{t^{\alpha}}{\alpha}\Big{)}=\sum\limits_{k=0}^{+\infty}\frac{\lambda^{k}t^{\alpha k}}{\alpha^{k}k!},\quad\lambda\in\mathbb{R},\quad t\in\mathbb{R}_{+},\\ \exp\Big{(}-\lambda\frac{(-t)^{\alpha}}{\alpha}\Big{)}=\sum\limits_{k=0}^{+\infty}\frac{(-\lambda)^{k}(-t)^{\alpha k}}{\alpha^{k}k!},\quad\lambda\in\mathbb{R},\quad t\in\mathbb{R}_{-}.\end{array}\right.\]

**Lemma 2.1**: _Let \(a\in I([t_{0},t^{*}),\mathbb{R}_{+})\) and \(f\in C([t_{0},t^{*}),\mathbb{R}_{+})\). Assume that \(u:[t_{0},t^{*})\to\mathbb{R}_{+}\) satisfies the fractional integral inequality_ \[u(t)\leq a(t)+{\cal I}_{t_{0}}^{\alpha}f(t)u(t),\quad t\in[t_{0},t^{*}). \tag{2.3}\] _Then \(u\) can be estimated by_ \[u(t) \leq a(t)e^{{\cal I}_{t_{0}}^{\alpha}f(t)} \leq a(t)E_{\alpha}\Big{(}\sup_{s\in[t_{0},t]}f(s),|t|\Big{)}E_{\alpha}\Big{(}\sup_{s\in[t_{0},t]}f(s),|t_{0}|\Big{)},\quad t\in[t_{0},t^{*}). \tag{2.4}\]

**Proof.** Dividing both sides of (2.3) by \(a(t)\) and using the monotonicity of \(a\), we derive \[\frac{u(t)}{a(t)}\leq 1+{\cal I}_{t_{0}}^{\alpha}f(t)\frac{u(t)}{a(t)},\quad t\in[t_{0},t^{*}). \tag{2.5}\] Let \(v(t):=1+{\cal I}_{t_{0}}^{\alpha}f(t)\frac{u(t)}{a(t)}\); then \[v^{\prime}(t)\leq|t|^{\alpha-1}f(t)v(t),\quad t\in[t_{0},t^{*}).\] Integrating both sides of the inequality above from \(t_{0}\) to \(t\), we get \[\ln v(t)\leq\mathcal{I}^{\alpha}_{t_{0}}f(t),\quad t\in[t_{0},t^{*}),\] which implies \[v(t)\leq e^{\mathcal{I}^{\alpha}_{t_{0}}f(t)},\quad t\in[t_{0},t^{*}).\] Subsequently, from (2.5) we obtain the first inequality of estimate (2.4). By the continuity of \(f\) in \([t_{0},t^{*})\), \(f\) is bounded in \([t_{0},t]\) for each \(t\in[t_{0},t^{*})\). Simplifying \(a(t)e^{\mathcal{I}^{\alpha}_{t_{0}}f(t)}\), we obtain \[a(t)e^{\mathcal{I}^{\alpha}_{t_{0}}f(t)} \leq a(t)\exp(\sup_{s\in[t_{0},t]}f(s)\mathcal{I}^{\alpha}_{t_{0}}1) \leq a(t)\exp(\sup_{s\in[t_{0},t]}f(s)\frac{|t|^{\alpha}+|t_{0}|^{\alpha}}{\alpha}),\quad t\in[t_{0},t^{*}),\] and it follows from Definition 2.2 that the second inequality of estimate (2.4) is true. Therefore, Lemma 2.1 is proved. \(\qquad\Box\)

### Well-posedness of solutions

Subsequently, we study the existence and uniqueness of solutions, their continuous dependence on initial data, and the continuation of solutions for general CFDEs. Consider the initial value problem (IVP) \[\left\{\begin{array}{l}\mathcal{T}^{\alpha}x(t)=f(t,x(t)),\quad(t,x)\in\mathbb{R}^{n+1},\\ x(t_{0})=x_{0}.\end{array}\right. \tag{2.6}\] Given constants \(a,b>0\) and the domains \[D_{+}=\{(t,x)\in\mathbb{R}^{n+1}:t\in[t_{0}-a,t_{0}+a]\cap\mathbb{R}_{+},\|x-x_{0}\|\leq b\},\quad t_{0}\geq 0,\] \[D_{-}=\{(t,x)\in\mathbb{R}^{n+1}:t\in[t_{0}-a,t_{0}+a]\cap\mathbb{R}_{-},\|x-x_{0}\|\leq b\},\quad t_{0}\leq 0,\] assume that the function \(f\) satisfies:

**(B1)**: \(f\in C(D_{+},\mathbb{R}^{n})\) (resp. \(f\in C(D_{-},\mathbb{R}^{n})\));

**(B2)**: \(f(t,x)\) satisfies a Lipschitz condition with respect to \(x\) in \(D_{+}\) (resp. \(D_{-}\)), i.e., there is a positive constant \(L\) such that \[\big{\|}f(t,x_{1})-f(t,x_{2})\big{\|}\leq L\|x_{1}-x_{2}\big{\|},\quad(t,x_{1}),(t,x_{2})\in D_{+}\ (\text{resp.}\ D_{-}).\]

**Theorem 2.1**: _Suppose that_ **(B1)** _and_ **(B2)** _hold._
_Then IVP (2.6) has a unique continuous solution in \(I_{+}:=[t_{0}-\delta_{+},t_{0}+\delta_{+}]\cap\mathbb{R}_{+}\) for \(t_{0}\geq 0\) (resp. \(I_{-}:=[t_{0}-\delta_{-},t_{0}+\delta_{-}]\cap\mathbb{R}_{-}\) for \(t_{0}\leq 0\)), where_ \[\delta_{+}:=\min\Big{\{}a,\frac{b}{M_{+}}t_{0}^{1-\alpha}\Big{\}},\quad M_{+}:=\max_{(t,x)\in D_{+}}\|f(t,x)\|,\] \[\delta_{-}:=\min\Big{\{}a,\frac{b}{M_{-}}(-t_{0})^{1-\alpha}\Big{\}},\quad M_{-}:=\max_{(t,x)\in D_{-}}\|f(t,x)\|.\]

**Proof.** For convenience, we only discuss the case \(t\in[t_{0},t_{0}+\delta_{+}]\) for \(t_{0}\geq 0\), writing \(\delta:=\delta_{+}\) and \(M:=M_{+}\); all the other cases can be proved analogously.

Step 1. By Proposition 2.2, the equivalent integral equation of IVP (2.6) is \[x(t)=x_{0}+\mathcal{I}_{t_{0}}^{\alpha}f(t,x(t)),\quad\forall t\in[t_{0},t_{0}+\delta]. \tag{2.7}\] Construct the Picard iteration sequence \(\{\varphi_{n}(t)\}\) successively as follows: \[\begin{cases}\varphi_{0}(t):=x_{0},\\ \varphi_{n}(t):=x_{0}+\mathcal{I}_{t_{0}}^{\alpha}f(t,\varphi_{n-1}(t)),\quad n\in\mathbb{N},\ t\in[t_{0},t_{0}+\delta].\end{cases} \tag{2.8}\] We claim that \(\varphi_{n}\in C([t_{0},t_{0}+\delta],\mathbb{R}^{n})\) is well-defined and satisfies \[\|\varphi_{n}(t)-x_{0}\|\leq b. \tag{2.9}\] The assertion is obviously true for \(n=0\). Assume that it is also true for \(n=k\); then \(\varphi_{k+1}\in C([t_{0},t_{0}+\delta],\mathbb{R}^{n})\) is also well-defined, by the inductive hypothesis and formula (2.8). By **(B1)** we compute \[\|\varphi_{k+1}(t)-x_{0}\| \leq\Big{\|}\mathcal{I}_{t_{0}}^{\alpha}f(t,\varphi_{k}(t))\Big{\|} \leq M\mathcal{I}_{t_{0}}^{\alpha}1=\frac{M}{\alpha}(t^{\alpha}-t_{0}^{\alpha}),\quad t\in[t_{0},t_{0}+\delta]. \tag{2.10}\] By the Lagrange Mean Value Theorem ([37]), there exists a constant \(\xi\in(t_{0},t)\) such that \[|t^{\alpha}-t_{0}^{\alpha}|=|\alpha\xi^{\alpha-1}(t-t_{0})|\leq\alpha t_{0}^{\alpha-1}\delta.\] Substituting this inequality into (2.10), we see that (2.9) holds for \(n=k+1\). Hence the assertion is true by induction.

Step 2. We need to show the uniform convergence of the sequence \(\{\varphi_{n}(t)\}\) with respect to \(t\in[t_{0},t_{0}+\delta]\), that is, the uniform convergence of the series \[\varphi_{0}(t)+\sum_{k=1}^{+\infty}[\varphi_{k}(t)-\varphi_{k-1}(t)],\quad\forall t\in[t_{0},t_{0}+\delta], \tag{2.11}\] since \(\varphi_{0}(t)+\sum\limits_{k=1}^{n}[\varphi_{k}(t)-\varphi_{k-1}(t)]=\varphi_{n}(t)\). We claim that \[\|\varphi_{n}(t)-\varphi_{n-1}(t)\|\leq\frac{L^{n-1}b^{n}}{M^{n-1}},\quad\forall t\in[t_{0},t_{0}+\delta]. \tag{2.12}\] By **(B2)** and induction, one can verify that \[\|\varphi_{n+1}(t)-\varphi_{n}(t)\| \leq\Big{\|}\mathcal{I}_{t_{0}}^{\alpha}[f(t,\varphi_{n}(t))-f(t,\varphi_{n-1}(t))]\Big{\|} \leq L\mathcal{I}_{t_{0}}^{\alpha}\|\varphi_{n}(t)-\varphi_{n-1}(t)\|\leq L\frac{L^{n-1}b^{n}}{M^{n-1}}\mathcal{I}_{t_{0}}^{\alpha}1 \leq\frac{L^{n}b^{n}}{M^{n-1}}\frac{b}{M}=\frac{L^{n}b^{n+1}}{M^{n}},\] which yields that assertion (2.12) is true for \(n\in\mathbb{N}\). By the ratio test, the series \(\sum\limits_{k=1}^{+\infty}\frac{L^{k-1}b^{k}}{M^{k-1}}\) is convergent if \(b<M/L\), which directly implies that the series (2.11) and the sequence \(\{\varphi_{n}(t)\}\) are both uniformly convergent in \([t_{0},t_{0}+\delta]\).

Step 3. Let \(\varphi(t):=\lim\limits_{n\to+\infty}\varphi_{n}(t)\); then one easily sees that \(\varphi\in C([t_{0},t_{0}+\delta],\mathbb{R}^{n})\) and that \(\|\varphi(t)-x_{0}\|\leq b\).
By **(B2)** and the continuity of \(f\), we obtain \(f(t,\varphi(t))=\lim\limits_{n\to+\infty}f(t,\varphi_{n}(t))\). Then (2.8) implies \[\lim\limits_{n\to+\infty}\varphi_{n}(t)=x_{0}+\mathcal{I}_{t_{0}}^{\alpha}\lim\limits_{n\to+\infty}f(t,\varphi_{n-1}(t)),\] that is, \(x(t)=\varphi(t)\) fulfills (2.7). Thus, \(\varphi(t)\) is a solution of IVP (2.6).

Step 4. Suppose that another solution \(x(t)=\psi(t)\) also satisfies (2.7), with initial condition \(\psi(t_{0})=y_{0}\); then by **(B2)** we have \[\|\varphi(t)-\psi(t)\| \leq\|\varphi(t_{0})-\psi(t_{0})\|+\Big{\|}\mathcal{I}_{t_{0}}^{\alpha}[f(t,\varphi(t))-f(t,\psi(t))]\Big{\|} \leq\|x_{0}-y_{0}\|+L\mathcal{I}_{t_{0}}^{\alpha}\|\varphi(t)-\psi(t)\|,\quad\forall t\in[t_{0},t_{0}+\delta].\] We employ Lemma 2.1 to obtain \[\|\varphi(t)-\psi(t)\|\leq\|x_{0}-y_{0}\|E_{\alpha}(L,t)E_{\alpha}(L,t_{0}),\quad\forall t\in[t_{0},t_{0}+\delta]. \tag{2.13}\] Note that \(E_{\alpha}(L,t)E_{\alpha}(L,t_{0})\) is bounded for \(t\in[t_{0},t_{0}+\delta]\), i.e., there exists a constant \(M>0\) such that \(E_{\alpha}(L,t)E_{\alpha}(L,t_{0})\leq M\). For any constant \(\varepsilon>0\), if \(\|x_{0}-y_{0}\|<\varepsilon/M\), it follows from (2.13) that \(\|\varphi(t)-\psi(t)\|<\varepsilon\) for \(t\in[t_{0},t_{0}+\delta]\). This proves the continuous dependence of the solutions of IVP (2.6) on the initial data. In particular, if \(x_{0}=y_{0}\) then \(\varphi(t)\equiv\psi(t)\) for all \(t\in[t_{0},t_{0}+\delta]\). Thus, the uniqueness of solutions is proved. In conclusion, the proof of Theorem 2.1 is completed.

**Lemma 2.2**: _All solutions of (2.1) have maximal interval of existence \(\mathbb{R}\)._

**Proof.** Without loss of generality, we only discuss the case \(t\geq t_{0}\); the other case can be proved similarly. Associated with the initial time \(t_{0}\), the equivalent integral equation of (2.1) is \[x(t)=x(t_{0})+\mathcal{I}_{t_{0}}^{\alpha}A(t)x(t),\quad\forall t\in[t_{0},+\infty),\] which implies \[\|x(t)\|\leq\|x(t_{0})\|+\mathcal{I}_{t_{0}}^{\alpha}\|A(t)\|\|x(t)\|,\quad\forall t\in[t_{0},+\infty).\] It follows from Lemma 2.1 that \[\|x(t)\|\leq\|x(t_{0})\|E_{\alpha}\Big{(}\sup_{s\in[t_{0},t]}\|A(s)\|,|t|\Big{)}E_{\alpha}\Big{(}\sup_{s\in[t_{0},t]}\|A(s)\|,|t_{0}|\Big{)},\quad\forall t\in[t_{0},+\infty),\] where we note that the matrix function \(A\in C([t_{0},+\infty),\mathbb{R}^{n\times n})\). Therefore, the solution \(x\) exists on the interval \([t_{0},+\infty)\). \(\Box\)

### Structure of solutions

In this subsection, we show that linear CFDEs possess the same solution structure as linear ODEs. Recall the linear CFDE (2.1).

**Proposition 2.5**: _If \(x_{1},x_{2}:\mathbb{R}\to\mathbb{R}^{n}\) are both solutions of (2.1), then \(a_{1}x_{1}+a_{2}x_{2}\) is also a solution of (2.1) for any \(a_{1},a_{2}\in\mathbb{R}\). Moreover, the set of all solutions of (2.1) is an \(n\)-D linear space._

**Sketch of proof.** One can easily verify the first claim of Proposition 2.5 via property **(A2)**. As in the corresponding proof for ODEs, see e.g. [9, p.59], one can verify that the solutions \(x_{1}(t),...,x_{n}(t)\) associated with initial values \(\varepsilon_{1},\varepsilon_{2},...,\varepsilon_{n}\) form a basis of the \(n\)-D solution space, provided \(\varepsilon_{1},\varepsilon_{2},...,\varepsilon_{n}\) form a basis of \(\mathbb{R}^{n}\). \(\Box\)

**Remark 2.1**: _An \(n\times n\) matrix function \(X(t)\), consisting of \(n\) linearly independent solutions \(x_{1}(t),...,x_{n}(t)\) as its columns, is called a fundamental solution of (2.1)._
_Different fundamental solutions \(X(t)\) and \(Y(t)\) can be linearly represented by each other, i.e., there exists an invertible linear transformation \(C\) such that \(Y(t)=X(t)C\) for all \(t\in\mathbb{R}\)._

The proof of this remark is analogous to the linear ODE case.

**Proposition 2.6**: _The general solution of (2.1) associated with initial data \(x_{0}\) can be written as_ \[x(t)=X(t)X^{-1}(t_{0})x_{0},\quad t\in\mathbb{R}, \tag{2.14}\] _where \(X(t)\) is any fundamental solution of (2.1)._

**Proof.** Note that \(X(t_{0})\) is a nonsingular matrix, since all its columns, as initial values of (2.1), form a basis of \(\mathbb{R}^{n}\). Then (2.14) is well defined, and \(x(t_{0})=x_{0}\). It follows from (2.14) and (2.1) that \[\mathcal{T}^{\alpha}x(t)=\mathcal{T}^{\alpha}X(t)X^{-1}(t_{0})x_{0}=A(t)X(t)X^{-1}(t_{0})x_{0}=A(t)x(t),\] that is, (2.14) is the general solution of (2.1) with initial data \(x_{0}\). Therefore, Proposition 2.6 is proved. \(\Box\)

**Proposition 2.7**: _If \(X(t)\) is a fundamental solution of (2.1), then_ \[\det X(t)=\det X(t_{0})\exp\Big{(}\mathcal{I}^{\alpha}_{t_{0}}\mathrm{tr}A(t)\Big{)},\quad t\in\mathbb{R}.\]

**Proof.** Taking \(X(t):=[x_{i1}(t),...,x_{in}(t)]\) and \(A(t):=[a_{i1}(t),...,a_{in}(t)]\) for \(i=1,...,n\), we calculate that \[\mathcal{T}^{\alpha}\det X(t) =\sum_{i=1}^{n}\det[\mathcal{T}^{\alpha}x_{i1}(t),...,\mathcal{T}^{\alpha}x_{in}(t)] =\sum_{i=1}^{n}\det\Big{[}\sum_{j=1}^{n}a_{ij}(t)x_{j1}(t),\cdots,\sum_{j=1}^{n}a_{ij}(t)x_{jn}(t)\Big{]} =\sum_{i=1}^{n}a_{ii}(t)\det[x_{i1}(t),\cdots,x_{in}(t)]=\mathrm{tr}A(t)\det X(t).\] Then \(\det X(t)\) is a solution of \(\mathcal{T}^{\alpha}y(t)=\mathrm{tr}A(t)y(t)\). From Proposition 2.1, we derive \[y(t)=y(t_{0})\exp\Big{(}\mathcal{I}^{\alpha}_{t_{0}}\mathrm{tr}A(t)\Big{)},\quad t\in\mathbb{R},\] yielding the desired result. \(\Box\)
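The conformable Liouville formula of Proposition 2.7 can be checked numerically as follows (a sketch of ours: for \(t>0\), Proposition 2.1 turns \(\mathcal{T}^{\alpha}X=A(t)X\) into the ODE \(X'(t)=t^{\alpha-1}A(t)X(t)\); the matrix \(A(t)\) and the time window are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

alpha, t0, t1 = 0.5, 1.0, 2.0
A = lambda t: np.array([[np.sin(t), 1.0],
                        [0.5, -t]])

# Integrate the fundamental matrix: X'(t) = t^(alpha-1) A(t) X(t), X(t0) = I
def rhs(t, y):
    X = y.reshape(2, 2)
    return (t ** (alpha - 1) * (A(t) @ X)).ravel()

sol = solve_ivp(rhs, (t0, t1), np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
det_X = np.linalg.det(sol.y[:, -1].reshape(2, 2))

# Proposition 2.7: det X(t) = det X(t0) * exp( I^alpha_{t0} tr A(t) ), with det X(t0) = 1
trace_integral, _ = quad(lambda s: s ** (alpha - 1) * np.trace(A(s)), t0, t1)
print(det_X, np.exp(trace_integral))  # the two values should agree
```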
### Autonomous systems

As a special case of (2.1), we consider here the linear autonomous system \[\mathcal{T}^{\alpha}x=Ax,\quad(t,x)\in\mathbb{R}^{n+1}, \tag{2.15}\] where \(A\) is an \(n\times n\) real constant matrix. To solve (2.15), we first provide the concept of the Mittag-Leffler-type of a matrix.

**Definition 2.3**: _The Mittag-Leffler-type of an \(n\times n\) real constant matrix \(A\) is defined as_ \[E_{\alpha}(A,1):=\sum_{k=0}^{+\infty}\frac{A^{k}}{\alpha^{k}k!}, \tag{2.16}\] _and we denote \(E_{\alpha}(0,1)=I\) by convention._

**Proposition 2.8**: _The power series in (2.16) is convergent for any matrix \(A\)._

**Proof.** Indeed, it is obvious that \(\|A^{k}\|\leq\|A\|^{k}\) for all \(k\in\mathbb{N}\), so \[\sum_{k=0}^{+\infty}\frac{\|A^{k}\|}{\alpha^{k}k!}\leq\sum_{k=0}^{+\infty}\frac{\|A\|^{k}}{\alpha^{k}k!}=E_{\alpha}(\|A\|,1)<+\infty.\] It follows that the series in (2.16) is convergent. \(\Box\)

Recall the Jordan canonical form, as in ODEs: \[P^{-1}AP:=\left[\begin{array}{ccc}J_{1}&\cdots&0\\ 0&\ddots&0\\ 0&\cdots&J_{l}\end{array}\right],\quad J_{i}=\left[\begin{array}{ccc}\lambda_{i}&1&0\\ 0&\ddots&1\\ 0&0&\lambda_{i}\end{array}\right],\quad i=1,2,...,l, \tag{2.17}\] where \(P\) is an \(n\times n\) nonsingular complex matrix and \(\lambda_{i}\) is an eigenvalue of \(A\). Thus, the Mittag-Leffler-type of a matrix can be easily computed as follows.

**Proposition 2.9**: _Let \(A\) be an \(n\times n\) real matrix with Jordan canonical form (2.17); then_ \[E_{\alpha}(A,1)=PE_{\alpha}(P^{-1}AP,1)P^{-1}=P\left[\begin{array}{ccc}E_{\alpha}(J_{1},1)&\cdots&0\\ 0&\ddots&0\\ 0&\cdots&E_{\alpha}(J_{l},1)\end{array}\right]P^{-1}.\]

**Proof.** Since \((P^{-1}AP)^{k}=P^{-1}A^{k}P\) for any \(k\in\mathbb{N}\), one can compute \[E_{\alpha}(P^{-1}AP,1)=P^{-1}\Big{(}\sum_{k=0}^{+\infty}\frac{A^{k}}{\alpha^{k}k!}\Big{)}P=P^{-1}E_{\alpha}(A,1)P.\] Subsequently, one can easily verify that \[E_{\alpha}(P^{-1}AP,1)=\sum_{k=0}^{+\infty}\frac{1}{\alpha^{k}k!}\left[\begin{array}{ccc}J_{1}^{k}&\cdots&0\\ 0&\ddots&0\\ 0&\cdots&J_{l}^{k}\end{array}\right]=\left[\begin{array}{ccc}E_{\alpha}(J_{1},1)&\cdots&0\\ 0&\ddots&0\\ 0&\cdots&E_{\alpha}(J_{l},1)\end{array}\right],\] which implies the required conclusion. \(\Box\)

Further, one can verify the following formula.

**Proposition 2.10**: _If \(A=\lambda I+N\), where the nilpotent matrix \(N\) is_ \[N=\left[\begin{array}{ccc}0&1&0\\ 0&\ddots&1\\ 0&0&0\end{array}\right],\] _then_ \[E_{\alpha}(A,1)=E_{\alpha}(\lambda,1)\Big{(}I+\frac{N}{\alpha}+\frac{N^{2}}{\alpha^{2}2!}+\cdots+\frac{N^{n-1}}{\alpha^{(n-1)}(n-1)!}\Big{)}.\]

Relying on the preliminaries above, one can solve (2.15) as follows.

**Lemma 2.3**: _The matrix \(E_{\alpha}(A,t)\) is a fundamental solution of (2.15) for all \(t\in\mathbb{R}\)._

**Proof.** Note that \[E_{\alpha}(A,t):=\left\{\begin{array}{l}\sum\limits_{k=0}^{+\infty}\frac{A^{k}t^{\alpha k}}{\alpha^{k}k!},\quad t\in\mathbb{R}_{+},\\ \sum\limits_{k=0}^{+\infty}\frac{(-A)^{k}(-t)^{\alpha k}}{\alpha^{k}k!},\quad t\in\mathbb{R}_{-}.\end{array}\right.\] The following computation implies that the power series \(E_{\alpha}(A,t)\) is convergent for all \(t\in\mathbb{R}\): \[\sum\limits_{k=0}^{+\infty}\frac{|t|^{\alpha k}\|A^{k}\|}{\alpha^{k}k!}\leq\sum\limits_{k=0}^{+\infty}\frac{|t|^{\alpha k}\|A\|^{k}}{\alpha^{k}k!}=E_{\alpha}(\|A\|,|t|)<+\infty.\] One can then compute that \[\mathcal{T}^{\alpha}E_{\alpha}(A,t) =\sum\limits_{k=1}^{+\infty}\frac{t^{1-\alpha}t^{\alpha k-1}A^{k}}{\alpha^{k-1}(k-1)!} =A\sum\limits_{k=1}^{+\infty}\frac{t^{\alpha(k-1)}A^{k-1}}{\alpha^{k-1}(k-1)!}=AE_{\alpha}(A,t),\quad t\in\mathbb{R}_{+},\] \[\mathcal{T}^{\alpha}E_{\alpha}(A,t) =\sum\limits_{k=1}^{+\infty}\frac{-(-t)^{1-\alpha}(-t)^{\alpha k-1}(-A)^{k}}{\alpha^{k-1}(k-1)!} =A\sum\limits_{k=1}^{+\infty}\frac{(-t)^{\alpha(k-1)}(-A)^{k-1}}{\alpha^{k-1}(k-1)!}=AE_{\alpha}(A,t),\quad t\in\mathbb{R}_{-},\] which yields the desired result. \(\qquad\Box\)

Both Proposition 2.6 and Lemma 2.3 lead to the following result on the general solution of (2.15).

**Proposition 2.11**: _The general solution of (2.15) associated with initial data \(x_{0}\) can be expressed as_ \[x(t)=\frac{E_{\alpha}(A,t)}{E_{\alpha}(A,t_{0})}x_{0},\quad t\in\mathbb{R}.\]
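Proposition 2.11 can likewise be illustrated numerically (a sketch of ours: for \(t>0\) one has \(E_{\alpha}(A,t)E_{\alpha}(A,t_{0})^{-1}=\exp\big(A(t^{\alpha}-t_{0}^{\alpha})/\alpha\big)\), which we cross-check against a direct integration of the equivalent ODE \(x'(t)=t^{\alpha-1}Ax\); the matrix, times and initial data are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

alpha = 0.5
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
t0, t1 = 1.0, 4.0
x0 = np.array([1.0, -1.0])

# Closed form of Proposition 2.11: x(t) = E_alpha(A,t) E_alpha(A,t0)^{-1} x0
x_closed = expm(A * (t1 ** alpha - t0 ** alpha) / alpha) @ x0

# Cross-check by integrating x'(t) = t^(alpha-1) A x on [t0, t1]
sol = solve_ivp(lambda t, x: t ** (alpha - 1) * (A @ x), (t0, t1), x0,
                rtol=1e-10, atol=1e-12)
print(x_closed, sol.y[:, -1])  # the two results should agree
```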
### Variation of constants formula

In this subsection, we focus on perturbations of the linear CFDE (2.1). Consider the inhomogeneous linear CFDE \[\mathcal{T}^{\alpha}x=A(t)x+f(t),\quad(t,x)\in\mathbb{R}^{n+1}, \tag{2.18}\] where \(f\in C(\mathbb{R},\mathbb{R}^{n})\) and the matrix function \(A\in C(\mathbb{R},\mathbb{R}^{n\times n})\). One can verify the following properties, analogous to those of ODEs.

**Proposition 2.12**: _As for ODEs, if both \(x_{1}^{*}(t)\) and \(x_{2}^{*}(t)\) are solutions of (2.18), then \(x_{1}^{*}(t)-x_{2}^{*}(t)\) is a solution of (2.1). On the other hand, if \(x(t)\) and \(x^{*}(t)\) are solutions of (2.1) and (2.18) respectively, then \(x(t)+x^{*}(t)\) is also a solution of (2.18)._

These properties easily lead to the following structure of the general solutions of (2.18).

**Proposition 2.13**: _If \(x^{*}(t)\) is a solution of (2.18), then the general solution of (2.18) associated with initial data \(x_{0}\) can be represented as_ \[x(t)=X(t)X^{-1}(t_{0})x_{0}+x^{*}(t),\quad t\in\mathbb{R},\] _where \(X(t)\) is any fundamental solution of (2.1)._

Next, we give the variation of constants formula for (2.18).

**Theorem 2.2**: _Let \(X(t)\) be a fundamental matrix of (2.1); then the general solution of (2.18) associated with initial data \(x_{0}\) is given by_ \[x(t)=X(t)X^{-1}(t_{0})x_{0}+X(t)\mathcal{I}^{\alpha}_{t_{0}}X^{-1}(t)f(t),\quad t\in\mathbb{R}. \tag{2.19}\] _In particular, if \(A(t)\) degenerates into an \(n\times n\) real constant matrix \(A\), the variation of constants formula (2.19) takes the form_ \[x(t)=\frac{E_{\alpha}(A,t)}{E_{\alpha}(A,t_{0})}x_{0}+E_{\alpha}(A,t)\mathcal{I}^{\alpha}_{t_{0}}\frac{f(t)}{E_{\alpha}(A,t)},\quad t\in\mathbb{R}.\]

**Proof.** By Proposition 2.6, the general solution of (2.1) is \(x(t)=X(t)X^{-1}(t_{0})c\) for any constant column vector \(c\). By analogy with ODEs, let \[x(t)=X(t)X^{-1}(t_{0})c(t) \tag{2.20}\] be the general solution of (2.18); then \[\mathcal{T}^{\alpha}x(t) =(\mathcal{T}^{\alpha}X(t))X^{-1}(t_{0})c(t)+X(t)X^{-1}(t_{0})\mathcal{T}^{\alpha}c(t) =A(t)X(t)X^{-1}(t_{0})c(t)+f(t). \tag{2.21}\] Since \(\mathcal{T}^{\alpha}X(t)=A(t)X(t)\), it follows from (2.21) that \[X(t)X^{-1}(t_{0})\mathcal{T}^{\alpha}c(t)=f(t),\quad t\in\mathbb{R}. \tag{2.22}\] Integrating both sides of (2.22) from \(t_{0}\) to \(t\), we obtain \[c(t)=c(t_{0})+\mathcal{I}^{\alpha}_{t_{0}}X(t_{0})X^{-1}(t)f(t),\quad t\in\mathbb{R}. \tag{2.23}\] Substituting (2.23) into (2.20), we obtain \[x(t) =X(t)X^{-1}(t_{0})\Big{[}c(t_{0})+\mathcal{I}^{\alpha}_{t_{0}}X(t_{0})X^{-1}(t)f(t)\Big{]} =X(t)X^{-1}(t_{0})c(t_{0})+X(t)\mathcal{I}^{\alpha}_{t_{0}}X^{-1}(t)f(t),\quad t\in\mathbb{R},\] where (2.20) guarantees that \(c(t_{0})=x_{0}\). This yields (2.19). The special case \(X(t)=E_{\alpha}(A,t)\) follows naturally. Thus, Theorem 2.2 is proved. \(\square\)

**Proposition 2.14**: _If an \(n\times n\) real constant matrix \(A\) has only eigenvalues with negative real part, then there exist constants \(K,\lambda>0\) such that_ \[\|E_{\alpha}(A,t)\|\leq KE_{\alpha}(-\lambda,t),\quad t\in\mathbb{R}_{+}.\]

The proof is similar to the ODE case; cf. Proposition 2.27 in [9, p.77].
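The constant-matrix formula in Theorem 2.2 admits a simple numerical check in the scalar case (a sketch of ours; the coefficient, forcing term, times and initial value are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

alpha, a, t0, t1, x0 = 0.5, -1.0, 1.0, 3.0, 2.0
f = lambda t: 1.0                                    # constant forcing term
E = lambda lam, t: np.exp(lam * t ** alpha / alpha)  # E_alpha(lambda, t) for t >= 0

# Theorem 2.2 with X(t) = E_alpha(a,t):
# x(t) = E(a,t)/E(a,t0) * x0 + E(a,t) * I^alpha_{t0}[ f(t)/E(a,t) ]
integral, _ = quad(lambda s: s ** (alpha - 1) * f(s) / E(a, s), t0, t1)
x_closed = E(a, t1) / E(a, t0) * x0 + E(a, t1) * integral

# Cross-check by integrating the equivalent ODE x'(t) = t^(alpha-1) (a x + f(t))
sol = solve_ivp(lambda t, x: [t ** (alpha - 1) * (a * x[0] + f(t))], (t0, t1), [x0],
                rtol=1e-10, atol=1e-12)
print(x_closed, sol.y[0, -1])  # should agree to integration accuracy
```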
## 3 Stability and Mittag-Leffler dichotomy

In this section, we study the concepts of stability and Mittag-Leffler dichotomy for CFDEs. Previously, Souahi, Makhlouf and Hammami ([42]) combined Lyapunov stability with the properties of the conformable fractional derivative given by Abdeljawad ([1]) to propose the concepts of stability, asymptotic stability and fractional exponential stability for the nonlinear system (2.6). For the nonautonomous linear CFDE (2.1), the definitions of uniform stability and uniform asymptotic stability are more essential. Based on the definition of stability for CFDEs described in [42], we introduce the following definition of uniform stability, analogous to the corresponding concept for ODEs in, e.g., [16, p.1].

**Definition 3.1**: _The solution \(\hat{x}(t)\) of system (2.6) is said to be_

* _uniformly stable, if for any_ \(\varepsilon>0\) _there exists_ \(\delta:=\delta(\varepsilon)>0\) _such that for any solution_ \(x(t)\) _of (2.6) and some_ \(s\geq 0\)_, the inequality_ \(\|x(s)-\hat{x}(s)\|<\delta\) _implies_ \(\|x(t)-\hat{x}(t)\|<\varepsilon\) _for all_ \(t\geq s\)_;_
* _attractive, if there exist_ \(\delta_{0}>0\) _and, for any_ \(\varepsilon>0\)_, a_ \(T:=T(\varepsilon)>0\) _such that for some_ \(s\geq 0\)_, the inequality_ \(\|x(s)-\hat{x}(s)\|<\delta_{0}\) _implies_ \(\|x(t)-\hat{x}(t)\|<\varepsilon\) _for all_ \(t\geq s+T\)_;_
* _uniformly asymptotically stable, if it is uniformly stable and attractive._

The following definition concerns _Mittag-Leffler stability_.

**Definition 3.2**: _The solution \(x_{*}=0\) of system (2.6) is Mittag-Leffler stable if_ \[\|x(t)\|\leq K\frac{E_{\alpha}(\lambda,t_{0})}{E_{\alpha}(\lambda,t)}\|x_{0}\|,\quad t\geq t_{0},\] _where \(K,\lambda>0\) are constants._

More generally, we focus on the application of Definition 3.1 to the linear equation (2.1).

**Proposition 3.1**: _Suppose \(X(t)\) is a fundamental matrix of (2.1) and \(c\) is a real constant. Then the solution \(x_{*}=0\) of (2.1) is_

* **(D1)**: _stable for any_ \(t_{0}\in\mathbb{R}\) _if and only if there exists_ \(K:=K(t_{0})>0\) _such that_ \[\|X(t)\|\leq K,\quad t_{0}\leq t<+\infty;\]
* **(D2)**: _uniformly stable for_ \(t_{0}\geq c\) _if and only if there exists_ \(K:=K(c)>0\) _such that_ \[\|X(t)X^{-1}(s)\|\leq K,\quad t_{0}\leq s\leq t<+\infty;\]
* **(D3)**: _asymptotically stable for any_ \(t_{0}\in\mathbb{R}\) _if and only if_ \(\lim_{t\to+\infty}\|X(t)\|=0\)_;_
* **(D4)**: _uniformly asymptotically stable for_ \(t_{0}\geq c\) _if and only if there exist_ \(K:=K(c)>0\) _and_ \(\lambda:=\lambda(c)>0\) _such that_ \[\|X(t)X^{-1}(s)\|\leq K\frac{E_{\alpha}(\lambda,s)}{E_{\alpha}(\lambda,t)},\quad t_{0}\leq s\leq t<+\infty.\] (3.1)

_In particular,_ **(D1)**-**(D4)** _all hold for the autonomous system (2.15) if the fundamental matrix \(X(t)\) is replaced by \(E_{\alpha}(A,t)\)._

For the proof of **(D1)**-**(D4)** we refer to Theorem 2.1 in [22, p.84]. In particular, since Mittag-Leffler stability implies uniform asymptotic stability, one can directly verify conclusion **(D4)**. Definitions of the corresponding stabilities for Caputo FDEs have been proposed in, e.g., [17, p.140]. Next, we propose the concept of _Mittag-Leffler dichotomy_ for the linear CFDE (2.1).

**Definition 3.3**: _Suppose that \(X(t)\) is a fundamental matrix of (2.1). Equation (2.1) possesses a Mittag-Leffler dichotomy if there exist a projection matrix \(P\), i.e. \(P^{2}=P\), and positive constants \(N_{i}\), \(\beta_{i}\)\((i=1,2)\) such that_ \[\begin{split}\|X(t)PX^{-1}(s)\|\leq N_{1}\frac{E_{\alpha}(\beta_{1},s)}{E_{\alpha}(\beta_{1},t)},\quad t\geq s,\\ \|X(t)(I-P)X^{-1}(s)\|\leq N_{2}\frac{E_{\alpha}(\beta_{2},t)}{E_{\alpha}(\beta_{2},s)},\quad s\geq t.\end{split} \tag{3.2}\] _In particular, (2.1) possesses an ordinary dichotomy if (3.2) holds with \(\beta_{1}=\beta_{2}=0\)._
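To make Definition 3.3 concrete, here is a minimal example (our own illustration, not taken from the original references): take \(A=\mathrm{diag}(-a,b)\) with constants \(a,b>0\), so that \(X(t)=E_{\alpha}(A,t)=\mathrm{diag}(E_{\alpha}(-a,t),E_{\alpha}(b,t))\) is a fundamental matrix of (2.15), and let \(P=\mathrm{diag}(1,0)\). For \(t\geq s\geq 0\), \[\|X(t)PX^{-1}(s)\|=\frac{E_{\alpha}(-a,t)}{E_{\alpha}(-a,s)}=\exp\Big{(}-a\frac{t^{\alpha}-s^{\alpha}}{\alpha}\Big{)}=\frac{E_{\alpha}(a,s)}{E_{\alpha}(a,t)},\] and similarly \(\|X(t)(I-P)X^{-1}(s)\|=E_{\alpha}(b,t)/E_{\alpha}(b,s)\) for \(s\geq t\geq 0\), so (3.2) holds with \(N_{1}=N_{2}=1\), \(\beta_{1}=a\) and \(\beta_{2}=b\).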
Finally, we consider perturbations of the nonautonomous linear CFDE (2.1). Consider the perturbed equation \[\mathcal{T}^{\alpha}x=A(t)x+f(t,x),\quad(t,x)\in\mathbb{R}^{n+1}, \tag{3.3}\] where \(f\in C(\mathbb{R}^{n+1},\mathbb{R}^{n})\) and the matrix function \(A\in C(\mathbb{R},\mathbb{R}^{n\times n})\). The following lemma gives the projection form of the equivalent integral equation and establishes the existence of bounded solutions for equation (3.3).

**Lemma 3.1**: _Suppose that the function \(f\in C(\mathbb{R}^{n+1},\mathbb{R}^{n})\), \(P\) is a projection matrix given in Definition 3.3, and equation (2.1) possesses a Mittag-Leffler dichotomy. If \(x\in C_{b}([t_{0},+\infty),\mathbb{R}^{n})\) is a solution of (3.3) with \(x(t_{0})=x_{0}\) for a constant \(t_{0}\in\mathbb{R}_{+}\), then_ \[\begin{split} x(t)=& X(t)PX^{-1}(t_{0})x_{0}+X(t)P\mathcal{I}_{t_{0}}^{\alpha}X^{-1}(t)f(t,x(t))\\ &+X(t)(I-P)\mathcal{I}_{+\infty}^{\alpha}X^{-1}(t)f(t,x(t)),\quad t\geq t_{0}.\end{split} \tag{3.4}\] _If \(x\in C_{b}((-\infty,t_{0}],\mathbb{R}^{n})\) is a solution of (3.3) with \(x(t_{0})=x_{0}\) for a constant \(t_{0}\in\mathbb{R}_{-}\), then_ \[\begin{split} x(t)=& X(t)(I-P)X^{-1}(t_{0})x_{0}+X(t)(I-P)\mathcal{I}_{t_{0}}^{\alpha}X^{-1}(t)f(t,x(t))\\ &+X(t)P\mathcal{I}_{-\infty}^{\alpha}X^{-1}(t)f(t,x(t)),\quad t\leq t_{0}.\end{split} \tag{3.5}\] _Conversely, any bounded solution of (3.4) or (3.5) is a solution of (3.3)._

**Proof.** For convenience, we only prove (3.4); formula (3.5) can be proved in an analogous manner. Assume \(x(t)\) is a bounded solution of (3.3) and set \(M:=\sup\limits_{t\in[t_{0},+\infty)}\|x(t)\|\). The continuity of \(f\) implies that \(N:=\sup\limits_{t\in[t_{0},+\infty)}\|f(t,x(t))\|\) is a finite positive constant. By the variation of constants formula (2.19), for any \(\tau\geq t_{0}\), the solution \(x(t)\) satisfies \[\begin{split} X(t)(I-P)X^{-1}(t)x(t)=& X(t)(I-P)X^{-1}(\tau)x(\tau)\\ &+X(t)(I-P)\mathcal{I}_{\tau}^{\alpha}X^{-1}(t)f(t,x(t)),\quad t,\tau\geq t_{0},\end{split} \tag{3.6}\] where the following estimate is obtained from (3.2): \[\|X(t)(I-P)X^{-1}(\tau)x(\tau)\| \leq N_{2}\frac{E_{\alpha}(\beta_{2},t)}{E_{\alpha}(\beta_{2},\tau)}\sup\limits_{t\in[t_{0},+\infty)}\|x(t)\| \leq MN_{2}\frac{E_{\alpha}(\beta_{2},t)}{E_{\alpha}(\beta_{2},\tau)},\quad t,\tau\geq t_{0}.\] It follows that \[\lim\limits_{\tau\to+\infty}\|X(t)(I-P)X^{-1}(\tau)x(\tau)\|=0,\quad t\geq t_{0}.\] On the other hand, in the integral equation (3.6), for \(t\geq t_{0}\), \[\|X(t)(I-P)\mathcal{I}_{\tau}^{\alpha}X^{-1}(t)f(t,x(t))\| \leq NN_{2}E_{\alpha}(\beta_{2},t)|\mathcal{I}_{\tau}^{\alpha}E_{\alpha}(-\beta_{2},t)| \leq NN_{2}E_{\alpha}(\beta_{2},t)\int_{t}^{\tau}s^{\alpha-1}\exp\Big{(}-\beta_{2}\frac{s^{\alpha}}{\alpha}\Big{)}ds \leq\frac{NN_{2}E_{\alpha}(\beta_{2},t)}{\beta_{2}}(E_{\alpha}(-\beta_{2},t)-E_{\alpha}(-\beta_{2},\tau)) \leq\frac{NN_{2}}{\beta_{2}},\] which implies that \[\|X(t)(I-P)\mathcal{I}_{+\infty}^{\alpha}X^{-1}(t)f(t,x(t))\|<+\infty,\quad t\geq t_{0}.\] It follows from (3.6) that \[X(t)(I-P)X^{-1}(t)x(t)=X(t)(I-P)\mathcal{I}_{+\infty}^{\alpha}X^{-1}(t)f(t,x(t)),\quad t\geq t_{0}. \tag{3.7}\] From the variation of constants formula (2.19), it also follows that for \(t\geq t_{0}\), \[X(t)PX^{-1}(t)x(t)=X(t)PX^{-1}(t_{0})x(t_{0})+X(t)P\mathcal{I}_{t_{0}}^{\alpha}X^{-1}(t)f(t,x(t)). \tag{3.8}\] Since \(x(t)=X(t)PX^{-1}(t)x(t)+X(t)(I-P)X^{-1}(t)x(t)\), substituting (3.7) and (3.8) into this identity, we obtain (3.4). The converse statement can be verified by a direct calculation, which completes the proof. \(\quad\Box\)

The following lemma is the fractional-order version of the projected integral inequality.
**Lemma 3.2**: _Suppose that \(N_{i}\), \(\beta_{i}\) and \(\varepsilon\) are all positive constants for \(i=1,2\), and that the bounded continuous nonnegative function \(u(t)\) satisfies_ \[u(t)\leq N_{1}\frac{E_{\alpha}(\beta_{1},t_{0})}{E_{\alpha}(\beta_{1},t)}+\frac{\varepsilon N_{1}}{E_{\alpha}(\beta_{1},t)}\mathcal{I}_{t_{0}}^{\alpha}E_{\alpha}(\beta_{1},t)u(t)-\varepsilon N_{2}E_{\alpha}(\beta_{2},t)\mathcal{I}_{+\infty}^{\alpha}\frac{u(t)}{E_{\alpha}(\beta_{2},t)},\quad t\geq t_{0}\geq 0, \tag{3.9}\] _or_ \[u(t)\leq N_{2}\frac{E_{\alpha}(\beta_{2},t)}{E_{\alpha}(\beta_{2},t_{0})}-\varepsilon N_{2}E_{\alpha}(\beta_{2},t)\mathcal{I}_{t_{0}}^{\alpha}\frac{u(t)}{E_{\alpha}(\beta_{2},t)}+\frac{\varepsilon N_{1}}{E_{\alpha}(\beta_{1},t)}\mathcal{I}_{-\infty}^{\alpha}E_{\alpha}(\beta_{1},t)u(t),\quad t\leq t_{0}\leq 0. \tag{3.10}\] _Set_ \[\theta:=\varepsilon\Big{(}\frac{N_{1}}{\beta_{1}}+\frac{N_{2}}{\beta_{2}}\Big{)},\quad K_{i}:=\frac{N_{i}}{1-\theta},\quad\lambda_{i}:=\beta_{i}-\frac{\varepsilon N_{i}}{1-\theta},\quad i=1,2.\] _If \(\theta<1\), then_ \[u(t)\leq\left\{\begin{array}{ll}K_{1}\frac{E_{\alpha}(\lambda_{1},t_{0})}{E_{\alpha}(\lambda_{1},t)},&t\geq t_{0},\\ K_{2}\frac{E_{\alpha}(\lambda_{2},t)}{E_{\alpha}(\lambda_{2},t_{0})},&t\leq t_{0}.\end{array}\right.\]

**Proof.** Without loss of generality, we only consider inequality (3.9), because inequality (3.10) can be transformed into (3.9) through the changes of variables \(t\to-t\) and \(t_{0}\to-t_{0}\). First, we verify that \(\lim\limits_{t\to+\infty}u(t)=0\). Indeed, since \(u(t)\) is bounded, let \(\sigma:=\limsup\limits_{t\to+\infty}u(t)\). Suppose \(\sigma>0\); then for any constant \(\vartheta\) satisfying \(\theta<\vartheta<1\), there exists \(t_{1}\geq t_{0}\) such that \(u(t)\leq\vartheta^{-1}\sigma\) for any \(t\geq t_{1}\). For \(t\geq t_{1}\) we compute \[\begin{split} u(t)\leq&\,N_{1}\frac{E_{\alpha}(\beta_{1},t_{0})}{E_{\alpha}(\beta_{1},t)}+\frac{\varepsilon N_{1}}{E_{\alpha}(\beta_{1},t)}\mathcal{I}_{t_{0}}^{\alpha}E_{\alpha}(\beta_{1},t_{1})u(t_{1})\\ &+\vartheta^{-1}\sigma\Big{[}\frac{\varepsilon N_{1}}{E_{\alpha}(\beta_{1},t)}\mathcal{I}_{t_{1}}^{\alpha}E_{\alpha}(\beta_{1},t)-\varepsilon N_{2}E_{\alpha}(\beta_{2},t)\mathcal{I}_{+\infty}^{\alpha}\frac{1}{E_{\alpha}(\beta_{2},t)}\Big{]}\\ \leq&\,N_{1}\frac{E_{\alpha}(\beta_{1},t_{0})}{E_{\alpha}(\beta_{1},t)}+\frac{\varepsilon N_{1}}{E_{\alpha}(\beta_{1},t)}\mathcal{I}_{t_{0}}^{\alpha}E_{\alpha}(\beta_{1},t_{1})u(t_{1})+\vartheta^{-1}\sigma\varepsilon\Big{(}\frac{N_{1}}{\beta_{1}}+\frac{N_{2}}{\beta_{2}}\Big{)}.\end{split}\] Since \(\theta<\vartheta<1\), the upper limit of the right-hand side of the inequality above is less than \(\sigma\) as \(t\to+\infty\). It then follows that \[\sigma\leq\vartheta^{-1}\sigma\varepsilon\Big{(}\frac{N_{1}}{\beta_{1}}+\frac{N_{2}}{\beta_{2}}\Big{)}<\sigma,\] which is a contradiction. Hence, \(\sigma=0\) and \(\lim\limits_{t\to+\infty}u(t)=0\). Set \(v(t):=\sup_{\tau\geq t}u(\tau).\) Obviously, the function \(v(t)\) is nonincreasing, and for any \(t\geq t_{0}\) there exists \(t_{2}\geq t\) such that \(v(t)=u(t_{2})=v(s)\) for \(t\leq s\leq t_{2}\).
Replacing \(t\) in (3.9) with \(t_{2}\), for \(t\geq t_{0}\) we calculate that \[\begin{split} v(t)=u(t_{2}) \leq& N_{1}\frac{E_{\alpha}(\beta_{1},t_{0})}{E_{\alpha}(\beta_{1},t_{2})}+\frac{\varepsilon N_{1}}{E_{\alpha}(\beta_{1},t_{2})}\mathcal{I}_{t_{0}}^{\alpha}E_{\alpha}(\beta_{1},t_{2})u(t_{2})-\varepsilon N_{2}E_{\alpha}(\beta_{2},t_{2})\mathcal{I}_{+\infty}^{\alpha}\frac{u(t_{2})}{E_{\alpha}(\beta_{2},t_{2})}\\ \leq& N_{1}\frac{E_{\alpha}(\beta_{1},t_{0})}{E_{\alpha}(\beta_{1},t)}+\frac{\varepsilon N_{1}}{E_{\alpha}(\beta_{1},t)}\mathcal{I}_{t_{0}}^{\alpha}E_{\alpha}(\beta_{1},t)u(t)\\ &-v(t)\Big{[}\frac{\varepsilon N_{1}}{E_{\alpha}(\beta_{1},t_{2})}\mathcal{I}_{t_{2}}^{\alpha}E_{\alpha}(\beta_{1},t)+\varepsilon N_{2}E_{\alpha}(\beta_{2},t_{2})\mathcal{I}_{+\infty}^{\alpha}\frac{1}{E_{\alpha}(\beta_{2},t_{2})}\Big{]}\\ \leq& N_{1}\frac{E_{\alpha}(\beta_{1},t_{0})}{E_{\alpha}(\beta_{1},t)}+\frac{\varepsilon N_{1}}{E_{\alpha}(\beta_{1},t)}\mathcal{I}_{t_{0}}^{\alpha}E_{\alpha}(\beta_{1},t)u(t)+v(t)\varepsilon\Big{(}\frac{N_{1}}{\beta_{1}}+\frac{N_{2}}{\beta_{2}}\Big{)}.\end{split}\] Put \(w(t):=\frac{E_{\alpha}(\beta_{1},t)}{E_{\alpha}(\beta_{1},t_{0})}v(t)\); then it follows from the definition of \(v\) that \[w(t) \leq N_{1}+\frac{\varepsilon N_{1}}{E_{\alpha}(\beta_{1},t_{0})}\mathcal{I}_{t_{0}}^{\alpha}E_{\alpha}(\beta_{1},t)v(t)+\theta w(t) = N_{1}+\varepsilon N_{1}\mathcal{I}_{t_{0}}^{\alpha}w(t)+\theta w(t),\quad t\geq t_{0},\] that is, \[w(t)\leq\frac{N_{1}}{1-\theta}+\frac{\varepsilon N_{1}}{1-\theta}\mathcal{I}_{t_{0}}^{\alpha}w(t),\quad t\geq t_{0}.\] Applying Lemma 2.1 to the inequality above, we obtain \[w(t)\leq\frac{N_{1}}{1-\theta}\exp(\mathcal{I}_{t_{0}}^{\alpha}\frac{\varepsilon N_{1}}{1-\theta})=\frac{N_{1}}{1-\theta}\frac{E_{\alpha}(\frac{\varepsilon N_{1}}{1-\theta},t)}{E_{\alpha}(\frac{\varepsilon N_{1}}{1-\theta},t_{0})},\quad t\geq t_{0}.\] Combining this with the definitions of \(v\) and \(w\), we obtain \[u(t)\leq\frac{N_{1}}{1-\theta}\frac{E_{\alpha}(\frac{\varepsilon N_{1}}{1-\theta},t)}{E_{\alpha}(\frac{\varepsilon N_{1}}{1-\theta},t_{0})}\frac{E_{\alpha}(\beta_{1},t_{0})}{E_{\alpha}(\beta_{1},t)}=K_{1}\frac{E_{\alpha}(\lambda_{1},t_{0})}{E_{\alpha}(\lambda_{1},t)},\quad t\geq t_{0},\] where \(K_{1}=\frac{N_{1}}{1-\theta}\) and \(\lambda_{1}=\beta_{1}-\frac{\varepsilon N_{1}}{1-\theta}\). Therefore, Lemma 3.2 is proved. \(\Box\)

As a corollary of Lemma 3.2, we introduce a result that is more convenient for dichotomy estimates.
**Corollary 3.1**: _Suppose that \(N_{i}\), \(\beta_{i}\) and \(\varepsilon\) are all positive constants for \(i=1,2\), and that the bounded continuous nonnegative function \(u(t)\) satisfies_ \[u(t)\leq N_{2}\frac{E_{\alpha}(\beta_{2},t)}{E_{\alpha}(\beta_{2},s)}+\frac{\varepsilon N_{1}}{E_{\alpha}(\beta_{1},t)}\mathcal{I}_{t_{0}}^{\alpha}E_{\alpha}(\beta_{1},t)u(t)-\varepsilon N_{2}E_{\alpha}(\beta_{2},t)\mathcal{I}_{s}^{\alpha}\frac{u(t)}{E_{\alpha}(\beta_{2},t)},\quad s\geq t\geq t_{0}\geq 0, \tag{3.11}\] _or_ \[u(t)\leq N_{1}\frac{E_{\alpha}(\beta_{1},s)}{E_{\alpha}(\beta_{1},t)}-\varepsilon N_{2}E_{\alpha}(\beta_{2},t)\mathcal{I}_{t_{0}}^{\alpha}\frac{u(t)}{E_{\alpha}(\beta_{2},t)}+\frac{\varepsilon N_{1}}{E_{\alpha}(\beta_{1},t)}\mathcal{I}_{s}^{\alpha}E_{\alpha}(\beta_{1},t)u(t),\quad s\leq t\leq t_{0}\leq 0. \tag{3.12}\] _If \(\theta<1\), then_ \[u(t)\leq\left\{\begin{array}{ll}K_{2}\frac{E_{\alpha}(\lambda_{2},t)}{E_{\alpha}(\lambda_{2},s)},\quad s\geq t\geq t_{0},\\ K_{1}\frac{E_{\alpha}(\lambda_{1},s)}{E_{\alpha}(\lambda_{1},t)},\quad s\leq t\leq t_{0},\end{array}\right.\] _where \(\theta\), \(K_{i}\) and \(\lambda_{i}\) are as defined in Lemma 3.2._

**Proof.** Without loss of generality, we only consider inequality (3.11), because inequality (3.12) can be transformed into (3.11) through the changes of variables \(t\to-t\), \(s\to-s\) and \(t_{0}\to-t_{0}\). Let \(t_{1}:=(s^{\alpha}-t^{\alpha}+t_{0}^{\alpha})^{1/\alpha}\); then \(s\geq t_{1}\geq t_{0}\), owing to the fact that \(s\geq t\geq t_{0}\). From (3.11) it follows that for \(s\geq t_{1}\geq t_{0}\geq 0\), \[\begin{split} u((s^{\alpha}-t_{1}^{\alpha}+t_{0}^{\alpha})^{1/\alpha})\leq& N_{2}\frac{E_{\alpha}(\beta_{2},t_{0})}{E_{\alpha}(\beta_{2},t_{1})}\\ +&\varepsilon N_{1}\int_{t_{0}}^{(s^{\alpha}-t_{1}^{\alpha}+t_{0}^{\alpha})^{1/\alpha}}\tau^{\alpha-1}\frac{E_{\alpha}(\beta_{1},\tau)E_{\alpha}(\beta_{1},t_{1})}{E_{\alpha}(\beta_{1},s)E_{\alpha}(\beta_{1},t_{0})}u(\tau)d\tau\\ +&\varepsilon N_{2}\int_{(s^{\alpha}-t_{1}^{\alpha}+t_{0}^{\alpha})^{1/\alpha}}^{s}\tau^{\alpha-1}\frac{E_{\alpha}(\beta_{2},s)E_{\alpha}(\beta_{2},t_{0})}{E_{\alpha}(\beta_{2},\tau)E_{\alpha}(\beta_{2},t_{1})}u(\tau)d\tau.\end{split}\] Put \(v(t_{1}):=u((s^{\alpha}-t_{1}^{\alpha}+t_{0}^{\alpha})^{1/\alpha})\); then \(u(\tau)=v((s^{\alpha}-\tau^{\alpha}+t_{0}^{\alpha})^{1/\alpha})\).
The inequality above yields that for \(s\geq t_{1}\geq t_{0}\geq 0\), \[\begin{split} v(t_{1})\leq& N_{2}\frac{E_{\alpha}(\beta_{2},t_{0})}{E_{\alpha}(\beta_{2},t_{1})}\\ &+\varepsilon N_{1}\int_{t_{0}}^{(s^{\alpha}-t_{1}^{\alpha}+t_{0}^{\alpha})^{1/\alpha}}\tau^{\alpha-1}\frac{E_{\alpha}(\beta_{1},\tau)E_{\alpha}(\beta_{1},t_{1})}{E_{\alpha}(\beta_{1},s)E_{\alpha}(\beta_{1},t_{0})}v((s^{\alpha}-\tau^{\alpha}+t_{0}^{\alpha})^{1/\alpha})d\tau\\ &+\varepsilon N_{2}\int_{(s^{\alpha}-t_{1}^{\alpha}+t_{0}^{\alpha})^{1/\alpha}}^{s}\tau^{\alpha-1}\frac{E_{\alpha}(\beta_{2},s)E_{\alpha}(\beta_{2},t_{0})}{E_{\alpha}(\beta_{2},\tau)E_{\alpha}(\beta_{2},t_{1})}v((s^{\alpha}-\tau^{\alpha}+t_{0}^{\alpha})^{1/\alpha})d\tau.\end{split}\] Let \(\iota:=(s^{\alpha}-\tau^{\alpha}+t_{0}^{\alpha})^{1/\alpha}\); then \[v(t_{1})\leq N_{2}\frac{E_{\alpha}(\beta_{2},t_{0})}{E_{\alpha}(\beta_{2},t_{1})}+\varepsilon N_{2}\int_{t_{0}}^{t_{1}}\iota^{\alpha-1}\frac{E_{\alpha}(\beta_{2},\iota)}{E_{\alpha}(\beta_{2},t_{1})}v(\iota)d\iota+\varepsilon N_{1}\int_{t_{1}}^{s}\iota^{\alpha-1}\frac{E_{\alpha}(\beta_{1},t_{1})}{E_{\alpha}(\beta_{1},\iota)}v(\iota)d\iota,\quad s\geq t_{1}\geq t_{0}\geq 0.\] This inequality can be further estimated as \[v(t_{1})\leq N_{2}\frac{E_{\alpha}(\beta_{2},t_{0})}{E_{\alpha}(\beta_{2},t_{1})}+\frac{\varepsilon N_{2}}{E_{\alpha}(\beta_{2},t_{1})}\mathcal{I}_{t_{0}}^{\alpha}E_{\alpha}(\beta_{2},t_{1})v(t_{1})-\varepsilon N_{1}E_{\alpha}(\beta_{1},t_{1})\mathcal{I}_{+\infty}^{\alpha}\frac{v(t_{1})}{E_{\alpha}(\beta_{1},t_{1})},\quad t_{1}\geq t_{0}\geq 0.\] Since \(u\) and \(v\) are simultaneously bounded, we can apply Lemma 3.2 to obtain \[v(t_{1})\leq K_{2}\frac{E_{\alpha}(\lambda_{2},t_{0})}{E_{\alpha}(\lambda_{2},t_{1})},\quad t_{1}\geq t_{0}.\] It follows from the definition of \(v(t_{1})\) that \[u(t)\leq K_{2}\frac{E_{\alpha}(\lambda_{2},t)}{E_{\alpha}(\lambda_{2},s)},\quad s\geq t\geq t_{0},\] where \(K_{2}=\frac{N_{2}}{1-\theta}\) and \(\lambda_{2}=\beta_{2}-\frac{\varepsilon N_{2}}{1-\theta}\) are given in Lemma 3.2. Hence, Corollary 3.1 is proved. \(\Box\)

At the end of this section, we present the invariant manifolds theorem for the CFDE (3.3). Before doing so, let us introduce the following notion.

**Definition 3.4**: _Let \(\Omega\) be any subset of \(\mathbb{R}^{n}\) containing zero, and let \(P\) be a projection matrix such that \(\mathbb{R}^{n}=P\mathbb{R}^{n}\oplus(I-P)\mathbb{R}^{n}\) and \(P^{2}=P\). We say that \(\Omega\) is tangent to \((I-P)\mathbb{R}^{n}\) (resp. \(P\mathbb{R}^{n}\)) at zero if \(\|Px\|/\|(I-P)x\|\to 0\) (resp. \(\|(I-P)x\|/\|Px\|\to 0\)) as \(x\to 0\) in \(\Omega\)._

From now on, let \(k:={\cal R}(I-P)\), where \({\cal R}(P)\) denotes the rank of the matrix \(P\), and assume that

**(E1)**: \(\zeta\in C_{I}(\mathbb{R}_{+},\mathbb{R}_{+})\) satisfies \(\zeta(0)=0\);

**(E2)**: \(\Lambda(\zeta)\) consists of functions \(f\in C(\mathbb{R}^{n+1},\mathbb{R}^{n})\) such that \[\begin{array}{rcl}f(t,0)&=&0,\\ \|f(t,x)-f(t,y)\|&\leq&\zeta(\sigma)\|x-y\|,\quad\|x\|,\|y\|\leq\sigma;\end{array}\]

**(E3)**: The projection matrix \(P\) fulfils \(X(t)P=PX(t)\) for all \(t\in\mathbb{R}\).

**Theorem 3.1**: _Suppose that_ **(E1)-(E3)** _hold, and denote the unstable and stable manifolds of the hyperbolic equilibrium \(x=0\) of equation (3.3) by \(U_{k}:=U_{k}(f)\) and \(S_{n-k}:=S_{n-k}(f)\) respectively, for any \(f\in\Lambda(\zeta)\)._
_Then \(U_{k}\) and \(S_{n-k}\) are tangent to \((I-P)\mathbb{R}^{n}\) and \(P\mathbb{R}^{n}\) at \(x=0\) respectively, where \((I-P)\mathbb{R}^{n}\) and \(P\mathbb{R}^{n}\) are the unstable and stable invariant subspaces of the hyperbolic equilibrium \(x=0\) of (2.1), respectively. Moreover, there exist positive constants \(M\), \(\gamma_{1}\) and \(\gamma_{2}\) such that_ \[\begin{array}{rcl}\|x(t)\|\leq& M\frac{E_{\alpha}(\gamma_{1},t_{0})}{E_{\alpha}(\gamma_{1},t)}\|x(t_{0})\|,\quad t\geq t_{0}\geq 0,\quad x(t_{0})\in S_{n-k},\\ \|x(t)\|\leq& M\frac{E_{\alpha}(\gamma_{2},t)}{E_{\alpha}(\gamma_{2},t_{0})}\|x(t_{0})\|,\quad t\leq t_{0}\leq 0,\quad x(t_{0})\in U_{k}.\end{array} \tag{3.13}\]

**Remark 3.1**: _The hyperbolic equilibrium \(x=0\) of the ODE \(\dot{x}=A(t)x\) is also a hyperbolic equilibrium of the CFDE (2.1). In fact, by Definition 2.1, one can verify_ \[{\cal T}^{\alpha}x(0)=\lim_{t\to 0}{\cal T}^{\alpha}x(t)=\lim_{t\to 0}A(t)x(t)=\lim_{t\to 0}\dot{x}(t)=\dot{x}(0),\] _which implies the assertion._

**Proof of Theorem 3.1.** Assume that \(\lambda\), \(N_{i}\), \(\beta_{i}\) (\(i=1,2\)) are given in (3.1) and (3.2) respectively, and that the function \(\zeta(\sigma)\) (\(\sigma\geq 0\)) is given in **(E1)**. Take \(\delta\) satisfying \[(\frac{N_{1}}{\beta_{1}}+\frac{N_{2}}{\beta_{2}})\zeta(\delta)<\frac{1}{2},\quad N_{1}<(\beta_{2}+\lambda-4N_{1}N_{2}\zeta(\delta))(\frac{N_{1}}{\beta_{1}}+\frac{N_{2}}{\beta_{2}}). \tag{3.14}\] Choose \(x_{0}\in\mathbb{R}^{n}\) with \(\|x_{0}\|\leq\delta/2N_{1}\), and define \({\cal L}(Px_{0},\delta)\) to be the set of functions \(x\in C([t_{0},+\infty),\mathbb{R}^{n})\) with \(\|x\|_{\infty}:=\sup_{t_{0}\leq t<+\infty}\|x(t)\|\leq\delta\), where \(t_{0}\geq 0\). Then \({\cal L}(Px_{0},\delta)\) is a closed bounded subset of the Banach space of all bounded continuous functions mapping \([t_{0},+\infty)\) to \(\mathbb{R}^{n}\), equipped with the uniform topology. For any \(x\in\mathcal{L}(Px_{0},\delta)\), define \[\begin{split}(\mathcal{J}x)(t):=& X(t)PX^{-1}(t_{0})x_{0}+X(t)P\mathcal{I}_{t_{0}}^{\alpha}X^{-1}(t)f(t,x(t))\\ &+X(t)(I-P)\mathcal{I}_{+\infty}^{\alpha}X^{-1}(t)f(t,x(t)),\quad t\geq t_{0}.\end{split} \tag{3.15}\] It is easy to see that \(\mathcal{J}x\) is well defined and continuous for \(t\geq t_{0}\). From (3.2), (3.14) and **(E2)**, we calculate that \[\begin{split}\|(\mathcal{J}x)(t)\|\leq& N_{1}\frac{E_{\alpha}(\beta_{1},t_{0})}{E_{\alpha}(\beta_{1},t)}\|x_{0}\|+\frac{N_{1}}{E_{\alpha}(\beta_{1},t)}\mathcal{I}_{t_{0}}^{\alpha}E_{\alpha}(\beta_{1},t)\|f(t,x(t))\|\\ &-N_{2}E_{\alpha}(\beta_{2},t)\mathcal{I}_{+\infty}^{\alpha}\frac{\|f(t,x(t))\|}{E_{\alpha}(\beta_{2},t)}\\ \leq& N_{1}\frac{E_{\alpha}(\beta_{1},t_{0})}{E_{\alpha}(\beta_{1},t)}\|x_{0}\|+\zeta(\delta)\Big{(}\frac{N_{1}}{\beta_{1}}+\frac{N_{2}}{\beta_{2}}\Big{)}\|x\|_{\infty}\\ \leq& N_{1}\|x_{0}\|+\zeta(\delta)\Big{(}\frac{N_{1}}{\beta_{1}}+\frac{N_{2}}{\beta_{2}}\Big{)}\delta<\frac{\delta}{2}+\frac{\delta}{2}=\delta;\end{split}\] thus \(\|\mathcal{J}x\|_{\infty}<\delta\) and \(\mathcal{J}:\mathcal{L}(Px_{0},\delta)\to\mathcal{L}(Px_{0},\delta)\). Analogously to the computation above, we obtain \[\|(\mathcal{J}x)(t)-(\mathcal{J}y)(t)\|\leq\zeta(\delta)\Big{(}\frac{N_{1}}{\beta_{1}}+\frac{N_{2}}{\beta_{2}}\Big{)}\|x-y\|_{\infty}\leq\frac{1}{2}\|x-y\|_{\infty},\quad t\geq t_{0},\] which implies that \(\mathcal{J}\) is a contraction mapping on \(\mathcal{L}(Px_{0},\delta)\).
Hence there is a unique fixed point \(x_{*}(t,Px_{0})\in\mathcal{L}(Px_{0},\delta)\) satisfying (3.4). Note that the function \(x_{*}(t,Px_{0})\) is continuous with respect to \(Px_{0}\) and \(x_{*}(t,0)=0\). Let \(x_{*}(t):=x_{*}(t,Px_{0})\) and \(\hat{x}_{*}(t):=x_{*}(t,P\hat{x}_{0})\); it follows from (3.4), (3.1) and **(E3)** that \[\begin{split}\|x_{*}(t)-\hat{x}_{*}(t)\|\leq& K\frac{E_{\alpha}(\lambda,t_{0})}{E_{\alpha}(\lambda,t)}\|Px_{0}-P\hat{x}_{0}\|\\ +&\frac{N_{1}\zeta(\delta)}{E_{\alpha}(\beta_{1},t)}\mathcal{I}_{t_{0}}^{\alpha}E_{\alpha}(\beta_{1},t)\|x_{*}(t)-\hat{x}_{*}(t)\|\\ -& N_{2}\zeta(\delta)E_{\alpha}(\beta_{2},t)\mathcal{I}_{+\infty}^{\alpha}\frac{\|x_{*}(t)-\hat{x}_{*}(t)\|}{E_{\alpha}(\beta_{2},t)},\quad t\geq t_{0}.\end{split}\] By Lemma 3.2, we obtain \[\|x_{*}(t,Px_{0})-x_{*}(t,P\hat{x}_{0})\|\leq 2K\frac{E_{\alpha}(\gamma_{1},t_{0})}{E_{\alpha}(\gamma_{1},t)}\|Px_{0}-P\hat{x}_{0}\|,\quad t\geq t_{0}, \tag{3.16}\] where \(\gamma_{1}=\lambda-\frac{N_{1}\beta_{1}\beta_{2}}{N_{1}\beta_{2}+N_{2}\beta_{1}}\). Combining the fact \(x_{*}(t,0)=0\) with (3.16), one can verify the first estimate in (3.13). Proceeding analogously to (3.16), the second estimate in (3.13) also holds, with \(\gamma_{2}=\lambda-\frac{N_{2}\beta_{1}\beta_{2}}{N_{1}\beta_{2}+N_{2}\beta_{1}}\). Let \(B_{\delta/2N_{1}}\) be the open ball in \(\mathbb{R}^{n}\) centered at the origin with radius \(\delta/2N_{1}\), and take \(S_{n-k}^{*}:=\{x:x=x_{*}(t_{0},Px_{0}),x_{0}\in B_{\delta/2N_{1}}\cap\mathbb{R}^{n}\}\). Let \(h(Px_{0}):=x_{*}(t_{0},Px_{0})\) for \(x_{0}\in B_{\delta/2N_{1}}\cap\mathbb{R}^{n}\). Then \(h\) is a continuous mapping from \(B_{\delta/2N_{1}}\cap(P\mathbb{R}^{n})\) to \(S_{n-k}^{*}\), and \[h(Px_{0})=Px_{0}+X(t_{0})(I-P)\mathcal{I}_{+\infty}^{\alpha}X^{-1}(t_{0})f(t_{0},x_{*}(t_{0},Px_{0})).\] Given \(x_{0},\hat{x}_{0}\in B_{\delta/2N_{1}}\cap\mathbb{R}^{n}\), we employ (3.2), (3.14), (3.16) and **(E2)** to obtain \[\|h(Px_{0})-h(P\hat{x}_{0})\| \geq\|Px_{0}-P\hat{x}_{0}\|+N_{2}\zeta(\delta)E_{\alpha}(\beta_{2},t_{0})\mathcal{I}_{+\infty}^{\alpha}\frac{\|x_{*}(t_{0})-\hat{x}_{*}(t_{0})\|}{E_{\alpha}(\beta_{2},t_{0})}\] \[\geq\|Px_{0}-P\hat{x}_{0}\|\left[1+E_{\alpha}(\beta_{2}+\gamma_{1},t_{0})\mathcal{I}_{+\infty}^{\alpha}\frac{2N_{1}N_{2}\zeta(\delta)}{E_{\alpha}(\beta_{2}+\gamma_{1},t_{0})}\right]\] \[\geq\left(1-\frac{2N_{1}N_{2}\zeta(\delta)}{\beta_{2}+\gamma_{1}}\right)\|Px_{0}-P\hat{x}_{0}\|\geq\frac{1}{2}\|Px_{0}-P\hat{x}_{0}\|,\] where the last inequality holds because the second condition in (3.14) guarantees \(2N_{1}N_{2}\zeta(\delta)/(\beta_{2}+\gamma_{1})<1/2\). Hence \(h\) is injective, and thus a bijection onto \(S_{n-k}^{*}\). Since \(h^{-1}=P\) is continuous, \(h\) is a homeomorphism. Hence, \(S_{n-k}^{*}\) is homeomorphic to the \((n-k)\)-dimensional open unit ball in \(\mathbb{R}^{n-k}\). If \(S_{n-k}^{*}\) is not a positively invariant set, then we expand \(S_{n-k}^{*}\) into the positively invariant set \(S_{n-k}\) by absorbing all positive orbits of the solutions starting from \(S_{n-k}^{*}\). From the uniqueness of solutions, \(S_{n-k}\) is also homeomorphic to the open unit ball in \(\mathbb{R}^{n-k}\). In particular, if \(\|Px\|<\delta/2N_{1}\) for all \(x\in S_{n-k}\), then \(S_{n-k}\equiv S_{n-k}^{*}\).
It follows from (3.15), (3.16), **(E2)** and the fact \(x_{*}(t,0)=0\) that \[\|(I-P)x_{*}(t_{0},Px_{0})\| \leq-N_{2}E_{\alpha}(\beta_{2},t_{0})\mathcal{I}_{+\infty}^{\alpha}\frac{\|f(t_{0},x_{*}(t_{0},Px_{0}))\|}{E_{\alpha}(\beta_{2},t_{0})}\] \[\leq-N_{2}E_{\alpha}(\beta_{2},t_{0})\mathcal{I}_{+\infty}^{\alpha}\frac{\zeta(\|x_{*}(t_{0},Px_{0})\|)}{E_{\alpha}(\beta_{2},t_{0})}\|x_{*}(t_{0},Px_{0})\|\] \[\leq-N_{2}E_{\alpha}(\beta_{2},t_{0})\mathcal{I}_{+\infty}^{\alpha}\frac{\zeta(2N_{1}\|Px_{0}\|)}{E_{\alpha}(\beta_{2},t_{0})}2N_{1}\|Px_{0}\|\] \[\leq\frac{2N_{1}N_{2}}{\beta_{2}}\zeta(2N_{1}\|Px_{0}\|)\|Px_{0}\|.\] Since \(\|Px_{0}\|\to 0\) as \(\|x_{0}\|\to 0\), we get \(\|(I-P)x_{*}(t_{0},Px_{0})\|/\|Px_{0}\|\to 0\) as \(\|x_{0}\|\to 0\) in \(S_{n-k}\). Consequently, \(S_{n-k}\) is tangent to \(P\mathbb{R}^{n}\) at zero. Similarly, one can construct the set \(U_{k}\) via (3.5), and complete the proof of Theorem 3.1. \(\Box\) ## 4 Roughness of dichotomy The focus of this section is the roughness of the Mittag-Leffler dichotomy, that is, the preservation of the dichotomy for hyperbolic linear systems under small linear perturbations. Consider the perturbed equation of the linear CFDE (2.1) as follows \[\mathcal{T}^{\alpha}y=[A(t)+B(t)]y,\quad(t,y)\in\mathbb{R}^{n+1}, \tag{4.1}\] where the matrix functions \(A\in C(\mathbb{R},\mathbb{R}^{n\times n})\) and \(B\in C_{b}(\mathbb{R},\mathbb{R}^{n\times n})\). The following is one of the main results of this paper. **Theorem 4.1**: _Assume that \(X(t)\) is a fundamental matrix of (2.1) such that \(X(0)=I\), and equation (2.1) possesses a Mittag-Leffler dichotomy, i.e., estimates (3.2) hold in \(\mathbb{R}_{+}\). If \(\varepsilon:=\sup\limits_{t\geq 0}\|B(t)\|\) is sufficiently small, then the perturbed equation (4.1) also possesses a Mittag-Leffler dichotomy in \(\mathbb{R}_{+}\)._ **Proof.** We divide the proof of Theorem 4.1 into the following three steps. Step 1: Finding bounded solutions of equation (4.1).
Consider matrix functions \(Y\in C_{b}(\mathbb{R}_{+},\mathbb{R}^{n\times n})\), equipped with the norm \[\|Y\|_{\infty}:=\sup\limits_{t\geq 0}\|Y(t)\|.\] Define the mapping \(L:C_{b}(\mathbb{R}_{+},\mathbb{R}^{n\times n})\to C_{b}(\mathbb{R}_{+},\mathbb{R}^{n\times n})\) by \[LY(t)=X(t)P+X(t)P\mathcal{I}_{0}^{\alpha}X^{-1}(t)B(t)Y(t)+X(t)(I-P)\mathcal{I}_{+\infty}^{\alpha}X^{-1}(t)B(t)Y(t).\] It follows from (3.2) that \[\|LY(t)\|\leq\frac{N_{1}}{E_{\alpha}(\beta_{1},t)}+\frac{\varepsilon N_{1}\|Y\|_{\infty}}{E_{\alpha}(\beta_{1},t)}\mathcal{I}_{0}^{\alpha}E_{\alpha}(\beta_{1},t)-E_{\alpha}(\beta_{2},t)\mathcal{I}_{+\infty}^{\alpha}\frac{\varepsilon N_{2}\|Y\|_{\infty}}{E_{\alpha}(\beta_{2},t)}.\] Observing that \(LY(t)\) is bounded and continuous for \(t\geq 0\), we obtain \[\|LY\|_{\infty}\leq N_{1}+\varepsilon\Big{(}\frac{N_{1}}{\beta_{1}}+\frac{N_{2}}{\beta_{2}}\Big{)}\|Y\|_{\infty}.\] For another \(\hat{Y}\in C_{b}(\mathbb{R}_{+},\mathbb{R}^{n\times n})\), we analogously get \[\|LY-L\hat{Y}\|_{\infty}\leq\varepsilon\Big{(}\frac{N_{1}}{\beta_{1}}+\frac{N_{2}}{\beta_{2}}\Big{)}\|Y-\hat{Y}\|_{\infty}.\] This yields that the mapping \(L\) has a unique fixed point \(Y_{1}\in C_{b}(\mathbb{R}_{+},\mathbb{R}^{n\times n})\) such that \[\begin{split} Y_{1}(t)=& X(t)P+X(t)P\mathcal{I}_{0}^{\alpha}X^{-1}(t)B(t)Y_{1}(t)\\ &+X(t)(I-P)\mathcal{I}_{+\infty}^{\alpha}X^{-1}(t)B(t)Y_{1}(t), \end{split} \tag{4.2}\] provided that \[\theta:=\varepsilon\Big{(}\frac{N_{1}}{\beta_{1}}+\frac{N_{2}}{\beta_{2}}\Big{)}<1.\] Obviously, \(Y_{1}(t)\) is also a differentiable matrix solution of (4.1). Post-multiplying both sides of (4.2) by \(P\), we see that \(Y_{1}(t)P\) is also a fixed point of \(L\), and hence \(Y_{1}(t)P=Y_{1}(t)\). Step 2: Constructing the projection matrix. Let \(Q:=Y_{1}(0)\); then \(QP=Q\). Combining (4.2) with the property \(P(I-P)=0\), and replacing \(t\) with \(s\), we obtain \[X(t)PX^{-1}(s)Y_{1}(s)=X(t)P+X(t)P\mathcal{I}_{0}^{\alpha}X^{-1}(s)B(s)Y_{1}(s). \tag{4.3}\] It follows from (4.2) and (4.3) that \[\begin{split} Y_{1}(t)=& X(t)PX^{-1}(s)Y_{1}(s)+X(t)P\mathcal{I}_{s}^{\alpha}X^{-1}(t)B(t)Y_{1}(t)\\ &+X(t)(I-P)\mathcal{I}_{+\infty}^{\alpha}X^{-1}(t)B(t)Y_{1}(t),\quad t\geq s\geq 0.\end{split} \tag{4.4}\] Taking \(t=s=0\) in (4.3), we get \(PQ=P\). Post-multiplying both sides of (4.2) by \(Q\), we obtain \[\begin{split} Y_{1}(t)Q=&\ X(t)P+X(t)P\mathcal{I}_{0}^{\alpha}X^{-1}(t)B(t)Y_{1}(t)Q\\ +& X(t)(I-P)\mathcal{I}_{+\infty}^{\alpha}X^{-1}(t)B(t)Y_{1}(t)Q,\end{split}\] which implies that \(Y_{1}(t)Q\) is also a fixed point of \(L\). In conclusion, \[Y_{1}(t)Q=Y_{1}(t)=Y_{1}(t)P.\] In particular, taking \(t=0\) gives \(Q^{2}=Q\), so \(Q\) is a projection. Let \(Y(t)\) be a fundamental matrix of (4.1) satisfying \(Y(0)=I\); then we derive \[Y_{1}(t)=Y(t)Q. \tag{4.5}\] Set \[Y_{2}(t):=Y(t)(I-Q), \tag{4.6}\] then \(Y(t)=Y_{1}(t)+Y_{2}(t)\). Relying on the variation of constants formula (2.19), we calculate that \[Y_{2}(t) = X(t)X^{-1}(0)Y_{2}(0)+X(t)\mathcal{I}_{0}^{\alpha}X^{-1}(t)B(t)Y_{2}(t) \tag{4.7}\] \[= X(t)Y(0)(I-Q)+X(t)\mathcal{I}_{0}^{\alpha}X^{-1}(t)B(t)Y_{2}(t)\] \[= X(t)(I-Q)+X(t)\mathcal{I}_{0}^{\alpha}X^{-1}(t)B(t)Y_{2}(t).\] Combining (4.7) with the fact \((I-P)(I-Q)=I-Q\), and replacing \(t\) with \(s\), we obtain \[X(t)(I-P)X^{-1}(s)Y_{2}(s)=X(t)(I-Q)+X(t)(I-P)\mathcal{I}_{0}^{\alpha}X^{-1}(s)B(s)Y_{2}(s).
\tag{4.8}\] Subsequently, by (4.7) and (4.8), we obtain \[Y_{2}(t) = X(t)(I-P)X^{-1}(s)Y_{2}(s)+X(t)\mathcal{I}_{0}^{\alpha}X^{-1}(t)B(t)Y_{2}(t) \tag{4.9}\] \[-X(t)(I-P)\mathcal{I}_{0}^{\alpha}X^{-1}(s)B(s)Y_{2}(s)\] \[= X(t)(I-P)X^{-1}(s)Y_{2}(s)+X(t)P\mathcal{I}_{0}^{\alpha}X^{-1}(t)B(t)Y_{2}(t)\] \[+X(t)(I-P)\mathcal{I}_{0}^{\alpha}X^{-1}(t)B(t)Y_{2}(t)-X(t)(I-P)\mathcal{I}_{0}^{\alpha}X^{-1}(s)B(s)Y_{2}(s)\] \[= X(t)(I-P)X^{-1}(s)Y_{2}(s)+X(t)P\mathcal{I}_{0}^{\alpha}X^{-1}(t)B(t)Y_{2}(t)\] \[+X(t)(I-P)\mathcal{I}_{s}^{\alpha}X^{-1}(t)B(t)Y_{2}(t),\quad s\geq t\geq 0.\] From (4.4) and (4.9) it follows that for any vector \(\xi\), \[\|Y_{1}(t)\xi\| \leq N_{1}\frac{E_{\alpha}(\beta_{1},s)}{E_{\alpha}(\beta_{1},t)}\|Y_{1}(s)\xi\|+\frac{\varepsilon N_{1}}{E_{\alpha}(\beta_{1},t)}|\mathcal{I}_{s}^{\alpha}E_{\alpha}(\beta_{1},t)Y_{1}(t)\xi|\] \[-\varepsilon N_{2}E_{\alpha}(\beta_{2},t)\Big{|}\mathcal{I}_{+\infty}^{\alpha}\frac{Y_{1}(t)\xi}{E_{\alpha}(\beta_{2},t)}\Big{|},\quad t\geq s\geq 0,\] and \[\|Y_{2}(t)\xi\| \leq N_{2}\frac{E_{\alpha}(\beta_{2},t)}{E_{\alpha}(\beta_{2},s)}\|Y_{2}(s)\xi\|+\frac{\varepsilon N_{1}}{E_{\alpha}(\beta_{1},t)}|\mathcal{I}_{0}^{\alpha}E_{\alpha}(\beta_{1},t)Y_{2}(t)\xi|\] \[-\varepsilon N_{2}E_{\alpha}(\beta_{2},t)\Big{|}\mathcal{I}_{s}^{\alpha}\frac{Y_{2}(t)\xi}{E_{\alpha}(\beta_{2},t)}\Big{|},\quad s\geq t\geq 0.\] Thus, by Lemma 3.2 and Corollary 3.1, we know \[\|Y_{1}(t)\xi\| \leq K_{1}\frac{E_{\alpha}(\lambda_{1},s)}{E_{\alpha}(\lambda_{1},t)}\|Y_{1}(s)\xi\|,\quad t\geq s\geq 0, \tag{4.10}\] \[\|Y_{2}(t)\xi\| \leq K_{2}\frac{E_{\alpha}(\lambda_{2},t)}{E_{\alpha}(\lambda_{2},s)}\|Y_{2}(s)\xi\|,\quad s\geq t\geq 0,\] where \(K_{i}=\frac{N_{i}}{1-\theta}\), \(\lambda_{i}=\beta_{i}-\frac{\varepsilon N_{i}}{1-\theta}\) and \(\theta=\varepsilon\Big{(}\frac{N_{1}}{\beta_{1}}+\frac{N_{2}}{\beta_{2}}\Big{)}\) for \(i=1,2\). Step 3: Estimation of fundamental solutions. To prove from (4.10) that the perturbed equation (4.1) also possesses a Mittag-Leffler dichotomy, we only need to show that \(Y(t)QY^{-1}(t)\) is bounded. From the facts \((I-P)P=0\), \((I-P)(I-P)=I-P\) and (4.2) it follows that \[X(t)(I-P)X^{-1}(t)Y_{1}(t)=X(t)(I-P)\mathcal{I}_{+\infty}^{\alpha}X^{-1}(t)B(t)Y_{1}(t).\] By (4.10), for any vector \(\xi\), we calculate that \[\|X(t)(I-P)X^{-1}(t)Y_{1}(t)\xi\| \leq-\varepsilon N_{2}E_{\alpha}(\beta_{2},t)\mathcal{I}_{+\infty}^{\alpha}\frac{\|Y_{1}(t)\xi\|}{E_{\alpha}(\beta_{2},t)}\] \[\leq-E_{\alpha}(\lambda_{1}+\beta_{2},t)\|Y_{1}(t)\xi\|\mathcal{I}_{+\infty}^{\alpha}\frac{\varepsilon K_{1}N_{2}}{E_{\alpha}(\lambda_{1}+\beta_{2},t)}\] \[\leq\frac{\varepsilon K_{1}N_{2}}{\lambda_{1}+\beta_{2}}\|Y_{1}(t)\xi\|. \tag{4.11}\] Analogously, pre-multiplying both sides of (4.7) by \(X(t)PX^{-1}(t)\) and using the property \(P(I-Q)=0\), we obtain \[X(t)PX^{-1}(t)Y_{2}(t)=X(t)P\mathcal{I}_{0}^{\alpha}X^{-1}(t)B(t)Y_{2}(t).\] It follows from (4.10) that for any vector \(\xi\), \[\|X(t)PX^{-1}(t)Y_{2}(t)\xi\| \leq\frac{\varepsilon N_{1}}{E_{\alpha}(\beta_{1},t)}\Big{|}\mathcal{I}_{0}^{\alpha}E_{\alpha}(\beta_{1},t)Y_{2}(t)\xi\Big{|}\] \[\leq\frac{\varepsilon K_{2}N_{1}}{E_{\alpha}(\lambda_{2}+\beta_{1},t)}\|Y_{2}(t)\xi\|\mathcal{I}_{0}^{\alpha}E_{\alpha}(\lambda_{2}+\beta_{1},t)\] \[\leq\frac{\varepsilon K_{2}N_{1}}{\lambda_{2}+\beta_{1}}\|Y_{2}(t)\xi\|.
\tag{4.12}\] Substituting (4.5) and (4.6) into (4.11) and (4.12) respectively, and replacing \(\xi\) by \(Y^{-1}(t)\xi\), we obtain \[\|X(t)(I-P)X^{-1}(t)Y_{1}(t)\xi\| \leq\|X(t)(I-P)X^{-1}(t)Y(t)QY^{-1}(t)\xi\|\] \[\leq\frac{\varepsilon K_{1}N_{2}}{\lambda_{1}+\beta_{2}}\|Y(t)QY^{-1}(t)\xi\|, \tag{4.13}\] and \[\|X(t)PX^{-1}(t)Y_{2}(t)\xi\| \leq\|X(t)PX^{-1}(t)Y(t)(I-Q)Y^{-1}(t)\xi\|\] \[\leq\frac{\varepsilon K_{2}N_{1}}{\lambda_{2}+\beta_{1}}\|Y(t)(I-Q)Y^{-1}(t)\xi\|. \tag{4.14}\] On the other hand, it is straightforward to derive that \[Y(t)QY^{-1}(t)-X(t)PX^{-1}(t)= \ X(t)[P+(I-P)]X^{-1}(t)Y(t)QY^{-1}(t)\] \[-X(t)PX^{-1}(t)Y(t)[Q+(I-Q)]Y^{-1}(t)\] \[= \ X(t)PX^{-1}(t)Y(t)QY^{-1}(t)\] \[+X(t)(I-P)X^{-1}(t)Y(t)QY^{-1}(t)\] \[-X(t)PX^{-1}(t)Y(t)QY^{-1}(t)\] \[-X(t)PX^{-1}(t)Y(t)(I-Q)Y^{-1}(t)\] \[= \ X(t)(I-P)X^{-1}(t)Y(t)QY^{-1}(t)\] \[-X(t)PX^{-1}(t)Y(t)(I-Q)Y^{-1}(t). \tag{4.15}\] Combining (4.13) and (4.14) with (4.15), we can obtain \[\|Y(t)QY^{-1}(t)-X(t)PX^{-1}(t)\|\leq\frac{\varepsilon K_{1}N_{2}}{\lambda_{1}+\beta_{2}}\mu_{1}+\frac{\varepsilon K_{2}N_{1}}{\lambda_{2}+\beta_{1}}\mu_{2},\] where \(\mu_{1}(t):=\|Y(t)QY^{-1}(t)\|\), \(\mu_{2}(t):=\|Y(t)(I-Q)Y^{-1}(t)\|\). For convenience, take \(N:=\max\{N_{1},N_{2}\}\) and \(\beta:=\min\{\beta_{1},\beta_{2}\}\) such that \(\theta\leq\hat{\theta}:=2\varepsilon N/\beta\), which yields \[\mu_{1}=\|Y(t)QY^{-1}(t)\| \leq\Big{(}\frac{\varepsilon K_{1}N_{2}}{\lambda_{1}+\beta_{2}}\mu_{1}+\frac{\varepsilon K_{2}N_{1}}{\lambda_{2}+\beta_{1}}\mu_{2}\Big{)}+\|X(t)PX^{-1}(t)\|\] \[\leq\Big{(}\frac{\varepsilon K_{1}N_{2}}{\lambda_{1}+\beta_{2}}\mu_{1}+\frac{\varepsilon K_{2}N_{1}}{\lambda_{2}+\beta_{1}}\mu_{2}\Big{)}+N_{1}\] \[\leq\eta(\mu_{1}+\mu_{2})+N, \tag{4.16}\] where \(\eta=\dfrac{\varepsilon N^{2}}{2\beta-5\varepsilon N}\). It is obvious that \[Y(t)QY^{-1}(t)-X(t)PX^{-1}(t)=X(t)(I-P)X^{-1}(t)-Y(t)(I-Q)Y^{-1}(t),\] then \[\mu_{2}\leq\Big{(}\dfrac{\varepsilon K_{1}N_{2}}{\lambda_{1}+\beta_{2}}\mu_{1}+\dfrac{\varepsilon K_{2}N_{1}}{\lambda_{2}+\beta_{1}}\mu_{2}\Big{)}+N_{2}\leq\eta(\mu_{1}+\mu_{2})+N. \tag{4.17}\] Adding inequalities (4.16) and (4.17), we obtain \[\mu_{1}+\mu_{2}\leq\dfrac{2N}{1-2\eta}.\] If \(\eta<1/2\), then \[\mu_{1},\mu_{2}\leq\eta(\mu_{1}+\mu_{2})+N\leq\dfrac{N}{1-2\eta}.\] Substituting (4.5) and (4.6) into (4.10), and replacing \(\xi\) by \(Y^{-1}(s)\xi\), we obtain \[\|Y(t)QY^{-1}(s)\xi\| \leq K_{1}\dfrac{E_{\alpha}(\lambda_{1},s)}{E_{\alpha}(\lambda_{1},t)}\|Y(s)QY^{-1}(s)\xi\|\] \[\leq\dfrac{K_{1}N}{1-2\eta}\dfrac{E_{\alpha}(\lambda_{1},s)}{E_{\alpha}(\lambda_{1},t)}\|\xi\|,\quad t\geq s\geq 0,\] \[\|Y(t)(I-Q)Y^{-1}(s)\xi\| \leq K_{2}\dfrac{E_{\alpha}(\lambda_{2},t)}{E_{\alpha}(\lambda_{2},s)}\|Y(s)(I-Q)Y^{-1}(s)\xi\|\] \[\leq\dfrac{K_{2}N}{1-2\eta}\dfrac{E_{\alpha}(\lambda_{2},t)}{E_{\alpha}(\lambda_{2},s)}\|\xi\|,\quad s\geq t\geq 0.\] Since the vector \(\xi\) is arbitrary, we obtain the Mittag-Leffler dichotomy as follows \[\|Y(t)QY^{-1}(s)\| \leq\dfrac{K_{1}N}{1-2\eta}\dfrac{E_{\alpha}(\lambda_{1},s)}{E_{\alpha}(\lambda_{1},t)},\quad t\geq s\geq 0,\] \[\|Y(t)(I-Q)Y^{-1}(s)\| \leq\dfrac{K_{2}N}{1-2\eta}\dfrac{E_{\alpha}(\lambda_{2},t)}{E_{\alpha}(\lambda_{2},s)},\quad s\geq t\geq 0.\] Therefore, Theorem 4.1 is completely proved. \(\Box\) Finally, we present concrete constants for the estimates in Theorem 4.1.
As before, take \(N=\max\{N_{1},N_{2}\}\) and \(\beta=\min\{\beta_{1},\beta_{2}\}\) so that \(\theta\leq\hat{\theta}=2\varepsilon N/\beta\) holds, and let \(N\geq 1\) and \(\hat{\theta}<2/5N\), so that \(\eta=\dfrac{\varepsilon N^{2}}{2\beta-5\varepsilon N}<\dfrac{N}{10N-5}<\dfrac{1}{2}\). Thus, by an elementary calculation (indeed, \(1-\hat{\theta}>3/5\) and \(1-2\eta>3/5\) for \(N\geq 1\), so \(K_{i}N/(1-2\eta)\leq\frac{5N}{3}\cdot N\cdot\frac{5}{3}=\frac{25N^{2}}{9}\), while \(\lambda_{i}\geq\beta-\frac{5}{3}\varepsilon N\geq\beta-3\varepsilon N\)), we can obtain the following brief statement. **Corollary 4.1**: _Suppose that equation (2.1) possesses the Mittag-Leffler dichotomy (3.2) in \(\mathbb{R}_{+}\). If_ \[\varepsilon:=\sup_{t\in\mathbb{R}_{+}}\|B(t)\|<\frac{\beta}{5N^{2}},\] _then the perturbed equation (4.1) also possesses the following Mittag-Leffler dichotomy:_ \[\|Y(t)QY^{-1}(s)\|\leq\frac{25N^{2}}{9}\frac{E_{\alpha}(\beta-3\varepsilon N,s)}{E_{\alpha}(\beta-3\varepsilon N,t)},\quad t\geq s\geq 0,\] \[\|Y(t)(I-Q)Y^{-1}(s)\|\leq\frac{25N^{2}}{9}\frac{E_{\alpha}(\beta-3\varepsilon N,t)}{E_{\alpha}(\beta-3\varepsilon N,s)},\quad s\geq t\geq 0,\] _where \(Y(t)\) is a fundamental matrix of (4.1) such that \(Y(0)=I\), and both projection matrices \(Q\) and \(P\) have the same rank. Moreover,_ \[\|Y(t)QY^{-1}(t)-X(t)PX^{-1}(t)\|\leq\frac{2\varepsilon N^{3}}{2\beta-5\varepsilon N-2\varepsilon N^{2}},\quad t\geq 0.\] ## 5 Nonuniform dichotomy This section continues the study of the Mittag-Leffler dichotomy. More precisely, we are concerned with the _nonuniform Mittag-Leffler dichotomy_. Let \(\mathcal{B}(Z)\) denote the space of all bounded linear operators on a Banach space \(Z\). Consider the nonautonomous linear CFDE on \(Z\) \[\mathcal{T}^{\alpha}x=A(t)x,\quad(t,x)\in J\times Z, \tag{5.1}\] where the linear operator \(A\in C(J,\mathcal{B}(Z))\) for some interval \(J\subset\mathbb{R}\), and \(\mathcal{B}(Z)\) is also a Banach space with the norm \(\|A\|:=\sup_{x\in Z,\|x\|=1}\|Ax\|\) for all \(A\in\mathcal{B}(Z)\). Let \(T(t,s)\) be a family of evolution operators satisfying \(x(t)=T(t,s)x(s)\) for \(t\geq s\) and \(t,s\in J\), where \(x(t)\) is any solution of (5.1). \(T(t,s)\) further satisfies: * **(F1)**: \(T(t,t)=\mathrm{Id}\) (the identity) for \(t\in J\); * **(F2)**: \(T(t,s)T(s,\tau)=T(t,\tau)\) for \(t,s,\tau\in J\); * **(F3)**: the evolution operator \(T(t,s)\) is invertible and \(T^{-1}(t,s)=T(s,t)\) for \(t,s\in J\). First, we introduce the notions of nonuniform asymptotic stability and nonuniform Mittag-Leffler dichotomy. **Definition 5.1**: _Equation (5.1) is said to be nonuniform asymptotically stable in \(J\) if there exist constants \(\hat{N},\hat{\beta}>0\) and \(\epsilon\geq 0\) such that_ \[\|T(t,s)\|\leq\hat{N}\frac{E_{\alpha}(\hat{\beta},s)}{E_{\alpha}(\hat{\beta},t)}E_{\alpha}(\epsilon,|s|),\quad t\geq s,\quad t,s\in J.
\tag{5.2}\] _In particular, (5.1) is uniformly asymptotically stable like (3.1) if (5.2) holds with \(\epsilon=0\)._ **Definition 5.2**: _Equation (5.1) is said to admit a nonuniform Mittag-Leffler dichotomy in \(J\) if there exist projections \(P:J\to\mathcal{B}(Z)\) such that_ \[T(t,s)P(s)=P(t)T(t,s),\quad t\geq s,\quad t,s\in J, \tag{5.3}\] _and constants \(\hat{N}_{i},\hat{\beta}_{i}>0\)\((i=1,2)\) and \(\epsilon\geq 0\) such that for \(t,s\in J\),_ \[\|T(t,s)P(s)\| \leq\hat{N}_{1}\frac{E_{\alpha}(\hat{\beta}_{1},s)}{E_{\alpha}(\hat{\beta}_{1},t)}E_{\alpha}(\epsilon,|s|),\quad t\geq s, \tag{5.4}\] \[\|T(t,s)(\mathrm{Id}-P(s))\| \leq\hat{N}_{2}\frac{E_{\alpha}(\hat{\beta}_{2},t)}{E_{\alpha}(\hat{\beta}_{2},s)}E_{\alpha}(\epsilon,|s|),\quad s\geq t.\] _In particular, (5.1) admits a uniform Mittag-Leffler dichotomy like (3.2) if (5.4) hold with \(\epsilon=0\)._ All results in this section are presented in \(\mathbb{R}_{+}\), and we denote \[\mathcal{I}_{s}^{\alpha}f(t,\cdot):=\int_{s}^{t}\tau^{\alpha-1}f(t,\tau)d\tau.\] Consider the linear perturbation of (5.1) as follows \[\mathcal{T}^{\alpha}x=[A(t)+B(t)]x,\quad(t,x)\in\mathbb{R}_{+}\times Z, \tag{5.5}\] where the linear operators \(A\in C(\mathbb{R}_{+},\mathcal{B}(Z))\) and \(B\in C_{b}(\mathbb{R}_{+},\mathcal{B}(Z))\). The following theorem establishes the roughness of nonuniform asymptotic stability. **Theorem 5.1**: _Assume that equation (5.1) admits nonuniform asymptotic stability in \(\mathbb{R}_{+}\), and there exists a constant \(\delta\) such that \(\|B(t)\|\leq\delta/E_{\alpha}(\epsilon,t)\) for \(t\in\mathbb{R}_{+}\). If \(\theta:=\delta\hat{N}/\hat{\beta}<1\), then equation (5.5) also admits nonuniform asymptotic stability in \(\mathbb{R}_{+}\), that is,_ \[\|U(t,s)\|\leq\frac{\hat{N}}{1-\theta}\frac{E_{\alpha}(\gamma,s)}{E_{\alpha}(\gamma,t)}E_{\alpha}(\epsilon,s),\quad t\geq s,\quad t,s\in\mathbb{R}_{+}, \tag{5.6}\] _where \(\gamma=\hat{\beta}-\frac{\delta\hat{N}}{1-\theta}\) and \(U(t,s)\) denotes the evolution operator associated to (5.5)._ **Proof.** Consider the space \[W:=\{U(t,s)_{t\geq s}\in\mathcal{B}(Z):U\text{ is continuous and }\|U\|_{\alpha}<\infty,\,(t,s)\in\mathbb{R}^{2}_{+}\},\] equipped with the \(\alpha\)-weighted norm \[\|U\|_{\alpha}:=\sup\left\{\frac{\|U(t,s)\|}{E_{\alpha}(\epsilon,s)}:t\geq s,(t,s)\in\mathbb{R}^{2}_{+}\right\}. \tag{5.7}\] It is easy to verify that \(W\) is a Banach space. On \(W\), define an operator \(\mathcal{J}\) by \[(\mathcal{J}U)(t,s)=T(t,s)+\mathcal{I}^{\alpha}_{s}T(t,\cdot)B(\cdot)U(\cdot,s).\] It follows from (5.2) that \[\|(\mathcal{J}U)(t,s)\| \leq \|T(t,s)\|+\mathcal{I}^{\alpha}_{s}\|T(t,\cdot)\|\|B(\cdot)\|\|U(\cdot,s)\|\] \[\leq \hat{N}\frac{E_{\alpha}(\hat{\beta},s)}{E_{\alpha}(\hat{\beta},t)}E_{\alpha}(\epsilon,s)+\frac{\delta\hat{N}\|U\|_{\alpha}E_{\alpha}(\epsilon,s)}{E_{\alpha}(\hat{\beta},t)}\mathcal{I}^{\alpha}_{s}E_{\alpha}(\hat{\beta},t)\] \[\leq \hat{N}E_{\alpha}(\epsilon,s)+\frac{\delta\hat{N}}{\hat{\beta}}\|U\|_{\alpha}E_{\alpha}(\epsilon,s).\] And by (5.7) we obtain \[\|\mathcal{J}U\|_{\alpha}\leq\hat{N}+\frac{\delta\hat{N}}{\hat{\beta}}\|U\|_{\alpha}<\infty,\] which yields that the operator \(\mathcal{J}:W\to W\) is well defined. Analogously to the computation above, we have \[\|\mathcal{J}U_{1}-\mathcal{J}U_{2}\|_{\alpha}\leq\frac{\delta\hat{N}}{\hat{\beta}}\|U_{1}-U_{2}\|_{\alpha},\quad U_{1},U_{2}\in W,\] which implies that \(\mathcal{J}\) is a contraction since \(\delta<\hat{\beta}/\hat{N}\).
Hence there exists a unique \(U\in W\) satisfying \(\mathcal{J}U=U\), and one can verify that it is a solution of (5.5). Applying Lemma 3.2 under the condition \(\theta:=\delta\hat{N}/\hat{\beta}<1\) to the estimation of \(\|U(t,s)\|\) yields inequality (5.6). \(\Box\) Subsequently, our purpose is to establish the roughness of the nonuniform Mittag-Leffler dichotomy in \(\mathbb{R}_{+}\). A preliminary theorem and the main roughness theorem are stated as follows. **Theorem 5.2**: _Assume that equation (5.1) admits a nonuniform Mittag-Leffler dichotomy (5.4) in \(\mathbb{R}_{+}\), and there exists a constant \(\delta\) such that \(\|B(t)\|\leq\delta/E_{\alpha}(\epsilon,t)\) for \(t\in\mathbb{R}_{+}\). If_ \[\theta:=\delta\left(\frac{\hat{N}_{1}}{\hat{\beta}_{1}}+\frac{\hat{N}_{2}}{\hat{\beta}_{2}}\right)<1,\quad\epsilon<\min\{\hat{\beta}_{1},\hat{\beta}_{2}\}, \tag{5.8}\] _then there exist projections \(\hat{P}:\mathbb{R}_{+}\rightarrow\mathcal{B}(Z)\) such that_ \[\hat{T}(t,s)\hat{P}(s)=\hat{P}(t)\hat{T}(t,s),\quad t\geq s,\quad t,s\in\mathbb{R}_{+}, \tag{5.9}\] _and constants \(K_{i},\lambda_{i}>0\)\((i=1,2)\) and \(\epsilon\geq 0\) such that_ \[\|\hat{T}(t,s)|\mathrm{Im}\hat{P}(s)\|\leq K_{1}\frac{E_{\alpha}(\lambda_{1},s)}{E_{\alpha}(\lambda_{1},t)}E_{\alpha}(\epsilon,s),\quad t\geq s\geq 0, \tag{5.10}\] \[\|\hat{T}(t,s)|\mathrm{Im}(\mathrm{Id}-\hat{P}(s))\|\leq K_{2}\frac{E_{\alpha}(\lambda_{2},t)}{E_{\alpha}(\lambda_{2},s)}E_{\alpha}(\epsilon,s),\quad s\geq t\geq 0,\] _where \(K_{i}=\frac{\hat{N}_{i}}{1-\theta}\), \(\lambda_{i}=\hat{\beta}_{i}-\frac{\delta\hat{N}_{i}}{1-\theta}\)\((i=1,2)\), and \(\hat{T}(t,s)\) is the evolution operator associated to equation (5.5)._ **Theorem 5.3**: _Assume that equation (5.1) admits a nonuniform Mittag-Leffler dichotomy (5.4) in \(\mathbb{R}_{+}\) under condition (5.8). If \(\delta\) is sufficiently small such that \(\|B(t)\|\leq\delta/E_{\alpha}(2\epsilon,t)\) for \(t\in\mathbb{R}_{+}\), then equation (5.5) also admits a nonuniform Mittag-Leffler dichotomy in \(\mathbb{R}_{+}\)._ **Proof of Theorem 5.2.** We divide the proof into the following several steps. Step 1: Construction of bounded solutions for (5.5). Recall the space \(W\) from Theorem 5.1; the following lemma establishes the existence of a bounded solution. **Lemma 5.1**: _For each \(t,s\in\mathbb{R}_{+}\), equation (5.5) has a unique solution \(U\in W\) such that_ \[\begin{split} U(t,s)=& T(t,s)P(s)+\mathcal{I}_{s}^{\alpha}T(t,\cdot)P(\cdot)B(\cdot)U(\cdot,s)\\ &+\mathcal{I}_{+\infty}^{\alpha}T(t,\cdot)(\mathrm{Id}-P(\cdot))B(\cdot)U(\cdot,s),\quad t\geq s.\end{split} \tag{5.11}\] **Proof.** Clearly, if the function \(U(t,s)_{t\geq s}\) satisfies (5.11), then it is a solution of (5.5). It remains to show that the operator \(L\) defined by \[\begin{split}(LU)(t,s)=&\ T(t,s)P(s)+\mathcal{I}_{s}^{\alpha}T(t,\cdot)P(\cdot)B(\cdot)U(\cdot,s)\\ +&\mathcal{I}_{+\infty}^{\alpha}T(t,\cdot)(\mathrm{Id}-P(\cdot))B(\cdot)U(\cdot,s),\quad t\geq s,\end{split}\] has a unique fixed point in \(W\).
It follows from (5.4) that \[\begin{split}\|(LU)(t,s)\|\leq&\ \|T(t,s)P(s)\|+ \mathcal{I}_{s}^{\alpha}\|T(t,\cdot)P(\cdot)\|\|B(\cdot)\|\|U(\cdot,s)\|\\ -&\mathcal{I}_{+\infty}^{\alpha}\|T(t,\cdot)(\mathrm{ Id}-P(\cdot))\|\|B(\cdot)\|\|U(\cdot,s)\|\\ \leq&\ \hat{N}_{1}\frac{E_{\alpha}(\hat{\beta}_{1},s)}{E_ {\alpha}(\hat{\beta}_{1},t)}E_{\alpha}(\epsilon,s)+\delta\left(\frac{\hat{N}_ {1}}{\hat{\beta}_{1}}+\frac{\hat{N}_{2}}{\hat{\beta}_{2}}\right)\|U\|_{\alpha}E _{\alpha}(\epsilon,s).\end{split}\] Combining (5.7) with (5.8), we obtain \[\|LU\|_{\alpha}\leq\hat{N}_{1}+\theta\|U\|_{\alpha}<\infty,\] this implies that the operator \(L:W\to W\) is well defined. Analogously to the computation above, we have \[\|LU_{1}-LU_{2}\|_{\alpha}\leq\theta\|U_{1}-U_{2}\|_{\alpha},\quad U_{1},U_{2 }\in W,\] which shows that \(L\) is a contraction since \(\theta<1\). Then there exists a unique \(U\in W\) such that \(LU=U\). Therefore, Lemma 5.1 is proved. \(\qquad\Box\) Now we explain that the bounded solutions exhibit the following property. **Lemma 5.2**: _For each \(t\geq\tau\geq s\) in \(\mathbb{R}_{+}\),_ \[U(t,\tau)U(\tau,s)=U(t,s).\] **Proof.** From (5.11) and (5.3), for some \(\tau\in\mathbb{R}_{+}\) we can calculate that \[U(t,\tau)U(\tau,s) = T(t,s)P(s)+\mathcal{I}_{s}^{\alpha}T(t,\tau)P(\tau)B(\tau)U(\tau,s)\] \[+\mathcal{I}_{\tau}^{\alpha}T(t,\cdot)P(\cdot)B(\cdot)U(\cdot, \tau)U(\tau,s)\] \[+\mathcal{I}_{+\infty}^{\alpha}T(t,\cdot)(\mathrm{Id}-P(\cdot))B (\cdot)U(\cdot,\tau)U(\tau,s),\quad t\geq\tau\geq s.\] Let \(H(t,\tau):=U(t,\tau)U(\tau,s)-U(t,s)\) for \(t\geq\tau\geq s\), this yields \[H(t,\tau)=\mathcal{I}_{\tau}^{\alpha}T(t,\cdot)P(\cdot)B(\cdot)H(\cdot,s)+ \mathcal{I}_{+\infty}^{\alpha}T(t,\cdot)(\mathrm{Id}-P(\cdot))B(\cdot)H( \cdot,s). \tag{5.12}\] Define operator \(\mathcal{K}\) as \[(\mathcal{K}\hat{H})(t,\tau):=\mathcal{I}_{\tau}^{\alpha}T(t,\cdot)P(\cdot)B (\cdot)\hat{H}(\cdot,s)+\mathcal{I}_{+\infty}^{\alpha}T(t,\cdot)(\mathrm{Id}- P(\cdot))B(\cdot)\hat{H}(\cdot,s),\] for any \(\hat{H}\in W\) and \(t\geq\tau\). It follows from the identity above and (5.4) that \[\|(\mathcal{K}\hat{H})(t,\tau))\| \leq \mathcal{I}_{\tau}^{\alpha}\|T(t,\cdot)P(\cdot)\|\|B(\cdot)\|\| \hat{H}(\cdot,s)\|\] \[-\mathcal{I}_{+\infty}^{\alpha}\|T(t,\cdot)(\mathrm{Id}-P(\cdot)) \|\|B(\cdot)\|\|\hat{H}(\cdot,s)\|\] \[\leq \delta\left(\frac{\hat{N}_{1}}{\hat{\beta}_{1}}+\frac{\hat{N}_{2 }}{\hat{\beta}_{2}}\right)\|\hat{H}\|_{\alpha}E_{\alpha}(\epsilon,s).\] By (5.7), we have \[\|\mathcal{K}\hat{H}\|_{\alpha}\leq\theta\|\hat{H}\|_{\alpha}<\infty,\] then \({\cal K}:W\to W\) is well defined for \(t\geq\tau\). Similarly to the calculation above, we attain \[\|{\cal K}\hat{H}_{1}-{\cal K}\hat{H}_{2}\|_{\alpha}\leq\theta\|\hat{H}_{1}-\hat {H}_{2}\|_{\alpha},\quad\hat{H}_{1},\hat{H}_{2}\in W.\] Because of hypothesis (5.8), \({\cal K}\) is a contraction. Thus, there is a unique \(\hat{H}\in W\) such that \({\cal K}\hat{H}=\hat{H}\). On the other hand, we know that \(0\in W\) satisfies (5.12) and \({\cal K}0=0\). By Lemma 5.1, we assert \(H=\hat{H}=0\) for \(t\geq\tau\geq s\) in \({\mathbb{R}}_{+}\). Therefore, Lemma 5.2 is proved. \(\Box\) Step 2: Establishment of projections \(\hat{P}(t)\) in (5.9). Given constant \(\iota\in{\mathbb{R}}_{+}\), for any \(t\geq\iota\) in \({\mathbb{R}}_{+}\), we consider the following linear operator \[\hat{P}(t):=\hat{T}(t,\iota)U(\iota,\iota)\hat{T}(\iota,t), \tag{5.13}\] where \(\hat{T}(t,s)\) is the evolution operator associated to (5.5). 
Clearly, the operator \(\hat{P}(t)\) may depend on \(\iota\), and \(U(\iota,\iota)U(\iota,\iota)=U(\iota,\iota)\) by Lemma 5.2. The following lemma illustrates the commutativity of projections \(\hat{P}(t)\) as formula (5.9). **Lemma 5.3**: _For any \(t\in{\mathbb{R}}_{+}\), the operator \(\hat{P}(t)\) is a projection satisfying (5.9)._ **Proof.** By the details above and **(F1)-(F2)**, we derive \[\hat{P}(t)\hat{P}(t) = \hat{T}(t,\iota)U(\iota,\iota)\hat{T}(\iota,t)\hat{T}(t,\iota)U( \iota,\iota)\hat{T}(\iota,t)\] \[= \hat{T}(t,\iota)U(\iota,\iota)U(\iota,\iota)\hat{T}(\iota,t)=\hat {P}(t),\] then \(\hat{P}(t)\) is a projection. Furthermore, for \(t\geq s\) we can calculate that \[\hat{T}(t,s)\hat{P}(s)=\hat{T}(t,s)\hat{T}(s,\iota)U(\iota,\iota)\hat{T}( \iota,t)\hat{T}(t,s)=\hat{P}(t)\hat{T}(t,s).\] This completes the proof of Lemma 5.3. \(\Box\) Step 3: Characterization of bounded solutions. The following two lemmas propose the nonuniform projection integral equation and its property respectively. **Lemma 5.4**: _For some \(s\in{\mathbb{R}}_{+}\), if \(z\in C_{b}([s,+\infty),Z)\) is a solution of (5.5) with \(z(s)=z_{s}\), then_ \[z(t)=T(t,s)P(s)z_{s}+{\cal I}_{s}^{\alpha}T(t,\cdot)P(\cdot)B(\cdot)z(\cdot)+{ \cal I}_{+\infty}^{\alpha}T(t,\cdot)({\rm Id}-P(\cdot))B(\cdot)z(\cdot).\] The proof of this lemma is similar to the method of Lemma 3.1 when \(\epsilon<\min\{\hat{\beta}_{1},\hat{\beta}_{2}\}\) holds. **Lemma 5.5**: _For some \(s\in\mathbb{R}_{+}\), if the function \(\hat{P}(\cdot)\hat{T}(\cdot,s)\in C_{b}([s,+\infty),\mathcal{B}(Z))\), then_ \[\begin{split}\hat{P}(t)\hat{T}(t,s)=& T(t,s)P(s)\hat{P}(s)+ \mathcal{I}_{s}^{\alpha}T(t,\cdot)P(\cdot)B(\cdot)\hat{P}(\cdot)\hat{T}(\cdot,s)\\ &+\mathcal{I}_{+\infty}^{\alpha}T(t,\cdot)(\mathrm{Id}-P(\cdot)) B(\cdot)\hat{P}(\cdot)\hat{T}(\cdot,s).\end{split} \tag{5.14}\] **Proof.** For a given \(\iota\in\mathbb{R}_{+}\), it follows from Lemma 5.1 that the function \(U(t,\iota)\xi\) is a solution of (5.5) with initial value \(U(\iota,\iota)\xi\) for any \(\xi\in Z\) and \(t\geq\iota\). By (5.13) and (5.9), we gain \(U(t,\iota)=\hat{T}(t,\iota)U(\iota,\iota)\), and \[\begin{split}\hat{P}(t)\hat{T}(t,s)&=\hat{T}(t,s) \hat{P}(s)=\hat{T}(t,s)\hat{T}(s,\iota)U(\iota,\iota)\hat{T}(\iota,s)\\ &=\hat{T}(t,\iota)U(\iota,\iota)\hat{T}(\iota,s)=U(t,\iota)\hat{ T}(\iota,s).\end{split}\] Thus, the equation (5.5) has solution in the form of \(U(t,\iota)\xi\) as follows \[z(t)=\hat{P}(t)\hat{T}(t,s)\xi=U(t,\iota)\hat{T}(\iota,s)\xi,\quad\xi\in Z.\] Observing that the above solution is bounded for \(t\geq s\), and \[z(s)=U(s,\iota)\hat{T}(\iota,s)\xi=\hat{P}(s)\hat{T}(s,s)\xi=\hat{P}(s)\xi,\] we employ Lemma 5.4 to complete the proof of Lemma 5.5. \(\Box\) The following Lemma is the projected integral inequality in the case of nonuniform Mittag-Leffler dichotomy, and the method of its proof can be referred to the Lemma 3.2 and Corollary 3.1. **Lemma 5.6**: _Given \(s\in\mathbb{R}_{+}\). 
Assume that the functions \(u\in C_{b}([s,+\infty),\mathbb{R}_{+})\) and \(v\in C_{b}([0,s],\mathbb{R}_{+})\) respectively satisfy the following inequalities_ \[\begin{split} u(t)\leq&\hat{N}_{1}\frac{E_{\alpha}( \hat{\beta}_{1},s)}{E_{\alpha}(\hat{\beta}_{1},t)}E_{\alpha}(\epsilon,s)u_{s}+ \frac{\delta\hat{N}_{1}}{E_{\alpha}(\hat{\beta}_{1},t)}\mathcal{I}_{s}^{ \alpha}E_{\alpha}(\hat{\beta}_{1},t)u(t)\\ &-\delta\hat{N}_{2}E_{\alpha}(\hat{\beta}_{2},t)\mathcal{I}_{+ \infty}^{\alpha}\frac{u(t)}{E_{\alpha}(\hat{\beta}_{2},t)},\quad t\geq s\geq 0, \end{split} \tag{5.15}\] \[\begin{split} v(t)\leq&\hat{N}_{2}\frac{E_{\alpha}( \hat{\beta}_{2},t)}{E_{\alpha}(\hat{\beta}_{2},s)}E_{\alpha}(\epsilon,s)v_{s} +\frac{\delta\hat{N}_{1}}{E_{\alpha}(\hat{\beta}_{1},t)}\mathcal{I}_{0}^{ \alpha}E_{\alpha}(\hat{\beta}_{1},t)v(t)\\ &-\delta\hat{N}_{2}E_{\alpha}(\hat{\beta}_{2},t)\mathcal{I}_{s}^{ \alpha}\frac{v(t)}{E_{\alpha}(\hat{\beta}_{2},t)},\quad s\geq t\geq 0,\end{split} \tag{5.16}\] _where \(u_{s}:=u(s)\) and \(v_{s}:=v(s)\). If_ \[\theta:=\delta\Big{(}\frac{\hat{N}_{1}}{\hat{\beta}_{1}}+\frac{\hat{N}_{2}}{ \hat{\beta}_{2}}\Big{)}<1,\] _then there exist positive constants \(K_{i}\) and \(\lambda_{i}(i=1,2)\) such that_ \[u(t)\leq K_{1}\frac{E_{\alpha}(\lambda_{1},s)}{E_{\alpha}(\lambda_{1},t)}E_{ \alpha}(\epsilon,s)u_{s},\quad t\geq s\geq 0,\] \[v(t)\leq K_{2}\frac{E_{\alpha}(\lambda_{2},t)}{E_{\alpha}(\lambda_{2},s)}E_{ \alpha}(\epsilon,s)v_{s},\quad s\geq t\geq 0,\] _where \(K_{i}=\frac{\hat{N}_{i}}{1-\theta}\), \(\lambda_{i}=\hat{\beta}_{i}-\frac{\delta\hat{N}_{i}}{1-\theta}\)._ Step 4: Norm bounds of evolution operator. We verify that the norms of the operators \(\hat{T}(t,s)|\mathrm{Im}\hat{P}(s)\) and \(\hat{T}(t,s)|\mathrm{Im}(\mathrm{Id}-\hat{P}(s))\) are bounded. **Lemma 5.7**: _For any \(t\geq s\) in \(\mathbb{R}_{+}\), the first inequality in (5.10) holds._ **Proof.** Given \(\xi\in Z\), and for \(t\geq s\geq 0\), assume that \[u(t):=\|\hat{P}(t)\hat{T}(t,s)\xi\|,\] then \(u_{s}=\|\hat{P}(s)\xi\|\). By Lemma 5.5, we know that \(u(t)\) is bounded and satisfies (5.15). It follows from Lemma 5.6 that \[\|\hat{P}(t)\hat{T}(t,s)\xi\|\leq K_{1}\frac{E_{\alpha}(\lambda_{1},s)}{E_{ \alpha}(\lambda_{1},t)}E_{\alpha}(\epsilon,s)\|\hat{P}(s)\xi\|,\quad t\geq s \geq 0,\] where \(K_{1}\) and \(\lambda_{1}\) are given in Lemma 5.6. Again by Lemma 5.3, we gain \[\hat{P}(t)\hat{T}(t,s)=\hat{T}(t,s)\hat{P}(s)=\hat{T}(t,s)\hat{P}(s)\hat{P}(s).\] Taking \(\mu:=\hat{P}(s)\xi\), it yields that \[\|\hat{T}(t,s)\hat{P}(s)\mu\|\leq K_{1}\frac{E_{\alpha}(\lambda_{1},s)}{E_{ \alpha}(\lambda_{1},t)}E_{\alpha}(\epsilon,s)\|\mu\|,\quad t\geq s\geq 0.\] Therefore, we can obtain the desired inequality. \(\Box\) **Lemma 5.8**: _For any \(s\geq t\) in \(\mathbb{R}_{+}\), the second inequality in (5.10) holds._ **Proof.** By analogy with Lemma 5.5, we need to attain an equation for \((\mathrm{Id}-\hat{P}(t))\hat{T}(t,s)\) via Lemma 5.3. Actually, from the variation of constants formula (2.19), we have \[\hat{T}(t,s)=T(t,s)+\mathcal{I}_{s}^{\alpha}T(t,\cdot)B(\cdot)\hat{T}(\cdot,s).\] Let function \(w(t):=\hat{T}(t,\iota)(\mathrm{Id}-\hat{P}(\iota))\) for some \(\iota\in\mathbb{R}_{+}\), then \[w(t)=T(t,\iota)(\mathrm{Id}-\hat{P}(\iota))+\mathcal{I}_{\iota}^{ \alpha}T(t,\cdot)B(\cdot)w(\cdot). 
\tag{5.17}\] From (5.11) and (5.13) with \(t=s=\iota\), we calculate that \[\hat{P}(\iota)=U(\iota,\iota)=P(\iota)+\mathcal{I}_{+\infty}^{\alpha}T(\iota,\cdot)(\mathrm{Id}-P(\cdot))B(\cdot)U(\cdot,\iota).\] Pre-multiplying both sides of the above identity by \(P(\iota)\), we obtain \(P(\iota)\hat{P}(\iota)=P(\iota)\), and \[(\mathrm{Id}-P(\iota))(\mathrm{Id}-\hat{P}(\iota))=\mathrm{Id}-\hat{P}(\iota). \tag{5.18}\] Combining (5.17) with (5.18), and replacing \(t\) with \(s\), we derive \[T(t,s)(\mathrm{Id}-P(s))w(s)= T(t,\iota)(\mathrm{Id}-P(\iota))(\mathrm{Id}-\hat{P}(\iota))\] \[+\mathcal{I}_{\iota}^{\alpha}T(t,s)(\mathrm{Id}-P(s))B(s)w(s)\] \[= T(t,\iota)(\mathrm{Id}-\hat{P}(\iota))+\mathcal{I}_{\iota}^{\alpha}T(t,s)(\mathrm{Id}-P(s))B(s)w(s).\] It follows from (5.17) and the identity above that \[w(t)= T(t,s)(\mathrm{Id}-P(s))w(s)+\mathcal{I}_{\iota}^{\alpha}T(t,\cdot)B(\cdot)w(\cdot)\] \[-\mathcal{I}_{\iota}^{\alpha}T(t,s)(\mathrm{Id}-P(s))B(s)w(s)\] \[= T(t,s)(\mathrm{Id}-P(s))w(s)+\mathcal{I}_{\iota}^{\alpha}T(t,\cdot)P(\cdot)B(\cdot)w(\cdot)\] \[+\mathcal{I}_{s}^{\alpha}T(t,\cdot)(\mathrm{Id}-P(\cdot))B(\cdot)w(\cdot). \tag{5.19}\] On the other hand, by Lemma 5.3, we obtain \[(\mathrm{Id}-\hat{P}(t))\hat{T}(t,s)=\hat{T}(t,s)(\mathrm{Id}-\hat{P}(s)). \tag{5.20}\] Recalling the function \(w(\tau)\), we get \(w(\tau)\hat{T}(\iota,s)=(\mathrm{Id}-\hat{P}(\tau))\hat{T}(\tau,s)\). Post-multiplying both sides of (5.19) by \(\hat{T}(\iota,s)\), we obtain \[(\mathrm{Id}-\hat{P}(t))\hat{T}(t,s)= T(t,s)(\mathrm{Id}-P(s))(\mathrm{Id}-\hat{P}(s))\] \[+\mathcal{I}_{\iota}^{\alpha}T(t,\cdot)P(\cdot)B(\cdot)(\mathrm{Id}-\hat{P}(\cdot))\hat{T}(\cdot,s) \tag{5.21}\] \[+\mathcal{I}_{s}^{\alpha}T(t,\cdot)(\mathrm{Id}-P(\cdot))B(\cdot)(\mathrm{Id}-\hat{P}(\cdot))\hat{T}(\cdot,s).\] Fix \(\xi\in Z\) and consider \(v(t):=\|\hat{T}(t,s)(\mathrm{Id}-\hat{P}(s))\xi\|\) for \(s\geq t\geq 0\), with \(v_{s}=\|(\mathrm{Id}-\hat{P}(s))\xi\|\). According to (5.19) and (5.20), one can verify that the function \(v(t)\) satisfies the inequality (5.16). Employing Lemma 5.6 and arguing as in the proof of Lemma 5.7, we easily acquire the desired inequality and complete the proof. \(\Box\) In conclusion, Lemmas 5.3, 5.7 and 5.8 together yield Theorem 5.2. \(\Box\) The following lemma will help to prove Theorem 5.3. **Lemma 5.9**: _For any \(t\in\mathbb{R}_{+}\), if the constant \(\delta\) described in Theorem 5.3 is small enough, then_ \[\|\hat{P}(t)\|\leq 4\hat{N}E_{\alpha}(\epsilon,t),\quad\|{\rm Id}-\hat{P}(t)\|\leq 4\hat{N}E_{\alpha}(\epsilon,t). \tag{5.22}\] **Proof.** Replacing \(s\) by \(t\) and pre-multiplying both sides of (5.14) by \(({\rm Id}-P(t))\), we have \[({\rm Id}-P(t))\hat{P}(t)={\cal I}^{\alpha}_{+\infty}T(t,\cdot)({\rm Id}-P(\cdot))B(\cdot)\hat{P}(\cdot)\hat{T}(\cdot,t). \tag{5.23}\] It follows from Lemmas 5.7 and 5.3 that for \(\tau\geq t\geq 0\), \[\|\hat{P}(\tau)\hat{T}(\tau,t)\|=\|\hat{T}(\tau,t)\hat{P}(t)\hat{P}(t)\|\leq K_{1}\frac{E_{\alpha}(\lambda_{1},t)}{E_{\alpha}(\lambda_{1},\tau)}E_{\alpha}(\epsilon,t)\|\hat{P}(t)\|.
\tag{5.24}\] By (5.23) and (5.4) we calculate that \[\|({\rm Id}-P(t))\hat{P}(t)\| \leq -{\cal I}^{\alpha}_{+\infty}\|T(t,\cdot)({\rm Id}-P(\cdot))\|\|B(\cdot)\|\|\hat{P}(\cdot)\hat{T}(\cdot,t)\| \tag{5.25}\] \[\leq -E_{\alpha}(\hat{\beta}_{2}+\lambda_{1}+\epsilon,t)\|\hat{P}(t)\|{\cal I}^{\alpha}_{+\infty}\frac{\delta K_{1}\hat{N}_{2}}{E_{\alpha}(\hat{\beta}_{2}+\lambda_{1}+\epsilon,t)}\] \[\leq \frac{\delta K_{1}\hat{N}_{2}}{\hat{\beta}_{2}+\lambda_{1}-\epsilon}\|\hat{P}(t)\|,\] where the constant \(\epsilon\) was chosen to satisfy \(\epsilon<\min\{\hat{\beta}_{1},\hat{\beta}_{2}\}\), which guarantees that the denominator above satisfies \(\hat{\beta}_{2}+\lambda_{1}-\epsilon>0\). Analogously to (5.23), setting \(s=t\) and pre-multiplying both sides of (5.21) by \(P(t)\), we obtain \[P(t)({\rm Id}-\hat{P}(t))={\cal I}^{\alpha}_{\iota}T(t,\cdot)P(\cdot)B(\cdot)({\rm Id}-\hat{P}(\cdot))\hat{T}(\cdot,t). \tag{5.26}\] Using Lemma 5.8, for \(t\geq\tau\geq 0\) this implies \[\|({\rm Id}-\hat{P}(\tau))\hat{T}(\tau,t)\|\leq K_{2}\frac{E_{\alpha}(\lambda_{2},\tau)}{E_{\alpha}(\lambda_{2},t)}E_{\alpha}(\epsilon,t)\|{\rm Id}-\hat{P}(t)\|. \tag{5.27}\] From (5.26) and (5.4) one can compute that \[\|P(t)({\rm Id}-\hat{P}(t))\| \leq {\cal I}^{\alpha}_{\iota}\|T(t,\cdot)P(\cdot)\|\|B(\cdot)\|\|({\rm Id}-\hat{P}(\cdot))\hat{T}(\cdot,t)\| \tag{5.28}\] \[\leq \frac{\delta K_{2}\hat{N}_{1}}{E_{\alpha}(\hat{\beta}_{1}+\lambda_{2}-\epsilon,t)}\|{\rm Id}-\hat{P}(t)\|{\cal I}^{\alpha}_{\iota}E_{\alpha}(\hat{\beta}_{1}+\lambda_{2}-\epsilon,t)\] \[\leq \frac{\delta K_{2}\hat{N}_{1}}{\hat{\beta}_{1}+\lambda_{2}-\epsilon}\|{\rm Id}-\hat{P}(t)\|,\] where, similarly, the chosen constant satisfies \(\epsilon<\min\{\hat{\beta}_{1},\hat{\beta}_{2}\}\). Obviously, \[\hat{P}(t)-P(t)=({\rm Id}-P(t))\hat{P}(t)-P(t)({\rm Id}-\hat{P}(t)).\] Taking \(\hat{N}:=\max\{\hat{N}_{1},\hat{N}_{2}\}\) and \(\hat{\beta}:=\min\{\hat{\beta}_{1},\hat{\beta}_{2}\}\) and combining (5.25) with (5.28), we obtain \[\|\hat{P}(t)-P(t)\| \leq \frac{\delta K_{1}\hat{N}_{2}}{\hat{\beta}_{2}+\lambda_{1}-\epsilon}\|\hat{P}(t)\|+\frac{\delta K_{2}\hat{N}_{1}}{\hat{\beta}_{1}+\lambda_{2}-\epsilon}\|\mathrm{Id}-\hat{P}(t)\| \tag{5.29}\] \[\leq \hat{\eta}(\|\hat{P}(t)\|+\|\mathrm{Id}-\hat{P}(t)\|),\] where \[\hat{\eta}=\frac{\delta\hat{N}^{2}\hat{\beta}}{2\hat{\beta}^{2}-5\delta\hat{N}\hat{\beta}-\epsilon(\hat{\beta}-2\delta\hat{N})}.\] Moreover, by (5.4) with \(t=s\), it is easy to obtain that \[\|P(t)\| \leq \hat{N}E_{\alpha}(\epsilon,t),\quad\|\mathrm{Id}-P(t)\|\leq\hat{N}E_{\alpha}(\epsilon,t).\] Together with (5.29), this yields \[\|\hat{P}(t)\| \leq \|\hat{P}(t)-P(t)\|+\|P(t)\|\] \[\leq \hat{\eta}(\|\hat{P}(t)\|+\|\mathrm{Id}-\hat{P}(t)\|)+\hat{N}E_{\alpha}(\epsilon,t).\] Since \(\|(\mathrm{Id}-\hat{P}(t))-(\mathrm{Id}-P(t))\|=\|\hat{P}(t)-P(t)\|\), we also derive \[\|(\mathrm{Id}-\hat{P}(t))\| \leq \|\hat{P}(t)-P(t)\|+\|\mathrm{Id}-P(t)\|\] \[\leq \hat{\eta}(\|\hat{P}(t)\|+\|\mathrm{Id}-\hat{P}(t)\|)+\hat{N}E_{\alpha}(\epsilon,t).\] Together these imply that \[\|\hat{P}(t)\|+\|\mathrm{Id}-\hat{P}(t)\|\leq 2\hat{\eta}(\|\hat{P}(t)\|+\|\mathrm{Id}-\hat{P}(t)\|)+2\hat{N}E_{\alpha}(\epsilon,t),\] and \[\|\hat{P}(t)\|+\|\mathrm{Id}-\hat{P}(t)\|\leq\frac{2\hat{N}E_{\alpha}(\epsilon,t)}{1-2\hat{\eta}}.\] Choosing \(\hat{\eta}<1/4\), we then have \[\|\hat{P}(t)\|+\|\mathrm{Id}-\hat{P}(t)\|\leq 4\hat{N}E_{\alpha}(\epsilon,t),\] which proves Lemma 5.9. \(\qquad\Box\) Finally, we end this paper with the proof of the roughness of the nonuniform Mittag-Leffler dichotomy.
**Proof of Theorem 5.3.** From (5.24) and (5.22), we show that \[\|\hat{P}(\tau)\hat{T}(\tau,t)\| \leq \frac{\hat{N}\hat{\beta}}{\hat{\beta}-2\delta\hat{N}}\frac{E_{\alpha}(\hat{\lambda},t)}{E_{\alpha}(\hat{\lambda},\tau)}E_{\alpha}(\epsilon,t)\|\hat{P}(t)\|\] \[\leq \frac{4\hat{N}^{2}\hat{\beta}}{\hat{\beta}-2\delta\hat{N}}\frac{E_{\alpha}(\hat{\lambda},t)}{E_{\alpha}(\hat{\lambda},\tau)}E_{\alpha}(2\epsilon,t),\quad\tau\geq t\geq 0,\] where \(\hat{\lambda}=\hat{\beta}-\frac{\delta\hat{N}\hat{\beta}}{\hat{\beta}-2\delta\hat{N}}\). Analogously, it follows from (5.27) and (5.22) that \[\|(\mathrm{Id}-\hat{P}(\tau))\hat{T}(\tau,t)\|\leq\frac{4\hat{N}^{2}\hat{\beta}}{\hat{\beta}-2\delta\hat{N}}\frac{E_{\alpha}(\hat{\lambda},\tau)}{E_{\alpha}(\hat{\lambda},t)}E_{\alpha}(2\epsilon,t),\quad t\geq\tau\geq 0.\] Therefore, we obtain the desired inequalities of the form (5.4), and the proof is complete. \(\Box\)
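As a complement (this illustration is ours, not part of the original argument), the following self-contained numerical sketch checks the roughness estimate of Corollary 4.1 on a toy diagonal system. It assumes the conformable convention \(E_{\alpha}(\beta,t)=e^{\beta t^{\alpha}/\alpha}\), under which \(\mathcal{T}^{\alpha}x=A(t)x\) is equivalent to \(\dot{x}=t^{\alpha-1}A(t)x\) for \(t>0\); the perturbation \(B(t)\) is taken diagonal so that the stable direction can be tracked directly.

```python
# Numerical sketch (illustration only, not from the paper): verify that a small
# perturbation B(t) preserves the Mittag-Leffler decay rate of Corollary 4.1.
# Assumed convention: E_alpha(beta, t) = exp(beta * t**alpha / alpha), so that
# T^alpha x = A(t) x is equivalent to dx/dt = t**(alpha - 1) * A(t) x for t > 0.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta1, beta2 = 0.5, 1.0, 1.5     # dichotomy rates; here N = 1, beta = 1
A = np.diag([-beta1, beta2])            # stable/unstable split, P = diag(1, 0)

def E(beta, t):
    return np.exp(beta * t**alpha / alpha)

def rhs(t, x, eps):
    # diagonal perturbation with sup-norm eps < beta / (5 N^2) = 0.2
    B = eps * np.diag([np.sin(t), np.cos(t)])
    return t**(alpha - 1) * (A + B) @ x

t0, t1, eps = 1.0, 20.0, 0.05
sol = solve_ivp(rhs, (t0, t1), [1.0, 0.0], args=(eps,), rtol=1e-10, atol=1e-12)
decay = abs(sol.y[0, -1])               # stable-direction component at t = t1

# Corollary 4.1 predicts decay at least as fast as (25 N^2 / 9) * E-ratio:
bound = (25 / 9) * E(beta1 - 3 * eps, t0) / E(beta1 - 3 * eps, t1)
print(f"|x(t1)| = {decay:.3e}  <=  bound {bound:.3e}: {decay <= bound}")
```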
2303.07899
User Preferences of Spatio-Temporal Referencing Approaches For Immersive 3D Radar Charts
The use of head-mounted display technologies for virtual reality experiences is inherently single-user-centred, allowing for the visual immersion of its user in the computer-generated environment. This isolates them from their physical surroundings, effectively preventing external visual information cues, such as the pointing and referral to an artifact by another user. However, such input is important and desired in collaborative scenarios when exploring and analyzing data in virtual environments together with a peer. In this article, we investigate different designs for making spatio-temporal references, i.e., visually highlighting virtual data artifacts, within the context of Collaborative Immersive Analytics. The ability to make references to data is foundational for collaboration, affecting aspects such as awareness, attention, and common ground. Based on three design options, we implemented a variety of approaches to make spatial and temporal references in an immersive virtual reality environment that featured abstract visualization of spatio-temporal data as 3D Radar Charts. We conducted a user study (n=12) to empirically evaluate aspects such as aesthetic appeal, legibility, and general user preference. The results indicate a unified favour for the presented location approach as a spatial reference while revealing trends towards a preference of mixed temporal reference approaches dependent on the task configuration: pointer for elementary, and outline for synoptic references. Based on immersive data visualization complexity as well as task reference configuration, we argue that it can be beneficial to explore multiple reference approaches as collaborative information cues, as opposed to following a rather uniform user interface design.
Nico Reski, Aris Alissandrakis, Andreas Kerren
2023-03-14T13:38:20Z
http://arxiv.org/abs/2303.07899v1
# User Preferences of Spatio-Temporal Referencing Approaches For Immersive 3D Radar Charts ###### Abstract The use of head-mounted display technologies for virtual reality experiences is inherently single-user-centred, allowing for the visual immersion of its user in the computer-generated environment. This isolates them from their physical surroundings, effectively preventing external visual information cues, such as the pointing and referral to an artifact by another user. However, such input is important and desired in collaborative scenarios when exploring and analyzing data in virtual environments together with a peer. In this article, we investigate different designs for making spatio-temporal references, i.e., visually highlighting virtual data artifacts, within the context of Collaborative Immersive Analytics. The ability to make references to data is foundational for collaboration, affecting aspects such as awareness, attention, and common ground. Based on three design options, we implemented a variety of approaches to make spatial and temporal references in an immersive virtual reality environment that featured abstract visualization of spatio-temporal data as 3D Radar Charts. We conducted a user study (n=12) to empirically evaluate aspects such as aesthetic appeal, legibility, and general user preference. The results indicate a unified favour for the presented _location_ approach as a spatial reference while revealing trends towards a preference of mixed temporal reference approaches dependent on the task configuration: _pointer_ for elementary, and _outline_ for synoptic references. Based on immersive data visualization complexity as well as task reference configuration, we argue that it can be beneficial to explore multiple reference approaches as collaborative information cues, as opposed to following a rather uniform user interface design. keywords: awareness, collaborative immersive analytics, computer-supported cooperative work, empirical study, virtual reality, visual information cues, 3D radar chart ## 1 Introduction Utilizing immersive display and interaction technologies for the purpose of data exploration, analytical reasoning, and decision making, i.e., Immersive Analytics (IA), has become an increasingly intriguing research domain as relevant hardware and software technologies advance in affordability, accessibility, and usability (Dwyer et al., 2018; Skarbez et al., 2019). Fonnet and Prié (2021) found that the number of publications presenting systems that feature three-dimensional (3D) graphics, stereoscopic vision capabilities, and head-tracking support within the context of IA has noticeably increased since 2012. Data interpretation and subsequent analytical reasoning often rely on collaboration, i.e., multiple analysts combine knowledge and expertise to discuss their findings and to make decisions - a process that is much desired by the community (Hackathorn and Margolis, 2016; Heer and Agrawala, 2008; Isenberg et al., 2011). However, the by-default single-user-centred characteristics of many immersive interfaces are often in conflict with such anticipated collaboration, requiring careful design considerations to support effective collaboration of multiple users (Billinghurst et al., 2018; Ens et al., 2019; Skarbez et al., 2019).
Research published within the domain of Computer-Supported Cooperative Work (CSCW) shows that the understanding, support, and evaluation of collaboration when using information and communication technologies is a complex task that demands the consideration of many different aspects and dimensions, for instance as elaborated by various descriptive models and frameworks (Gutwin and Greenberg, 2002; Isenberg et al., 2012; Johansen et al., 1988; Lee and Paine, 2015; Neumayr et al., 2018; Tang et al., 2006). Churchill and Snowdon (1998) already described in 1998 important characteristics that should be addressed when designing for collaboration in Virtual Environments (VEs) - a topic still relevant today. Among others, collaborative VEs should strive to enable users to share their context and make them aware of each other, as well as allow them to discuss their respective findings (Churchill and Snowdon, 1998). As an inherently multi-disciplinary research area, there is a lot of potential to utilize existing knowledge, guidelines, and recommendations when designing Collaborative Immersive Analytics (CIA) experiences (Dwyer et al., 2018; Skarbez et al., 2019). For instance, concepts such as Awareness (Gutwin and Greenberg, 2002; Heer and Agrawala, 2008), Communication, Cooperation, Co-ordination (Andriessen, 2003), Reference, and Deixis (Heer and Agrawala, 2008) provide foundational concepts that can be utilized to implement anticipated collaborative features to support data analysts accordingly. Support for collaboration within the context of IA has been deemed integral for its success and establishment of valuable data analysis tools, with several research challenges requiring further exploration (Ens et al., 2021). A promising direction to enable collaboration in VEs, and thus support various collaborative information cues, is the use of avatars, i.e., a virtual representation of the other user(s) in the VE, either co-located or connected remotely (Steed and Schroeder, 2015; Xia et al., 2018). The visual design of such avatars has been investigated in a multitude of studies, for instance, to determine differences between realistic and abstract avatar representations (Sun et al., 2019), to explore effects of nonverbal expression using highly expressive avatars (Wu et al., 2021), or to determine how avatar appearances influence aspects of communication and interaction (Heidicker et al., 2017). Such collaboration and the use of avatars often imply that all collaborators are part of the same immersive VE, i.e., using homogeneous device types and technologies that enable them to share their 3D information space. However, such an approach is not always feasible, for instance when using heterogeneous display and interaction technologies to collaborate on the same data across different applications. Particularly within the context of CIA, the mixture and integration of data analysis tools that are based on different technologies are highly anticipated, in order to build workflows that synergize with interactive Information Visualization (InfoVis) and Visual Analytics (VA) tools (Ens et al., 2021; Isenberg, 2014; Wang et al., 2019). With this in mind, we are interested in the investigation of design approaches for making visual references in immersive data visualizations, assisting the immersed analyst to focus their attention towards a point of reference as indicated through a non-immersed collaborator.
Based on different design options, we implemented various approaches to make spatial and temporal references inside a Virtual Reality (VR) environment that enables an analyst to explore a multivariate dataset using a head-mounted display (HMD). We envision that such visual references facilitate natural communication and coordination with the immersed user, especially in scenarios where analysts use heterogeneous display and interaction technologies (hybrid) in order to have individual perspectives and roles during the data analysis (asymmetric) (Ens et al., 2021; Isenberg, 2014; Wang et al., 2019). As the HMD user is visually isolated from their real-world surroundings, common means of referencing through a collaborator (e.g., pointing to an artifact) are no longer available, even in co-located work spaces. Consequently, there is a need for adequate means of referencing as a sufficient replacement. Our research focus is therefore concerned with the design of visual references as nonverbal information cues, particularly within the context of immersive spatio-temporal data visualization. This allows us to contribute to the emerging field of CIA as follows: * We describe generalized options for the design of visual references in order to make spatial and temporal references in immersive VR data visualizations. * We report on the results of an empirical evaluation that used subjective methods to examine several implemented visual referencing approaches in regard to aesthetics, legibility, and general user preference. * We discuss the results and reflect on the implemented referencing approaches, providing directions that can guide the design of similar collaborative information cues. The structure of the article is as follows: Section 2 provides an overview of relevant CSCW terminology, describes existing work of design approaches to integrate collaborative information cues in VR applications, and provides additional motivation for the support of referencing within the context of CIA. We describe the reference design options and subsequent implementation for the various spatio-temporal reference approaches in Section 3. We conducted an empirical evaluation to assess aesthetics, legibility, and general user preference for all presented reference approaches. Section 4 describes the applied evaluation methodology, providing details on the study design and measures. Based on the results as presented in Section 5, we discuss the gathered data thereafter in Section 6, contributing with reflections on the visual reference design for their consideration in similar applications. Finally, we conclude by summarizing our work and providing some directions for future work in Section 7. ## 2 Related Work ### Understanding Relevant CSCW Terminology Rich CSCW terminology exists to describe various aspects and dimensions of collaboration, especially for scenarios of synchronous collaboration, i.e., multiple users working on a subject matter at the same time. Arguably, it can be difficult to differentiate terms such as awareness, attention, common ground, focus, referencing, and so forth, especially as some might be used ambiguously (Schmidt, 2002). We adopt CSCW terminology as follows within the scope of our investigation. Schmidt (2002) elaborates on the many facets of _Awareness_, describing it as a collaborator's ability to align and integrate their own actions with those of others, without interruption and major efforts, in an organic way. 
Heer & Agrawala (2008) expand on that by stating that _Awareness_ should allow for the assessment of work and task completion, enabling each user to make decisions on where to allocate their next efforts. Closely aligned with awareness are also the concepts of _Common Ground_, describing the state of the collaborators' shared understanding as the foundation to enable communication among them accordingly (Clark & Brennan, 1991), and _Grounding_ as the process of achieving that state (Heer & Agrawala, 2008). Andriessen (2003) provides a differentiation between _Communication_, _Co-operation_, and _Co-ordination_ as general group processes. They categorize _Communication_ as an interpersonal exchange process, utilizing tools for the exchange of (verbal and nonverbal) signals (Andriessen, 2003). _Co-operation_ is described as the task-oriented process of collaborators actually working together, making decisions, and co-manipulating artifacts. Also categorized as a task-oriented process is the concept of _Co-ordination_, enabling the collaborators to adjust their individual and group efforts to solve a given task. _Attention_(Kristoffersen & Ljungberg, 1999; Shneiderman et al., 2017) and _Focus_(Schmidt, 2002; Snowdon et al., 2001) within the context of collaboration are often seen ambiguously, generally referring to a user's visual alignment to a specific area or point of interest in the shared workspace. When working together, a frequently used communication action is to make a _Reference_, for instance in order to indicate a specific object, area, person, or time, often through a mixture of verbal (e.g., talk) and nonverbal (e.g., point) signals (Heer & Agrawala, 2008). Dissecting collaborative work is a complex endeavor, as many states and actions are inherently interconnected and dependent on each other. To describe just one example, the ability to make a reference as means of communication allows a collaborator to focus and pay attention to respectively referred artifacts, impacting aspects of each individual's awareness as well as the team's coordination and common ground. Such aspects are essential for the team's joint exploration, interpretation, and discussion of data. It is apparent that careful design considerations are required when creating systems and environments that aim to support such collaborative work. ### Collaborative Information Cues in Virtual Reality Providing means that aid users in their communication is vital for their collaboration, as emphasized in Section 2.1. The ability to make visual references is arguably even more important in hybrid technology scenarios where not every collaborator is immersed in VR, requiring features to bridge such shortcomings. Some interesting work has been conducted to provide collaborative information cues in immersive VR environments. Considering an asymmetric user role setup, Peter et al. (2018) presented a tool for a non-immersed _VR-Guide_, providing features to support the guidance of an immersed VR user. Their implemented proof-of-concept prototype included three visual approaches as non-verbal information cues, aiming to catch the VR user's attention and thus guide them towards an artifact as selected through the non-immersed guide. First, they implemented an outline effect, visually highlighting the border of the selected artifact independent of its occlusion, i.e., the outline is visible even with other objects in the VE between the user and the targeted artifact, leaving it otherwise hidden from the user's view. 
Second, they implemented a realistic-looking light beam technique, similar to a spotlight, allowing the highlighted artifact to be visually distinguished from others. And third, a virtual drone was implemented that uses a laser beam to point to the selected artifact. An interesting aspect of their tool is that it allows the VR-Guide to customize these signals, for instance by adjusting the color or thickness of the outline effect, or the size and intensity of the spotlight. Their evaluation revealed trends towards a higher acceptance of the outline technique compared to the light beam, with participants interestingly expressing a desire for a combination of the outline and virtual drone techniques. A visually different approach was presented by Sugiura et al. (2018), implementing a virtual hand in a pointing posture that literally points to the selected artifact. Their approach was partially inspired by previous work of Stafford et al. (2006) but adapted to support a setup in which the non-immersed user operates an interactive tabletop application, using touch interaction to select individual objects from a top-down view that are then referred to accordingly in VR (Sugiura et al., 2018). The results of their preliminary user study suggest the overall usability of their prototype, but a formal evaluation of the effectiveness of the implemented guiding technique is lacking. Welsford-Ackroyd et al. (2020) evaluated a prototype that allowed interaction between an HMD user and a non-HMD user operating a large-scale immersive display. A laser pointer technique enabled the non-HMD user, in a more "spectator" role, to make a visual reference that would create a corresponding virtual marker in VR for the HMD user. The results of their evaluation indicate that the users could communicate effectively using the provided features. Lacoche et al. (2017) investigated different visual approaches to raise awareness of other users in a co-located collaborative VR environment, aiming to prevent physical collisions. First, an extended grid representation visualizes the other user's position through a grid-shaped cylinder, allowing a VR user to avoid navigating to the same position while still being able to "look beyond" (preventing occlusion issues). Second, another user was indicated by means of a "Ghost Avatar", displaying some features of that user's position and orientation, e.g., a semi-transparent model of their HMD and hand controllers. Third, a safe navigation floor utilizes a heat map-inspired approach to visualize on the virtual floor where it is safe to go and where it is not, avoiding collisions with the other user as well as movement beyond the physical boundaries of the VR area. After introducing a fourth approach (separated tracked spaces) that limited each user's designated VR area with a typical bounding box grid, an evaluation was conducted to investigate the effectiveness of and user preference for the different approaches. Their results point towards better performance of the extended grid and ghost avatar compared to the safe navigation floor approach. Chen et al. (2018) investigated different modalities (visual, auditory, vibrotactile) for providing directional information cues during a multi-task scenario in VR. Based on task completion time and accuracy measurements, the results of their study indicate a preference for direction cues based on visual and vibrotactile stimuli over auditory ones. Casarin et al.
(2018) described a toolkit that allows multiple users to interact synchronously in VEs. To avoid simultaneous manipulation of the same artifact, the authors implemented an abstract interaction filter to visually indicate whether or not an artifact is available for interaction. Three different states (with applied color coding) provided visual feedback in real time: an artifact is available for interaction (original color), hovered (green color), or currently being manipulated (orange color). Their toolkit was validated based on the results of an evaluation that featured a collaborative authoring task, but further investigations are necessary to address individual aspects of their toolkit, such as the design of the collaborative information cues.

### Motivation to Support Referencing in CIA

A recent survey on IA-related research reveals a lack of work in regard to collaborative systems (Fonnet and Prié, 2021). Considering the importance of collaboration within that context (Ens et al., 2021), further investigations are needed to provide design recommendations for CIA experiences. The work presented throughout Sections 2.1 and 2.2 provides important directions and guidance for further examining similar matters. In particular, we are motivated to investigate the design of collaborative information cues within the context of CIA as follows. As a starting point, we assume that an analyst is utilizing HMD technologies to explore data in immersive VR. To support collaborative exploration and interpretation of data, the VE needs to provide features that aid the VR user in following their collaborator's signals, e.g., _nonverbal_ references. Within the scope of this work, we focus on _visual_ references, as we anticipate active verbal communication between the analysts during their synchronous collaboration. Such references aim to facilitate the VR user's ability to focus their attention on an indicated artifact in the VE, allowing the subsequent establishment of common ground between the collaborators to further discuss and interpret the data. To the best of our knowledge, similar referencing and guiding approaches are yet to be investigated in regard to abstract data visualization. Figure 1 illustrates the overall context and scenario of the described research focus.

Figure 1: Context and scenario of the research focus.

## 3 Spatio-Temporal Referencing in VR

Visualizing spatio-temporal data using immersive technologies is a comparatively common use case (Fonnet and Prié, 2021). It is arguably easy to conceive of utilizing the additional (third) dimension for the mapping of data variables. For instance, aspects of a multivariate dataset in regard to spatial dimensions can be placed in respective relation to each other, whereas common two-dimensional (2D) visualization techniques have the potential to be expanded into 3D space, e.g., for displaying time-oriented data (Aigner et al., 2011, Chapter 7). Several toolkits have recently been presented that aim to assist the practical implementation of (abstract) 3D visualizations within the context of IA (Butcher et al., 2019; Cordeil et al., 2019; Sicat et al., 2019). Within the scope of our investigation, we adopt the approach of _3D Radar Charts_ as an immersive time visualization as presented by Reski et al. (2020). In this approach, different time-series data variables are placed as 2D frequency polygons in a radial arrangement around a time axis in 3D, allowing for the visual exploration of the data dimensions (Reski et al., 2020).
Individual values of these displayed data variables can be connected to each other, representing the more traditional radar chart pattern (Kolence and Kiviat, 1973), enabling further interpretation of and interaction with the time-series data (Reski et al., 2020). In practice, the virtual 3D space can be populated by placing multiple such 3D Radar Chart visualizations, each representing a spatial dimension in a dataset (e.g., country, municipality, city, or GPS coordinates). This allows an immersed analyst to explore data in regard to both spatial and temporal dimensions. Figure 2 illustrates the setup of such a virtual analysis environment, serving as the foundation for the practical implementation of our spatial and temporal reference design approaches according to the design options and task types presented throughout the following sections.

Figure 2: Spatio-Temporal Data Visualization in VR, as adopted from Reski et al. (2020). **Top Left**: 3D Radar Chart visualization. **Top Right**: Excerpt of the VE, taken from an angled top-down perspective to provide an overview impression. **Bottom**: VE from the immersed user’s point of view (POV). **General description**: Individual 3D Radar Charts are spatially arranged to represent individual European countries, which are visually differentiated on the virtual floor as extruded polygons. Each 3D Radar Chart here features five data variables as color-coded semi-transparent frequency polygons, each including 150 consecutive time events as a time series. The presented data has been generated artificially for illustration purposes.

### Design Options

To provide a reference that catches the user's attention and thus guides them toward an artifact, a signal is required that can be distinguished from the conventional environment. Within the scope of our investigation, we focus on _visual_ sensory input, i.e., the manipulation of the immersive data visualization through means that allow its user to visually perceive references by looking around.1 We defined three design options to guide the creation of visual references as collaborative information cues: _Modify Artifact_, _Add Artifact_, and _Modify Environment_. These three design options are held purposefully generic and low-level to allow application across many different scenarios and use cases (see Table 1).

Footnote 1: At this stage, we do not consider other sensory input, such as for instance from auditory or haptic interfaces.

**Modify Artifact (MA)**: The MA design option follows the concept of temporarily _modifying the visual appearance of the referred artifact_, aiming to distinguish it from all others accordingly. It is important that such a modification allows the user to detect the referred artifact even if the process of the actual modification was not observed, i.e., the transition from normal to referred visual appearance. We believe that this is particularly important within the context of VR, as it cannot be guaranteed that the referred artifact is within the immersed user's field of view at all times, and thus the change in visual state might not be observed. Depending on the complexity of the implemented modification to the (existing) artifact, we consider this option to be comparatively friendly in regard to the required computational resources, as it is likely that no new artifacts and geometry need to be added to the scene.
At the same time, the visual alteration of the referred artifact should be carefully considered, as potential visual mappings and data encodings may be lost through the modification of its original appearance.

**Add Artifact (AA)**: The AA design option follows the concept of temporarily _adding a visual artifact in close proximity to the referred artifact_, serving as a visual annotation. Such an added artifact should strive to (1) enable clear identification of the associated artifact it is intended to signal, allowing the VR user to effectively focus on the referred artifact, (2) be easily detectable and distinguishable from all other artifacts in the scene, but at the same time (3) not obstruct or occlude other important information in the scene. Using this option, the visual appearance and integrity of the referred artifact are maintained in their original state, thus not losing any potentially applied visual mapping and data encoding, which are likely to be particularly relevant within analytical use cases. Adding an artifact to the scene, although temporary, requires some practical considerations. For instance, depending on the added artifact's complexity, such as its geometry, its addition to the scene will likely demand additional computational resources. Thus, it is important to ensure that this is implemented in a way that avoids a noticeable impact on the immersive application's performance. Furthermore, based on the designer's and developer's assessments, temporarily adding (and removing) artifacts should be possible to implement in a comparatively effortless and reasonable way.

**Modify Environment (ME)**: The ME design option follows the concept of temporarily _modifying existing artifacts of the environment_ that are in close proximity to, or can otherwise be directly associated with, the referred artifact. As opposed to the AA option, rather than introducing an additional artifact to the scene, the ME option builds upon the utilization of existing elements or features in the computer-generated environment to establish a visual reference. Naturally, this requires that the overall visualization and virtual scene are complex enough to provide such modification opportunities to begin with. If this requirement is met and the environment indeed provides such artifacts that also allow for a semantic inference to the referred artifact, their appearance may be modified in order to act as a visual signal accordingly.

\begin{table}
\begin{tabular}{p{71.1pt} p{113.8pt} p{113.8pt}} \hline Design Option & Categorization & Examples from Related Work \\ \hline Modify Artifact & node (Figure 3), highlight (Figures 4 and 5) & interaction availability filter (Casarin et al., 2018), “ghost” avatar (Lacoche et al., 2017) \\ \hline Add Artifact & pillar (Figure 3), pointer (Figures 4, 5 and 6), symbol (Figures 4, 5 and 7) & extended grid (Lacoche et al., 2017), light beam (Peter et al., 2018), virtual drone (Peter et al., 2018), virtual hand (Sugiura et al., 2018), marker (Welsford-Ackroyd et al., 2020) \\ \hline Modify Environment & location (Figure 3), outline (Figures 4 and 5) & safe navigation floor (Lacoche et al., 2017), outline (Peter et al., 2018) \\ \hline \end{tabular}
\end{table}
Table 1: Design Options – Categorization of the implemented reference design approaches according to the presented work (see Sections 3.3 and 3.4) as well as the related work (see Section 2.2).
Generally, we believe this approach is likely to have a comparatively low computational impact, as it utilizes artifacts and geometry that already exist in the scene. At the same time, the original visual integrity of the referred artifact is maintained (as opposed to the MA design option).

### Reference Design Preface and Task Types

The overall setup of the applied immersive data visualization is presented in the introduction of Section 3. Naturally, the reference design as a collaborative information cue is inherently dependent on the applied visualization, its purpose, and its use case, e.g., what type of data is displayed using what technique. In other words, what are the complexity and composition of the virtual 3D space? This is important for determining what to potentially signal to, allowing for the application of the presented reference design options accordingly. Within the presented scenario of spatio-temporal data exploration, we follow the _task type_ definitions as described by Andrienko & Andrienko (2006, Chapter 3), differentiating between _elementary_ and _synoptic_ tasks. Elementary tasks are concerned with the reference to one individual data entity, e.g., one location (spatial) or one point in time (temporal). Synoptic tasks are concerned with the reference to multiple data entities, e.g., a group of locations (spatial) or a range of multiple points in a time series (temporal). Under consideration of the 3D Radar Chart approach and the different task types, our overall VE (see Figure 2) can be described as follows. Individual 3D Radar Charts can be uniquely identified by their geospatial location (country). Thus, in regard to the spatial reference design, signaling both to an individual location and to multiple locations should be possible. Each 3D Radar Chart features multiple time-series data variables. It should be possible to refer to a single point as well as to a range of consecutive points in time. Such time point and time range references should be possible both across all time-series data variables as well as for individual ones. Based on this scene understanding, we can set out to design and implement spatial and temporal references as collaborative information cues as described throughout the remainder of this section. Additionally, we provide a supplemental 360\({}^{\circ}\) interactive web application that can be viewed online, illustrating all reference designs from a VR user's POV.2

Footnote 2: Supplemental 360\({}^{\circ}\) interactive web application illustrating all reference designs as described throughout Sections 3.3 and 3.4: vrxar.lnu.se/apps/tdrc-ref-360/

### Spatial Reference Design

For the purpose of referring to specific 3D Radar Chart instances in the VE, we designed three different spatial reference approaches in accordance with the presented design options (see Figure 3). First, the _pillar_ design follows the AA option, creating a semi-transparent cylinder with the 3D Radar Chart at its center, surrounding it accordingly. The pillar's height is scaled to make it appear to "shine from the top down" in the VE, similar to a spotlight. Second, the _location_ design follows the ME option, modifying the color of the extruded country polygon on the floor with which each 3D Radar Chart is directly associated. Third, the _node_ design follows the MA option, visually separating the referred 3D Radar Chart from all others by uniquely coloring all its data variable axes.
### Temporal Reference Design

For the purpose of referring to single points in time as well as to time ranges (multiple consecutive points in a time series) within a 3D Radar Chart instance, we designed four different temporal reference approaches in accordance with the presented design options (see Figure 4). First, the _highlight_ design follows the MA option, visualizing a colored mesh for the time point across all data variables as an elementary task reference, and respectively coloring the time range segments in each data variable axis as a synoptic task reference. Second, the _outline_ design follows the ME option, creating a closed visual loop along the outside of all included temporal data points. Third, the _pointer_ design follows the AA option and adds two artifacts to the visualization as a reference: each included temporal data point is encapsulated by a small visual sphere, further assisted by a juxtaposed 3D pointer model that directly indicates the respective time point or time range. Fourth, we implemented a _symbol_ design, also based on the AA option and following a similar approach as the pointer design. However, instead of a pointer, we decided to juxtapose the virtual sphere with a symbol that can be interpreted by the user to infer further meaning. As a practical illustration, we decided to use a magnifying glass symbol in a "_let us investigate this_ [temporal reference]" analogy.

The complexity of the 3D Radar Chart visualization allows for different temporal reference configurations. Making a reference across all data variables is illustrated in Figure 4, whereas references in individual data variables are presented in Figure 5. Additionally, we are interested in exploring different configurations of the pointer and symbol approaches, as presented in Figure 6 and Figure 7, respectively. For instance, the placement of the pointer indicator could encode further analysis-related information, such as through a _neutral_, _positive_, or _negative_ pointing direction, e.g., to provide a comparison to a prior data value or to indicate an overall trend across the referred time range. Similarly, the application of different symbols could provide an additional collaborative information cue, for instance a reason why the collaborator is making a reference in the first place, such as to _investigate_ because they found something they deem _exciting_, or because they want to further _talk_ about the referred data.

### Technologies and Implementation

The immersive spatio-temporal data visualization using 3D Radar Charts is developed using Unity 2019.3 and the SteamVR Plugin for Unity. An HTC Vive HMD (1080x1200 pixel resolution per eye, 90 Hz refresh rate) is utilized for visual immersion, allowing the VR user to naturally look around and make observations. Within the scope of our investigation, no additional interactive features are provided.

Figure 3: Spatial reference design approaches from the VR user’s POV: _pillar_, _location_, and _node_. Elementary task reference at the top, and synoptic at the bottom row. Each of the presented task reference configurations refers to the same location(s).

Figure 4: Temporal reference design approaches from the VR user’s POV: _highlight_, _outline_, _pointer_, and _symbol_. Elementary task reference across all data variables at the top, and synoptic at the bottom row. Each of the presented task reference configurations refers to the same time point/range.

Figure 5: Temporal reference design approaches from the VR user’s POV for individual data variables: _highlight_, _outline_, _pointer_, and _symbol_. Elementary task reference for individual data variables at the top, and synoptic at the bottom row. Each of the presented task reference configurations refers to the same time point/range. Note: Due to the nature of the _highlight_ and _outline_ approaches with respect to the 3D Radar Chart visualization, implementations for an elementary task reference in an individual data variable axis are not feasible.

Figure 6: Temporal reference indicator design approaches from the VR user’s POV using _pointer_ in a _neutral_ (pointing straight), _positive_ (pointing upwards), or _negative_ (pointing downwards) manner. Elementary task reference for all data variables at the top, and synoptic at the bottom row. Each of the presented task reference configurations refers to the same time point/range.

Figure 7: Temporal reference indicator design approaches from the VR user’s POV using _symbol_ in an _investigate_ (magnifying glass), _exciting_ (star), or _let’s talk_ (speech bubble) styling. Elementary task reference for all data variables at the top, and synoptic at the bottom row. Each of the presented task reference configurations refers to the same time point/range.

All presented spatio-temporal reference design approaches (see Section 3.3 and Section 3.4) were implemented using the base functionalities provided in Unity. For the purpose of the user study, i.e., to make references on demand in the VR environment, we developed a complementary web application in HTML5, CSS, and JavaScript. A WebSocket Secure server based on Node.js functions as the network communication interface between Unity and the web application. A template implementation that illustrates this workflow is available online.3

Footnote 3: GitHub repository of the _Unity - Connect via WebSocket server to JavaScript client_ project: github.com/nicoversity/unity_wss_js

This approach has several advantages (a sketch of the underlying relay pattern is provided after the list below):

1. We are able to trigger the various reference configurations in the VE, thus simulating potential input from a real-world collaborator.
2. We can conveniently prepare presets of reference configurations in anticipation of their systematic evaluation (see Section 4).
3. The implementation provides a modular and easily extendable example of an application programming interface (API) to make references in the VR environment from the outside, allowing integration with other tools and applications in the future.
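The core of this workflow is a simple relay pattern: the server accepts connections from both the web application and the Unity client and forwards reference messages between them. The following minimal sketch illustrates the idea; it is written in Python using the `websockets` package purely for illustration (our actual implementation uses Node.js, and the message fields shown here are hypothetical examples, not the template's actual protocol).

```python
# Minimal sketch of the relay pattern, for illustration only; the actual
# implementation uses a Node.js WebSocket Secure server. Message fields
# ("type", "target", "timeRange") are hypothetical examples.
import asyncio
import json

import websockets

CONNECTED = set()  # all connected clients (e.g., web interface, Unity)

async def relay(websocket):
    """Register a client and forward each incoming message to all others."""
    CONNECTED.add(websocket)
    try:
        async for raw in websocket:
            message = json.loads(raw)
            # e.g., {"type": "temporalReference", "target": "SE",
            #        "timeRange": [42, 57]}
            for client in CONNECTED - {websocket}:
                await client.send(json.dumps(message))
    finally:
        CONNECTED.discard(websocket)

async def main():
    async with websockets.serve(relay, "localhost", 8765):
        await asyncio.Future()  # run until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```

In such a pattern, preparing a study preset amounts to sending a predefined message from the web interface, and additional clients can be integrated without changes to the server.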
## 4 Evaluation Methodology

In order to assess the designed spatio-temporal reference approaches, we conducted an empirical evaluation. This section provides details about the overall study design as well as the applied measures.

### Physical Study Space and Virtual Environment

The study was planned as one-on-one sessions involving only the participating user and the researcher. The researcher was responsible for the practical conduct of the study sessions, i.e., moderating the study, ensuring all involved hardware and software were functioning as intended, and collecting the data. Each study session was conducted in our research group lab, providing a two-by-two meter area for the VR user, a desk as the researcher's workstation, as well as a desk for the participating user that is physically partitioned from the researcher's workspace. The research group lab is big enough for the researcher and participant to conduct the study comfortably. The researcher remained at their workstation to moderate the study, operate the presentation of the various spatio-temporal reference design approaches using the implemented web interface, and take notes for the data collection. The participant was first briefly seated at their desk to complete an informed consent form and was then located exclusively within the designated VR user area for the remainder of the study session. We set up the VE as presented in Figure 2, placing a total of 39 3D Radar Charts across different European countries. Each 3D Radar Chart featured five data variable axes, each comprising a time series with 150 events. The VR user was placed in Central Europe, able to move freely within the calibrated two-by-two meter area and make observations. One 3D Radar Chart was placed within the VR user's physical movement area; all others were beyond their reach. No additional interactive features were provided. More specifically, each 3D Radar Chart in the VE featured a total height, i.e., vertical length, of \(1.0\ meters\). All 3D Radar Charts were placed to hover \(0.4\ meters\) above the virtual floor, thus reaching an effective height of \(1.4\ meters\) in the VE. The 3D Radar Chart used to display all temporal reference configurations was placed directly at the center of the VR user's calibrated two-by-two meter area, enabling them to move around freely and investigate that 3D Radar Chart from all sides if so desired. The distance between the center of the VR user's area and the spatial reference for the elementary task was approximately \(8.48\ meters\) in the virtual space. Furthermore, the distances between the center of the VR user's area and the spatial references for the synoptic tasks were approximately \(5.73\), \(6.45\), \(8.48\), and \(10.24\ meters\). These distances resulted from the properties of the underlying dataset used within the study (see Section 3). For additional impressions, please refer to the supplemental \(360^{\circ}\) interactive web application as provided in Section 3.2 (Footnote 2), visually illustrating all reference configurations as used in the study.

### Measures

To systematically evaluate the different spatio-temporal reference design approaches, we decided to apply subjective methods and center our data collection around three measures: _aesthetics_, _legibility_, and _general user preference_. Within the scope of our investigation, we define _aesthetics_ as how much one appreciates the visual design and appeal of a reference design approach, and whether or not one finds it pleasing and beautiful to look at. We define _legibility_ as how well one can understand, determine, and detect what is highlighted, and how clear it is what to focus on. Collecting quantitative ratings for these two measures for each reference design allows for respective comparisons. Additionally, the third measure is concerned with the _user's general preference_ based on pair-wise comparisons, indicating which one of two reference design approaches they would rather work with if they were to use such an application frequently. This allows for a respective tallying of the results,4 providing an overall preference indication as well as functioning as a potential tie-breaker between two approaches in the case of equal aesthetics and legibility ratings.
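To make the pair-wise comparison procedure concrete, the following sketch shows how such preference data can be generated and tallied; the participant choices shown are hypothetical, and the normalization mirrors the presentation later used in Figure 9.

```python
# Sketch of generating and tallying pair-wise preference comparisons.
# The participant choices below are hypothetical, for illustration only.
from collections import Counter
from itertools import combinations

spatial_designs = ["pillar", "location", "node"]

# All unordered pairs, presented to each participant in random order.
pairs = list(combinations(spatial_designs, 2))
# -> [('pillar', 'location'), ('pillar', 'node'), ('location', 'node')]

# One participant's stated preference per presented pair.
choices = {
    ("pillar", "location"): "location",
    ("pillar", "node"): "pillar",
    ("location", "node"): "location",
}

tally = Counter(choices.values())
# Each design appears in len(spatial_designs) - 1 pairs, so normalizing by
# that count makes categories with unequal numbers of options comparable.
normalized = {d: tally[d] / (len(spatial_designs) - 1) for d in spatial_designs}
print(normalized)  # {'pillar': 0.5, 'location': 1.0, 'node': 0.0}
```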
### Task

To assess aesthetics, legibility, and general user preference for the different reference design approaches, we prepared presets for all the different configurations and tasked the participants to rate them following the Thinking Aloud protocol. In particular, we collected assessments for all the elementary and synoptic spatio-temporal references as presented throughout Figure 3, Figure 4, and Figure 5. Additionally, we also inquired about the user's assessment of the temporal reference design of the pointer and symbol approaches in general (one combined assessment each for pointer and symbol) as presented in Figure 6 and Figure 7. The reference presets were generally configured to make the same reference, e.g., to refer to the same location (spatial) or time range (temporal). The assessments from each participant were collected in random order throughout two stages. First, they were presented with the individual reference presets and tasked to _rate_ their perceived aesthetics and legibility on a 7-point Likert scale. They were asked:

* Aesthetics: On a scale from 1 (aesthetically unpleasing) to 7 (aesthetically pleasing), how would you rate this approach of [ spatial / temporal ] referencing?
* Legibility: On a scale from 1 (not at all) to 7 (very well), how clearly can you determine what is [ spatially / temporally ] referenced?

Second, once they had provided numerical assessments for all the presented references, we inquired about their general preference for one over the other in a series of pairwise comparisons. For each logical category, i.e., spatial elementary, spatial synoptic, temporal elementary, and so on, we created all possible pair permutations and asked:

* Out of these two [ spatial / temporal ] referencing approaches, which one do you prefer?

### Study Procedure

Each study session followed the same procedure of three stages: (1) introduction, (2) task: aesthetics and legibility, and (3) task: pair-wise preference comparison. Each session was planned to take approximately 30 to 40 minutes (10 min introduction, 20-30 min immersed in VR). Each participant filled out an informed consent form, after which some demographic information (professional background, prior VR experience) was collected. The researcher presented the overall context and scenario of the immersive application in regard to its analytical and collaborative aspects, ensuring that each participant understood the purpose and composition of the VE. Participants were then provided with a brief warm-up, allowing them to familiarize themselves with wearing the HMD and with the VE. Once they felt comfortable, they proceeded to the tasks. In random order, the participant's aesthetics and legibility ratings for the different spatio-temporal reference approaches were collected via the Thinking Aloud protocol. Afterward, they were asked to state their general preference for one approach over the other. In random order, each participant provided a distinct answer for each pairwise comparison. The researcher noted all ratings and preferences on a predefined task answer sheet. Furthermore, throughout both task stages, the participants were allowed to provide additional remarks as desired, which were also noted by the researcher. Finally, they were thanked for their participation in the study and sent off.
### Ethical Considerations

We followed general ethical considerations for work with human participants within the scope of human-computer interaction research (Norwegian National Committee For Research Ethics in Science and Technology, 2016; Swedish Research Council, 2017). The presented empirical evaluation was conducted between April and June 2021, during the then-ongoing global COVID-19 pandemic. Consequently, additional practical precautions were implemented. We closely monitored and followed all national, regional, and local health and safety recommendations according to the respective authorities. Study sessions were only conducted when all parties (participant and researcher) were symptom-free, and all parties kept the recommended physical distance at all times. The researcher was wearing a face mask at all times. Face masks and hand disinfection gel were freely available to each participant. All involved technical equipment was carefully sanitized between study sessions.

## 5 Results

We recruited \(n=12\) participants from a mixture of different academic backgrounds (5 Computer and Information Science, 5 Linguistics and Language Studies, 2 Forestry and Wood Technology). Eight participants reported having just _a little_ prior experience with VR, three _average_, and one _a lot_. None of the participants reported any visual perception issues based on the applied color coding in the VR environment. Figure 8 presents the results of the participants' ratings for the aesthetics and legibility measures. The results of the pair-wise preference comparison (combined with the rating medians) are presented in Figure 9. Some participants provided additional remarks on the different reference design approaches. For instance, participants stated that they can envision usefulness and relevance for both the pointer and symbol temporal reference design approaches (see Figure 6 and Figure 7) in a real-world scenario. According to them, the pointer approach subjectively conveyed more precision and urgency, while the symbol one was easier to recognize and featured better clarity. One participant stated that the pointer approach makes more sense during synchronous collaboration (as the users are likely to talk to each other), while the symbol approach may be better suited for asynchronous collaboration (encoding an additional semantic meaning). It was also noted that a visual connection to the 3D Radar Chart's origin (time axis) is missing and would be preferred in the synoptic reference, as included in the outline approach for an individual dimension reference (see Figure 5). Some participants were unsure whether the pointer for the synoptic referencing task was referring to the entire time series or to one specific time event. One participant expected the spatial pillar approach to use multiple small pillars, as opposed to one large one, when making a synoptic reference (see pillar in Figure 3). Another stated that slightly extruding the respective country model might additionally increase the legibility of the spatial location approach (see location in Figure 3). As in the investigation reported by Peter et al.
(2018), participants made suggestions for some hybrid designs, combining two approaches into one, such as spatial elementary pillar+node, spatial elementary location+node, temporal elementary highlight+pointer, temporal synoptic outline+symbol without the visual spheres, and temporal synoptic pointer without the pointer, using just the visual spheres - similar to the marker approach by Welsford-Ackroyd et al. (2020).

Figure 8: The rating scores (\(n=12\)) of the implemented reference design approaches, in terms of _aesthetics_ (left) and _legibility_ (right).

Figure 9: Bar plots – the participants’ preferences for each of the seven categories (_spatial_, _elementary / synoptic_; _temporal_, _elementary / elementary individual / synoptic / synoptic individual / indicator_), when asked to select between the different available approaches (pairwise comparison). Given the unequal number of options in each category, the values are presented normalized. Line plots – the (normalized) medians for the _aesthetics_ and _legibility_ ratings (see Figure 8).

## 6 Discussion

### Spatial References

Out of the three implemented spatial reference designs, the _location_ approach was most favored by the participants, universally across both elementary and synoptic task configurations. Perceived aesthetics and legibility ratings were comparatively higher than for the _node_ and _pillar_ approaches. The realization of this approach was possible in our use case due to the availability of individual extruded country polygons. Using the features of the VE seemed to have been appreciated by the participants, allowing the identification of either one or multiple referred artifacts while maintaining the original visual composition of the data visualizations. Within the presented CIA context, this was very much favored by the participants. Interestingly, the user preference in favor of the implemented _location_ approach is somewhat in contrast to the results reported by Lacoche et al. (2017), where their safe navigation floor approach was rated worse compared to the others. The actual approach was visually similar in both cases, but the signal's intent differed: while our purpose was to actively guide the user to the highlighted area, theirs was instead to avoid it (Lacoche et al., 2017). Comparing the _node_ and _pillar_ approaches, it is interesting to see the disparity in their assessments across the elementary and synoptic task configurations. The _pillar_ approach was rated better for referring to one artifact, likely because it allowed the participants to quickly and easily identify the referred artifact (legibility), while it took them subjectively a bit longer to identify the _node_ design. On the other hand, the synoptic approach of using one large pillar to highlight a group of artifacts did not scale accordingly, as it was not clear to them which artifacts were referred to exactly. Such an approach might be better suited for making a reference to an otherwise unspecified large spatial area, instead of a group of identifiable artifacts. These results are quite interesting, as they indicate a preference for different design approaches based on the task configuration. Arguably, from a user interface perspective, one would often strive to apply a uniform design strategy, implementing the same approaches for the same, or similar, tasks.
Within the context of the presented scenario, a design question follows: Should one implement the same design approach across both tasks, thus following a rather coherent design, or instead implement different approaches, each better supporting its given task? Clearly, this requires careful consideration from the application's interface designer, weighing the pros and cons of either approach given the purpose and task at hand. We believe that in the presented data analysis context, legibility and user preference outweigh uniformity of the design approach, allowing analysts to be as precise as possible in their referencing and subsequent collaboration. Consequently, if environmental features were not available for implementing the _location_ approach, we would instead apply a mixture of _pillar_ for elementary referencing and _node_ for synoptic referencing. The _pillar_ approach is conceptually similar to the light beam technique as presented by Peter et al. (2018), who reported mixed results from their evaluation, describing that participants would sometimes consider the light beam to be part of the VE instead of a dedicated signal. The realistic design of the light beam seemed to have conflicted with the realistic setting of their VE, which prevented participants from clearly identifying the signal made through the VR-Guide (Peter et al., 2018).

### Temporal References

No distinct temporal reference design approach was favored across both elementary and synoptic tasks within the 3D Radar Chart scenario. For point-in-time (elementary) referencing, the participants generally preferred the _highlight_ approach. Interestingly, while it received better aesthetics ratings, its legibility median is the worst of the four approaches. The _pointer_ approach is a close runner-up in regard to user preference, scoring generally better legibility ratings. It seems that the participants liked the analogy of literally "pointing to a point in time" - even though we used a rather abstract approach instead of a realistic one (Sugiura et al., 2018). Similar to the results presented by Peter et al. (2018), we identified trends towards the participants favoring the _outline_ approach compared to all others for the synoptic task configuration, both across all data dimensions and for individual ones. This is particularly interesting and also somewhat odd, as the _outline_ approach received the lowest user preference for the elementary task, even though it scored comparatively well in regard to aesthetics and legibility ratings. These results reveal a similar preference disparity as for the _node_ and _pillar_ approaches (see Section 6.1), requiring careful consideration for or against a uniform interface design. We implemented two approaches based on the AA option using different indicator types: _pointer_ and _symbol_. The _symbol_ approach was rated slightly better when directly compared to the _pointer_ one (see the different indicator configurations as illustrated in Figure 6). However, the results indicate trends toward a rather equal preference for both approaches when examining the bigger picture. We found particularly interesting the participant's comment on using the _pointer_ approach during synchronous collaboration, while encoding additional meaning in the _symbol_ indicator during asynchronous collaboration, in a more annotation-like manner; this encourages further investigation.
No major advantages or disadvantages of one over the other were identified, making both potentially valid approaches depending on the reference task and purpose. Within the presented context and scenario, we argue as follows. The _outline_ approach appears to be the favored approach for making time-series (synoptic) references, especially under consideration of its comparatively good aesthetics and legibility ratings. However, implementing a temporal reference for an elementary task configuration of an individual data variable was not feasible using the _outline_ approach (nor the _highlight_ one). Given the preference for the _pointer_ approach over the _symbol_ one for the elementary task, we can see its application respectively, thus again recommending a mixture of different design approaches across elementary and synoptic tasks for making temporal references.

## 7 Conclusion and Future Work

We set out to investigate spatio-temporal reference design approaches within the context of CIA. Collaborative information cues, such as pointing and referring to artifacts, are important in general, but arguably even more so in scenarios that involve immersive technologies, where more traditional co-located means of referencing are no longer conventionally available. To address such shortcomings, we implemented different design approaches for making spatial and temporal references within an IA environment. The design approaches were guided by relevant prior work in the field and the derivation of three design options for the creation of visual references. We hope that the presented options can assist practitioners with the design of similar information cues on a general level in the future, independent of their application within the context of CIA. Under consideration of the available features in the VE as well as the different referencing tasks (elementary and synoptic), we empirically evaluated the implemented reference approaches in regard to aesthetics, legibility, and general user preference. The participants preferred the presented _location_ approach as a spatial reference, implemented through the modification of environmental features that can be directly associated with the referred artifact. Based on the results and discussion in regard to temporal references, it appears that different design approaches are preferable depending on the task: while the _pointer_ approach seems like a valuable option for referencing single points in time, the _outline_ approach was clearly favored for all time-series references. In multiple instances of the results (spatial _node_ versus _pillar_, temporal _outline_ versus all others), trends indicate user preference for different approaches depending on the task (elementary versus synoptic). This requires careful consideration in regard to the general user interface design, as the implementation of different approaches may be in conflict with an otherwise desired uniform design throughout various aspects of an application's interface. Based on the variety of different reference task configurations, it may be valuable to explore the design of multiple approaches, each best serving its own purpose. This can be particularly valuable within data analysis-related scenarios, referring to different types and aspects of abstract data visualization, ensuring it is clear to the collaborator what their respective partner is referring to.
In fact, informed by the outcomes of our study, we utilized the location (spatial) and symbol (temporal; magnifying glass) referencing approaches in a collaborative user study where two analysts were tasked with the investigation of a spatio-temporal dataset to make confirmative analysis assessments (Reski et al., 2022). The system setup was based on heterogeneous display and interaction technologies, comprising an immersive VR environment based on the 3D Radar Chart visualization approach (as presented throughout this article) as well as a non-immersive interface. Each interface featured various collaborative information cues to enable bidirectional spatio-temporal referencing across the different display modalities in real time.5 The results of that follow-up study, particularly within the context of a more applied real-world CIA use case, enabled us to further investigate such referencing approaches.

Footnote 5: Due to the Reski et al. (2022) study setup, referencing was only elementary (country) for the spatial dimension, but could be either elementary or synoptic (time event or time range) for the temporal dimension across all data variables.

We see the potential for further investigations. First, the design and evaluation of hybrid reference approaches as suggested by some participants seems intriguing, combining aspects of various approaches into new ones. Second, the application of the presented design options for the creation of reference approaches as collaborative information cues in other contexts is promising, allowing for further iteration and the addition of conceptual principles to the options. Third, naturally, we intend to utilize the gained insights from the evaluation
2302.08883
Approximately Bayes-Optimal Pseudo Label Selection
Semi-supervised learning by self-training heavily relies on pseudo-label selection (PLS). The selection often depends on the initial model fit on labeled data. Early overfitting might thus be propagated to the final model by selecting instances with overconfident but erroneous predictions, often referred to as confirmation bias. This paper introduces BPLS, a Bayesian framework for PLS that aims to mitigate this issue. At its core lies a criterion for selecting instances to label: an analytical approximation of the posterior predictive of pseudo-samples. We derive this selection criterion by proving Bayes optimality of the posterior predictive of pseudo-samples. We further overcome computational hurdles by approximating the criterion analytically. Its relation to the marginal likelihood allows us to come up with an approximation based on Laplace's method and the Gaussian integral. We empirically assess BPLS for parametric generalized linear and non-parametric generalized additive models on simulated and real-world data. When faced with high-dimensional data prone to overfitting, BPLS outperforms traditional PLS methods.
Julian Rodemann, Jann Goschenhofer, Emilio Dorigatti, Thomas Nagler, Thomas Augustin
2023-02-17T14:07:32Z
http://arxiv.org/abs/2302.08883v5
# Approximately Bayes-Optimal Pseudo Label Selection

###### Abstract

Semi-supervised learning by self-training heavily relies on pseudo-label selection (PLS). The selection often depends on the initial model fit on labeled data. Early overfitting might thus be propagated to the final model by selecting instances with overconfident but erroneous predictions, often referred to as confirmation bias. This paper introduces BPLS, a Bayesian framework for PLS that aims to mitigate this issue. At its core lies a criterion for selecting instances to label: an analytical approximation of the posterior predictive of pseudo-samples. We derive this selection criterion by proving Bayes optimality of the posterior predictive of pseudo-samples. We further overcome computational hurdles by approximating the criterion analytically. Its relation to the marginal likelihood allows us to come up with an approximation based on Laplace's method and the Gaussian integral. We empirically assess BPLS for parametric generalized linear and non-parametric generalized additive models on simulated and real-world data. When faced with high-dimensional data prone to overfitting, BPLS outperforms traditional PLS methods.1

Footnote 1: Code available at: [https://anonymous.4open.science/r/Bayesian-pls](https://anonymous.4open.science/r/Bayesian-pls)

## 1 Introduction

Labeled data are scarce in many learning settings. This can be due to a variety of reasons, such as restrictions on time, knowledge, or financial resources. Unlabeled data, however, are often much more accessible. This has given rise to the paradigm of semi-supervised learning (SSL), where information from unlabeled data is integrated into model training to improve predictions in a supervised learning framework. Within SSL, an intuitive and widely used approach is referred to as self-training or pseudo-labeling (Shi et al., 2018; Lee et al., 2013; McClosky et al., 2006). The idea is to fit an initial model to labeled data and iteratively assign pseudo-labels to unlabeled data according to the model's predictions. The latter requires a criterion (sometimes called a confidence measure) for pseudo-label selection (PLS), that is, the selection of instances to be pseudo-labeled and added to the training data. By design, self-training strongly relies on the initial model fit and the way instances are selected to be pseudo-labeled. Everything hinges upon the interplay between the selection criterion and the initial model's generalization performance. If the initial model generalizes poorly, initial misconceptions can propagate throughout the process, only making things worse. High-dimensional data prone to overfitting are particularly sensitive to such confirmation bias (Arazo et al., 2020). Usually, self-training's sweet spot lies somewhere else: when the labeled data allow the model to learn sufficiently well while still leaving some room for improvement. Generally, the poorer the initial generalization, the harder it is to select sensible pseudo-labels that improve generalization, i.e., the more crucial the role of the selection criterion. Note that SSL is applied to data with high shares (typically over \(80\%\) (Sohn et al., 2020; Arazo et al., 2020)) of unlabeled data, where initial overfitting is likely for high-dimensional models, while final overfitting is not.

### Motivation

Accordingly, we strive for a selection criterion that is robust with respect to the initial model fit, i.e., its learned parameters. At the same time, it should still exploit the information in the labeled data.
Such a measure calls for disentangling the uncertainty contributions of the data and the model's parameters. This is in line with recent work in uncertainty quantification (UQ) that suggests decomposing epistemic uncertainty into approximation uncertainty driven by (a lack of) data and modeling uncertainty driven by (primarily parametric) assumptions [12]. Bayesian inference offers a sound and consistent framework for this distinction. Its rationale of technically modeling not only data but also parameters as random variables has proven to offer much insight into UQ for machine learning [12] and deep learning [1, 13]. We exploit the Bayesian framework for pinpointing uncertainty with regard to data and parameters in PLS. Our approach of Bayesian pseudo-label selection (BPLS) enables us to choose pseudo-labels that are likely given the observed labeled data but not necessarily likely given the estimated parameters of the fitted model. What is more, BPLS allows us to include prior information not only for predicting but also for selecting pseudo-labels. Notably, BPLS is flexible enough to be applied to any kind of predictive model whose likelihood and Fisher information are accessible, including non-Bayesian models. BPLS entails a Bayes optimal selection criterion, the _pseudo posterior predictive_ (PPP). Its intuition is straightforward yet effective: by averaging over all parameter values, the PPP is more robust towards the initial fit compared to the predictive distribution based on a single optimal parameter vector. Our approximate version of the PPP is simple and computationally cheap to evaluate: \(\ell(\hat{\theta})-\frac{1}{2}\log\det\mathcal{I}(\hat{\theta})\), with \(\ell(\hat{\theta})\) being the log-likelihood and \(\mathcal{I}(\hat{\theta})\) the Fisher information matrix at the fitted parameter vector \(\hat{\theta}\). As an approximation of the joint PPP, it does not require an _i.i.d._ assumption, rendering it applicable to a wide range of applied learning setups.

### Main Contributions

**(1)** We derive the PPP by formalizing PLS as a decision problem and show1 that the PPP corresponds to the Bayes criterion, rendering the selection of instances with regard to it Bayes optimal, see Sections 2.1 and 2.2.

Footnote 1: Proofs of all theorems in this paper can be found in the supplementary material.

**(2)** Since our selection criterion includes a possibly intractable integral, we provide analytical approximations, exploiting Laplace's method and the Gaussian integral, both for uninformative and informative priors. Using varying levels of accuracy, we balance the trade-off between computational feasibility and precision, see Section 3.

**(3)** We provide empirical evidence2 for BPLS' superiority over traditionally predominant PLS methods in the case of semi-supervised generalized additive models (GAMs) and generalized linear models (GLMs) faced with high-dimensional data prone to overfitting, see Section 4.

Footnote 2: Implementations of the proposed methods as well as reproducible scripts for the experiments are provided in the anonymous repository named **Bayesian-pls** (“_Bayesian, please!_”), see abstract.

## 2 Bayesian Pseudo-Label Selection

Most semi-supervised methods deal with classification or clustering tasks [20, 19]. Loosely leaning on [17], we formalize SSL as follows.
Consider labeled data
\[\mathcal{D}=\left\{\left(x_{i},y_{i}\right)\right\}_{i=1}^{n}\in\left(\mathcal{X}\times\mathcal{Y}\right)^{n} \tag{1}\]
and unlabeled data
\[\mathcal{U}=\left\{\left(x_{i},\mathcal{Y}\right)\right\}_{i=n+1}^{m}\in\left(\mathcal{X}\times 2^{\mathcal{Y}}\right)^{m-n} \tag{2}\]
from the same data generating process, where \(\mathcal{X}\) is the feature space and \(\mathcal{Y}\) is the categorical target space. The aim of SSL is to learn a predictive classification function \(f\) such that \(f(x)=\hat{y}\in\mathcal{Y}\), utilizing both \(\mathcal{D}\) and \(\mathcal{U}\). As is customary in self-training, we start by fitting a model with unknown parameter vector \(\theta\in\Theta\), \(\Theta\) compact with \(\dim(\Theta)=q\), on labeled data \(\mathcal{D}=\left\{\left(x_{i},y_{i}\right)\right\}_{i=1}^{n}\). Our goal is - as usual - to learn the conditional distribution \(p(y\mid x)\) through \(\theta\) from observing features \(x=\left(x_{1},\ldots,x_{n}\right)\in\mathcal{X}^{n}\) and responses \(y=\left(y_{1},\ldots,y_{n}\right)\in\mathcal{Y}^{n}\) in \(\mathcal{D}\). Adopting the Bayesian perspective, we can state a prior function over \(\theta\) as \(\pi(\theta)\). The prior can represent information on \(\theta\) but may also be uninformative. Within existing frameworks for self-training (see Section 5) in SSL, one could deploy such a Bayesian setting for _predicting_ unknown labels of \(\mathcal{U}=\left\{\left(x_{i},\mathcal{Y}\right)\right\}_{i=n+1}^{m}\) as well as for the final predictions on unseen test data. However, we aim at a Bayesian framework for _selecting_ pseudo-labels. This is beneficial for two reasons. First and foremost, considering the Bayesian posterior predictive distribution in PLS will turn out to be more robust towards the initial fit on \(\mathcal{D}\) than classical selection criteria. Second, the Bayesian engine brings along the usual benefit of allowing prior knowledge to be explicitly accounted for when selecting instances to be labeled. Notably, our framework of Bayesian pseudo-label _selection_ is unrelated to how pseudo-labels are _predicted_.

### The Case for the Posterior Predictive in PLS

For any model with parameters \(\theta\in\Theta\), the likelihood function for observed features \(x\) and labels \(y\) is commonly defined as \(\mathcal{L}_{y\mid x}(\theta)=f_{\theta}(y\mid x)\), where \(f_{\theta}(\cdot)\) is from a parameterized family of probability density functions. In the Bayesian universe, parameters \(\theta\) are more than just functional arguments [14]. They are random quantities themselves, allowing us to condition on them: \(\mathcal{L}_{y\mid x}(\theta)=p(y\mid x,\theta)\). Recall that we have specified a prior \(\pi(\theta)\) on the parameters beforehand. After observing data, it can be updated to a posterior following Bayes' theorem, \(p(\theta\mid y,x)=p(y\mid x,\theta)\,\pi(\theta)/p(y\mid x)\), where the denominator is the marginal likelihood
\[p(y\mid x)=\int_{\Theta}p(y\mid x,\theta)\,\pi(\theta)\,d\theta, \tag{3}\]
Both marginalize the likelihood over \(\theta\). The difference is the weight: The marginal likelihood integrates out \(\theta\) with regard to the prior, while the posterior predictive integrates out \(\theta\) with regard to the posterior. Accordingly, both can be considered PLS criteria that are robust towards the initial fit: They average over all possible \(\theta\)-values instead of relying on one estimated \(\hat{\theta}\) from the trained model.3 Computational issues aside, the posterior predictive of pseudo-labeled data thus encapsulates a perfectly natural selection criterion for self-training: It selects pseudo-labels that are most likely conditioned on the true observed \(\mathcal{D}\), the assumed model, and all plausible parameters from the prior or posterior, respectively.

Footnote 3: The probabilistic interpretation of the marginal likelihood - in the words of (Lotfi et al., 2022) - is: "The probability that we would generate a dataset with a model if we randomly sample from a prior over its parameters". The posterior predictive, analogously, is the probability that we would generate data with a model if we randomly sample from a posterior over its parameters.

Both the data and the estimated parameters (as functions of the data) will change throughout the process of self-training. We argue that conditioning the choice of unlabeled instances solely on the estimated parameters in early iterations over-emphasizes the influence of the initial model. This optimistic reliance can be harmful in case of small \(n\) and high \(q\), where overfitting is likely. Selecting instances by the posterior predictive mitigates this.

### Bayes Optimality of Pseudo Posterior Predictive

In the following, we show that selecting pseudo-labels with regard to their posterior predictive is Bayes optimal. We further show that the same holds for selection with regard to the marginal likelihood in case of a non-updated prior. To this end, we formalize the selection of data points to be pseudo-labeled as a canonical decision problem, where an action corresponds to the selection of an instance from the set of unlabeled data \(\mathcal{U}\).

**Definition 1** (PLS as Decision Problem): _Consider the decision-theoretic triple \((\mathcal{U},\Theta,u(\cdot))\) with an action space of unlabeled data4 to be selected, i.e., instances \((x_{i},\mathcal{Y})\) as actions, a space of unknown states of nature (parameters) \(\Theta\), and a utility function \(u:\mathcal{U}\times\Theta\rightarrow\mathbb{R}\)._

Footnote 4: We assume absence of tied observations for simplicity such that we can understand \(\mathcal{U}\) as a set.

Loosely inspired by (Cattaneo, 2007), we now define the utility of a selected data point \((x_{i},\mathcal{Y})\) as the plausibility of being generated jointly with \(\mathcal{D}\) by a model with parameters \(\theta\in\Theta\) if we include it with pseudo-label \(\hat{y}_{i}\in\mathcal{Y}\) (obtained through any predictive model) in \(\mathcal{D}\cup(x_{i},\hat{y}_{i})\). This is incorporated by the likelihood of \(\mathcal{D}\cup(x_{i},\hat{y}_{i})\), which shall be called _pseudo-label likelihood_ and written as \(p(\mathcal{D}\cup(x_{i},\hat{y}_{i})\mid\theta)\). We thus condition the selection problem on a model class as well as on already predicted pseudo-labels. The former conditioning is not required for the well-definedness of the pseudo-label likelihood (see the extension in Section 6), while the latter is.
**Definition 2** (Pseudo-Label Likelihood as Utility): _Let \((x_{i},\mathcal{Y})\) be any decision (selection) from \(\mathcal{U}\). We assign utility to each \((x_{i},\mathcal{Y})\) given \(\mathcal{D}\) and pseudo-label \(\hat{y}_{i}\in\mathcal{Y}\) by the following measurable utility function_
\[u\colon\mathcal{U}\times\Theta \rightarrow\mathbb{R}\]
\[((x_{i},\mathcal{Y}),\theta) \mapsto u((x_{i},\mathcal{Y}),\theta)=p(\mathcal{D}\cup(x_{i},\hat{y}_{i})\mid\theta),\]
_which is said to be the pseudo-label likelihood._

This utility function is a natural probabilistic choice to assign utilities to selected pseudo-labels given the predicted pseudo-labels. With a prior \(\pi(\theta)\), we get the following result.

**Theorem 1**: _In the decision problem \((\mathcal{U},\Theta,u(\cdot))\) (Definition 1) with the pseudo-label likelihood as utility function (Definition 2) and a prior \(\pi(\theta)\) on \(\Theta\), the standard Bayes criterion_
\[\Phi(\cdot,\pi)\colon\mathcal{U} \rightarrow\mathbb{R}\]
\[a \mapsto\Phi(a,\pi)=\mathbb{E}_{\pi}(u(a,\theta))\]
_corresponds to the pseudo marginal likelihood \(p(\mathcal{D}\cup(x_{i},\hat{y}_{i}))\)._

**Corollary 1**: _For any prior \(\pi(\theta)\) on \(\Theta\), the action \(a_{m}^{*}=\arg\max_{i}p(\mathcal{D}\cup(x_{i},\hat{y}_{i}))\) is Bayes optimal._

Taking the observed labeled data \(\mathcal{D}\) into account by updating the prior \(\pi(\theta)\) to a posterior \(p(\theta\mid\mathcal{D})\), we end up with an analogous result for the _pseudo posterior predictive_. The theorem requires only the proposition in (Berger, 1985, Section 4.4.1) stating that posterior loss equals prior risk, that is, conditional Bayes optimality equals unconditional Bayes optimality.

**Theorem 2**: _In the decision problem \((\mathcal{U},\Theta,u(\cdot))\) with the pseudo-label likelihood as utility function as in Theorem 1, but with the prior updated by the posterior \(\pi(\theta)=p(\theta\mid\mathcal{D})\) on \(\Theta\), the standard Bayes criterion \(\Phi(\cdot,\pi)\colon\mathcal{U}\to\mathbb{R};\,a\mapsto\Phi(a,\pi)=\mathbb{E}_{\pi}(u(a,\theta))\) corresponds to the pseudo posterior predictive \(p(\mathcal{D}\cup(x_{i},\hat{y}_{i})\mid\mathcal{D})\)._

**Corollary 2**: _Action \(a_{p}^{*}=\arg\max_{i}p(\mathcal{D}\cup(x_{i},\hat{y}_{i})\mid\mathcal{D})\) is Bayes optimal for any updated prior \(\pi(\theta)=p(\theta\mid\mathcal{D})\)._

Further note that directly maximizing the likelihood with regard to \(a\) corresponds to the optimistic max-max criterion, see Theorem 3.

**Theorem 3**: _In the decision problem \((\mathcal{U},\Theta,u(\cdot))\) with the pseudo-label likelihood as utility function as in Theorem 1, the max-max criterion_
\[\Phi(\cdot)\colon\mathcal{U} \to\mathbb{R};\]
\[a \mapsto\Phi(a)=\max_{\theta}(u(a,\theta))\]
_corresponds to the (full) likelihood._

The max-max criterion advocates deciding for an action (here: selection of pseudo-labeled data) with the highest utility (here: likelihood) according to the most favorable state of nature \(\theta\), see, e.g., (Rapoport, 1998). It can hardly be seen as a rational criterion, as it reflects "wishful thinking" (Rapoport, 1998, page 57). We thus abstain from it in what follows. Our roughest approximation of the PPP in Section 3, however, will correspond to this case as well as to the more general concept of optimistic superset learning (OSL) (Hüllermeier, 2014; Rodemann et al., 2022).
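Before turning to analytical approximations, the selection rule of Corollary 2 can be illustrated in miniature. The following sketch is our own illustration, not part of the paper: it assumes a toy Beta-Bernoulli model (so the posterior is available in closed form) and estimates the PPP of two hypothetical candidate pseudo-samples by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (hypothetical): i.i.d. Bernoulli(theta) labels with a Beta(a, b) prior.
a, b = 1.0, 1.0                       # uninformative prior
y = np.array([1, 1, 0, 1, 1])         # observed labels in D (features omitted)
n, k = len(y), int(y.sum())

def ppp(pseudo_label, n_samples=200_000):
    """Monte Carlo estimate of the PPP p(D u (x_i, y_hat_i) | D):
    average the pseudo-label likelihood over posterior draws of theta."""
    theta = rng.beta(a + k, b + n - k, size=n_samples)  # posterior p(theta | D)
    augmented = np.append(y, pseudo_label)
    s, m = augmented.sum(), len(augmented)
    return np.mean(theta**s * (1.0 - theta)**(m - s))   # E_post[p(D u pseudo | theta)]

# Bayes optimal selection (Corollary 2): pick the candidate with the highest PPP.
candidates = {"pseudo-label 1": 1, "pseudo-label 0": 0}
scores = {name: ppp(lab) for name, lab in candidates.items()}
print(scores, "->", max(scores, key=scores.get))
```

In this toy example the PPP simply prefers the pseudo-label that is more plausible jointly with the observed data, averaged over all posterior-plausible parameters rather than a single point estimate.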
## 3 Approximate Bayes Optimal PLS

Since the _pseudo posterior predictive_ (PPP) \(p(\mathcal{D}\cup(x_{i},\hat{y}_{i})\mid\mathcal{D})\) (Theorem 2) is computationally costly to evaluate via Markov Chain Monte Carlo (MCMC), we aim at approximating it analytically. In light of the general computational complexity of BPLS, which involves model refitting (see Section 4), this appears particularly crucial. We will approximate the joint PPP directly.5 Our method hence does not need an _i.i.d._ assumption, which makes it very versatile.

Footnote 5: For _i.i.d._ data we could focus on the single PPP contributions \(p(y_{i}\mid x_{i},\mathcal{D})\) instead of the joint. Still, we would have to deal with a possibly intractable integral and end up with similar computational hassle. We thus opt for approximating the joint directly. Moreover, considering the joint quantities instead of the distributions implies no loss of generality, with possible extensions for dependent data in mind.

Due to the aforementioned similarity of the PPP and the marginal likelihood, we are in the fortunate position of borrowing from some classical marginal likelihood approximations, see (Llorente et al., 2023). Especially popular are approximations based on Laplace's method as in (Schwarz, 1978). Our main motivation, however, is to obtain a Gaussian integral (Gauss, 1877), which we can then compute explicitly.

### Approximation of the PPP

We will start by transferring Laplace's method to the PPP. Recall that the predictive posterior of a pseudo-sample \((x_{i},\hat{y}_{i})\) (the PPP) given data \(\mathcal{D}\) is defined as
\[p(\mathcal{D}\cup(x_{i},\hat{y}_{i})|\mathcal{D})=\int_{\Theta}p(\mathcal{D}\cup(x_{i},\hat{y}_{i})\mid\theta)p(\theta\mid\mathcal{D})d\theta,\]
where Bayes' theorem gives
\[p(\theta\mid\mathcal{D})=p(\mathcal{D}\mid\theta)\pi(\theta)/p(\mathcal{D}).\]
Denoting \(\ell_{\mathcal{D}}(\theta)=\log p(\mathcal{D}\mid\theta)\) and \(\tilde{\ell}(\theta)=\ell_{\mathcal{D}\cup(x_{i},\hat{y}_{i})}(\theta)+\ell_{\mathcal{D}}(\theta)\), we can write the integrand as
\[p(\mathcal{D}\cup(x_{i},\hat{y}_{i})\mid\theta)p(\theta\mid\mathcal{D})=\exp[\tilde{\ell}(\theta)]\pi(\theta)/p(\mathcal{D}).\]
Let \(\mathcal{I}(\theta)=-\tilde{\ell}^{\prime\prime}(\theta)/n\) denote the observed Fisher information matrix. Further denote by \(\tilde{\theta}=\arg\max_{\theta}\tilde{\ell}(\theta)\) the maximizer of \(\tilde{\ell}(\theta)\). It holds that \(\tilde{\ell}^{\prime}(\tilde{\theta})=0\) by definition of \(\tilde{\theta}\). A Taylor expansion around \(\tilde{\theta}\) thus gives
\[\tilde{\ell}(\theta)\approx\tilde{\ell}(\tilde{\theta})-\frac{n}{2}(\theta-\tilde{\theta})^{\prime}\mathcal{I}(\tilde{\theta})(\theta-\tilde{\theta}).\]
The integrand decays exponentially in \(n\|\theta-\tilde{\theta}\|\), so we can approximate it locally around \(\tilde{\theta}\), also taking \(\pi(\theta)\approx\pi(\tilde{\theta})\) inside the integral via an analogous Taylor expansion. We refer to (Miller, 2006, Section 3.7) and (Lapinski, 2019, Theorem 2) for a rigorous treatment of the remainder terms and regularity conditions. We can eventually approximate \(p(\mathcal{D}\cup(x_{i},\hat{y}_{i})|\mathcal{D})\) by
\[\frac{\exp[\tilde{\ell}(\tilde{\theta})]\pi(\tilde{\theta})}{p(\mathcal{D})}\int_{\Theta}\exp\biggl{[}-\frac{n}{2}(\theta-\tilde{\theta})^{\prime}\mathcal{I}(\tilde{\theta})(\theta-\tilde{\theta})\biggr{]}d\theta.\]
The integral on the right is a Gaussian integral.
Defining \(\Sigma=[n\mathcal{I}(\tilde{\theta})]^{-1}\) and \(\phi_{\Sigma}\) as the density of the \(\mathcal{N}(0,\Sigma)\) distribution, it equals
\[(2\pi)^{q/2}|\Sigma|^{1/2}\int_{\Theta}\phi_{\Sigma}(\theta-\tilde{\theta})\,d\theta=\left(\frac{2\pi}{n}\right)^{q/2}|\mathcal{I}(\tilde{\theta})|^{-1/2}.\]
Altogether, we have shown that
\[p(\mathcal{D}\cup(x_{i},\hat{y}_{i})|\mathcal{D})\approx\left(\frac{2\pi}{n}\right)^{q/2}\frac{\exp[\tilde{\ell}(\tilde{\theta})]\pi(\tilde{\theta})}{|\mathcal{I}(\tilde{\theta})|^{1/2}p(\mathcal{D})}. \tag{5}\]

### Approximate Selection Criteria

To find the pseudo-sample \((x_{i},\hat{y}_{i})\) maximizing the PPP, we can equivalently maximize its logarithm, i.e., maximize
\[\frac{q}{2}\,\log\!\left(\frac{2\pi}{n}\right)+\tilde{\ell}(\tilde{\theta})+\log\pi(\tilde{\theta})-\frac{1}{2}\log|\mathcal{I}(\tilde{\theta})|-\log p(\mathcal{D}).\]
Dropping all terms that do not depend on \((x_{i},\hat{y}_{i})\) leads to the selection criterion
\[\tilde{\ell}(\tilde{\theta})-\frac{1}{2}\log|\mathcal{I}(\tilde{\theta})|+\log\pi(\tilde{\theta}). \tag{6}\]
The term
\[\tilde{\ell}(\theta)=\ell_{\mathcal{D}\cup(x_{i},\hat{y}_{i})}(\theta)+\ell_{\mathcal{D}}(\theta)\]
quantifies how well the pseudo-sample \((x_{i},\hat{y}_{i})\) conforms with the data set \(\mathcal{D}\) given a parameter \(\theta\), e.g., the maximizing parameter \(\tilde{\theta}\) in Equation (6). It is curious that samples in \(\mathcal{D}\) contribute twice to \(\tilde{\ell}\), but \((x_{i},\hat{y}_{i})\) only once. However, this is irrelevant when comparing two pseudo-samples \((x_{i},\hat{y}_{i})\) and \((x_{j},\hat{y}_{j})\). To see this, we expand \(\ell_{\mathcal{D}}\) around its maximizer \(\hat{\theta}\), so that \(\ell_{\mathcal{D}}(\tilde{\theta})=\ell_{\mathcal{D}}(\hat{\theta})+O(\|\hat{\theta}-\tilde{\theta}\|^{2})\). Since \(\mathcal{D}\cup(x_{i},\hat{y}_{i})\) and \(\mathcal{D}\) differ in only one sample, the difference \(\hat{\theta}-\tilde{\theta}\) is of order \(O(n^{-1})\). Thus,
\[\tilde{\ell}(\theta)=\ell_{\mathcal{D}\cup(x_{i},\hat{y}_{i})}(\theta)+\ell_{\mathcal{D}}(\hat{\theta})+O(n^{-2}).\]
The remainder is negligible compared to the other terms in (6), and \(\ell_{\mathcal{D}}(\hat{\theta})\) does not depend on the pseudo-sample \((x_{i},\hat{y}_{i})\). This suggests the simplified _informative BPLS criterion_
\[\mathrm{iBPLS}=\ell_{\mathcal{D}\cup(x_{i},\hat{y}_{i})}(\tilde{\theta})-\frac{1}{2}\log|\mathcal{I}(\tilde{\theta})|+\log\pi(\tilde{\theta}). \tag{7}\]
Equivalence of (6) and (7) is verified numerically for small \(n\) by experiments on real-world and simulated data in Supplement F. The ability to incorporate prior information into the selection is generally a strength of our criterion. By default, however, we cannot assume that such information is available. We can instead choose an uninformative prior where \(\pi(\theta)\) is constant with respect to \(\theta\). Recall that we assume \(\Theta\) to be compact, which allows us to specify a uniform prior as uninformative prior. Then (7) simplifies to the _uninformative BPLS criterion_
\[\mathrm{uBPLS}=\ell_{\mathcal{D}\cup(x_{i},\hat{y}_{i})}(\tilde{\theta})-\frac{1}{2}\log|\mathcal{I}(\tilde{\theta})|. \tag{8}\]
Our novel PLS criteria lend themselves to an intuitive reading.

* The first term is the joint likelihood of the pseudo-sample \((x_{i},\hat{y}_{i})\) and \(\mathcal{D}\) under the optimal parameter \(\tilde{\theta}\).
It measures how well the pseudo-sample complies with the previous model and previously seen data \(\mathcal{D}\). It tells the value of this joint likelihood at its maximum. Loosely speaking, this maximal height of the likelihood can be seen as a very rough approximation of the area under it, i.e., the integral with uniform weights.6

Footnote 6: Technically, we also need that \(\lambda(\Theta)=1\), with \(\lambda\) the Lebesgue measure, for this interpretation.

* The second term penalizes high curvature of the pseudo-label likelihood function \(\ell_{\mathcal{D}\cup(x_{i},\hat{y}_{i})}(\theta)\) at its peak \(\tilde{\theta}\), since the Fisher-information is its second derivative. Due to the negative sign, the criterion prefers pseudo-samples that lead to flatter maxima of the likelihood. In line with recent insights into sharp and flat minima of loss surfaces (Dinh et al., 2017; Li et al., 2018; Andriushchenko and Flammarion, 2022), such a penalty can be expected to improve generalization. The lower the curvature, the more probability mass (area under the likelihood) is expected on \(\Theta\setminus B_{\epsilon}(\tilde{\theta})\), with \(B_{\epsilon}=\{\theta\in\Theta\mid\|\theta-\tilde{\theta}\|<\epsilon\}\) an \(\epsilon\)-ball for fixed \(\epsilon>0\) around \(\tilde{\theta}\), in the uninformative case. Intuitively, this corrects the very rough approximation of the area under the likelihood by the likelihood's maximal height, see above.

* The third term in the informative BPLS criterion adjusts the selection for our prior beliefs \(\pi\) about \(\theta\). Here, the effect of \((x_{i},\hat{y}_{i})\) is only implicit, because it affects the maximizer \(\tilde{\theta}\). The more likely the updated parameter \(\tilde{\theta}\) is under \(\pi\), the higher the PPP.

In summary, our approximation of the PPP grows with the height of the likelihood's peak, decreases in its curvature at this point, and increases in the prior likelihood of the updated parameter. When \(n\rightarrow\infty\), the criteria iBPLS and uBPLS are dominated by the likelihood, thus
\[\log p(\mathcal{D}\cup(x_{i},\hat{y}_{i})|\mathcal{D})\stackrel{{n\rightarrow\infty}}{{\propto}}\ell_{\mathcal{D}\cup(x_{i},\hat{y}_{i})}(\tilde{\theta}).\]
This approximation is computationally cheaper to evaluate, as it does not involve the Fisher-information. However, this comes at the cost of poor accuracy in case of small \(n\). Selection with regard to this rough approximation of the PPP corresponds to selection with regard to the likelihood. As pointed out in Section 2.2, this corresponds to the overly optimistic max-max criterion.
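To make evaluating the uninformative criterion (8) concrete, consider the following minimal sketch for a logistic regression learner. This is our illustration, not the authors' implementation (which is in the referenced repository): the use of scikit-learn, the synthetic data, and the near-unpenalized fit are assumptions, and the constant \(-(q/2)\log n\) term is dropped since it is identical across candidates and does not affect the ranking.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ubpls_score(X, y, x_cand, y_pseudo):
    """Approximate uBPLS (Eq. 8) for one candidate pseudo-sample: log-likelihood
    at the fitted parameter minus half the log-determinant of the (unnormalized)
    observed information of the augmented data."""
    Xa = np.vstack([X, x_cand])
    ya = np.append(y, y_pseudo)
    # Near-unpenalized fit as a stand-in for the likelihood maximizer.
    clf = LogisticRegression(C=1e6, fit_intercept=False, max_iter=1000).fit(Xa, ya)
    theta = clf.coef_.ravel()
    p = np.clip(1.0 / (1.0 + np.exp(-Xa @ theta)), 1e-12, 1 - 1e-12)
    loglik = np.sum(ya * np.log(p) + (1 - ya) * np.log(1 - p))
    # Observed information of the logistic log-likelihood: X' diag(p(1-p)) X.
    info = (Xa * (p * (1 - p))[:, None]).T @ Xa
    _, logdet = np.linalg.slogdet(info)
    return loglik - 0.5 * logdet

# Score every unlabeled instance with its predicted pseudo-label; select the argmax.
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=30) > 0).astype(int)
U = rng.normal(size=(10, 5))
base = LogisticRegression(C=1e6, fit_intercept=False, max_iter=1000).fit(X, y)
pseudo = base.predict(U)
scores = [ubpls_score(X, y, U[i], pseudo[i]) for i in range(len(U))]
print("selected instance:", int(np.argmax(scores)))
```

Note how the second term favors candidates that lead to flatter likelihood maxima, in line with the interpretation above.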
## 4 Experiments

**Algorithmic Procedure:** For all predicted pseudo-labels, we refit the model on \(\mathcal{D}\cup(x_{i},\hat{y}_{i})\) and evaluate its PPP by means of the derived approximations iBPLS and uBPLS to select one instance to be added to the training data. Detailed pseudo code for BPLS can be found in Supplement A. The computational complexity depends on the evaluation of the PPP. With \(|\mathcal{U}|=m\) unlabeled data points and no stopping criterion, \(m+(m-1)+\cdots+1=\frac{m^{2}+m}{2}\) PPPs have to be evaluated (that is, approximated). Hence, BPLS' complexity depends on the model's complexity and the amount of unlabeled data.

**Hypothesis 1**: _(a) PPP with uninformative prior outperforms traditional PLS on data prone to initial overfitting (i.e., with a high ratio of features to data \(\frac{q}{n}\) and poor initial generalization). (b) For low \(\frac{q}{n}\) and high initial generalization, BPLS is outperformed by traditional PLS._

**Hypothesis 2**: _(a) Among all PLS methods, the pseudo-label likelihood (max-max action) reinforces the initial model fit the most and (b) hardly improves generalization._

**Hypothesis 3**: _PPP with informative prior outperforms traditional PLS methods universally._

**Experimental Setup:** We formulate three hypotheses beforehand. Hypothesis 1 corresponds to the main motivation behind BPLS; its second part is a logical consequence thereof: If we are sceptical towards the initial model in case it generalizes well, we expect to select pseudo-labels in a worse way than when trusting the initial model. Hypothesis 2 is based on the decision-theoretic insights regarding PLS by the likelihood, see Section 2.2: It embodies an optimistic reliance on the initial model and is thus expected to pick data that fits best into that model. We further expect (Hypothesis 3) BPLS to unambiguously outperform non-Bayesian selection methods in case the prior provides actual information about the data generating process - the latter is simply not available for non-Bayesian PLS. We benchmark semi-supervised (parametric) generalized linear models (GLMs) and (non-parametric) generalized additive models (GAMs) [17, 16] with PPP and pseudo-label likelihood against two common selection criteria (probability score and predictive variance) [14] as well as a supervised baseline. For the latter, we abstain from self-training and only use the labeled data for training. Experiments are run on simulated binomially distributed data as well as on eight data sets for binary classification from the UCI repository [15]. The binomially distributed data was simulated through a linear predictor consisting of normally distributed features. Details on the simulations as well as on the data sets can be found in Supplement C and Supplement H. The share of unlabeled data was set to \(0.8\) and \(0.9\). PLS methods were compared with respect to ("inductive") accuracy of prediction on unseen test data. All data sets were found to be fairly balanced except for the EEG data (minority share: \(0.29\)).

**Results:** Figures 1 and 2 as well as Table 1 summarize the results in the uninformative case (grey figures) for real-world and simulated data, respectively. "Oracle stopping" in Table 1 refers to comparing PLS methods with regard to their overall best accuracy as opposed to "final" comparisons after the whole data set was labeled. Figure 2 sheds further light on results for simulated data, while Figure 3 displays results from benchmarking BPLS against classical PLS methods in the informative case (black figures). Detailed figures displaying results from all experiments can be found in the supplementary material.

Figure 1: Results from 8 classification tasks based on real-world data [15] in descending difficulty (measured by supervised test accuracy), where \(p\) denotes the number of features here and the share of unlabeled data is 0.8. Accuracy averaged over 100 repetitions.

**Interpretation:** At first sight, comparing the accuracy gains in Figure 1 on different data sets (in order of ascending baseline performance) clearly supports Hypothesis 1: For harder tasks like EEG or sonar with a relatively high ratio of features to data \(\frac{q}{n}\), Bayesian PPP outperforms traditional PLS, whilst being dominated by the probability score in case of easier tasks like banknote or breast cancer.
For data sets with intermediate difficulty (mushrooms and ionosphere), PPP and other PLS methods compete head-to-head. The results on abalone data underpin a general fact in SSL (see Section 1): Successful self-training requires at least some baseline supervised performance. Results on simulated data (Table 1) further support the role of \(\frac{q}{n}\) in Hypothesis 1. Their visualization (Figure 2) nicely illustrates the inner workings of selection by PPP: By not trusting the initial model, PPP affects the model's test accuracy the most. While \(n=400\) leaves some room for improvement through mitigating the overfitting by pseudo-labeled data, PPP leads to noisy performance in case of \(n=100\) close to \(q\). Here, even the final model still overfits. These promising results should not hide an inconsistency: The fact that PPP is superior on the cars task but not on the ionosphere task contradicts Hypothesis 1 (a), since cars is harder than ionosphere, while having an almost identical \(\frac{q}{n}\). We find Hypothesis 2 to be partially supported by the results. While 2 (a) holds for the majority of both simulated (see supplementary material) and real-world data (the likelihood is generally closest to supervised performance), 2 (b) is challenged by considerable generalization performance gains on ionosphere and breast cancer data. Figure 3 clearly supports Hypothesis 3: When using informative priors based on the true data-generating process, BPLS clearly outperforms traditional PLS methods. Results in Supplement D.3 further back this finding. This comes as no big surprise, since non-Bayesian PLS simply lacks ways to incorporate such prior knowledge. From this perspective, the uninformative case (Hypothesis 1) corresponds to raising the bar and clearly is the theoretically more interesting benchmarking setup. However, many practical applications of SSL entail a myriad of pre-existing knowledge, e.g., radio spectrum identification [1]. For practical purposes, thus, the informative situation might even be more relevant.

## 5 Related Work

**Robust PLS:** Robustness of PLS is a widely discussed issue in the self-training literature. [1] propose information-theoretic PLS robust towards covariate shift. [11] label instances in the form of sets of probability distributions (credal sets), weakening the reliance on a single distribution. [23] aim at robustness to modeling assumptions by allowing model selection through the deviance information criterion during semi-supervised learning. [15] propose uncertainty-aware pseudo-label selection which proves to compete with state-of-the-art SSL based on consistency regularization. The idea is to select pseudo-labeled instances whose probability score and predictive uncertainty are above (tunable) thresholds. The latter is operationalized by the prediction's variance, and thus, unlike BPLS, fails to decompose approximation and modeling uncertainty, see Section 1. Both predictive variance and probability score serve as benchmarks in Section 4.

**Bayesian Self-Training:** There is a broad body of research on deploying Bayesian _predictions_ in SSL and particularly in self-training [1, 14, 15, 16]. The same holds for explicit likelihood-based inference, such as weighted likelihood [21], conditional likelihood [1], and joint mixture likelihood [1]. Most of them use Bayesian models for _predicting_ pseudo-labels. In contrast, we prove that the argmax of the PPP is the Bayes optimal _selection_ of pseudo-labels given **any** predictive model.
\begin{table} \begin{tabular}{c||c||l l} \(\mathbf{n}\) & \(\mathbf{q}\) & **ORACLE STOPPING** & **FINAL** \\ \hline \hline 60 & 60 & PPP & PPP \\ 100 & 60 & PPP & Supervised Learning \\ 400 & 60 & PPP & PPP \\ 1000 & 60 & Probability Score & Probability Score \\ \end{tabular} \end{table}
Table 1: Best performing PLS method (uninformative) on simulated data.

Figure 2: Results from simulated data. Accuracy averaged over 100 repetitions. Legend: see Figure 1.

Regarding Bayesian or likelihood-based _selection_ of pseudo-labels, there exists only little (Bayesian) or hardly any (likelihood-based) work. [11] quantify the uncertainties of pseudo-labels by mixtures of predictive distributions of a neural net, applying MC dropout. This could be seen as an expensive MC-based approximation of the PPP. Very recently, [13] proposed PLS with regard to (a sampling-based approximation of) the entropy of the pseudo-labels' posterior predictive distribution. The entropy is considered a measure of total uncertainty (aleatoric and epistemic) and is often used as regularization for PLS, see [10, 12] for instance. Abstaining from the entropy - as we do - effectively means not considering the aleatoric uncertainty. While including aleatoric uncertainty (e.g., measurement noise) generally makes sense, we consider it of minor importance in the concrete problem of initial overfitting, where we aim at disentangling epistemic uncertainty with regard to _data_ and _parameters_: We want to choose pseudo-labels that are likely given the _observed labeled data_ but not necessarily likely given the estimated _parameters of the (over-)fitted model_.

## 6 Discussion

**Extensions:** We briefly discuss two avenues for future work. The first extension loosens the restriction to one particular model class by performing model selection and PLS simultaneously. The idea would be to select those instances that can be best explained by the simplest learner (i.e., the one with the fewest parameters). Further recall that both the framework of BPLS and our approximation of the PPP do not require the data to be _i.i.d._ Applying BPLS to dependent observations, such as auto-correlated data like time series, is thus another promising line of further research.

**Limitations:** BPLS' strength of being applicable to any learner can imply high computational costs in case of expensive-to-train models such as neural nets, because PPP approximations require refitting the model. Additionally, it might be difficult for practitioners to assess the risk of overfitting to the initial data set beforehand and opt for BPLS in response. Given the fact that BPLS is outperformed by traditional PLS in cases with no overfitting, this might be considered a drawback for practical application. However, Section 4 demonstrated that \(\frac{q}{n}\) and the baseline supervised performance (both easily accessible) provide sound proxies for initial overfitting scenarios that can induce a confirmation bias in PLS. These proxies can (alongside cross-validation) help practitioners identify such scenarios.

**Conclusion:** BPLS renders self-training more robust with respect to the initial model. This improves final performance if the latter overfits and harms it if not. Identifying overfitting scenarios is thus crucial for BPLS' usage. What is more, BPLS allows incorporating prior knowledge, with the help of which substantial performance gains can be achieved.
Besides, our insights from formalizing PLS as a decision problem clear the way for promising future work exploiting the rich literature on Bayesian decision theory. Ultimately, we conclude that a Bayesian view can add great value not only to predicting but also to selecting data for self-training.

Figure 3: Results of PPP with informative priors and non-parametric GAMs on simulated data with different shares of unlabeled data. Accuracy averaged over 100 repetitions.

## Acknowledgements

Thomas Augustin gratefully acknowledges support by the Federal Statistical Office of Germany within the cooperation project "Machine Learning in Official Statistics".
2301.11882
Privacy-Preserving Methods for Outlier-Resistant Average Consensus and Shallow Ranked Vote Leader Election
Consensus and leader election are fundamental problems in distributed systems. Consensus is the problem in which all processes in a distributed computation must agree on some value. Average consensus is a popular form of consensus, where the agreed-upon value is the average of the initial values of all the processes. In a typical solution for consensus, each process learns the values of others to determine the final decision. However, this is undesirable if processes want to keep their values secret from others. With this motivation, we present a solution to privacy-preserving average consensus, where no process can learn the initial value of any other process. Additionally, we augment our approach to provide outlier resistance, where extreme values are not included in the average calculation. Privacy is fully preserved at every stage, including preventing any process from learning the identities of processes that hold outlier values. To our knowledge, this is the first privacy-preserving average consensus algorithm featuring outlier resistance. In the context of leader election, each process votes for the one that it wants to be the leader. The goal is to ensure that the leader is elected in such a way that each vote remains secret and the sum of votes remains secret during the election. Only the final vote tally is available to all processes. This ensures that processes that vote early are not able to influence the votes of other processes. We augment our approach with shallow ranked voting by allowing processes to not only vote for a single process, but to designate a secondary process to vote towards in the event that their primary vote's candidate does not win the election.
Luke Sperling, Sandeep S Kulkarni
2023-01-27T17:54:16Z
http://arxiv.org/abs/2301.11882v1
# Privacy-Preserving Methods for Outlier-Resistant Average Consensus and Shallow Ranked Vote Leader Election

###### Abstract

Consensus and leader election are fundamental problems in distributed systems. Consensus is the problem in which all processes in a distributed computation must agree on some value. Average consensus is a popular form of consensus, where the agreed-upon value is the average of the initial values of all the processes. In a typical solution for consensus, each process learns the values of others to determine the final decision. However, this is undesirable if processes want to keep their values secret from others. With this motivation, we present a solution to privacy-preserving average consensus, where no process can learn the initial value of any other process. Additionally, we augment our approach to provide outlier resistance, where extreme values are not included in the average calculation. Privacy is fully preserved at every stage, including preventing any process from learning the identities of processes that hold outlier values. To our knowledge, this is the first privacy-preserving average consensus algorithm featuring outlier resistance. In the context of leader election, each process votes for the one that it wants to be the leader. The goal is to ensure that the leader is elected in such a way that each vote remains secret and the sum of votes remains secret during the election. Only the final vote tally is available to all processes. This ensures that processes that vote early are not able to influence the votes of other processes. We augment our approach with shallow ranked voting by allowing processes to not only vote for a single process, but to designate a secondary process to vote towards in the event that their primary vote's candidate does not win the election.

## 1 Introduction

This paper focuses on two fundamental problems in distributed computing, consensus and leader election, and presents algorithms that preserve privacy while solving these problems. Consensus [20] is a fundamental problem in distributed computation. In consensus, all processes participating in a computation must agree on a shared value. This has applications in many scenarios, such as clock synchronization, leader election, and cloud computing. In this paper, we focus on the case where the initial input is real-valued, and the goal is that all processes decide on the average of those inputs (possibly excluding some outliers). Leader election [16] is another fundamental problem in distributed computation. The goal is for each process to eventually decide on a single process it thinks/wants as the leader. In the end, all processes should agree on which process is the leader. A method of performing leader election is via ballot-casting, where each process designates a single other process it wishes to elect. Whichever process receives the most votes is elected as the leader. There needs to be a method of tie-breaking in the case that two processes receive the plurality of votes. One concern in consensus and leader election is privacy. Traditional consensus assumes that votes are public. There are many reasons for wanting to keep votes private. In practical applications, these initial states may represent sensitive information. Similarly, in leader election where ballot-casting is used, the votes should be kept private as well.
Situations where accurate reporting is desired but the information being reported is sensitive are concrete applications for privacy-preserving consensus algorithms. For example, the United States government may wish to collect data from large tech companies regarding how many security breaches they have faced in the past year. To encourage accurate reporting of this information, privacy-preserving solutions may be employed where each entity is guaranteed that its data will remain private from all other entities as well as the government, and the government can learn only the average value. Another application of privacy-preserving consensus is the multi-agent rendezvous problem, where multiple agents wish to agree on a location to meet but do not want to disclose their initial locations [23]. There are two types of average consensus. In the first type, the goal is for the participant processes to compute the average for themselves, while preventing participant processes from learning others' contributions. In the second type, we have a trusted/collector process that is interested in computing the average. The average value should be known only to this trusted process; however, no one (including the trusted process) should learn the original values of other processes. In this paper, we introduce new algorithms that solve privacy-preserving average consensus and privacy-preserving leader election. Our approach includes a solution to privacy-preserving outlier-resistant consensus, which is a previously open problem, as far as we can tell. We leverage the properties of Homomorphic Encryption (HE) that allow computations over encrypted data without access to the secret key needed for decryption [18]. Our leader election algorithm allows shallow ranked voting, such that each process not only may vote for its most-wanted candidate but also may indicate a secondary candidate to vote for if its primary candidate is not elected. The intuition of our approach is as follows: In the absence of a trusted process, each process encrypts its initial state with its own public key. These ciphertexts are passed around the other processes and contributed to before being returned to the keyholder for decryption. In the presence of a trusted process, that process holds exclusive access to the secret key. The other processes use the public key to encrypt their initial values, which are homomorphically pooled together via message-passing. After aggregating all of the votes, the average may be calculated without decrypting the value. No process, not even the trusted process, learns the initial value of any other process.

**Contributions of the paper.** Our contributions in this paper are as follows:

* We present a novel algorithm solving privacy-preserving average consensus in the presence of a trusted third party, where no process, not even the trusted process, is able to determine the initial value of any other process.
* We modify the above algorithm for solving privacy-preserving average consensus with no trusted third party. Here, the initial state of each process is kept secret from each other process.
* We present a novel algorithm solving outlier-resistant privacy-preserving average consensus, which is a previously unsolved problem to our knowledge. Initial values which are considered outliers (determined as several standard deviations away from the mean) are not included in the mean calculation.
Additionally, the identity of the processes holding extreme values cannot be deduced by any process.
* We present a novel algorithm for solving privacy-preserving leader election. In this algorithm, the vote of each process is kept private, and no process is able to gain any information that indicates which processes are more likely to win the election until the results are determined. Additionally, we provide the option for processes to designate a secondary vote for shallow-ranked voting.

**Organization of the paper.** Section 2 contains necessary background on homomorphic encryption. Section 3 introduces and defines the system specifications. Sections 4 and 5 detail our solutions to average consensus and outlier-resistant average consensus, respectively. Section 6 describes our privacy-preserving leader election algorithm. Section 7 discusses related work in the literature. Finally, Section 8 provides concluding remarks.

## 2 Background - Homomorphic Encryption

To ensure the privacy of the initial values of the processes, we employ the Cheon-Kim-Kim-Song (CKKS) encryption scheme [8]. The CKKS scheme allows for approximate computations over encrypted values without access to the secret key needed for decryption. Its hardness is based on the Ring Learning with Errors problem. Entire real-valued vectors are encoded as plaintexts, which are elements of the ring \(R=\mathbb{Z}[x]/(x^{N}+1)\), before being encrypted via a public key as ciphertexts. Ciphertexts are pairs of polynomials in \(R_{q}^{2}\), where \(R_{q}=\mathbb{Z}_{q}[x]/(x^{N}+1)\) represents polynomials of degree less than \(N\) with coefficients modulo \(q\). The following operations are supported:

**Key Generation**: Given a set of parameters (such as level of security), generates a public key and secret key.

**Encryption**: Encrypt a plaintext into a ciphertext using the public key.

**Decryption**: From a ciphertext and the secret key, recover the original plaintext.

**Addition**: Two ciphertexts, or a ciphertext and a plaintext, are added together to result in a new ciphertext. This corresponds to the element-wise addition of the underlying vectors. Formally, \(add(Enc[x_{1},x_{2},...,x_{n}],Enc[y_{1},y_{2},...,y_{n}])=Enc[x_{1}+y_{1},x_{2}+y_{2},...,x_{n}+y_{n}]\).

**Multiplication**: Homomorphic multiplication translates to element-wise multiplication of the underlying vectors. Relinearization is needed after homomorphic multiplication. Formally, \(mult(Enc[x_{1},x_{2},...,x_{n}],Enc[y_{1},y_{2},...,y_{n}])=Enc[x_{1}y_{1},x_{2}y_{2},...,x_{n}y_{n}]\).

**Relinearization**: Reduces the number of polynomials in a ciphertext from three to two to prevent the size of the ciphertext from growing from repeated multiplications.

**Rotation**: Using an optionally-generated set of rotation keys, ciphertexts may be cyclically rotated. In particular, it is possible to compute \(Enc(x_{2},x_{3},\cdots,x_{n},x_{1})\) from \(Enc(x_{1},x_{2},x_{3},\cdots,x_{n})\) without decryption. This function takes a parameter that identifies the direction of rotation (left or right) as well as the amount of rotation.

We note that this encryption scheme does not suffer from small-domain attacks. Specifically, encrypting the same value (say 0) can generate multiple possible outputs. Thus, one cannot attack it by generating all ciphertexts when the domain of votes is small.
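To make these primitives concrete, the following minimal sketch (our illustration; the paper does not prescribe a library) exercises CKKS key generation, encryption, homomorphic addition, and ciphertext-plaintext multiplication using the open-source TenSEAL library. The parameter choices are illustrative, not security recommendations.

```python
import tenseal as ts  # pip install tenseal

# Key generation: a CKKS context with illustrative (not production-vetted) parameters.
ctx = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
ctx.global_scale = 2**40
ctx.generate_galois_keys()  # enables slot rotations, used in the Prepare phase later

# Encryption: two processes encrypt their one-hot Votes vectors.
v1 = ts.ckks_vector(ctx, [3.5, 0.0, 0.0])  # process 1 votes 3.5 in slot 1
v2 = ts.ckks_vector(ctx, [0.0, 7.0, 0.0])  # process 2 votes 7.0 in slot 2

# Homomorphic addition: element-wise sum of the underlying vectors.
pooled = v1 + v2

# Ciphertext-plaintext multiplication: e.g., scaling each slot by 1/(d_j * n).
scaled = pooled * [0.5, 0.5, 0.5]

# Decryption (keyholder only): results are approximate, as CKKS is an approximate scheme.
print(scaled.decrypt())  # roughly [1.75, 3.5, 0.0]
```

In a real deployment the context holding the secret key would stay with the keyholder, while the other processes would receive a public copy of the context for encryption and homomorphic operations only.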
## 3 System Specifications

**System Model:** We consider an asynchronous distributed system where processes communicate with one another via message-passing. We do not assume that all processes can send a message to any other process, but rather that each can only send messages to its neighbors. This relationship can be described by a communication graph \(G=(\Pi,E)\), where \(\Pi\) denotes the processes (acting as nodes of the graph) and \(E\subseteq\{\{p_{i},p_{j}\}|p_{i},p_{j}\in\Pi,i\neq j\}\) denotes the edges of the graph that identify the neighbor relation. All communication between processes is assumed to be encrypted (with standard encryption techniques) with the receiver's public key to prevent any possibility of eavesdropping causing a privacy violation. Although all sensitive information is homomorphically encrypted, this additional layer prevents the homomorphic keyholder from eavesdropping to learn the private information (which would otherwise be a possible attack).

**Adversary Capability:** We assume that any adversaries are Honest-But-Curious, meaning they follow the protocol exactly but save a copy of any value they observe and try to deduce any information possible. In this work, we do not consider Byzantine processes, but rather focus on the aspect of privacy preservation.

## 4 Privacy-Preserving Consensus

Our approach leverages the qualities of HE to provide strong privacy guarantees while still arriving at average consensus among all processes. No process is able to learn the starting value of any other process. We provide two algorithms to handle two different scenarios: the situation where a trusted third party is present and the situation where no process is trusted by all other processes.

### 4.1 Problem Statement

Each process \(p_{i}\) holds an initial value \(v_{i}\in\mathbb{R}\). By the end of the computation, each process should decide on a value satisfying the following conditions, where validity and agreement are changed to reflect the need for deciding on a final value that is an average of all votes, and a requirement for privacy is added.

* Validity and Agreement: The decided value is the average of the initial values of all \(n\) processes. Formally, the decided value is \(\frac{1}{n}\sum_{i=1}^{n}v_{i}\).
* Termination: Every correct process eventually terminates.
* Privacy: No process is able to learn the initial value of any other process.

### 4.2 Privacy-Preserving Consensus - Trusted Third Party

The trusted party creates two homomorphic keys: a public key, \(key_{p}\), used to encrypt data, and a secret key, \(key_{s}\), for use in decrypting data. All processes in the computation have access to \(key_{p}\), but only the trusted process knows \(key_{s}\). In this algorithm, each process initially creates two vectors, \(Votes\) and \(Counts\). \(Votes\) represents the global state of the system and will eventually hold a vote from each process in its indices. \(Counts\) represents how many times each process has voted. The trusted process aims to learn the average of the initial values of the other processes, but should not be able to learn the initial values of any other process. The trusted process alone holds \(key_{s}\), the secret key needed to decrypt the \(Votes\) ciphertexts of the other processes, ensuring that no other processes may learn the initial states of other processes. After setup, process \(i\) sends \(Votes\), where \(Votes_{i}=v_{i}\) and \(Votes_{j}=0\) for \(j\neq i\). It also sends \(Counts\), where \(Counts_{i}=1\) and \(Counts_{j}=0\) for \(j\neq i\). \(Votes\) is encrypted with \(key_{p}\), while \(Counts\) is sent as plaintext. This ensures that each process knows how many times each process has voted, but not what any vote is.
Each process waits until it receives a message. Each time a message is received, it sums the message's \(Votes\) with its local \(Votes\) (which translates to element-wise addition of the underlying vectors) and the message's \(Counts\) with its local \(Counts\)1. Formally, the vector underlying \(Votes\) becomes \([Votes_{1}+m.Votes_{1},Votes_{2}+m.Votes_{2},...,Votes_{n}+m.Votes_{n}]\). It then broadcasts these updated variables to its neighbors. This procedure continues until the local \(Counts\) of a process contains no zero elements.

Footnote 1: It is possible that a message provides no new information to the process. If the set of nonzero indices of a message's \(Counts\) is a subset of that of the process' local \(Counts\), then the message is ignored.

When Algorithm 1 terminates, \(Votes\) is of the form \(Enc([d_{1}v_{1},d_{2}v_{2},...,d_{n}v_{n}])\) for some integer vector \(\mathbf{d}\), and \(Counts\) is of the form \([d_{1},d_{2},...,d_{n}]\), where \(\forall j,d_{j}>0\). As shown in Figure 1, a process's vote may have been added to \(Votes\) multiple times due to the protocol of each process adding each neighbor's \(Votes\) value to its own. \(Counts\) keeps track of how many times this has happened per vote, so that multiple voting can be undone in the next step. We need to compute element-wise division of \(Votes\) by \(Counts\) so that the vote of each process is counted exactly once. Unfortunately, element-wise division is not supported by CKKS. Hence, we first compute \(\frac{1}{d_{1}n},\frac{1}{d_{2}n},...\). This is possible since \(d_{1},d_{2},...\) are available as plaintext in \(Counts\). Then, we multiply \([\frac{1}{d_{1}n},\frac{1}{d_{2}n},...]\) with \(Votes\), which is of the form \(Enc([d_{1}v_{1},d_{2}v_{2},...,d_{n}v_{n}])\). This results in \(Enc([\frac{v_{1}}{n},\frac{v_{2}}{n},...,\frac{v_{n}}{n}])\). Next, we need to sum up all the elements of this modified \(Votes\). By rotating the ciphertext left by one, the underlying vector becomes \([\frac{v_{2}}{n},\frac{v_{3}}{n},...,\frac{v_{n}}{n},\frac{v_{1}}{n}]\). Doing this \(n\) times and adding the results together produces the sum of all the elements, which requires \(n\) rotations and additions. But the same result can be achieved with \(O(\log(n))\) rotations and additions using Algorithm 1 in [3]. Upon performing this computation, the underlying vector of \(Votes\) will contain the average in each index. Now, this message can be communicated to the trusted party. The trusted party can then learn the average by decrypting the message. While other processes cannot decrypt this message, they can broadcast it to others so that each process will have access to the encrypted average value for further homomorphic computation.
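The logarithmic rotate-and-add summation can be illustrated in plaintext (our sketch, assuming \(n\) is a power of two; the actual protocol performs the same steps homomorphically on the encrypted \(Votes\) vector):

```python
import numpy as np

def rotate_and_sum(v):
    """Plaintext simulation of the log-depth rotate-and-add trick:
    after log2(n) rounds, every slot holds sum(v)."""
    v = np.asarray(v, dtype=float)
    k = len(v) // 2                 # n assumed to be a power of two
    while k >= 1:
        v = v + np.roll(v, -k)      # homomorphic rotation + addition in the real scheme
        k //= 2
    return v

print(rotate_and_sum([1.0, 2.0, 3.0, 4.0]))  # -> [10. 10. 10. 10.]
```

Each round doubles the number of original slots folded into every position, which is why a logarithmic number of rotations suffices.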
```
function InitConsensus()
    Votes  <- Enc([0, ..., v_i, ..., 0])
    Counts <- [0, ..., 1, ..., 0]
    for p_j in N_i do
        Send((Votes, Counts), p_j)
    end for
end function

function Receive(message m)
    Votes <- Votes + m.Votes
    for all j: Counts_j <- Counts_j + m.Counts_j
    for p_j in N_i do
        Send((Votes, Counts), p_j)
    end for
    if 0 not in Counts then
        decide Prepare(Votes, Counts)
    end if
end function

function Prepare(Votes, Counts)
    Counts <- [1/(n * Counts_1), 1/(n * Counts_2), ..., 1/(n * Counts_n)]
    Votes  <- mult(Votes, Counts)
    for i = log(n) - 1 downto 0 do
        Votes <- add(Votes, rotate(Votes, 2^i))
    end for
    return Votes
end function
```
**Algorithm 1** Privacy-preserving average consensus

### 4.3 Privacy-Preserving Consensus - No Trusted Parties

In situations where no third party can be trusted, the previous algorithm can be modified to enable privacy-preserving average consensus without a trusted party. The main idea is that (1) each process creates its own public/private homomorphic key pair, and (2) each process encrypts its own vote with _its own_ homomorphic public key and sends that to its neighbors. Algorithm 1 is then run with the initiator (keyholder) process being treated as the trusted process. This keyholder does not participate in the rest of the computation and is not sent any messages during the protocol until after the Prepare steps are taken. This is done to ensure no privacy breach can occur, as decrypting the ciphertext before the Prepare phase reveals the initial states of the other processes. This protocol is run \(n\) times concurrently, once for each process, such that there are \(n\) different public keys in use. At the end of the protocol, every process will learn the average but is unable to learn the initial state of any other process. The above protocol will work correctly if the removal of the initiator does not partition the network. If the removal of the initiator partitions the network, then no partition will have all the votes, thereby preventing any process from calculating the average. However, if the original graph is connected, the protocol will succeed in computing the consensus value for some initiators. For example, if the underlying structure is a tree, then the protocol will succeed when the leaves initiate the protocol. An initiator that succeeds can broadcast the average so others can learn it. The protocol can be easily revised so that only select processes initiate the protocol. However, in this case, it would be necessary to select the relevant initiators, and special care would be needed to deal with the case where the initiators fail.

Figure 1: Time series chart of a sample execution. Given the communication graph (above), a sample execution is shown (below). Processes 2 and 3 each send their local state to process 4, which can tell by the Counts portion of the message that the message contains new information. However, the vote of process 1 must be counted twice in order to incorporate all of this information. The Prepare phase resolves this issue.

### 4.4 Analysis

In this section, we show that Algorithm 1 satisfies the desired properties of termination and privacy preservation. We also show correctness, i.e., we show that each process computes the average of the initial votes of processes. Finally, we discuss fault-tolerance aspects of Algorithm 1.
#### 4.4.1 Termination

**Theorem 1**.: _If the communication graph \(G\) is connected and no process fails, all participating processes in Algorithm 1 terminate in finite time._

Proof.: Processes following Algorithm 1 terminate when all other processes have shared their initial states with them. Information passes between neighbors each time a process sends messages. For information to travel to all processes in the graph, \(diameter(G)\) hops must occur. Therefore, after \(diameter(G)\) hops, the algorithm terminates.

#### 4.4.2 Correctness

**Theorem 2**.: _If the communication graph \(G\) is connected, all participating processes in Algorithm 1 decide on \(\frac{1}{n}\sum_{i=1}^{n}v_{i}\)._

Proof.: The value of the vector encrypted as \(Votes\) is a linear combination of the initial \(Votes\) variables from each process. As such, the vector underlying \(Votes\) can be expressed as \([d_{1}v_{1},d_{2}v_{2},...,d_{n}v_{n}]\) for some integer vector \(\mathbf{d}\). Because \(Counts\) is calculated the same way as \(Votes\) but with each process' initial \(Counts\) vector, \(Counts\) can be expressed as \([d_{1},d_{2},...,d_{n}]\) for the same integer vector \(\mathbf{d}\). Performing element-wise division of \(Votes\) by \(Counts\) yields \([v_{1},v_{2},...,v_{n}]\). Dividing each element by \(n\) and summing all elements yields the result.

#### 4.4.3 Complexity

A single message must make \(diameter(G)\) hops before Algorithm 1 terminates. This represents information from all processes transferring to all other processes. The communication complexity is therefore \(O(diameter(G)|N_{i}|p)\) per process \(i\), with \(p\) denoting the size in memory of a single ciphertext. Similarly, each process performs \(O(diameter(G)|N_{i}|)\) homomorphic additions. The Prepare phase performs \(O(n)\) inversions, \(O(1)\) ciphertext-plaintext multiplications, and \(O(\log(n))\) ciphertext rotations and ciphertext additions. The version of the algorithm with no trusted parties is similar in both communication and computational complexity, although messages must return to the keyholder once Prepared. This results in a communication complexity of \(O(2\,diameter(G)|N_{i}|p)=O(diameter(G)|N_{i}|p)\).

#### 4.4.4 Privacy

**Theorem 3**.: _If all processes follow Algorithm 1, no process is able to learn the initial value of any other process._

Proof.: The only way to learn the initial value of another process in this algorithm is to obtain a decryption of the \(Votes\) ciphertext before it has been put through the Prepare steps of rotation, addition, etc. However, no process other than the trusted process has access to the secret key needed for decryption. Furthermore, the trusted process has no access to the \(Votes\) ciphertext before the Prepare phase, so not even the trusted process may violate the privacy of the other processes.

#### 4.4.5 Fault Tolerance

**Theorem 4**.: _Algorithm 1 terminates under any number of detectable process faults, as long as the graph remains connected after the removal of any subset of faulty processes._

Proof.: Once the information from a faulty process reaches a correct process that is connected to every other correct process, the algorithm will terminate due to Theorem 1. This is true for any number of faulty processes, as long as each faulty process delivers a message to a correct process connected to every other correct process before faulting.
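The linear-combination argument behind Theorem 2 is easy to check numerically. The following plaintext simulation is our sketch (the real algorithm operates on CKKS ciphertexts and asynchronous messages): it runs the gossip on a path graph and shows that the \(Counts\) normalization exactly undoes duplicate counting.

```python
import numpy as np

# Plaintext simulation of Algorithm 1 on the path graph 0-1-2-3 (synchronous rounds).
n = 4
v = np.array([2.0, 4.0, 6.0, 8.0])                 # initial values v_i
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

votes  = [np.eye(n)[i] * v[i] for i in range(n)]   # one-hot Votes vectors
counts = [np.eye(n)[i] for i in range(n)]          # one-hot Counts vectors

for _ in range(n):                                 # enough rounds to flood the path
    votes, counts = (
        [votes[i] + sum(votes[j] for j in neighbors[i]) for i in range(n)],
        [counts[i] + sum(counts[j] for j in neighbors[i]) for i in range(n)],
    )

# Prepare phase at process 0: Votes is [d_1 v_1, ..., d_n v_n]; divide slot j by d_j * n.
prepared = votes[0] / (counts[0] * n)
print(prepared.sum(), "== true average", v.mean())  # 5.0 == 5.0
```

Because \(Votes\) and \(Counts\) evolve through the identical linear recursion, each slot of \(Votes\) carries exactly the multiplicity recorded in \(Counts\), which is why the normalization recovers the exact average.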
## 5 Outlier Resistant Privacy-Preserving Consensus

In this section, we modify the previous algorithms to allow processes whose initial value is considered an outlier to be excluded from the average calculation. Our algorithm neither reveals the initial value of any process nor the identities of the processes that hold the outlier values.

### 5.1 Modified Problem Statement

Outlier-resistant average consensus as a problem is very similar to average consensus, with the following two conditions added:

* Outlier Resistance: Initial values that are deemed by some criteria to be outliers are not included in the calculation of the mean.
* Outlier Privacy: No process learns which other processes' initial values are outliers.

Outliers tend to be values that deviate substantially from the mean. We define outliers to be any values that lie outside \(\mu\pm c\sigma\), where \(c\) is a parameter of the algorithm, \(\mu\) is the mean, and \(\sigma\) is the standard deviation.

### 5.2 Outlier Resistant Algorithm

The basic idea of the algorithm is to perform Algorithm 1 three times: once to calculate the mean, once to calculate the standard deviation, and once to calculate the mean without outliers. These must be done in this order to protect privacy. The standard deviation is needed to determine if a process' initial value is an outlier. The mean is needed to determine the standard deviation. Mean and standard deviation cannot be calculated together using this method, leading to the necessity of three rounds. The first two rounds, computing the mean and standard deviation, use the same approach as Algorithm 1. In the first phase, a process uses \(v_{i}\) as its initial value in Algorithm 1. In the second phase, it uses \((v_{i}-mean)^{2}\) as its initial value to compute the standard deviation. There are two ways to do this. One way is for the trusted process to decrypt the mean value from round 1 and send it to all processes. In this case, the second round will be identical to the first round except for the initial value being different. Another way does not require the participation of the trusted process between the first and second round. Specifically, the first round computes \(Enc(mean,mean,\cdots)\). We can multiply this by itself to compute \(Enc(mean^{2},mean^{2},\cdots)\). Each process can also multiply this with \([0,...,0,2v_{i},0,...,0]\), i.e., a vector whose \(i^{th}\) entry is \(2v_{i}\) and whose other entries are zero. Finally, processes can use Algorithm 1 to compute the mean of the \(v_{i}^{2}\). A linear combination of these three quantities allows us to compute the average of \((v_{i}-mean)^{2}\): it equals the average of \(v_{i}^{2}\), minus the average of \(2v_{i}\cdot mean\), plus \(mean^{2}\). Thus, the standard deviation can also be computed without involving the trusted third party. The third phase, however, requires participation from the third party to identify the bounds necessary to determine which processes are outliers. We describe the third phase next. In the third round, conceptually, we run two concurrent instances of Algorithm 1. The first instance uses _Votes_ and _Counts_, where a process votes \(v_{i}\) if it is not an outlier and votes 0 if it is an outlier. The second instance uses _Participating_ and _Counts_. (The _Counts_ value is shared and needs to be included only once.) For the first instance, Algorithm 2 will compute \(\frac{\sum_{i\notin Outliers}v_{i}}{n}\), where \(n\) is the number of processes.
The second instance will compute \(\frac{\sum_{i\notin Outliers}1}{n}\). The ratio of these numbers will provide the average without outliers. Here, process \(i\) uses an initial value of 1 for its local _Participating_ if it is not an outlier and 0 if it is an outlier. _Counts_ is a plaintext whereas _Votes_ and _Participating_ are encrypted.

### Fault Tolerance

Detectable fault tolerance can be added with the rule that between each of the three rounds, all processes adjust the value of \(n\) to be the current number of correct processes. Additionally, instead of terminating each round when _Counts_ has no zero elements, a round terminates when _Counts_ has a nonzero element in every index corresponding to a correct process.

### Analysis

In this section, we discuss the properties of Algorithm 2. Namely, we prove theorems related to termination, correctness, and privacy. We also analyze the algorithm's communication and computational complexity.

#### 5.4.1 Termination

**Theorem 5**.: _If the communication graph \(G\) is connected and no process faults, all participating processes in Algorithm 2 terminate in finite time._

Proof.: Algorithm 2 runs Algorithm 1 three times. Theorem 1 proves that all three iterations terminate. The third iteration is modified but has the same termination condition. 

#### 5.4.2 Correctness

**Theorem 6**.: _If the communication graph \(G\) is connected, all participating processes in Algorithm 2 decide on \(\frac{\sum_{i\notin Outliers}v_{i}}{\sum_{i\notin Outliers}1}\)._

Proof.: In Algorithm 2, two values are computed concurrently in iteration 3: \(\frac{\sum_{i\notin Outliers}v_{i}}{n}\) from _Votes_, and \(\frac{\sum_{i\notin Outliers}1}{n}\) from _Participating_. The ratio of these numbers provides the average without outliers. 

#### 5.4.3 Complexity

Because Algorithm 2 repeats the previous algorithm three times, its communication and computational complexity remain the same as well.

#### 5.4.4 Privacy

**Theorem 7**.: _If all processes follow Algorithm 2, no process can learn which processes hold initial values that are considered outliers._

Proof.: Decrypting _Votes_ or _Participating_ prior to the Prepare phase would reveal which processes hold outlier values. As in the previous algorithms, processes that have access to these variables have no access to the secret key, and the process that holds the secret key has no access to the variables before the Prepare phase is complete. 

## 6 Privacy-Preserving Leader Election

In this section we discuss privacy-preserving leader election using an approach similar to that of the previous sections. Our approach prevents any process from learning the vote of any other process and introduces a novel approach for handling tie-breaking while preserving privacy. Specifically, we introduce ranked voting where, if a vote's primary choice is eliminated from the election, the vote counts for a secondary choice. This is achieved in a way that no process learns the primary or secondary votes of other processes. Additionally, no process learns which process is likely to win the election prior to a leader being elected. Specifically, the identities of the processes that are leading after the primary votes are kept secret. This ensures that the secondary votes cannot depend upon the identities of the front-runners after the primary votes are counted.

### Problem Statement

In classical leader election, each process must decide if it is a leader or not.
The problem is solved when exactly one process decides that it is the leader and all other processes know the identity of this leader. We also add two conditions based on privacy.

* Validity: The process with the most votes wins. (This requirement can be fine-tuned. We consider two versions of validity and also discuss other variants of the requirement.)
  * Each process casts only a primary ballot. A process that receives the maximum votes wins. Tiebreaks are _independent_ of process ID to ensure candidate privacy. This ensures that all processes that receive the same number of votes have an equal chance to be the leader.
  * Each process casts a primary and a secondary ballot. If no process receives a majority with primary ballots alone, the secondary ballots are used to determine the winner.
* Uniqueness: Only a single process is selected as the leader.
* Agreement: All processes agree on which process is the leader.
* Termination: Every correct process eventually terminates.
* Voter Privacy: No process learns the initial state (the vote) of any other process.
* Candidate Privacy: No process is able to learn any information that can help in guessing which process may be elected until a leader is determined.

An adversary that tries to make other processes fault may attempt to target a process that is likely to become leader in order to disrupt the computation. Candidate privacy prevents adversaries from knowing which process to target until the computation is complete.

In our work, we rely on a trusted process that provides a homomorphic public/private key pair, \(\mathit{key}_{p}\) and \(\mathit{key}_{s}\), respectively. All communication during the election is encrypted with \(\mathit{key}_{p}\) (in addition to the public key of the receiver). At the end, the resulting ciphertext is given to the keyholder to reveal the identity of the leader.

### Privacy-Preserving Ballot-Casting

An approach similar to that of privacy-preserving consensus can be applied to solve leader election. We need to make two changes to Algorithm 1. The first change is that each process creates its initial \(\mathit{Votes}\) ciphertext to represent which process it wants to elect leader. Specifically, if process \(i\) wants to vote for process \(j\), it creates the vector underlying \(\mathit{Votes}\) to be \([0,...,1,...,0]\), where only the \(j^{th}\) entry is nonzero. It creates a \(\mathit{Counts}\) vector that is the same as the \(\mathit{Counts}\) vector from the previous algorithms in that the \(i^{th}\) entry is set to 1 and all other entries are set to 0. Thus, the \(\mathit{Counts}\) vector indicates that process \(i\) has voted whereas \(\mathit{Votes}\) indicates that there is one vote for \(j\). Similar to Algorithm 1, \(\mathit{Votes}\) is encrypted and \(\mathit{Counts}\) is in plaintext.

The second change is the removal of the Prepare phase. When all processes have contributed, decryption of the ciphertext reveals which process is chosen to be leader. Before this time, no process has any way of knowing which process will be elected. Additionally, no process can learn the vote of any other process, even after decryption. This is because each index of the encrypted vector represents how many votes that process received rather than the initial state of that process. In other words, no additional preparation must be done on a ciphertext after all votes are accrued in order to preserve the privacy of the participating processes. It just needs to be decrypted by the keyholder.
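To illustrate the ballot layout just described, here is a minimal plaintext sketch (our own illustration; in the actual protocol the \(\mathit{Votes}\) vector would be encrypted under \(\mathit{key}_{p}\), and the helper names are hypothetical):

```python
# Plaintext sketch of ballot-casting (illustrative only; in the protocol the
# Votes vector is a homomorphic ciphertext, while Counts stays in plaintext).
def make_ballot(i, j, n):
    """Process i casts a vote for process j among n processes."""
    votes = [0] * n
    votes[j] = 1          # one vote for candidate j; reveals nothing about i once aggregated
    counts = [0] * n
    counts[i] = 1         # records that process i has voted
    return votes, counts

# Adding ballots element-wise mirrors homomorphic addition of ciphertexts.
n = 4
tally = [0] * n
for i, j in [(0, 2), (1, 2), (2, 0), (3, 2)]:
    votes, _ = make_ballot(i, j, n)
    tally = [t + v for t, v in zip(tally, votes)]
assert tally == [1, 0, 3, 0]      # process 2 wins with three votes
```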
An important difference in this new algorithm for leader election is the inability to aggregate \(\mathit{Votes}\) ciphertexts of the same key. Instead, processes must add their vote to each ciphertext they receive, and must only add their vote to a given ciphertext a single time. In other words, a process adds its \(\mathit{Vote}\) only if the \(\mathit{Counts}\) vector indicates that it has not voted before.

### Breaking Ties with Ranked Voting

A problem when performing leader election is what to do in case of a tie. A popular method is to elect the process with the highest ID number, but that makes processes with higher IDs more likely to be elected than those with lower IDs, violating candidate privacy. Ranked voting is an ideal solution for tie-breaking; however, it is difficult to implement in a privacy-preserving way. Ranked voting calls for each vote to be tracked individually. This may result in a loss of privacy if each vote is its own ciphertext, as processes may figure out which ciphertext originated from which process. Additionally, ciphertexts are large in memory, causing communication complexity to become very large if every vote were its own ciphertext.

We introduce "shallow" ranked voting, in which each process submits a secondary vote but does not rank every process. The secondary vote is used in the case of a tie. Each process not only casts a vote for its primary candidate, but additionally indicates a candidate that should receive the process's vote in the event that its primary candidate doesn't win the election. We achieve this with a secondary vote matrix. This matrix contains the secondary votes of each process, keeping track of which process was selected as the primary vote and the secondary vote, while keeping private the identity of the process casting the ballot.

The way the leader is determined is as follows: If a candidate has a majority of the votes, they are elected. Otherwise, the candidate with the fewest votes is eliminated. Any votes that initially went to this candidate are transferred to the voters' second choice for leader (using the matrix). This process repeats with the new candidate with the fewest votes until a single process remains. In traditional ranked voting, ballots rank every candidate, which would require a vector of size \(n^{n}\). Our approach simply uses a secondary vote, meaning that if a voter's primary and secondary candidates are both eliminated, that vote will have no further bearing on the election.

Figure 2: Privacy-preserving ballot-casting with ranked voting. When a process contributes to the \(\mathit{Votes}\) ciphertext, it selects the index of the process it wishes to vote for (i) and the index of the process it designates as its secondary vote if its primary candidate cannot win the election (j). It then adds 1 to the index (i,j) in the matrix. Indices shown in red must remain zero, as the primary and secondary vote of a process cannot designate the same candidate. In order to encrypt this as a ciphertext, the matrix is flattened by appending each row of the matrix end-to-end. In this example, process 0 wins the election because the vote that initially counted towards process 1 transfers, via its secondary choice, to process 0, breaking the tie between processes 0 and 2.

Although only vectors may be encoded and encrypted homomorphically, matrices may be encoded as the vector obtained by concatenating their rows. An example of a secondary vote matrix is shown in Figure 2.
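The flattening and transfer logic can be sketched in plaintext as follows (again our own illustration with hypothetical names; the real matrix lives inside a single homomorphic ciphertext, and row-major flattening is an assumption):

```python
# Plaintext sketch of the secondary-vote matrix. Row-major flattening assumed:
# entry (i, j) of the n x n matrix maps to index i*n + j of the flat vector.
n = 3
flat = [0] * (n * n)

def cast(primary, secondary):
    assert primary != secondary    # a ballot may not name the same candidate twice
    flat[primary * n + secondary] += 1

cast(0, 1)
cast(2, 0)
cast(1, 0)   # if candidate 1 is eliminated, this ballot transfers to candidate 0

# The primary tally for candidate c is the sum of row c.
primary_votes = [sum(flat[c * n:(c + 1) * n]) for c in range(n)]
assert primary_votes == [1, 1, 1]
# Eliminating candidate 1 transfers its row to the voters' secondary choices.
transfers = flat[1 * n:2 * n]      # one extra vote for candidate 0
assert transfers == [1, 0, 0]
```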
**Breaking a tie.** If a tie still exists after the ranked voting, it may be pseudo-randomly broken independent of process ID. Suppose \(k\) processes tie with some number of votes \(v_{tie}\) each. To break the tie, we compute \(v_{tie}\bmod k\), which yields a number between \(0\) and \(k-1\) (inclusive). Suppose this number is \(r\); then we choose the tied process with the \(r^{th}\) smallest ID to be elected as the leader. This allows any of the processes with the highest number of votes to be elected as the leader.

### Analysis

In this section we prove termination, uniqueness, agreement, and privacy for our leader election protocol.

#### 6.4.1 Termination

**Theorem 8**.: _If the communication graph \(G\) is connected and no process faults, all participating processes in our leader election algorithm terminate in finite time._

Proof.: This algorithm has the same termination condition as Algorithm 1 and thus also terminates by Theorem 1. 

#### 6.4.2 Uniqueness and Agreement

**Theorem 9**.: _If all processes follow our leader election protocol, exactly one process will be selected as the leader and every process will agree which process is the leader._

Proof.: The leader is determined upon decryption of the _Votes_ ciphertext. Our protocol breaks any ties that may result from multiple processes receiving the same number of votes. Therefore, only one process will win the election. The keyholder then communicates this information to every other process, which ensures agreement as well. 

#### 6.4.3 Privacy

**Theorem 10**.: _If all processes follow our leader election protocol, no process is able to learn the initial vote of any other process._

Proof.: The _Votes_ ciphertext, when decrypted, does not reveal which processes contributed which votes, only how many votes each process received. Since the ciphertext is not decrypted until all processes have contributed, no process may determine which process contributed which vote, and the theorem follows. 

## 7 Related Work

Consensus is a fundamental problem in distributed computation and as such has seen many works dedicated to solving it and its variations. Early work focused on solving consensus in multiple contexts, such as in the presence of faults [13]. In a fully asynchronous setting, the presence of even a single undetectable fault was shown to make nontermination possible [14], leading to consensus algorithms that assume some level of fault detectability [6] or that focus on reducing the synchrony needed to achieve consensus [11]. Much recent work in the field has been dedicated to consensus in the context of blockchain [34]. Other work has focused on convex optimization under consensus constraints [30][27]. Still other work focuses on mitigating attacks during consensus [24][12].

Leader election is a fundamental problem in distributed systems, dating back to the early work in [22]. Early papers focused on topics such as mobile ad hoc networks [25], link failure tolerance [31], and broadcast networks [5]. Methods of stable leader election and self-stabilizing leader election have been the topic of recent research [7][9][32][1]. Additional topics include space-optimal solutions [17][2] and applications to the internet of things [33][28].

Privacy-related concerns have driven research on privacy-preserving solutions in distributed systems in recent years. Privacy-preserving maximum consensus has been solved by generating and transmitting random numbers before transmitting initial states to hide the actual initial state [10][4].
These approaches must spend time transmitting random values before any useful information can be communicated, incurring a drawback in communication complexity and in time spent transmitting cover values. Homomorphic encryption (HE) is popular for its ability to provide strong privacy guarantees in distributed systems. The problem of consensus has been solved using HE [21][29][19]. These approaches focus on gossip communication and short-term use of HE, i.e., processes encrypt data and communicate with a neighbor, which modifies the ciphertext before sending it back for decryption. Our work is more general in that it allows computation over an arbitrary network to achieve consensus with removal of outliers, as well as ranked leader election. Blockchain has seen privacy improvements through HE via secret leader election [15] and trustworthy random number generation [26]. These algorithms, although designed for use with blockchain, are generic to distributed computations.

## 8 Conclusion

Consensus and leader election are two fundamental problems in distributed computing. While there are several algorithms for solving these problems [16, 20], they assume that the votes/preferences of each process can be known to others. This is undesirable where each participant wants to preserve the privacy of their own vote.

We focused on the problem of average-consensus, where the goal is to compute the average of the individual processes' votes while keeping the original votes secret. We considered two variations of the consensus problem, one where there is a trusted party that wants to compute the average and one where such a party does not exist. In the first variation, we only assume that the trusted party creates the necessary homomorphic public/private keys and provides the public key to everyone. This party is not trusted with any of the votes, and we ensure that the actual votes are kept private from this party as well as from other participants. A typical use of this model would be one where an entity, e.g., a government, wants to collect statistical data on vitally important characteristics but must overcome the resistance of entities, e.g., private companies, to sharing data with others. A typical instance is one where the government wants to collect information about the average number of security attacks, but each company wants to keep the number of security attacks on it private.

Our algorithm does not consider collusion between a user and the trusted entity. This assumption is reasonable since a user, say \(i\), must send their votes to some other user \(j\); if \(j\) is colluding with the trusted entity, the privacy of \(i\) could be violated. However, it would be possible to address some aspects of collusion. To achieve this, we observe that our algorithm can be easily extended to cases where the set of users is partitioned into groups, where one user can be part of multiple groups. Here, the algorithm without the trusted process can be used in each group to compute the average (or sum) of all votes. This cumulative information can then be exchanged with other groups. Since our algorithm already permits a process to add its vote several times without affecting the average, this approach is feasible even if a user is part of multiple groups. With this approach, the privacy of process \(i\) will be preserved even if processes in other groups (i.e., groups of which \(i\) is not a part) collude with the trusted entity.
In our second variation, we considered the case where a group of processes wants to compute the average-consensus among themselves. A typical example would be a group of employees anonymously submitting ratings of their workplace to one another in order to accurately gauge sentiment.

We also addressed the problem of average-consensus where outliers are omitted from the calculation while still preserving privacy. Eliminating outliers when analyzing the data is often necessary so that individual outlier data items do not corrupt the conclusion. Our solution guarantees that no one can learn which processes are the outliers. As part of solving the problem of average consensus while eliminating outliers, we also compute the standard deviation of the original votes, which can be of use in various other applications as well. For instance, it may be desirable to learn the average number of security breaches each major tech company faced in the previous year, but companies may not desire to share this information. Additionally, if a single company has faced many more breaches than the others due to a security flaw, a more accurate picture may be formed by excluding the outlier. In this case, that company may be unwilling to share that it is an outlier because doing so could imply a high number of security breaches.

We also developed a privacy-preserving algorithm for leader election. Here, the goal was not only to ensure that votes are kept private but also to ensure that the identities of potential leader candidates are kept private. This candidate privacy guarantees that no process is able to discern any information about the likely winner during the voting process. It ensures that the choices of early voters cannot influence late voters, and it prevents _strategic_ voting, where a person votes for a less preferred candidate because doing so can cause their preferred candidate to win. Our algorithm also does not break ties on ID, as ID-based tie-breaking causes some processes (e.g., processes with higher IDs) to have a higher chance of being elected than others.
2310.16674
Connecting Exceptional Orthogonal Polynomials of Different Kind
The known asymptotic relations interconnecting Jacobi, Laguerre, and Hermite classical orthogonal polynomials are generalized to the corresponding exceptional orthogonal polynomials of codimension $m$. It is proved that $X_m$-Laguerre exceptional orthogonal polynomials of type I, II, or III can be obtained as limits of $X_m$-Jacobi exceptional orthogonal polynomials of the same type. Similarly, $X_m$-Hermite exceptional orthogonal polynomials of type III can be derived from $X_m$-Jacobi or $X_m$-Laguerre ones. The quadratic transformations expressing Hermite classical orthogonal polynomials in terms of Laguerre ones are also extended to even $X_{2m}$-Hermite exceptional orthogonal polynomials.
Christiane Quesne
2023-10-25T14:39:05Z
http://arxiv.org/abs/2310.16674v2
# Connecting Exceptional Orthogonal Polynomials of Different Kind

###### Abstract

The known asymptotic relations interconnecting Jacobi, Laguerre, and Hermite classical orthogonal polynomials are generalized to the corresponding exceptional orthogonal polynomials of codimension \(m\). It is proved that \(X_{m}\)-Laguerre exceptional orthogonal polynomials of type I, II, or III can be obtained as limits of \(X_{m}\)-Jacobi exceptional orthogonal polynomials of the same type. Similarly, \(X_{m}\)-Hermite exceptional orthogonal polynomials of type III can be derived from \(X_{m}\)-Jacobi or \(X_{m}\)-Laguerre ones. The quadratic transformations expressing Hermite classical orthogonal polynomials in terms of Laguerre ones are also extended to even \(X_{2m}\)-Hermite exceptional orthogonal polynomials.

Key words: exceptional orthogonal polynomials; asymptotic relations; quadratic transformations

2020 Mathematics Subject Classification: 33C47; 34A05

## 1 Introduction

Exceptional orthogonal polynomials (EOPs) are complete families of orthogonal polynomials that arise as eigenfunctions of a Sturm-Liouville eigenvalue problem [6]. In contrast with the classical orthogonal polynomials (COPs) of Jacobi, Laguerre, and Hermite types [10], there are some gaps in the sequence of their degrees, the total number of missing "exceptional" degrees being known as the codimension. In addition, the corresponding differential equation contains rational coefficients instead of polynomial ones. Due to these facts, EOPs circumvent the strong limitations of Bochner's classification theorem, which characterizes Sturm-Liouville COP systems.

During the last few years, a lot of research activity has been devoted to the study of EOPs both from a mathematical viewpoint and for their applications in mathematical physics (for a recent account see [3] and references quoted therein). In quantum mechanics, for instance, they have been shown to lead to some exactly solvable rational extensions of well-known quantum potentials. The \(X_{1}\) EOPs of codimension one of Ref. [6] were indeed shown to be related to the Darboux transformation in the context of shape invariant potentials in supersymmetric quantum mechanics [20]. Soon after the introduction of the first \(X_{2}\) EOPs of codimension two [21], infinite families of shape invariant potentials were introduced in relation to \(X_{m}\) EOPs of arbitrary codimension \(m\) [18]. Multi-indexed families of \(X_{m_{1}m_{2}\ldots m_{k}}\) EOPs, connected with multi-step Darboux transformations, were then considered [7, 19]. Very recently, confluent Darboux transformations have been used to generate EOP families with an arbitrary number of real parameters [4].

From a mathematical viewpoint, three main questions have been the subject of research activity in relation to EOPs. The first one is the study of the interlacing and asymptotic behaviour of their zeros (see, e.g., [8, 11]). The second one has to do with the recurrence relations they satisfy, which may be of two different types: either recurrence relations of order \(2N+1\), where \(N\) is the number of Darboux steps, with coefficients that are functions of \(x\) and \(n\) [5, 16], or recurrence relations of order \(2m+3\), where \(m\) is the codimension, with coefficients only dependent on \(n\) [2, 15, 17]. Finally, the quest for a complete classification of EOPs is a fundamental problem [3], which, as shown in [4], remains rather open.
The purpose of the present paper is to look into the counterpart of a well-known property of Jacobi, Laguerre, and Hermite COPs, namely that they are interconnected through limit relations [10]. As a first step in this inquiry, we plan to consider here \(X_{m}\)-Jacobi, \(X_{m}\)-Laguerre, and \(X_{m}\)-Hermite EOPs, the first two existing in three different types I, II, and III, while the last ones only belong to type III.

In Section 2, we list the sets of EOPs that we are going to consider, we provide their explicit expressions in terms of COPs, and we explain how the latter are related to some other definitions found in the literature. In Section 3, we present and prove limit relations connecting these EOPs. In Section 4, we briefly comment on another type of relations generalizing the quadratic transformations relating Laguerre and Hermite COPs. Finally, Section 5 contains the conclusion.

## 2 Sets of \(X_{m}\) EOPs and their expressions in terms of COPs

The sets of \(X_{m}\) EOPs that we are going to consider here mainly come from Ref. [15] (except for type II \(X_{m}\)-Laguerre EOPs, which are not considered there), where they are given in monic form. To establish relations with other definitions where standard EOPs and COPs are used, let us recall that one goes from standard Hermite, Laguerre, and Jacobi COPs to monic ones by the following transformations
\[H_{n}(x) \to 2^{n}H_{n}(x),\]
\[L_{n}^{(\alpha)}(x) \to \frac{(-1)^{n}}{n!}L_{n}^{(\alpha)}(x), \tag{2.1}\]
\[P_{n}^{(\alpha,\beta)}(x) \to \frac{\Gamma(2n+\alpha+\beta+1)}{2^{n}n!\Gamma(n+\alpha+\beta+1)}P_{n}^{(\alpha,\beta)}(x).\]
For future use, we list some relations satisfied by monic COPs in Appendix A.

### \(X_{m}\)-Hermite polynomials

It is well known [5, 14] that only type III \(X_{m}\)-Hermite EOPs exist and that they are restricted to even \(m\) values. They may be defined as
\[\hat{H}_{n}^{(\mathrm{III},m)}(x)=\begin{cases}1&\text{if }n=0,\\ \mathrm{i}^{m}\left[H_{m}(\mathrm{i}x)H_{n-m}(x)+\mathrm{i}\frac{m}{2}H_{m-1}(\mathrm{i}x)H_{n-m-1}(x)\right]&\text{if }n=m+1,m+2,\ldots,\end{cases} \tag{2.2}\]
with \(m\) missing degrees \(n=1,2,\ldots,m\). This relation comes from [15], where
\[\hat{H}_{n}^{(\mathrm{III},m)}(x)=\begin{cases}1&\text{if }n=0,\\ -\frac{1}{2}\mathrm{i}^{m}\left\{H_{m}(\mathrm{i}x)(\partial_{x}-2x)H_{n-m-1}(x)-[\partial_{x}H_{m}(\mathrm{i}x)]H_{n-m-1}(x)\right\}&\text{if }n=m+1,m+2,\ldots,\end{cases}\]
after applying (A.1) and (A.2). It is connected to the previously used \(y_{n}^{(m)}(x)\) of Ref.
[14] by the relation
\[y_{n}^{(m)}(x)=\begin{cases}\hat{H}_{0}^{(\mathrm{III},m)}(x)&\text{if }n=0,\\ -2^{n}\hat{H}_{n}^{(\mathrm{III},m)}(x)&\text{if }n=m+1,m+2,\ldots.\end{cases}\]

### \(X_{m}\)-Laguerre polynomials

\(X_{m}\)-Laguerre EOPs exist in three different types, which may be defined as
\[\hat{L}_{n}^{(\mathrm{I},m)}(x;\alpha)=(-1)^{m}\left(L_{m}^{(\alpha+1)}(-x)L_{n}^{(\alpha)}(x)-nL_{m}^{(\alpha)}(-x)L_{n-1}^{(\alpha+1)}(x)\right),\qquad n=0,1,2,\ldots, \tag{2.3}\]
\[\hat{L}_{n}^{(\mathrm{II},m)}(x;\alpha)=\frac{1}{\alpha+n-m}\left(nxL_{m}^{(-\alpha)}(x)L_{n-1}^{(\alpha+1)}(x)-(m-\alpha)L_{m}^{(-\alpha-1)}(x)L_{n}^{(\alpha)}(x)\right),\qquad n=0,1,2,\ldots, \tag{2.4}\]
and
\[\hat{L}_{n}^{(\mathrm{III},m)}(x;\alpha)=\begin{cases}1&\text{if }n=0,\\ (-1)^{m}\left(L_{m}^{(-\alpha)}(-x)L_{n-m}^{(\alpha-1)}(x)-mxL_{m-1}^{(-\alpha+1)}(-x)L_{n-m-1}^{(\alpha)}(x)\right)&\text{if }n=m+1,m+2,\ldots.\end{cases} \tag{2.5}\]

To compare (2.5) with the definition used in Ref. [12], let us note that, on taking (A.3) and (A.5) into account, it may be rewritten for \(n=m+1,m+2,\ldots\) as
\[\hat{L}_{n}^{(\mathrm{III},m)}(x;\alpha)=(-1)^{m}\Big\{L_{m}^{(-\alpha)}(-x)\Big[L_{n-m}^{(\alpha-1)}(x)+(\alpha-x)L_{n-m-1}^{(\alpha)}(x)\Big]-L_{m+1}^{(-\alpha-1)}(-x)L_{n-m-1}^{(\alpha)}(x)\Big\}. \tag{2.6}\]
On the right-hand side of this equation, the factor between square brackets becomes
\[L_{n-m}^{(\alpha-1)}(x)+(\alpha-x)L_{n-m-1}^{(\alpha)}(x)\]
\[\quad=L_{n-m}^{(\alpha)}(x)+(n-m+\alpha-x)L_{n-m-1}^{(\alpha)}(x)\]
\[\quad=-(n-m-1)\Big[L_{n-m-1}^{(\alpha)}(x)+(n-m+\alpha-1)L_{n-m-2}^{(\alpha)}(x)\Big]\]
\[\quad=-(n-m-1)xL_{n-m-2}^{(\alpha+1)}(x),\]
after successively using (A.4), (A.6), and (A.7). The result reads
\[\hat{L}_{n}^{(\mathrm{III},m)}(x;\alpha)=\begin{cases}1&\text{if }n=0,\\ (-1)^{m+1}\Big[(n-m-1)xL_{m}^{(-\alpha)}(-x)L_{n-m-2}^{(\alpha+1)}(x)+L_{m+1}^{(-\alpha-1)}(-x)L_{n-m-1}^{(\alpha)}(x)\Big]&\text{if }n=m+1,m+2,\ldots,\end{cases} \tag{2.7}\]
which may then be compared with Eq. (5.12) of [12] after rewriting there standard Laguerre polynomials in terms of monic ones. Hence,
\[L_{m,n}^{\mathrm{III},\alpha}(x)=\frac{(-1)^{n-m-1}}{m!(n-m-1)!}\hat{L}_{n}^{(\mathrm{III},m)}(x;\alpha+1),\qquad n=m+1,m+2,\ldots,\]
and of course \(L_{m,0}^{\mathrm{III},\alpha}(x)=\hat{L}_{0}^{(\mathrm{III},m)}(x;\alpha+1)=1\).

It is worth observing that in [12], for studying properties of EOPs, the range of the \(\alpha\) parameter has been restricted to some interval (\(\alpha>0\) for type I, \(\alpha>m-1\) for type II, and \(0<\alpha<1\) for type III). Here, as in [15], we consider their formal definitions without restriction on \(\alpha\).
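As a quick symbolic sanity check of definition (2.3), one may verify that for \(m=0\) it collapses to the classical monic \(L_{n}^{(\alpha+1)}(x)\), in agreement with (C.2). The following sketch is our own verification aid (it assumes SymPy's `assoc_laguerre` for standard Laguerre polynomials and uses the monic normalization (2.1)):

```python
import sympy as sp

x, a = sp.symbols('x alpha')
n, m = 4, 0  # test case; any fixed n with m = 0 should work

def L(k, al, arg):
    # Monic generalized Laguerre polynomial, obtained from the standard one
    # through the normalization of Eq. (2.1).
    return sp.expand((-1)**k * sp.factorial(k) * sp.assoc_laguerre(k, al, arg))

# Type I X_m-Laguerre EOP from Eq. (2.3)
Lhat = sp.expand((-1)**m * (L(m, a + 1, -x) * L(n, a, x)
                            - n * L(m, a, -x) * L(n - 1, a + 1, x)))

# For m = 0 this should reduce to the monic classical L_n^(alpha+1), cf. (C.2)
assert sp.expand(Lhat - L(n, a + 1, x)) == 0
```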
### \(X_{m}\)-Jacobi polynomials

\(X_{m}\)-Jacobi EOPs exist in three different types, which may be defined as
\[\hat{P}_{n}^{(\mathrm{I},m)}(x;\alpha,\beta)=\frac{1}{\beta+n-m}\Big\{P_{m}^{(\alpha,-\beta)}(x)\Big[n(1+x)P_{n-1}^{(\alpha+1,\beta+1)}(x)+\beta P_{n}^{(\alpha,\beta)}(x)\Big]-m(1+x)P_{m-1}^{(\alpha+1,-\beta+1)}(x)P_{n}^{(\alpha,\beta)}(x)\Big\},\qquad n=0,1,2,\ldots, \tag{2.8}\]
\[\hat{P}_{n}^{(\mathrm{II},m)}(x;\alpha,\beta)=\frac{1}{m-n-\alpha}\Big\{P_{m}^{(-\alpha,\beta)}(x)\Big[n(1-x)P_{n-1}^{(\alpha+1,\beta+1)}(x)-\alpha P_{n}^{(\alpha,\beta)}(x)\Big]-m(1-x)P_{m-1}^{(-\alpha+1,\beta+1)}(x)P_{n}^{(\alpha,\beta)}(x)\Big\},\qquad n=0,1,2,\ldots, \tag{2.9}\]
and
\[\hat{P}_{n}^{(\mathrm{III},m)}(x;\alpha,\beta)=\begin{cases}1&\text{if }n=0,\\ \frac{1}{\alpha+\beta+n-2m-1}\Big[(\alpha+\beta+n-m-1)P_{m}^{(-\alpha,-\beta)}(x)P_{n-m}^{(\alpha-1,\beta-1)}(x)\\ \quad+m(1-x^{2})P_{m-1}^{(-\alpha+1,-\beta+1)}(x)P_{n-m-1}^{(\alpha,\beta)}(x)\Big]&\text{if }n=m+1,m+2,\ldots.\end{cases} \tag{2.10}\]
These relations are derived from the corresponding definitions of [15], namely1

Footnote 1: Note that in [15], Jacobi polynomials are denoted by \(J_{n}^{(\alpha,\beta)}(x)\) instead of \(P_{n}^{(\alpha,\beta)}(x)\).
\[\hat{P}_{n}^{(\mathrm{I},m)}(x;\alpha,\beta)=\frac{1}{\beta+n-m}\Big\{P_{m}^{(\alpha,-\beta)}(x)[(1+x)\partial_{x}+\beta]P_{n}^{(\alpha,\beta)}(x)-(1+x)\big[\partial_{x}P_{m}^{(\alpha,-\beta)}(x)\big]P_{n}^{(\alpha,\beta)}(x)\Big\},\qquad n=0,1,2,\ldots,\]
\[\hat{P}_{n}^{(\mathrm{II},m)}(x;\alpha,\beta)=\frac{1}{m-n-\alpha}\Big\{P_{m}^{(-\alpha,\beta)}(x)[(1-x)\partial_{x}-\alpha]P_{n}^{(\alpha,\beta)}(x)-(1-x)\big[\partial_{x}P_{m}^{(-\alpha,\beta)}(x)\big]P_{n}^{(\alpha,\beta)}(x)\Big\},\qquad n=0,1,2,\ldots,\]
and
\[\hat{P}_{n}^{(\mathrm{III},m)}(x;\alpha,\beta)=\begin{cases}1&\text{if }n=0,\\ \frac{1}{\alpha+\beta+n-2m-1}\Big\{P_{m}^{(-\alpha,-\beta)}(x)[(x^{2}-1)\partial_{x}+(\alpha+\beta)x+\alpha-\beta]P_{n-m-1}^{(\alpha,\beta)}(x)\\ \quad+(1-x^{2})\big[\partial_{x}P_{m}^{(-\alpha,-\beta)}(x)\big]P_{n-m-1}^{(\alpha,\beta)}(x)\Big\}&\text{if }n=m+1,m+2,\ldots.\end{cases}\]

## 3 Limit relations satisfied by \(X_{m}\) EOPs

The purpose of this Section is to extend to \(X_{m}\) EOPs the well-known limit relations satisfied by Jacobi, Laguerre, and Hermite COPs [10], which in monic form can be written as
\[\lim_{\beta\rightarrow\infty}\beta^{n}P_{n}^{(\alpha,\beta)}\Big(1-\frac{2x}{\beta}\Big)=(-2)^{n}L_{n}^{(\alpha)}(x), \tag{3.1}\]
\[\lim_{\alpha\rightarrow\infty}\alpha^{n/2}P_{n}^{(\alpha,\alpha)}\Big(\frac{x}{\sqrt{\alpha}}\Big)=H_{n}(x), \tag{3.2}\]
and
\[\lim_{\alpha\rightarrow\infty}\frac{1}{(2\alpha)^{n/2}}L_{n}^{(\alpha)}\big(\sqrt{2\alpha}x+\alpha\big)=H_{n}(x). \tag{3.3}\]

### Going from Jacobi to Laguerre EOPs

Equation (3.1) can be generalized to \(X_{m}\)-Jacobi and \(X_{m}\)-Laguerre EOPs as follows:
\[\lim_{\beta\rightarrow\infty}\beta^{n+m}\hat{P}_{n}^{(\mathrm{I},m)}\left(1-\frac{2x}{\beta};\alpha,\beta\right)=(-2)^{n+m}\hat{L}_{n}^{(\mathrm{I},m)}(x;\alpha),\qquad n=0,1,2,\ldots, \tag{3.4}\]
\[\lim_{\beta\rightarrow\infty}\beta^{n+m}\hat{P}_{n}^{(\mathrm{II},m)}\left(1-\frac{2x}{\beta};\alpha,\beta\right)=(-2)^{n+m}\hat{L}_{n}^{(\mathrm{II},m)}(x;\alpha),\qquad n=0,1,2,\ldots, \tag{3.5}\]
and
\[\lim_{\beta\rightarrow\infty}\beta^{n}\hat{P}_{n}^{(\mathrm{III},m)}\left(1-\frac{2x}{\beta};\alpha,\beta\right)=(-2)^{n}\hat{L}_{n}^{(\mathrm{III},m)}(x;\alpha),\qquad n=m+1,m+2,m+3,\ldots. \tag{3.6}\]

Let us first consider Eq. (3.4).
By using (2.8), its left-hand side can be rewritten as
\[\lim_{\beta\rightarrow\infty}\beta^{n+m}\hat{P}_{n}^{(\mathrm{I},m)}\Big(1-\frac{2x}{\beta};\alpha,\beta\Big)\]
\[\quad=\lim_{\beta\rightarrow\infty}\frac{\beta^{n+m}}{\beta+n-m}\Big\{P_{m}^{(\alpha,-\beta)}\Big(1-\frac{2x}{\beta}\Big)\Big[2n\Big(1-\frac{x}{\beta}\Big)P_{n-1}^{(\alpha+1,\beta+1)}\Big(1-\frac{2x}{\beta}\Big)+\beta P_{n}^{(\alpha,\beta)}\Big(1-\frac{2x}{\beta}\Big)\Big]\]
\[\quad\quad-2m\Big(1-\frac{x}{\beta}\Big)P_{m-1}^{(\alpha+1,-\beta+1)}\Big(1-\frac{2x}{\beta}\Big)P_{n}^{(\alpha,\beta)}\Big(1-\frac{2x}{\beta}\Big)\Big\}\]
\[\quad=\Big[\lim_{\beta\rightarrow\infty}\beta^{m}P_{m}^{(\alpha,-\beta)}\Big(1-\frac{2x}{\beta}\Big)\Big]\Big[2n\lim_{\beta\rightarrow\infty}\beta^{n-1}P_{n-1}^{(\alpha+1,\beta+1)}\Big(1-\frac{2x}{\beta}\Big)+\lim_{\beta\rightarrow\infty}\beta^{n}P_{n}^{(\alpha,\beta)}\Big(1-\frac{2x}{\beta}\Big)\Big]\]
\[\quad\quad-2m\Big[\lim_{\beta\rightarrow\infty}\beta^{m-1}P_{m-1}^{(\alpha+1,-\beta+1)}\Big(1-\frac{2x}{\beta}\Big)\Big]\Big[\lim_{\beta\rightarrow\infty}\beta^{n}P_{n}^{(\alpha,\beta)}\Big(1-\frac{2x}{\beta}\Big)\Big].\]
On employing now (3.1), as well as its corollary
\[\lim_{\beta\rightarrow\infty}\beta^{n}P_{n}^{(\alpha,-\beta)}\Big(1-\frac{2x}{\beta}\Big)=2^{n}L_{n}^{(\alpha)}(-x), \tag{3.7}\]
we get
\[\lim_{\beta\to\infty}\beta^{n+m}\hat{P}_{n}^{(\mathrm{I},m)}\Big(1-\frac{2x}{\beta};\alpha,\beta\Big)\]
\[\quad=2^{m}L_{m}^{(\alpha)}(-x)\Big[2n(-2)^{n-1}L_{n-1}^{(\alpha+1)}(x)+(-2)^{n}L_{n}^{(\alpha)}(x)\Big]-2m2^{m-1}L_{m-1}^{(\alpha+1)}(-x)(-2)^{n}L_{n}^{(\alpha)}(x)\]
\[\quad=(-2)^{n+m}(-1)^{m}\Big\{\Big[L_{m}^{(\alpha)}(-x)-mL_{m-1}^{(\alpha+1)}(-x)\Big]L_{n}^{(\alpha)}(x)-nL_{m}^{(\alpha)}(-x)L_{n-1}^{(\alpha+1)}(x)\Big\}\]
\[\quad=(-2)^{n+m}(-1)^{m}\Big[L_{m}^{(\alpha+1)}(-x)L_{n}^{(\alpha)}(x)-nL_{m}^{(\alpha)}(-x)L_{n-1}^{(\alpha+1)}(x)\Big],\]
where in the last step use is made of (A.4). Comparison with (2.3) completes the proof of (3.4).

Next, on inserting (2.9) in the left-hand side of (3.5) and proceeding as in the previous case, we arrive at
\[\lim_{\beta\to\infty}\beta^{n+m}\hat{P}_{n}^{(\mathrm{II},m)}\Big(1-\frac{2x}{\beta};\alpha,\beta\Big)\]
\[\quad=\frac{1}{m-n-\alpha}\Big\{2nx\Big[\lim_{\beta\to\infty}\beta^{m}P_{m}^{(-\alpha,\beta)}\Big(1-\frac{2x}{\beta}\Big)\Big]\Big[\lim_{\beta\to\infty}\beta^{n-1}P_{n-1}^{(\alpha+1,\beta+1)}\Big(1-\frac{2x}{\beta}\Big)\Big]\]
\[\quad\quad-\alpha\Big[\lim_{\beta\to\infty}\beta^{m}P_{m}^{(-\alpha,\beta)}\Big(1-\frac{2x}{\beta}\Big)\Big]\Big[\lim_{\beta\to\infty}\beta^{n}P_{n}^{(\alpha,\beta)}\Big(1-\frac{2x}{\beta}\Big)\Big]\]
\[\quad\quad-2mx\Big[\lim_{\beta\to\infty}\beta^{m-1}P_{m-1}^{(-\alpha+1,\beta+1)}\Big(1-\frac{2x}{\beta}\Big)\Big]\Big[\lim_{\beta\to\infty}\beta^{n}P_{n}^{(\alpha,\beta)}\Big(1-\frac{2x}{\beta}\Big)\Big]\Big\}\]
\[\quad=\frac{(-2)^{n+m}}{m-n-\alpha}\Big[-nxL_{m}^{(-\alpha)}(x)L_{n-1}^{(\alpha+1)}(x)-\alpha L_{m}^{(-\alpha)}(x)L_{n}^{(\alpha)}(x)+mxL_{m-1}^{(-\alpha+1)}(x)L_{n}^{(\alpha)}(x)\Big]\]
\[\quad=\frac{(-2)^{n+m}}{\alpha+n-m}\Big\{nxL_{m}^{(-\alpha)}(x)L_{n-1}^{(\alpha+1)}(x)+\Big[\alpha L_{m}^{(-\alpha)}(x)-mxL_{m-1}^{(-\alpha+1)}(x)\Big]L_{n}^{(\alpha)}(x)\Big\}\]
on using (3.1).
On comparing with (2.4), to prove (3.5) it remains to check that
\[mxL_{m-1}^{(-\alpha+1)}(x)-\alpha L_{m}^{(-\alpha)}(x)=(m-\alpha)L_{m}^{(-\alpha-1)}(x).\]
From (A.4), it results that such a relation is equivalent to
\[mxL_{m-1}^{(-\alpha+1)}(x)-\alpha L_{m}^{(-\alpha)}(x)=(m-\alpha)\Big[L_{m}^{(-\alpha)}(x)+mL_{m-1}^{(-\alpha)}(x)\Big]\]
or
\[xL_{m-1}^{(-\alpha+1)}(x)=(m-\alpha)L_{m-1}^{(-\alpha)}(x)+L_{m}^{(-\alpha)}(x),\]
which is indeed satisfied owing to (A.7). This completes the proof of (3.5).

Finally, on inserting (2.10) in the left-hand side of (3.6), we obtain that for \(n=m+1,m+2,\ldots\)
\[\lim_{\beta\to\infty}\beta^{n}\hat{P}_{n}^{(\mathrm{III},m)}\Big(1-\frac{2x}{\beta};\alpha,\beta\Big)\]
\[\quad=\Big[\lim_{\beta\to\infty}\beta^{m}P_{m}^{(-\alpha,-\beta)}\Big(1-\frac{2x}{\beta}\Big)\Big]\Big[\lim_{\beta\to\infty}\beta^{n-m}P_{n-m}^{(\alpha-1,\beta-1)}\Big(1-\frac{2x}{\beta}\Big)\Big]\]
\[\quad\quad+4mx\Big[\lim_{\beta\to\infty}\beta^{m-1}P_{m-1}^{(-\alpha+1,-\beta+1)}\Big(1-\frac{2x}{\beta}\Big)\Big]\Big[\lim_{\beta\to\infty}\beta^{n-m-1}P_{n-m-1}^{(\alpha,\beta)}\Big(1-\frac{2x}{\beta}\Big)\Big].\]
On using (3.1) and (3.7), we obtain
\[\lim_{\beta\to\infty}\beta^{n}\hat{P}_{n}^{(\mathrm{III},m)}\Big(1-\frac{2x}{\beta};\alpha,\beta\Big)=(-1)^{n-m}2^{n}\Big[L_{m}^{(-\alpha)}(-x)L_{n-m}^{(\alpha-1)}(x)-mxL_{m-1}^{(-\alpha+1)}(-x)L_{n-m-1}^{(\alpha)}(x)\Big].\]
A comparison with (2.5) then shows that Eq. (3.6) is satisfied, which completes the proof.

### Going from Jacobi or Laguerre to Hermite EOPs

For type III EOPs and even \(m\) values, Eqs. (3.2) and (3.3) can be generalized as follows:
\[\lim_{\alpha\to\infty}\alpha^{n/2}\hat{P}_{n}^{(\mathrm{III},m)}\Big(\frac{x}{\sqrt{\alpha}};\alpha,\alpha\Big)=\hat{H}_{n}^{(\mathrm{III},m)}(x),\qquad n=m+1,m+2,\ldots, \tag{3.8}\]
and
\[\lim_{\alpha\to\infty}\frac{1}{(2\alpha)^{n/2}}\hat{L}_{n}^{(\mathrm{III},m)}\big(\sqrt{2\alpha}x+\alpha;\alpha\big)=\hat{H}_{n}^{(\mathrm{III},m)}(x),\qquad n=m+1,m+2,\ldots. \tag{3.9}\]

Let us start with the proof of (3.8). From (2.10), we obtain that for \(n=m+1,m+2,\ldots\),
\[\lim_{\alpha\to\infty}\alpha^{n/2}\hat{P}_{n}^{(\mathrm{III},m)}\Big(\frac{x}{\sqrt{\alpha}};\alpha,\alpha\Big)\]
\[\quad=\lim_{\alpha\to\infty}\frac{\alpha^{n/2}}{2\alpha+n-2m-1}\Big[(2\alpha+n-m-1)P_{m}^{(-\alpha,-\alpha)}\Big(\frac{x}{\sqrt{\alpha}}\Big)P_{n-m}^{(\alpha-1,\alpha-1)}\Big(\frac{x}{\sqrt{\alpha}}\Big)\]
\[\quad\quad+m\Big(1-\frac{x^{2}}{\alpha}\Big)P_{m-1}^{(-\alpha+1,-\alpha+1)}\Big(\frac{x}{\sqrt{\alpha}}\Big)P_{n-m-1}^{(\alpha,\alpha)}\Big(\frac{x}{\sqrt{\alpha}}\Big)\Big]\]
\[\quad=\Big[\lim_{\alpha\to\infty}\alpha^{m/2}P_{m}^{(-\alpha,-\alpha)}\Big(\frac{x}{\sqrt{\alpha}}\Big)\Big]\Big[\lim_{\alpha\to\infty}\alpha^{(n-m)/2}P_{n-m}^{(\alpha-1,\alpha-1)}\Big(\frac{x}{\sqrt{\alpha}}\Big)\Big]\]
\[\quad\quad+\frac{m}{2}\Big[\lim_{\alpha\to\infty}\alpha^{(m-1)/2}P_{m-1}^{(-\alpha+1,-\alpha+1)}\Big(\frac{x}{\sqrt{\alpha}}\Big)\Big]\Big[\lim_{\alpha\to\infty}\alpha^{(n-m-1)/2}P_{n-m-1}^{(\alpha,\alpha)}\Big(\frac{x}{\sqrt{\alpha}}\Big)\Big]\]
\[\quad=(-\mathrm{i})^{m}H_{m}(\mathrm{i}x)H_{n-m}(x)+\frac{m}{2}(-\mathrm{i})^{m-1}H_{m-1}(\mathrm{i}x)H_{n-m-1}(x)\]
\[\quad=\hat{H}_{n}^{(\mathrm{III},m)}(x),\]
where Eq. (3.2) and its corollary
\[\lim_{\alpha\to\infty}(-\alpha)^{n/2}P_{n}^{(-\alpha,-\alpha)}\Big(\frac{x}{\sqrt{\alpha}}\Big)=H_{n}(\mathrm{i}x)\]
or
\[\lim_{\alpha\to\infty}\alpha^{n/2}P_{n}^{(-\alpha,-\alpha)}\Big(\frac{x}{\sqrt{\alpha}}\Big)=(-\mathrm{i})^{n}H_{n}(\mathrm{i}x)\]
are used, as well as definition (2.2).
Let us now consider the proof of (3.9). This time, we start from Eq. (2.5) and obtain for \(n=m+1,m+2,\ldots\) (recalling that \((-1)^{m}=1\) for even \(m\)),
\[\lim_{\alpha\to\infty}\frac{1}{(2\alpha)^{n/2}}\hat{L}_{n}^{(\mathrm{III},m)}\big(\sqrt{2\alpha}x+\alpha;\alpha\big)\]
\[\quad=\lim_{\alpha\to\infty}\frac{1}{(2\alpha)^{n/2}}\Big[L_{m}^{(-\alpha)}\big(-\sqrt{2\alpha}x-\alpha\big)L_{n-m}^{(\alpha-1)}\big(\sqrt{2\alpha}x+\alpha\big)\]
\[\quad\quad-m\big(\sqrt{2\alpha}x+\alpha\big)L_{m-1}^{(-\alpha+1)}\big(-\sqrt{2\alpha}x-\alpha\big)L_{n-m-1}^{(\alpha)}\big(\sqrt{2\alpha}x+\alpha\big)\Big]\]
\[\quad=\Big[\lim_{\alpha\to\infty}\frac{1}{(2\alpha)^{m/2}}L_{m}^{(-\alpha)}\big(-\sqrt{2\alpha}x-\alpha\big)\Big]\Big[\lim_{\alpha\to\infty}\frac{1}{(2\alpha)^{(n-m)/2}}L_{n-m}^{(\alpha-1)}\big(\sqrt{2\alpha}x+\alpha\big)\Big]\]
\[\quad\quad-\frac{m}{2}\Big[\lim_{\alpha\to\infty}\frac{1}{(2\alpha)^{(m-1)/2}}L_{m-1}^{(-\alpha+1)}\big(-\sqrt{2\alpha}x-\alpha\big)\Big]\Big[\lim_{\alpha\to\infty}\frac{1}{(2\alpha)^{(n-m-1)/2}}L_{n-m-1}^{(\alpha)}\big(\sqrt{2\alpha}x+\alpha\big)\Big].\]
Here, we employ (3.3), as well as its corollary
\[\lim_{\alpha\to\infty}\frac{1}{(2\alpha)^{n/2}}L_{n}^{(-\alpha)}\big(-\sqrt{2\alpha}x-\alpha\big)=\mathrm{i}^{n}H_{n}(\mathrm{i}x),\]
thus leading to
\[\lim_{\alpha\to\infty}\frac{1}{(2\alpha)^{n/2}}\hat{L}_{n}^{(\mathrm{III},m)}\big(\sqrt{2\alpha}x+\alpha;\alpha\big)=\mathrm{i}^{m}H_{m}(\mathrm{i}x)H_{n-m}(x)-\frac{m}{2}\mathrm{i}^{m-1}H_{m-1}(\mathrm{i}x)H_{n-m-1}(x),\]
which amounts to \(\hat{H}_{n}^{(\mathrm{III},m)}(x)\), as given in (2.2).

In Appendix C, we provide some examples of the limit relations proved in this Section.

## 4 Quadratic transformations relating Hermite to Laguerre EOPs

The purpose of this Section is to examine whether the quadratic transformations relating Hermite to Laguerre COPs [10], which in monic form can be written as
\[H_{2n}(x)=L_{n}^{(-1/2)}(x^{2}),\qquad n=0,1,2,\dots, \tag{4.1}\]
and
\[H_{2n+1}(x)=xL_{n}^{(1/2)}(x^{2}),\qquad n=0,1,2,\dots, \tag{4.2}\]
can be generalized to type III EOPs. We wish to prove that the answer to this question is positive for even \(X_{2m}\)-Hermite EOPs and that in such a case the relation reads
\[\hat{H}_{2n}^{(\mathrm{III},2m)}(x)=\hat{L}_{n}^{(\mathrm{III},m)}\Big(x^{2};\frac{1}{2}\Big),\qquad n=0,m+1,m+2,\dots. \tag{4.3}\]
To start with, let us note that if we formally set \(m=0\) and take Eqs. (C.1) and (C.2) into account, Eq. (4.3) reduces to the known relation (4.1). It is also obvious that Eq. (4.3) is fulfilled for \(m=1,2,3,\dots\) and \(n=0\). Let us therefore consider \(m=1,2,3,\dots\) and \(n=m+1,m+2,\dots\). Then from (2.2) it follows that
\[\hat{H}_{2n}^{(\mathrm{III},2m)}(x)=(-1)^{m}[H_{2m}(\mathrm{i}x)H_{2n-2m}(x)+\mathrm{i}mH_{2m-1}(\mathrm{i}x)H_{2n-2m-1}(x)].\]
Next, Eqs. (4.1) and (4.2) imply that
\[\hat{H}_{2n}^{(\mathrm{III},2m)}(x)=(-1)^{m}\Big[L_{m}^{(-1/2)}(-x^{2})L_{n-m}^{(-1/2)}(x^{2})-mx^{2}L_{m-1}^{(1/2)}(-x^{2})L_{n-m-1}^{(1/2)}(x^{2})\Big].\]
On comparing with
\[\hat{L}_{n}^{(\mathrm{III},m)}(x^{2};\alpha)=(-1)^{m}\Big[L_{m}^{(-\alpha)}(-x^{2})L_{n-m}^{(\alpha-1)}(x^{2})-mx^{2}L_{m-1}^{(-\alpha+1)}(-x^{2})L_{n-m-1}^{(\alpha)}(x^{2})\Big] \tag{4.4}\]
resulting from (2.5), it follows that the right-hand sides of these relations coincide provided we set \(\alpha=1/2\) in the second one. This completes the proof of (4.3).

Let us now turn to odd \(X_{2m}\)-Hermite EOPs, which from (2.2) can be written as
\[\hat{H}_{2n+1}^{(\mathrm{III},2m)}(x)=(-1)^{m}[H_{2m}(\mathrm{i}x)H_{2n-2m+1}(x)+\mathrm{i}mH_{2m-1}(\mathrm{i}x)H_{2n-2m}(x)]\]
for \(n=m,m+1,m+2,\dots\).
On taking (4.1) and (4.2) into account, this equation reduces to
\[\hat{H}_{2n+1}^{(\mathrm{III},2m)}(x)=(-1)^{m}x\Big[L_{m}^{(-1/2)}(-x^{2})L_{n-m}^{(1/2)}(x^{2})-mL_{m-1}^{(1/2)}(-x^{2})L_{n-m}^{(-1/2)}(x^{2})\Big].\]
Here the square bracket on the right-hand side does not reduce to that in (4.4) for any value of \(\alpha\). In the special case where \(n=m\), however, we directly get that
\[\hat{H}_{2m+1}^{(\mathrm{III},2m)}(x)=(-1)^{m}[xH_{2m}(\mathrm{i}x)+\mathrm{i}mH_{2m-1}(\mathrm{i}x)],\]
which by a straightforward application of (A.1) and (A.2) reduces to
\[\hat{H}_{2m+1}^{(\mathrm{III},2m)}(x)=(-1)^{m+1}\mathrm{i}H_{2m+1}(\mathrm{i}x),\]
or, from (4.2),
\[\hat{H}_{2m+1}^{(\mathrm{III},2m)}(x)=(-1)^{m}xL_{m}^{(1/2)}(-x^{2}).\]
This result may be compared with
\[\hat{L}_{m}^{(\mathrm{III},m-1)}(x^{2};\alpha)=(-1)^{m}L_{m}^{(-\alpha-1)}(-x^{2}),\]
which is a direct consequence of (2.7). Hence one may write
\[\hat{H}_{2m+1}^{(\mathrm{III},2m)}(x)=x\hat{L}_{m}^{(\mathrm{III},m-1)}\Big(x^{2};-\frac{3}{2}\Big),\qquad m=1,2,3,\dots.\]
Apart from this result, the generalization of the Hermite-Laguerre quadratic transformation for odd \(n\) values remains an open problem.

## 5 Conclusion

In the present paper, we have shown that the known asymptotic relations interconnecting Jacobi, Laguerre, and Hermite COPs can be generalized to the corresponding EOPs of codimension \(m\). For such a purpose, we have first listed the sets of EOPs to be considered, together with their explicit expressions in terms of COPs. We have also provided their link with some other definitions found in the literature. We have then stated and proved limit relations allowing one to obtain \(X_{m}\)-Laguerre EOPs of type I, II, or III from \(X_{m}\)-Jacobi EOPs of the same type, as well as \(X_{m}\)-Hermite EOPs of type III from \(X_{m}\)-Jacobi or \(X_{m}\)-Laguerre EOPs of type III. Finally, we have established that a quadratic transformation applied to \(X_{m}\)-Laguerre EOPs of type III enables one to obtain even \(X_{2m}\)-Hermite EOPs. Whether a similar transformation would lead to odd \(X_{2m}\)-Hermite EOPs remains an open problem for future investigation. Studying an extension of the limit relations presented here to more involved kinds of EOPs is also an open question for future work.
## Appendix A: Relations satisfied by COPs In this Appendix, we list some relations satisfied by COPs [9, 10], rewritten in monic form: \[\partial_{x}H_{n}(x)=nH_{n-1}(x),\] (A.1) \[(\partial_{x}-2x)H_{n}(x)=-2H_{n+1}(x),\] (A.2) \[\partial_{x}L_{n}^{(\alpha)}(x)=nL_{n-1}^{(\alpha+1)}(x),\] (A.3) \[L_{n}^{(\alpha-1)}(x)=L_{n}^{(\alpha)}(x)+nL_{n-1}^{(\alpha)}(x),\] (A.4) \[(x\partial_{x}+\alpha-x)L_{n}^{(\alpha)}(x)=-L_{n+1}^{(\alpha-1)}(x),\] (A.5) \[L_{n+1}^{(\alpha)}(x)+(2n+\alpha+1-x)L_{n}^{(\alpha)}(x)+n(n+\alpha)L_{n-1}^{ (\alpha)}(x)=0,\] (A.6) \[xL_{n}^{(\alpha+1)}(x)=(n+\alpha+1)L_{n}^{(\alpha)}(x)+L_{n+1}^{(\alpha)}(x),\] (A.7) \[\partial_{x}P_{n}^{(\alpha,\beta)}(x)=nP_{n-1}^{(\alpha+1,\beta+1)}(x),\] (A.8) \[(1-x^{2})\partial_{x}P_{n}^{(\alpha,\beta)}(x)+[\beta-\alpha-(\alpha+\beta)x]P _{n}^{(\alpha,\beta)}(x)=-(n+\alpha+\beta)P_{n+1}^{(\alpha-1,\beta-1)}(x),\] (A.9) \[(2n+\alpha+\beta+1)(2n+\alpha+\beta+2)\Big{[}(1-x)P_{n}^{(\alpha+1,\beta)}(x)+ P_{n+1}^{(\alpha,\beta)}(x)\Big{]}\] \[=2(n+\alpha+1)(n+\alpha+\beta+1)P_{n}^{(\alpha,\beta)}(x),\] (A.10) \[(2n+\alpha+\beta-1)(2n+\alpha+\beta)\Big{[}P_{n}^{(\alpha,\beta)}(x)-P_{n}^{( \alpha-1,\beta)}(x)\Big{]}=2n(n+\beta)P_{n-1}^{(\alpha,\beta)}(x),\] (A.11) \[(2n+\alpha+\beta-1)\Big{[}P_{n}^{(\alpha,\beta-1)}(x)-P_{n}^{(\alpha-1,\beta)} (x)\Big{]}=2nP_{n-1}^{(\alpha,\beta)}(x).\] (A.12) ## Appendix B: Proof of an identity satisfied by Jacobi polynomials The purpose of this Appendix is to prove Eq. (2.12). On using Eq. (A.10) for \(n\to m-1\), \(\alpha\to-\alpha-1\), \(\beta\to\beta\), we obtain \[(1-x)P_{m-1}^{(-\alpha,\beta)}(x)=\frac{2(m-\alpha-1)(m-\alpha+\beta-1)}{(2m -\alpha+\beta-2)(2m-\alpha+\beta-1)}P_{m-1}^{(-\alpha-1,\beta)}(x)-P_{m}^{(- \alpha-1,\beta)}(x),\] so the left-hand side of (2.12) becomes \[(\alpha+1)P_{m}^{(-\alpha-1,\beta-1)}(x)+m(1-x)P_{m-1}^{(-\alpha,\beta)}(x)=(\alpha+1)P_{m}^{(-\alpha-1,\beta-1)}(x)\] \[\quad+\frac{2m(m-\alpha-1)(m-\alpha+\beta-1)}{(2m-\alpha+\beta- 2)(2m-\alpha+\beta-1)}P_{m-1}^{(-\alpha-1,\beta)}(x)-mP_{m}^{(-\alpha-1,\beta )}(x).\] Equation (A.12) for \(n\to m\), \(\alpha\to-\alpha-1\), \(\beta\to\beta\), namely \[P_{m}^{(-\alpha-1,\beta-1)}(x)=P_{m}^{(-\alpha-2,\beta)}(x)+\frac{2m}{2m- \alpha+\beta-2}P_{m-1}^{(-\alpha-1,\beta)}(x),\] enables one to transform it into \[(\alpha+1)P_{m}^{(-\alpha-1,\beta-1)}(x)+m(1-x)P_{m-1}^{(-\alpha,\beta)}(x)\] \[\quad=(\alpha+1)P_{m}^{(-\alpha-2,\beta)}(x)+\frac{2m^{2}(m+ \beta)}{(2m-\alpha+\beta-2)(2m-\alpha+\beta-1)}P_{m-1}^{(-\alpha-1,\beta)}(x )-mP_{m}^{(-\alpha-1,\beta)}(x).\] Finally, Eq. (A.11) for \(n\to m\), \(\alpha\to-\alpha-1\), \(\beta\to\beta\) or \[(2m-\alpha+\beta-2)(2m-\alpha+\beta-1)P_{m}^{(-\alpha-1,\beta)}(x )-2m(m+\beta)P_{m-1}^{(-\alpha-1,\beta)}(x)\] \[\qquad=(2m-\alpha+\beta-2)(2m-\alpha+\beta-1)P_{m}^{(-\alpha-2, \beta)}(x)\] leads to the desired result (2.12). 
## Appendix C: Examples of limit relations To start with, let us note that if we formally set \(m=0\) in the EOPs definitions, we obtain that for \(n=0,1,2,\ldots\), \[\hat{H}_{n}^{(\mathrm{III},0)}(x)=H_{n}(x),\] (C.1) \[\hat{L}_{n}^{(\mathrm{I},0)}(x;\alpha)=L_{n}^{(\alpha+1)}(x),\qquad \hat{L}_{n}^{(\mathrm{II},0)}(x;\alpha)=L_{n}^{(\alpha-1)}(x),\qquad\hat{L}_{ n}^{(\mathrm{III},0)}(x;\alpha)=L_{n}^{(\alpha-1)}(x),\] (C.2) \[\hat{P}_{n}^{(\mathrm{I},0)}(x;\alpha,\beta)=P_{n}^{(\alpha+1, \beta-1)}(x),\qquad\hat{P}_{n}^{(\mathrm{II},0)}(x;\alpha,\beta)=P_{n}^{( \alpha-1,\beta+1)}(x),\] \[\hat{P}_{n}^{(\mathrm{III},0)}(x;\alpha,\beta)=P_{n}^{(\alpha-1, \beta+1)}(x),\] (C.3) after some straightforward application of identities satisfied by COPs. This shows that for \(m=0\), the new limit relations of Section 3 connecting EOPs agree with the well-known ones satisfied by COPs. For positive values of \(m\), one gets for instance \[\hat{H}_{3}^{(\mathrm{III},2)}(x)=x^{3}+\frac{3}{2}x,\] \[\hat{L}_{2}^{(\mathrm{I},1)}(x;\alpha)=x^{3}-(\alpha+4)x^{2}-( \alpha+1)(\alpha+4)x+(\alpha+1)(\alpha+2)(\alpha+4),\] \[\hat{L}_{2}^{(\mathrm{II},1)}(x;\alpha)=x^{3}-(\alpha+2)x^{2}-( \alpha-1)(\alpha+2)x+(\alpha-1)\alpha(\alpha+2),\] \[\hat{L}_{3}^{(\mathrm{III},2)}(x;\alpha)=x^{3}-3(\alpha-2)x^{2}+3 (\alpha-1)(\alpha-2)x-\alpha(\alpha-1)(\alpha-2),\] \[\hat{P}_{2}^{(\mathrm{I},1)}(x;\alpha,\beta)=x^{3}+\frac{2( \alpha-\beta+2)^{2}+(\alpha+\beta)(\alpha+\beta+6)}{(\alpha+\beta+4)(\alpha- \beta+2)}x^{2}\] \[\quad+\frac{(\alpha-\beta+2)^{2}+(\alpha+\beta)(2\alpha+2\beta+9 )}{(\alpha+\beta+3)(\alpha+\beta+4)}x+\frac{(\alpha-\beta+2)^{2}(\alpha+\beta+ 2)-(\alpha+\beta)(\alpha+\beta+6)}{(\alpha-\beta+2)(\alpha+\beta+3)(\alpha+ \beta+4)},\] \[\hat{P}_{2}^{(\mathrm{II},1)}(x;\alpha,\beta)=x^{3}+\frac{2( \alpha-\beta-2)^{2}+(\alpha+\beta)(\alpha+\beta+6)}{(\alpha+\beta+4)(\alpha- \beta-2)}x^{2}\] \[\quad+\frac{(\alpha-\beta-2)^{2}+(\alpha+\beta)(2\alpha+2\beta+9 )}{(\alpha+\beta+3)(\alpha+\beta+4)}x+\frac{(\alpha-\beta-2)^{2}(\alpha+\beta+ 2)-(\alpha+\beta)(\alpha+\beta+6)}{(\alpha-\beta-2)(\alpha+\beta+3)(\alpha+ \beta+4)},\] \[\hat{P}_{3}^{(\mathrm{III},2)}(x;\alpha,\beta)=x^{3}+\frac{3( \alpha-\beta)}{\alpha+\beta-4}x^{2}+3\frac{(\alpha-\beta)^{2}+\alpha+\beta-4 }{(\alpha+\beta-3)(\alpha+\beta-4)}x\] \[\quad+\frac{(\alpha-\beta)[(\alpha-\beta)^{2}+3\alpha+3\beta-10] }{(\alpha+\beta-2)(\alpha+\beta-3)(\alpha+\beta-4)},\] on which one can directly check the relations \[\lim_{\beta\to\infty}\beta^{3}\hat{P}_{2}^{(\mathrm{I},1)}\Big{(} 1-\frac{2x}{\beta};\alpha,\beta\Big{)}=-8\hat{L}_{2}^{(\mathrm{I},1)}(x; \alpha),\] \[\lim_{\beta\to\infty}\beta^{3}\hat{P}_{2}^{(\mathrm{II},1)}\Big{(} 1-\frac{2x}{\beta};\alpha,\beta\Big{)}=-8\hat{L}_{2}^{(\mathrm{II},1)}(x; \alpha),\] \[\lim_{\beta\to\infty}\beta^{3}\hat{P}_{3}^{(\mathrm{III},2)}\Big{(} 1-\frac{2x}{\beta};\alpha,\beta\Big{)}=-8\hat{L}_{3}^{(\mathrm{III},2)}(x; \alpha),\] \[\quad\lim_{\alpha\to\infty}\alpha^{3/2}\hat{P}_{3}^{(\mathrm{III},2)}\Big{(}\frac{x}{\sqrt{\alpha}};\alpha,\alpha\Big{)}=\hat{H}_{3}^{( \mathrm{III},2)}(x),\] \[\lim_{\alpha\to\infty}\frac{1}{(2\alpha)^{3/2}}\hat{L}_{3}^{( \mathrm{III},2)}\big{(}\sqrt{2\alpha}x+\alpha;\alpha\big{)}=\hat{H}_{3}^{( \mathrm{III},2)}(x).\] ## Acknowledgments The author was supported by the Fonds de la Recherche Scientifique-FNRS under Grant No. 4.45.10.08.